WorldWideScience

Sample records for event-level metadata service

  1. Building a scalable event-level metadata service for ATLAS

    International Nuclear Information System (INIS)

    Cranshaw, J; Malon, D; Goosens, L; Viegas, F T A; McGlone, H

    2008-01-01

    The ATLAS TAG Database is a multi-terabyte event-level metadata selection system, intended to allow discovery, selection of, and navigation to events of interest to an analysis. The TAG Database encompasses file- and relational-database-resident event-level metadata, distributed across all ATLAS Tiers. A global TAG relational database containing all ATLAS events, implemented in Oracle, will exist at Tier 0. Implementing a system that is both performant and manageable at this scale is a challenge. A 1 TB relational TAG Database has been deployed at Tier 0 using simulated tag data. The database contains one billion events, each described by two hundred event metadata attributes, and is currently undergoing extensive testing in terms of queries, population and manageability. These 1 TB tests aim to demonstrate and optimise the performance and scalability of an Oracle TAG Database on a global scale. Partitioning and indexing strategies are crucial to well-performing queries and manageability of the database, and have implications for database population and distribution, so these are investigated. Physics query patterns are anticipated, but a crucial feature of the system must be to support a broad range of queries across all attributes. Concurrently, event tags from ATLAS Computing System Commissioning distributed simulations are accumulated in an Oracle-hosted database at CERN, providing an event-level selection service valuable for user experience and for gathering information about physics query patterns. In this paper we describe the status of the Global TAG relational database scalability work and highlight areas of future direction.
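
    As an illustration of the kind of event-level selection such a TAG database is meant to serve, the sketch below runs an attribute-range query against a TAG-like table. It is a toy example using an in-memory SQLite stand-in with hypothetical column names, not the actual ATLAS TAG schema or its Oracle partitioning and indexing setup.

    import sqlite3

    # Toy stand-in for an event-level TAG table: the real table is Oracle-hosted,
    # partitioned and indexed, with roughly 200 attributes per event.
    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE tag_events (
            run_number      INTEGER,
            event_number    INTEGER,
            n_muons         INTEGER,
            missing_et_gev  REAL,
            file_guid       TEXT      -- pointer used to navigate back to the event
        )
    """)
    conn.executemany(
        "INSERT INTO tag_events VALUES (?, ?, ?, ?, ?)",
        [(10010, i, i % 3, 15.0 * (i % 7), f"GUID-{i % 4}") for i in range(1000)],
    )
    # Index on a frequently queried attribute (hypothetical choice).
    conn.execute("CREATE INDEX idx_met ON tag_events (missing_et_gev)")

    # Event selection: "at least two muons and large missing ET".
    rows = conn.execute(
        """SELECT run_number, event_number, file_guid
           FROM tag_events
           WHERE n_muons >= 2 AND missing_et_gev > 60.0"""
    ).fetchall()
    print(f"selected {len(rows)} events, e.g. {rows[:3]}")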

  2. A Programmatic View of Metadata, Metadata Services, and Metadata Flow in ATLAS

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    The volume and diversity of metadata in an experiment of the size and scope of ATLAS is considerable. Even the definition of metadata may seem context-dependent: data that are primary for one purpose may be metadata for another. Trigger information and data from the Large Hadron Collider itself provide cases in point, but examples abound. Metadata about logical or physics constructs, such as data-taking periods and runs and luminosity blocks and events and algorithms, often need to be mapped to deployment and production constructs, such as datasets and jobs and files and software versions, and vice versa. Metadata at one level of granularity may have implications at another. ATLAS metadata services must integrate and federate information from inhomogeneous sources and repositories, map metadata about logical or physics constructs to deployment and production constructs, provide a means to associate metadata at one level of granularity with processing or decision-making at another, offer a coherent and ...

  3. Data Bookkeeping Service 3 - Providing event metadata in CMS

    CERN Document Server

    Giffels, Manuel; Riley, Daniel

    2014-01-01

    The Data Bookkeeping Service 3 provides a catalog of event metadata for Monte Carlo and recorded data of the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) at CERN, Geneva. It comprises all the information necessary for tracking datasets, their processing history and the associations between runs, files and datasets, on a large scale of about 200,000 datasets and more than 40 million files, which adds up to around 700 GB of metadata. The DBS is an essential part of the CMS Data Management and Workload Management (DMWM) systems; all kinds of data processing, such as Monte Carlo production and the processing of recorded event data, as well as physics analysis done by users, rely heavily on the information stored in DBS.

  4. Data Bookkeeping Service 3 - Providing Event Metadata in CMS

    Energy Technology Data Exchange (ETDEWEB)

    Giffels, Manuel [CERN]; Guo, Y. [Fermilab]; Riley, Daniel [Cornell U.]

    2014-01-01

    The Data Bookkeeping Service 3 provides a catalog of event metadata for Monte Carlo and recorded data of the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) at CERN, Geneva. It comprises all the information necessary for tracking datasets, their processing history and the associations between runs, files and datasets, on a large scale of about 200,000 datasets and more than 40 million files, which adds up to around 700 GB of metadata. The DBS is an essential part of the CMS Data Management and Workload Management (DMWM) systems [1]; all kinds of data processing, such as Monte Carlo production and the processing of recorded event data, as well as physics analysis done by users, rely heavily on the information stored in DBS.

  5. A programmatic view of metadata, metadata services, and metadata flow in ATLAS

    International Nuclear Information System (INIS)

    Malon, D; Albrand, S; Gallas, E; Stewart, G

    2012-01-01

    The volume and diversity of metadata in an experiment of the size and scope of ATLAS are considerable. Even the definition of metadata may seem context-dependent: data that are primary for one purpose may be metadata for another. ATLAS metadata services must integrate and federate information from inhomogeneous sources and repositories, map metadata about logical or physics constructs to deployment and production constructs, provide a means to associate metadata at one level of granularity with processing or decision-making at another, offer a coherent and integrated view to physicists, and support both human use and programmatic access. In this paper we consider ATLAS metadata, metadata services, and metadata flow principally from the illustrative perspective of how disparate metadata are made available to executing jobs and, conversely, how metadata generated by such jobs are returned. We describe how metadata are read, how metadata are cached, and how metadata generated by jobs and the tasks of which they are a part are communicated, associated with data products, and preserved. We also discuss the principles that guide decision-making about metadata storage, replication, and access.

  6. The ATLAS EventIndex: data flow and inclusion of other metadata

    CERN Document Server

    Prokoshin, Fedor; The ATLAS collaboration; Cardenas Zarate, Simon Ernesto; Favareto, Andrea; Fernandez Casani, Alvaro; Gallas, Elizabeth; Garcia Montoro, Carlos; Gonzalez de la Hoz, Santiago; Hrivnac, Julius; Malon, David; Salt, Jose; Sanchez, Javier; Toebbicke, Rainer; Yuan, Ruijun

    2016-01-01

    The ATLAS EventIndex is the catalogue of event-related metadata for the information obtained from the ATLAS detector. The basic unit of this information is the event record, containing the event identification parameters, pointers to the files containing the event, and trigger decision information. The main use cases for the EventIndex are event picking, providing information for the Event Service, and data consistency checks for large production campaigns. The EventIndex employs the Hadoop platform for data storage and handling, as well as a messaging system for the collection of information. The information for the EventIndex is collected both at Tier-0, when the data are first produced, and from the Grid, when various types of derived data are produced. The EventIndex uses various types of auxiliary information from other ATLAS sources for data collection and processing: trigger tables from the condition metadata database (COMA), dataset information from the data catalog AMI and the Rucio data man...
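
    A minimal sketch of what an EventIndex-style event record and an event-picking lookup could look like. The field names and the in-memory dictionary index are illustrative assumptions; the real system stores these records in Hadoop.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class EventRecord:
        """Illustrative event-level record: identification, file pointers, trigger info."""
        run_number: int
        event_number: int
        file_guids: tuple          # files (e.g. RAW, AOD) that contain this event
        trigger_passed: frozenset  # trigger chains that accepted the event

    # Toy in-memory index keyed by (run, event); the real EventIndex uses Hadoop.
    index = {
        (300655, 187654321): EventRecord(
            run_number=300655,
            event_number=187654321,
            file_guids=("RAW-GUID-1", "AOD-GUID-7"),
            trigger_passed=frozenset({"HLT_mu26_ivarmedium"}),
        ),
    }

    def pick_event(run: int, event: int) -> EventRecord:
        """Event picking: locate the files containing a single requested event."""
        return index[(run, event)]

    print(pick_event(300655, 187654321).file_guids)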

  7. Study on high-level waste geological disposal metadata model

    International Nuclear Information System (INIS)

    Ding Xiaobin; Wang Changhong; Zhu Hehua; Li Xiaojun

    2008-01-01

    This paper explains the concept of metadata and related research in China and abroad, and then motivates the study of a metadata model for a high-level nuclear waste deep geological disposal project. Taking GML as a reference, the authors first set up DML within the framework of digital underground space engineering. Based on DML, a standardized metadata scheme for the high-level nuclear waste deep geological disposal project is presented. Then, an Internet-based metadata model is put forward. With standardized data and CSW services, this model may solve the problem of sharing and exchanging data of different formats. A metadata editor has been built to search and maintain metadata based on this model. (authors)

  8. Geospatial metadata retrieval from web services

    Directory of Open Access Journals (Sweden)

    Ivanildo Barbosa

    Nowadays, producers of geospatial data in either raster or vector formats are able to make them available on the World Wide Web by deploying web services that enable users to access and query those contents even without specific geoprocessing software. Several providers around the world have deployed instances of WMS (Web Map Service), WFS (Web Feature Service) and WCS (Web Coverage Service), all of them specified by the Open Geospatial Consortium (OGC). In consequence, metadata about the available contents can be retrieved and compared with similar offline datasets from other sources. This paper presents a brief summary of, and describes the matching process between, the specifications for OGC web services (WMS, WFS and WCS) and the metadata specifications required by ISO 19115 - adopted as the reference for several national metadata profiles, including the Brazilian one. The process focuses on retrieving metadata for the identification and data quality packages, and indicates directions for retrieving metadata related to other packages. In this way, users are able to assess whether the provided contents fit their purposes.
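
    The retrieval step described above can be sketched with the OWSLib library, assuming a reachable WMS endpoint (the URL below is a placeholder). The capabilities fields printed here are typical candidates for mapping to ISO 19115 identification elements.

    from owslib.wms import WebMapService

    # Placeholder endpoint: replace with a real WMS GetCapabilities URL.
    wms = WebMapService("https://example.org/geoserver/wms", version="1.3.0")

    # Service-level metadata that can feed the ISO 19115 identification package.
    print("Title:   ", wms.identification.title)
    print("Abstract:", wms.identification.abstract)
    print("Keywords:", wms.identification.keywords)

    # Layer-level metadata: one record per offered content item.
    for name, layer in wms.contents.items():
        print(name, layer.title, layer.boundingBoxWGS84)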

  9. An integrated overview of metadata in ATLAS

    International Nuclear Information System (INIS)

    Gallas, E J; Malon, D; Hawkings, R J; Albrand, S; Torrence, E

    2010-01-01

    Metadata (data about data) arise in many contexts, from many diverse sources, and at many levels in ATLAS. Familiar examples include run-level, luminosity-block-level, and event-level metadata, and, related to processing and organization, dataset-level and file-level metadata, but these categories are neither exhaustive nor orthogonal. Some metadata are known a priori, in advance of data taking or simulation; other metadata are known only after processing, and occasionally, quite late (e.g., detector status or quality updates that may appear after initial reconstruction is complete). Metadata that may seem relevant only internally to the distributed computing infrastructure under ordinary conditions may become relevant to physics analysis under error conditions ('What can I discover about data I failed to process?'). This talk provides an overview of metadata and metadata handling in ATLAS, and describes ongoing work to deliver integrated metadata services in support of physics analysis.

  10. Developing Cyberinfrastructure Tools and Services for Metadata Quality Evaluation

    Science.gov (United States)

    Mecum, B.; Gordon, S.; Habermann, T.; Jones, M. B.; Leinfelder, B.; Powers, L. A.; Slaughter, P.

    2016-12-01

    Metadata and data quality are at the core of reusable and reproducible science. While great progress has been made over the years, much of the metadata collected only addresses data discovery, covering concepts such as titles and keywords. Improving metadata beyond the discoverability plateau means documenting detailed concepts within the data, such as sampling protocols, instrumentation used, and variables measured. Given that metadata commonly do not describe their data at this level, how might we improve the state of things? Giving scientists and data managers easy-to-use metadata quality evaluation tools built on community-driven recommendations is the key to producing high-quality metadata. To achieve this goal, we created a set of cyberinfrastructure tools and services that integrate with existing metadata and data curation workflows and can be used to improve metadata and data quality across the sciences. These tools work across metadata dialects (e.g., ISO 19115, FGDC, EML, etc.) and can be used to assess aspects of quality beyond what is internal to the metadata, such as the congruence between the metadata and the data it describes. The system makes use of a user-friendly mechanism for expressing a suite of checks as code in popular data science programming languages such as Python and R. This reduces the burden on scientists and data managers to learn yet another language. We demonstrated these services and tools in three ways. First, we evaluated a large corpus of datasets in the DataONE federation of data repositories against a metadata recommendation modeled after existing recommendations such as the LTER best practices and the Attribute Convention for Dataset Discovery (ACDD). Second, we showed how this service can be used to display metadata and data quality information to data producers during the data submission and metadata creation process, and to data consumers through data catalog search and access tools. Third, we showed how the centrally
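
    The abstract mentions expressing quality checks as code in languages such as Python and R. A minimal sketch of what such a check suite could look like follows; the check names and metadata fields are illustrative and are not the DataONE implementation.

    def check_title_length(meta):
        """Discovery-level check: a usable title exists and is reasonably descriptive."""
        title = meta.get("title", "")
        return ("title", len(title) >= 20, f"title length {len(title)}")

    def check_variables_documented(meta):
        """Beyond-discovery check: the measured variables are described."""
        ok = bool(meta.get("variables"))
        return ("variables", ok, f"{len(meta.get('variables', []))} variables documented")

    def run_suite(meta, checks):
        """Run every check and return a simple quality report."""
        return [{"check": name, "passed": passed, "detail": detail}
                for name, passed, detail in (c(meta) for c in checks)]

    metadata = {
        "title": "Soil moisture observations, Alaska, 2014-2016",
        "variables": [{"name": "soil_moisture", "unit": "m3/m3"}],
    }
    for row in run_suite(metadata, [check_title_length, check_variables_documented]):
        print(row)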

  11. Lightweight Advertising and Scalable Discovery of Services, Datasets, and Events Using Feedcasts

    Science.gov (United States)

    Wilson, B. D.; Ramachandran, R.; Movva, S.

    2010-12-01

    Broadcast feeds (Atom or RSS) are a mechanism for advertising the existence of new data objects on the web, with metadata and links to further information. Users then subscribe to the feed to receive updates. This concept has already been used to advertise the new granules of science data as they are produced (datacasting), with browse images and metadata, and to advertise bundles of web services (service casting). Structured metadata is introduced into the XML feed format by embedding new XML tags (in defined namespaces), using typed links, and reusing built-in Atom feed elements. This “infocasting” concept can be extended to include many other science artifacts, including data collections, workflow documents, topical geophysical events (hurricanes, forest fires, etc.), natural hazard warnings, and short articles describing a new science result. The common theme is that each infocast contains machine-readable, structured metadata describing the object and enabling further manipulation. For example, service casts contain typed links pointing to the service interface description (e.g., WSDL for SOAP services), the service endpoint, and human-readable documentation. Our Infocasting project has three main goals: (1) define and evangelize micro-formats (metadata standards) so that providers can easily advertise their web services, datasets, and topical geophysical events by adding structured information to broadcast feeds; (2) develop authoring tools so that anyone can easily author such service advertisements, data casts, and event descriptions; and (3) provide a one-stop, Google-like search box in the browser that allows discovery of service, data and event casts visible on the web, and services & data registered in the GEOSS repository and other NASA repositories (GCMD & ECHO). To demonstrate the event casting idea, a series of micro-articles—with accompanying event casts containing links to relevant datasets, web services, and science analysis workflows--will be
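
    A sketch of consuming such a feed with the feedparser library, extracting the typed links from each entry. The feed URL is a placeholder and the link relations shown are only illustrative of a service cast, not an established micro-format.

    import feedparser

    # Placeholder URL for a hypothetical "service cast" (Atom feed advertising web services).
    feed = feedparser.parse("https://example.org/servicecast.atom")

    for entry in feed.entries:
        print(entry.title)
        # Typed links carry the machine-readable part of the advertisement,
        # e.g. pointers to an interface description, an endpoint, and documentation.
        for link in entry.links:
            print("  ", link.get("rel"), link.get("type"), link.get("href"))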

  12. Metadata aided run selection at ATLAS

    International Nuclear Information System (INIS)

    Buckingham, R M; Gallas, E J; Tseng, J C-L; Viegas, F; Vinek, E

    2011-01-01

    Management of the large volume of data collected by any large-scale scientific experiment requires the collection of coherent metadata quantities, which can be used by reconstruction or analysis programs and/or user interfaces to pinpoint collections of data needed for specific purposes. In the ATLAS experiment at the LHC, we have collected metadata from systems storing non-event-wise data (Conditions) into a relational database. The Conditions metadata (COMA) database tables not only contain conditions known at the time of event recording, but also allow for the addition of conditions data collected as a result of later analysis of the data (such as improved measurements of beam conditions or assessments of data quality). A new web-based interface called 'runBrowser' makes these Conditions Metadata available as a Run-based selection service. runBrowser, based on PHP and JavaScript, uses jQuery to present selection criteria and report results. It not only facilitates data selection by conditions attributes, but also gives the user information at each stage about the relationship between the conditions chosen and the remaining conditions criteria available. When a set of COMA selections is complete, runBrowser produces a human-readable report as well as an XML file in a standardized ATLAS format. This XML can be saved for later use or refinement in a future runBrowser session, shared with physics/detector groups, or used as input to ELSSI (the event-level metadata browser) or other ATLAS run or event processing services.

  13. Metadata and Service at the GFZ ISDC Portal

    Science.gov (United States)

    Ritschel, B.

    2008-05-01

    The online service portal of the GFZ Potsdam Information System and Data Center (ISDC) is an access point for all manner of geoscientific geodata, its corresponding metadata, scientific documentation and software tools. At present almost 2000 national and international users and user groups have the opportunity to request Earth science data from a portfolio of 275 different product types and more than 20 million individual data files, with a total volume of approximately 12 TB. The majority of the data and information the portal currently offers to the public are global geomonitoring products, such as satellite orbit and Earth gravity field data as well as geomagnetic and atmospheric data for exploration. These products for Earth's changing system are provided via state-of-the-art retrieval techniques. The data product catalog system behind these techniques is based on the extensive use of standardized metadata, which describe the different geoscientific product types and data products in a uniform way. Whereas all ISDC product types are specified by NASA's Directory Interchange Format (DIF) Version 9.0 parent XML DIF metadata files, the individual data files are described by extended DIF metadata documents. Depending on when the scientific project began, some data files are described by extended DIF Version 6 metadata documents and others by child XML DIF metadata documents. Both the product-type-dependent parent DIF metadata documents and the data-file-dependent child DIF metadata documents are derived from a base-DIF.xsd XML schema file. The ISDC metadata philosophy defines a geoscientific product as a package consisting of one (or sometimes more than one) data file plus one extended DIF metadata file. Because NASA's DIF metadata standard was developed to specify only a collection of data, the extension of the DIF standard consists of new and specific attributes, which are necessary for

  14. The XML Metadata Editor of GFZ Data Services

    Science.gov (United States)

    Ulbricht, Damian; Elger, Kirsten; Tesei, Telemaco; Trippanera, Daniele

    2017-04-01

    Following the FAIR data principles, research data should be Findable, Accessible, Interoperable and Reusable. Publishing data under these principles requires assigning persistent identifiers to the data and generating rich, machine-actionable metadata. To increase interoperability, metadata should use shared vocabularies and crosslink the newly published (meta)data and related material. However, structured metadata formats tend to be complex and are not intended to be generated by individual scientists. Software solutions are needed that support scientists in providing metadata describing their data. To facilitate the data publication activities of 'GFZ Data Services', we programmed an XML metadata editor that assists scientists in creating metadata in different schemata popular in the earth sciences (ISO 19115, DIF, DataCite), while at the same time being usable by and understandable for scientists. Emphasis is placed on removing barriers: in particular, the editor is publicly available on the internet without registration [1], and scientists are not asked to provide information that can be generated automatically (e.g. the URL of a specific licence or the contact information of the metadata distributor). Metadata are stored in browser cookies and a copy can be saved to the local hard disk. To improve usability, form fields are translated into the language of scientists, e.g. the 'creators' of the DataCite schema are called 'authors'. To assist in filling in the form, we make use of drop-down menus for small vocabulary lists and offer a search facility for large thesauri. Explanations of form fields and definitions of vocabulary terms are provided in pop-up windows, and full documentation is available for download via the help menu. In addition, multiple geospatial references can be entered via an interactive mapping tool, which helps to minimize problems with different conventions for providing latitudes and longitudes. Currently, we are extending the metadata editor
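
    To make the 'authors' to 'creators' mapping concrete, the sketch below serializes a few form fields into a minimal DataCite-flavoured XML record using Python's standard library. The element set and schema namespace are assumptions; this is not the editor's actual output.

    import xml.etree.ElementTree as ET

    def to_datacite(authors, title, publisher, year):
        """Map editor form fields ('authors', 'title', ...) to DataCite-style elements."""
        ns = "http://datacite.org/schema/kernel-4"   # assumed schema namespace
        res = ET.Element(f"{{{ns}}}resource")
        creators = ET.SubElement(res, f"{{{ns}}}creators")
        for name in authors:                          # 'authors' in the UI, 'creators' in DataCite
            creator = ET.SubElement(creators, f"{{{ns}}}creator")
            ET.SubElement(creator, f"{{{ns}}}creatorName").text = name
        titles = ET.SubElement(res, f"{{{ns}}}titles")
        ET.SubElement(titles, f"{{{ns}}}title").text = title
        ET.SubElement(res, f"{{{ns}}}publisher").text = publisher
        ET.SubElement(res, f"{{{ns}}}publicationYear").text = str(year)
        return ET.tostring(res, encoding="unicode")

    print(to_datacite(["Doe, Jane"], "Example GNSS time series", "GFZ Data Services", 2017))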

  15. The ATLAS EventIndex: data flow and inclusion of other metadata

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00064378; Cardenas Zarate, Simon Ernesto; Favareto, Andrea; Fernandez Casani, Alvaro; Gallas, Elizabeth; Garcia Montoro, Carlos; Gonzalez de la Hoz, Santiago; Hrivnac, Julius; Malon, David; Prokoshin, Fedor; Salt, Jose; Sanchez, Javier; Toebbicke, Rainer; Yuan, Ruijun

    2016-01-01

    The ATLAS EventIndex is the catalogue of the event-related metadata for the information collected from the ATLAS detector. The basic unit of this information is the event record, containing the event identification parameters, pointers to the files containing this event as well as trigger decision information. The main use case for the EventIndex is event picking, as well as data consistency checks for large production campaigns. The EventIndex employs the Hadoop platform for data storage and handling, as well as a messaging system for the collection of information. The information for the EventIndex is collected both at Tier-0, when the data are first produced, and from the Grid, when various types of derived data are produced. The EventIndex uses various types of auxiliary information from other ATLAS sources for data collection and processing: trigger tables from the condition metadata database (COMA), dataset information from the data catalogue AMI and the Rucio data management system and information on p...

  16. ATLAS Metadata Task Force

    Energy Technology Data Exchange (ETDEWEB)

    ATLAS Collaboration; Costanzo, D.; Cranshaw, J.; Gadomski, S.; Jezequel, S.; Klimentov, A.; Lehmann Miotto, G.; Malon, D.; Mornacchi, G.; Nemethy, P.; Pauly, T.; von der Schmitt, H.; Barberis, D.; Gianotti, F.; Hinchliffe, I.; Mapelli, L.; Quarrie, D.; Stapnes, S.

    2007-04-04

    This document provides an overview of the metadata which are needed to characterize ATLAS event data at different levels (a complete run, data streams within a run, luminosity blocks within a run, individual events).

  17. EUDAT B2FIND : A Cross-Discipline Metadata Service and Discovery Portal

    Science.gov (United States)

    Widmann, Heinrich; Thiemann, Hannes

    2016-04-01

    The European Data Infrastructure (EUDAT) project aims at a pan-European environment that supports a variety of research communities and individuals in managing the rising tide of scientific data with advanced data management technologies. This led to the establishment of the community-driven Collaborative Data Infrastructure, which implements common data services and storage resources to tackle the basic requirements and the specific challenges of international and interdisciplinary research data management. The metadata service B2FIND plays a central role in this context by providing a simple and user-friendly discovery portal to find research data collections stored in EUDAT data centers or in other repositories. For this we store the diverse metadata collected from heterogeneous sources in a comprehensive joint metadata catalogue and make them searchable in an open data portal. The implemented metadata ingestion workflow consists of three steps. First the metadata records - provided either by various research communities or via other EUDAT services - are harvested. Afterwards the raw metadata records are converted and mapped to unified key-value dictionaries as specified by the B2FIND schema. The semantic mapping of the non-uniform, community-specific metadata to homogeneously structured datasets is the most subtle and challenging task in this process. To assure and improve the quality of the metadata, this mapping process is accompanied by iterative and intense exchange with the community representatives, the use of controlled vocabularies and community-specific ontologies, and formal and semantic validation. Finally the mapped and checked records are uploaded as datasets to the catalogue, which is based on the open source data portal software CKAN. CKAN provides a rich RESTful JSON API and uses SOLR for dataset indexing, enabling users to query and search the catalogue. The homogenization of the community-specific data models and vocabularies enables not
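
    A sketch of the mapping step: converting one harvested, community-specific record into a flat key-value dictionary before upload to the CKAN-based catalogue, with a trivial validation rule. The target keys only approximate a B2FIND-like schema and are assumptions.

    def map_to_b2find(raw):
        """Map one harvested community record to a unified key-value dictionary.
        The target keys approximate a B2FIND-like schema (assumed, not the real spec)."""
        mapped = {
            "title": raw.get("dc:title") or raw.get("name", ""),
            "notes": raw.get("dc:description", ""),
            "tags": [{"name": kw.strip()} for kw in raw.get("dc:subject", "").split(";") if kw.strip()],
            "url": raw.get("dc:identifier", ""),
        }
        # Formal validation: a record without a title or identifier is rejected.
        if not mapped["title"] or not mapped["url"]:
            raise ValueError("record fails validation: missing title or identifier")
        return mapped

    raw_record = {
        "dc:title": "Ocean temperature profiles 2010-2014",
        "dc:subject": "oceanography; temperature",
        "dc:identifier": "https://doi.org/10.0000/example",
    }
    print(map_to_b2find(raw_record))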

  18. Metadata Aided Run Selection at ATLAS

    CERN Document Server

    Buckingham, RM; The ATLAS collaboration; Tseng, JC-L; Viegas, F; Vinek, E

    2010-01-01

    Management of the large volume of data collected by any large-scale scientific experiment requires the collection of coherent metadata quantities, which can be used by reconstruction or analysis programs and/or user interfaces to pinpoint collections of data needed for specific purposes. In the ATLAS experiment at the LHC, we have collected metadata from systems storing non-event-wise data (Conditions) into a relational database. The Conditions metadata (COMA) database tables not only contain conditions known at the time of event recording, but also allow for the addition of conditions data collected as a result of later analysis of the data (such as improved measurements of beam conditions or assessments of data quality). A new web-based interface called “runBrowser” makes these Conditions Metadata available as a Run-based selection service. runBrowser, based on PHP and JavaScript, uses jQuery to present selection criteria and report results. It not only facilitates data selection by conditions at...

  19. Metadata aided run selection at ATLAS

    CERN Document Server

    Buckingham, RM; The ATLAS collaboration; Tseng, JC-L; Viegas, F; Vinek, E

    2011-01-01

    Management of the large volume of data collected by any large-scale scientific experiment requires the collection of coherent metadata quantities, which can be used by reconstruction or analysis programs and/or user interfaces to pinpoint collections of data needed for specific purposes. In the ATLAS experiment at the LHC, we have collected metadata from systems storing non-event-wise data (Conditions) into a relational database. The Conditions metadata (COMA) database tables not only contain conditions known at the time of event recording, but also allow for the addition of conditions data collected as a result of later analysis of the data (such as improved measurements of beam conditions or assessments of data quality). A new web-based interface called “runBrowser” makes these Conditions Metadata available as a Run-based selection service. runBrowser, based on PHP and JavaScript, uses jQuery to present selection criteria and report results. It not only facilitates data selection by conditions attrib...

  20. DESIGN AND PRACTICE ON METADATA SERVICE SYSTEM OF SURVEYING AND MAPPING RESULTS BASED ON GEONETWORK

    Directory of Open Access Journals (Sweden)

    Z. Zha

    2012-08-01

    Based on analysis and research on current geographic information sharing and metadata services, we designed, developed and deployed a distributed metadata service system based on GeoNetwork covering more than 30 nodes in the provincial units of China. By identifying the advantages of GeoNetwork, we designed a distributed metadata service system for national surveying and mapping results. It consists of 31 network nodes, a central node and a portal. Network nodes are the direct sources of system metadata and are distributed around the country. Each network node maintains a metadata service system responsible for metadata uploading and management. The central node harvests metadata from the network nodes using the OGC CSW 2.0.2 standard interface. The portal shows all metadata in the central node and provides users with a variety of methods and interfaces for metadata search and querying. It also provides management capabilities for connecting the central node and the network nodes together. GeoNetwork has shortcomings too; accordingly, we made improvements and optimizations for large-volume metadata uploading, synchronization and concurrent access. For metadata uploading and synchronization, by carefully analysing the database and index operation logs, we successfully avoided the performance bottlenecks, and with a batch operation and dynamic memory management solution, data throughput and system performance are significantly improved. For concurrent access, a request coding and results cache solution greatly improved query performance, and a web cluster solution was deployed to respond smoothly to huge numbers of concurrent requests. This paper also gives an experimental analysis and compares the system performance before and after the improvements and optimizations. The design and practical results have been applied in the national metadata service system of surveying and mapping results, proving that the improved GeoNetwork service architecture can effectively adapt to
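
    The harvesting interface mentioned above (OGC CSW 2.0.2) can be exercised with the OWSLib library; a sketch, assuming a reachable GeoNetwork CSW endpoint (the URL is a placeholder):

    from owslib.csw import CatalogueServiceWeb

    # Placeholder: a GeoNetwork node's CSW 2.0.2 endpoint.
    csw = CatalogueServiceWeb("https://example.org/geonetwork/srv/eng/csw")

    # Fetch a page of records, as the central node does when harvesting network nodes.
    csw.getrecords2(maxrecords=10)
    print(csw.results)                     # matches / returned / next-record counters
    for rec_id, rec in csw.records.items():
        print(rec_id, rec.title)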

  1. Evolution of Web Services in EOSDIS: Search and Order Metadata Registry (ECHO)

    Science.gov (United States)

    Mitchell, Andrew; Ramapriyan, Hampapuram; Lowe, Dawn

    2009-01-01

    During 2005 through 2008, NASA defined and implemented a major evolutionary change in its Earth Observing System Data and Information System (EOSDIS) to modernize its capabilities. This implementation was based on a vision for 2015 developed during 2005. The EOSDIS 2015 Vision emphasizes increased end-to-end data system efficiency and operability; increased data usability; improved support for end users; and decreased operations costs. One key feature of the evolution plan was achieving higher operational maturity (ingest, reconciliation, search and order, performance, error handling) for NASA's Earth Observing System Clearinghouse (ECHO). The ECHO system is an operational metadata registry through which the scientific community can easily discover and exchange NASA's Earth science data and services. ECHO contains metadata for 2,726 data collections comprising over 87 million individual data granules and 34 million browse images, covering the holdings of NASA's EOSDIS Data Centers and the United States Geological Survey's Landsat Project. ECHO is a middleware component based on a Service Oriented Architecture (SOA). The system comprises a set of infrastructure services that enable the fundamental SOA functions: publish, discover, and access Earth science resources. It also provides additional services such as user management, data access control, and order management. The ECHO system has a data registry and a services registry. The data registry enables organizations to publish EOS and other Earth-science-related data holdings to a common metadata model. These holdings are described through metadata in terms of datasets (types of data) and granules (specific data items of those types). ECHO also supports browse images, which provide a visual representation of the data. The published metadata can be mapped to and from existing standards (e.g., FGDC, ISO 19115). With ECHO, users can find the metadata stored in the data registry and then access the data either

  2. Log-Less Metadata Management on Metadata Server for Parallel File Systems

    Directory of Open Access Journals (Sweden)

    Jianwei Liao

    2014-01-01

    This paper presents a novel metadata management mechanism for the metadata server (MDS) of parallel and distributed file systems. In this technique, the client file system backs up the metadata requests it has sent and that have been handled by the metadata server, so that the MDS does not need to log metadata changes to nonvolatile storage to achieve a highly available metadata service, which also improves metadata-processing performance. Because the client file system backs up certain sent metadata requests in its memory, the overhead of handling these backup requests is much smaller than the overhead the metadata server incurs when it adopts logging or journaling to provide a highly available metadata service. The experimental results show that this newly proposed mechanism can significantly improve the speed of metadata processing and deliver better I/O data throughput than conventional metadata management schemes, that is, logging or journaling on the MDS. Moreover, a complete metadata recovery can be achieved by replaying the backup logs cached by all involved clients when the metadata server has crashed or otherwise become non-operational.
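
    A toy sketch of the core idea, with invented request and class names: the client keeps an in-memory backup of requests already applied by the MDS, and recovery replays those backups instead of reading an MDS-side journal.

    class MDS:
        """Metadata server: keeps metadata in memory only (log-less)."""
        def __init__(self):
            self.namespace = {}

        def apply(self, request):
            op, path, value = request
            if op == "create":
                self.namespace[path] = value
            elif op == "remove":
                self.namespace.pop(path, None)
            return "ok"

        def recover(self, clients):
            """After a crash, rebuild metadata by replaying the clients' backups."""
            self.namespace = {}
            for c in clients:
                for request in c.backup:
                    self.apply(request)

    class Client:
        """Client file system: backs up sent metadata requests in memory."""
        def __init__(self, mds):
            self.mds = mds
            self.backup = []                      # requests already applied by the MDS

        def send(self, request):
            result = self.mds.apply(request)      # MDS updates in-memory state, no journal write
            self.backup.append(request)           # client-side backup replaces MDS logging
            return result

    mds = MDS()
    client = Client(mds)
    client.send(("create", "/data/file1", {"size": 42}))
    mds.namespace = {}                            # simulate an MDS crash losing memory state
    mds.recover([client])
    print(mds.namespace)                          # metadata restored from client backups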

  3. GEO Label Web Services for Dynamic and Effective Communication of Geospatial Metadata Quality

    Science.gov (United States)

    Lush, Victoria; Nüst, Daniel; Bastin, Lucy; Masó, Joan; Lumsden, Jo

    2014-05-01

    -like label, which are coloured according to metadata availability and are clickable to allow a user to engage with the original metadata and explore specific aspects in more detail. To support this graphical representation and allow for wider deployment architectures we have implemented two Web services, a PHP and a Java implementation, that generate GEO label representations by combining producer metadata (from standard catalogues or other published locations) with structured user feedback. Both services accept encoded URLs of publicly available metadata documents or metadata XML files as HTTP POST and GET requests and apply XPath and XSLT mappings to transform producer and feedback XML documents into clickable SVG GEO label representations. The label and services are underpinned by two XML-based quality models. The first is a producer model that extends ISO 19115 and 19157 to allow fuller citation of reference data, presentation of pixel- and dataset-level statistical quality information, and encoding of 'traceability' information on the lineage of an actual quality assessment. The second is a user quality model (realised as a feedback server and client) which allows reporting and query of ratings, usage reports, citations, comments and other domain knowledge. Both services are Open Source and are available on GitHub at https://github.com/lushv/geolabel-service and https://github.com/52North/GEO-label-java. The functionality of these services can be tested using our GEO label generation demos, available online at http://www.geolabel.net/demo.html and http://geoviqua.dev.52north.org/glbservice/index.jsf.

  4. Testing Metadata Existence of Web Map Services

    Directory of Open Access Journals (Sweden)

    Jan Růžička

    2011-05-01

    For a general user it is quite common to use data sources available on the WWW. Almost all GIS software allows the use of data sources available via the Web Map Service (an ISO/OGC standard) interface. The opportunity to use different sources and combine them brings a lot of problems that have been discussed many times at conferences and in journal papers. One of these problems is the non-existence of metadata for published sources. The question was: were the discussions effective? The article is partly based on a comparison of the metadata situation between the years 2007 and 2010. The second part of the article focuses only on the situation in 2010. The paper was created in the context of research on intelligent map systems, which can be used for automatic or semi-automatic map creation or map evaluation.

  5. Event metadata records as a testbed for scalable data mining

    International Nuclear Information System (INIS)

    Gemmeren, P van; Malon, D

    2010-01-01

    At a data rate of 200 hertz, event metadata records ('TAGs,' in ATLAS parlance) provide fertile grounds for development and evaluation of tools for scalable data mining. It is easy, of course, to apply HEP-specific selection or classification rules to event records and to label such an exercise 'data mining,' but our interest is different. Advanced statistical methods and tools such as classification, association rule mining, and cluster analysis are common outside the high energy physics community. These tools can prove useful, not for discovery physics, but for learning about our data, our detector, and our software. A fixed and relatively simple schema makes TAG export to other storage technologies such as HDF5 straightforward. This simplifies the task of exploiting very-large-scale parallel platforms such as Argonne National Laboratory's BlueGene/P, currently the largest supercomputer in the world for open science, in the development of scalable tools for data mining. Using a domain-neutral scientific data format may also enable us to take advantage of existing data mining components from other communities. There is, further, a substantial literature on the topic of one-pass algorithms and stream mining techniques, and such tools may be inserted naturally at various points in the event data processing and distribution chain. This paper describes early experience with event metadata records from ATLAS simulation and commissioning as a testbed for scalable data mining tool development and evaluation.
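
    The fixed, simple TAG schema makes export to HDF5 straightforward, as the abstract notes; a sketch using h5py and NumPy with hypothetical attribute names:

    import numpy as np
    import h5py

    # Hypothetical TAG-like schema: a few of the ~200 event attributes.
    tag_dtype = np.dtype([
        ("run_number", np.int64),
        ("event_number", np.int64),
        ("n_jets", np.int32),
        ("missing_et_gev", np.float32),
    ])

    tags = np.zeros(1000, dtype=tag_dtype)
    tags["run_number"] = 10010
    tags["event_number"] = np.arange(1000)
    tags["n_jets"] = np.random.randint(0, 8, size=1000)
    tags["missing_et_gev"] = np.random.exponential(30.0, size=1000)

    # Write the records to HDF5, ready for domain-neutral data-mining tools.
    with h5py.File("tags.h5", "w") as f:
        f.create_dataset("tags", data=tags, compression="gzip")

    with h5py.File("tags.h5", "r") as f:
        data = f["tags"][...]
        print("events with >= 4 jets:", int((data["n_jets"] >= 4).sum()))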

  6. Composing Distributed Services for Selection and Retrieval of Event Data in the ATLAS Experiment

    CERN Document Server

    Vinek, E; The ATLAS collaboration

    2011-01-01

    TAGs are event-level metadata allowing a quick search for interesting events for further analysis, based on selection criteria defined by the user. They are stored in a file-based format as well as in relational databases. The overall TAG system encompasses a range of web services providing functionality for the required use cases. The data as well as the services are replicated to several ATLAS sites, i.e. inside each service group there exist several concrete deployments, differing only in site-related non-functional attributes. In order to satisfy a user's request, the above mentioned atomic data sources and web services have to be composed on demand to provide the required functionality. As several instances of each service exist, one service has to be selected out of each group. The overall goal is to maximize the system’s throughput, in order to give to as many users as possible efficient access to the TAGs, while meeting end-to-end quality of service (QoS) requirements. Many approaches can be found t...

  7. Composing Distributed Services for Selection and Retrieval of Event Data in the ATLAS Experiment

    CERN Document Server

    Vinek, E; The ATLAS collaboration; Zhang, Q

    2010-01-01

    TAGs are event-level metadata allowing a quick search for interesting events for further analysis, based on selection criteria defined by the user. They are stored in a file-based format as well as in relational databases. The overall TAG system encompasses a range of web services providing functionality for the required use cases. The data as well as the services are replicated to several ATLAS sites, i.e. inside each service group there exist several concrete deployments, differing only in site-related non-functional attributes. In order to satisfy a user’s request, the above mentioned atomic data sources and web services have to be composed on demand to provide the full functionality. As several instances of each service exist, one service has to be selected out of each group. The overall goal is to maximize the system’s throughput, in order to give to as many users as possible efficient access to the TAGs, while meeting end-to-end quality of service (QoS) requirements. Many approaches can be found to ...

  8. An Interactive, Web-Based Approach to Metadata Authoring

    Science.gov (United States)

    Pollack, Janine; Wharton, Stephen W. (Technical Monitor)

    2001-01-01

    NASA's Global Change Master Directory (GCMD) serves a growing number of users by assisting the scientific community in the discovery of and linkage to Earth science data sets and related services. The GCMD holds over 8000 data set descriptions in Directory Interchange Format (DIF) and 200 data service descriptions in Service Entry Resource Format (SERF), encompassing the disciplines of geology, hydrology, oceanography, meteorology, and ecology. Data descriptions also contain geographic coverage information, thus allowing researchers to discover data pertaining to a particular geographic location, as well as to a subject of interest. The GCMD strives to be the preeminent data locator for world-wide directory-level metadata. In this vein, scientists and data providers must have access to intuitive and efficient metadata authoring tools. Existing GCMD tools are not currently attracting widespread usage. With usage being the prime indicator of utility, it has become apparent that current tools must be improved. As a result, the GCMD has released a new suite of web-based authoring tools that enable a user to create new data and service entries, as well as modify existing data entries. With these tools, a more interactive approach to metadata authoring is taken, as they feature a visual "checklist" of data/service fields that automatically update when a field is completed. In this way, the user can quickly gauge which of the required and optional fields have not been populated. With the release of these tools, the Earth science community will be further assisted in efficiently creating quality data and services metadata. Keywords: metadata, Earth science, metadata authoring tools

  9. System for Earth Sample Registration SESAR: Services for IGSN Registration and Sample Metadata Management

    Science.gov (United States)

    Chan, S.; Lehnert, K. A.; Coleman, R. J.

    2011-12-01

    SESAR, the System for Earth Sample Registration, is an online registry for physical samples collected for Earth and environmental studies. SESAR generates and administers the International Geo Sample Number (IGSN), a unique identifier for samples that is dramatically advancing interoperability amongst information systems for sample-based data. SESAR was developed to provide the complete range of registry services, including definition of IGSN syntax and metadata profiles, registration and validation of name spaces requested by users, tools for users to submit and manage sample metadata, validation of submitted metadata, generation and validation of the unique identifiers, archiving of sample metadata, and public or private access to the sample metadata catalog. With the development of SESAR v3, we placed particular emphasis on creating enhanced tools that make metadata submission easier and more efficient for users, and that provide superior functionality for users to manage the metadata of their samples in their private workspace, MySESAR. For example, SESAR v3 includes a module where users can generate custom spreadsheet templates to enter metadata for their samples, then upload these templates online for sample registration. Once the content of the template is uploaded, it is displayed online in an editable grid format. Validation rules are executed in real time on the grid data to ensure data integrity. Other new features of SESAR v3 include the capability to transfer ownership of samples to other SESAR users, the ability to upload and store images and other files in a sample metadata profile, and the tracking of changes to sample metadata profiles. In the next version of SESAR (v3.5), we will further improve the discovery, sharing and registration of samples. For example, we are developing a more comprehensive suite of web services that will allow discovery and registration access to SESAR from external systems. Both batch and individual registrations will be possible

  10. openPDS: protecting the privacy of metadata through SafeAnswers.

    Directory of Open Access Journals (Sweden)

    Yves-Alexandre de Montjoye

    The rise of smartphones and web services made possible the large-scale collection of personal metadata. Information about individuals' locations, phone call logs, or web searches is collected and used intensively by organizations and big data researchers. Metadata has, however, yet to realize its full potential. Privacy and legal concerns, as well as the lack of technical solutions for personal metadata management, are preventing metadata from being shared and reconciled under the control of the individual. This lack of access and control is furthermore fueling growing concerns, as it prevents individuals from understanding and managing the risks associated with the collection and use of their data. Our contribution is two-fold: (1) we describe openPDS, a personal metadata management framework that allows individuals to collect, store, and give fine-grained access to their metadata to third parties. It has been implemented in two field studies. (2) We introduce and analyze SafeAnswers, a new and practical way of protecting the privacy of metadata at an individual level. SafeAnswers turns a hard anonymization problem into a more tractable security one. It allows services to ask questions whose answers are calculated against the metadata instead of trying to anonymize individuals' metadata. The dimensionality of the data shared with the services is reduced from high-dimensional metadata to low-dimensional answers that are less likely to be re-identifiable and to contain sensitive information. These answers can then be directly shared individually or in aggregate. openPDS and SafeAnswers provide a new way of dynamically protecting personal metadata, thereby supporting the creation of smart data-driven services and data science research.
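
    A toy sketch of the SafeAnswers idea: the question is evaluated inside the personal data store against the raw metadata, and only a low-dimensional answer leaves the store. The record format and the example question are invented for illustration.

    class PersonalDataStore:
        """Holds an individual's raw metadata; only answers leave the store."""
        def __init__(self, call_log):
            self._call_log = call_log    # raw, high-dimensional metadata stays private

        def safe_answer(self, question):
            """Run a vetted question against the metadata and return a small answer."""
            return question(self._call_log)

    # Raw metadata: (timestamp_hour, contact_id) pairs for phone calls.
    pds = PersonalDataStore(call_log=[(9, "a"), (14, "b"), (22, "a"), (23, "c")])

    # A service's question: "is this user mostly active at night?" -> one boolean.
    def mostly_night_calls(call_log):
        night = sum(1 for hour, _ in call_log if hour >= 21 or hour < 6)
        return night > len(call_log) / 2

    print(pds.safe_answer(mostly_night_calls))   # low-dimensional answer: True/False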

  11. ATLAS Metadata Interface (AMI), a generic metadata framework

    CERN Document Server

    Fulachier, Jerome; The ATLAS collaboration

    2016-01-01

    The ATLAS Metadata Interface (AMI) is a mature application of more than 15 years of existence. Mainly used by the ATLAS experiment at CERN, it consists of a very generic tool ecosystem for metadata aggregation and cataloguing. We briefly describe the architecture, the main services and the benefits of using AMI in big collaborations, especially for high energy physics. We focus on the recent improvements, for instance: the lightweight clients (Python, Javascript, C++), the new smart task server system and the Web 2.0 AMI framework for simplifying the development of metadata-oriented web interfaces.

  12. Metadata Authoring with Versatility and Extensibility

    Science.gov (United States)

    Pollack, Janine; Olsen, Lola

    2004-01-01

    NASA's Global Change Master Directory (GCMD) assists the scientific community in the discovery of and linkage to Earth science data sets and related services. The GCMD holds over 13,800 data set descriptions in Directory Interchange Format (DIF) and 700 data service descriptions in Service Entry Resource Format (SERF), encompassing the disciplines of geology, hydrology, oceanography, meteorology, and ecology. Data descriptions also contain geographic coverage information and direct links to the data, thus allowing researchers to discover data pertaining to a geographic location of interest, then quickly acquire those data. The GCMD strives to be the preferred data locator for world-wide directory-level metadata. In this vein, scientists and data providers must have access to intuitive and efficient metadata authoring tools. Existing GCMD tools are attracting widespread usage; however, a need for tools that are portable, customizable and versatile still exists. With tool usage directly influencing metadata population, it has become apparent that new tools are needed to fill these voids. As a result, the GCMD has released a new authoring tool allowing for both web-based and stand-alone authoring of descriptions. Furthermore, this tool incorporates the ability to plug-and-play the metadata format of choice, offering users options of DIF, SERF, FGDC, ISO or any other defined standard. Allowing data holders to work with their preferred format, as well as an option of a stand-alone application or web-based environment, docBUILDER will assist the scientific community in efficiently creating quality data and services metadata.

  13. Automated Atmospheric Composition Dataset Level Metadata Discovery. Difficulties and Surprises

    Science.gov (United States)

    Strub, R. F.; Falke, S. R.; Kempler, S.; Fialkowski, E.; Goussev, O.; Lynnes, C.

    2015-12-01

    The Atmospheric Composition Portal (ACP) is an aggregator and curator of information related to remotely sensed atmospheric composition data and analysis. It uses existing tools and technologies and, where needed, enhances those capabilities to provide interoperable access, tools, and contextual guidance for scientists and value-adding organizations using remotely sensed atmospheric composition data. The initial focus is on Essential Climate Variables identified by the Global Climate Observing System - CH4, CO, CO2, NO2, O3, SO2 and aerosols. This poster addresses our efforts in building the ACP Data Table, an interface to help discover and understand remotely sensed data that are related to atmospheric composition science and applications. We harvested the GCMD, CWIC and GEOSS metadata catalogs using machine-to-machine technologies - OpenSearch and web services. We also manually investigated the plethora of CEOS data provider portals and other catalogs where those data might be aggregated. This poster describes our experience of the excellence, variety, and challenges we encountered. Conclusions: 1. The significant benefits that the major catalogs provide are their machine-to-machine tools, like OpenSearch and web services, rather than any GUI usability improvements, due to the large amount of data in their catalogs. 2. There is a trend at the large catalogs towards simulating small data provider portals through advanced services. 3. Populating metadata catalogs using ISO 19115 is too complex for users to do in a consistent way, difficult to parse visually or with XML libraries, and too complex for Java XML binders like CASTOR. 4. The ability to search for IDs first and then for data (GCMD and ECHO) is better for machine-to-machine operations than the timeouts experienced when returning the entire metadata entry at once. 5. Metadata harvest and export activities between the major catalogs have led to a significant amount of duplication (this is currently being addressed). 6. Most (if not

  14. Inheritance rules for Hierarchical Metadata Based on ISO 19115

    Science.gov (United States)

    Zabala, A.; Masó, J.; Pons, X.

    2012-04-01

    Mainly, ISO 19115 has been used to describe metadata for datasets and services. Furthermore, the ISO 19115 standard (as well as the new draft ISO 19115-1) includes a conceptual model that allows metadata to be described at different levels of granularity, structured in hierarchical levels, both for aggregated resources such as series and datasets, and for more disaggregated resources such as types of entities (feature type), types of attributes (attribute type), entities (feature instances) and attributes (attribute instances). In theory it is possible to apply a complete metadata structure to every hierarchical level, from a whole series down to individual feature attributes, but storing all metadata at all levels is completely impractical. An inheritance mechanism is needed to store each metadata and quality element at the optimum hierarchical level and to allow easy and efficient documentation of metadata, both in an Earth observation scenario such as multiband imagery from a multi-satellite mission, and in a complex vector topographic map that includes several feature types separated into layers (e.g. administrative limits, contour lines, building polygons, road lines, etc.). Moreover, because maps are traditionally split into tiles for handling at detailed scales, or because of satellite characteristics, each of the previous thematic layers (e.g. 1:5000 roads for a country) or bands (Landsat-5 TM coverage of the Earth) is tiled into several parts (sheets or scenes, respectively). According to the hierarchy in ISO 19115, the definition of general metadata can be supplemented by spatially specific metadata that, when required, either inherits or overrides the general case (G.1.3). Annex H of this standard states that only metadata exceptions are defined at lower levels, so it is not necessary to generate the full registry of metadata for each level but to link particular values to the general value that they inherit. Conceptually the metadata
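
    A sketch of the inheritance rule described above: each hierarchical level stores only its exceptions, and a lookup walks up the hierarchy until a value is found. The level names and metadata elements are illustrative, not part of ISO 19115.

    class MetadataNode:
        """One hierarchical level (series, dataset, tile, feature type, ...).
        Stores only the metadata elements that differ from its parent."""
        def __init__(self, name, parent=None, **exceptions):
            self.name = name
            self.parent = parent
            self.exceptions = exceptions

        def get(self, element):
            """Resolve an element: use the local exception or inherit from above."""
            node = self
            while node is not None:
                if element in node.exceptions:
                    return node.exceptions[element]
                node = node.parent
            raise KeyError(element)

    series = MetadataNode("Landsat-5 TM series", lineage="standard L1T processing",
                          spatial_resolution_m=30)
    scene = MetadataNode("scene 198-031, 2010-07-14", parent=series,
                         cloud_cover_pct=12)

    print(scene.get("spatial_resolution_m"))  # inherited from the series: 30
    print(scene.get("cloud_cover_pct"))       # defined at the scene level: 12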

  15. ATLAS Metadata Interface (AMI), a generic metadata framework

    Science.gov (United States)

    Fulachier, J.; Odier, J.; Lambert, F.; ATLAS Collaboration

    2017-10-01

    The ATLAS Metadata Interface (AMI) is a mature application of more than 15 years of existence. Mainly used by the ATLAS experiment at CERN, it consists of a very generic tool ecosystem for metadata aggregation and cataloguing. We briefly describe the architecture, the main services and the benefits of using AMI in big collaborations, especially for high energy physics. We focus on the recent improvements, for instance: the lightweight clients (Python, JavaScript, C++), the new smart task server system and the Web 2.0 AMI framework for simplifying the development of metadata-oriented web interfaces.

  16. ATLAS Metadata Interface (AMI), a generic metadata framework

    CERN Document Server

    AUTHOR|(SzGeCERN)573735; The ATLAS collaboration; Odier, Jerome; Lambert, Fabian

    2017-01-01

    The ATLAS Metadata Interface (AMI) is a mature application of more than 15 years of existence. Mainly used by the ATLAS experiment at CERN, it consists of a very generic tool ecosystem for metadata aggregation and cataloguing. We briefly describe the architecture, the main services and the benefits of using AMI in big collaborations, especially for high energy physics. We focus on the recent improvements, for instance: the lightweight clients (Python, JavaScript, C++), the new smart task server system and the Web 2.0 AMI framework for simplifying the development of metadata-oriented web interfaces.

  17. The RBV metadata catalog

    Science.gov (United States)

    Andre, Francois; Fleury, Laurence; Gaillardet, Jerome; Nord, Guillaume

    2015-04-01

    RBV (Réseau des Bassins Versants) is a French initiative to consolidate the national efforts made by more than 15 elementary observatories funded by various research institutions (CNRS, INRA, IRD, IRSTEA, universities) that study river and drainage basins. The RBV Metadata Catalogue aims at giving a unified view of the work produced by every observatory to both the members of the RBV network and any external person interested in this domain of research. Another goal is to share this information with other existing metadata portals. Metadata management is heterogeneous among the observatories, ranging from no catalogue at all to mature harvestable catalogues. Here, we explain the strategy used to design a state-of-the-art catalogue facing this situation. The main features are as follows: - Multiple input methods: metadata records in the catalogue can either be entered with the graphical user interface, harvested from an existing catalogue or imported from an information system through simplified web services. - Hierarchical levels: metadata records may describe either an observatory, one of its experimental sites or a single dataset produced by one instrument. - Multilingualism: metadata can easily be entered in several configurable languages. - Compliance with standards: the back-office part of the catalogue is based on a CSW metadata server (Geosource), which ensures ISO 19115 compatibility and the ability to be harvested (globally or partially). Ongoing tasks focus on the use of SKOS thesauri and SensorML descriptions of the sensors. - Ergonomics: the user interface is built with the GWT framework to offer a rich client application with fully Ajax-based navigation. - Source code sharing: the work has led to the development of reusable components which can be used to quickly create new metadata forms in other GWT applications. You can visit the catalogue (http://portailrbv.sedoo.fr/) or contact us by email at rbv@sedoo.fr.

  18. How libraries use publisher metadata

    Directory of Open Access Journals (Sweden)

    Steve Shadle

    2013-11-01

    Full Text Available With the proliferation of electronic publishing, libraries are increasingly relying on publisher-supplied metadata to meet user needs for discovery in library systems. However, many publisher/content provider staff creating metadata are unaware of the end-user environment and how libraries use their metadata. This article provides an overview of the three primary discovery systems that are used by academic libraries, with examples illustrating how publisher-supplied metadata directly feeds into these systems and is used to support end-user discovery and access. Commonly seen metadata problems are discussed, with recommendations suggested. Based on a series of presentations given in Autumn 2012 to the staff of a large publisher, this article uses the University of Washington Libraries systems and services as illustrative examples. Judging by the feedback received from these presentations, publishers (specifically staff not familiar with the big picture of metadata standards work) would benefit from a better understanding of the systems and services libraries provide using the data that is created and managed by publishers.

  19. Leveraging Metadata to Create Better Web Services

    Science.gov (United States)

    Mitchell, Erik

    2012-01-01

    Libraries have been increasingly concerned with data creation, management, and publication. This increase is partly driven by shifting metadata standards in libraries and partly by the growth of data and metadata repositories being managed by libraries. In order to manage these data sets, libraries are looking for new preservation and discovery…

  20. USGIN ISO metadata profile

    Science.gov (United States)

    Richard, S. M.

    2011-12-01

    The USGIN project has drafted and is using a specification for use of ISO 19115/19/39 metadata, recommendations for simple metadata content, and a proposal for a URI scheme to identify resources using resolvable HTTP URIs (see http://lab.usgin.org/usgin-profiles). The principal target use case is a catalog in which resources can be registered and described by data providers for discovery by users. We are currently using the ESRI Geoportal (Open Source), with configuration files for the USGIN profile. The metadata offered by the catalog must provide sufficient content to guide search engines to locate requested resources, to describe the resource content, provenance, and quality so users can determine if the resource will serve the intended usage, and finally to enable human users and software clients to obtain or access the resource. In order to achieve an operational federated catalog system, provisions in the ISO specification must be restricted and usage clarified to reduce the heterogeneity of 'standard' metadata and service implementations such that a single client can search against different catalogs, and the metadata returned by catalogs can be parsed reliably to locate required information. Usage of the complex ISO 19139 XML schema allows for a great deal of structured metadata content, but the heterogeneity in approaches to content encoding has hampered development of sophisticated client software that can take advantage of the rich metadata; the lack of such clients in turn reduces motivation for metadata producers to produce content-rich metadata. If the only significant use of the detailed, structured metadata is to format into text for people to read, then the detailed information could be put in free-text elements and be just as useful. In order for complex metadata encoding and content to be useful, there must be clear and unambiguous conventions on the encoding that are utilized by the community that wishes to take advantage of advanced metadata
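
    The pay-off of a restricted profile is that clients can parse records with simple, predictable paths. As a hedged illustration (not USGIN code), the snippet below pulls the citation title out of an ISO 19139 record using the standard ISO/TC 211 namespaces:

        # Illustrative ISO 19139 parsing; assumes a record with the usual gmd/gco namespaces.
        import xml.etree.ElementTree as ET

        NS = {
            "gmd": "http://www.isotc211.org/2005/gmd",
            "gco": "http://www.isotc211.org/2005/gco",
        }

        def dataset_title(iso_xml_text):
            """Return the first citation title found in an ISO 19139 record, or None."""
            root = ET.fromstring(iso_xml_text)
            elem = root.find(".//gmd:title/gco:CharacterString", NS)
            return elem.text if elem is not None else None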

  1. Scaling up ATLAS Event Service to production levels on opportunistic computing platforms

    CERN Document Server

    Benjamin, Douglas; The ATLAS collaboration; Ernst, Michael; Guan, Wen; Hover, John; Lesny, David; Maeno, Tadashi; Nilsson, Paul; Tsulaia, Vakhtang; van Gemmeren, Peter; Vaniachine, Alexandre; Wang, Fuquan; Wenaus, Torre

    2016-01-01

    Continued growth in public cloud and HPC resources is on track to overcome the dedicated resources available for ATLAS on the WLCG. Examples of such platforms are Amazon AWS EC2 Spot Instances, the Edison Cray XC30 supercomputer, backfill at the Tier-2 and Tier-3 sites, opportunistic resources at the Open Science Grid, and the ATLAS High Level Trigger farm between the data taking periods. Because of the specifics of opportunistic resources, such as preemptive job scheduling and data I/O, their efficient usage requires workflow innovations provided by the ATLAS Event Service. Thanks to the finer granularity of the Event Service data processing workflow, the opportunistic resources are used more efficiently. We report on our progress in scaling opportunistic resource usage to double-digit levels in ATLAS production.

  2. Scaling up ATLAS Event Service to production levels on opportunistic computing platforms

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00066086; The ATLAS collaboration; Caballero, Jose; Ernst, Michael; Guan, Wen; Hover, John; Lesny, David; Maeno, Tadashi; Nilsson, Paul; Tsulaia, Vakhtang; van Gemmeren, Peter; Vaniachine, Alexandre; Wang, Fuquan; Wenaus, Torre

    2016-01-01

    Continued growth in public cloud and HPC resources is on track to exceed the dedicated resources available for ATLAS on the WLCG. Examples of such platforms are Amazon AWS EC2 Spot Instances, Edison Cray XC30 supercomputer, backfill at Tier 2 and Tier 3 sites, opportunistic resources at the Open Science Grid (OSG), and ATLAS High Level Trigger farm between the data taking periods. Because of specific aspects of opportunistic resources such as preemptive job scheduling and data I/O, their efficient usage requires workflow innovations provided by the ATLAS Event Service. Thanks to the finer granularity of the Event Service data processing workflow, the opportunistic resources are used more efficiently. We report on our progress in scaling opportunistic resource usage to double-digit levels in ATLAS production.

  3. Interoperability Using Lightweight Metadata Standards: Service & Data Casting, OpenSearch, OPM Provenance, and Shared SciFlo Workflows

    Science.gov (United States)

    Wilson, B. D.; Manipon, G.; Hua, H.; Fetzer, E.

    2011-12-01

    Under several NASA grants, we are generating multi-sensor merged atmospheric datasets to enable the detection of instrument biases and studies of climate trends over decades of data. For example, under a NASA MEASURES grant we are producing a water vapor climatology from the A-Train instruments, stratified by the Cloudsat cloud classification for each geophysical scene. The generation and proper use of such multi-sensor climate data records (CDRs) requires a high level of openness, transparency, and traceability. To make the datasets self-documenting and provide access to full metadata and traceability, we have implemented a set of capabilities and services using known, interoperable protocols. These protocols include OpenSearch, OPeNDAP, Open Provenance Model, service & data casting technologies using Atom feeds, and REST-callable analysis workflows implemented as SciFlo (XML) documents. We advocate that our approach can serve as a blueprint for how to openly "document and serve" complex, multi-sensor CDRs with full traceability. The capabilities and services provided include: - Discovery of the collections by keyword search, exposed using OpenSearch protocol; - Space/time query across the CDRs' granules and all of the input datasets via OpenSearch; - User-level configuration of the production workflows so that scientists can select additional physical variables from the A-Train to add to the next iteration of the merged datasets; - Efficient data merging using on-the-fly OPeNDAP variable slicing & spatial subsetting of data out of input netCDF and HDF files (without moving the entire files); - Self-documenting CDRs published in a highly usable netCDF4 format with groups used to organize the variables, CF-style attributes for each variable, numeric array compression, & links to OPM provenance; - Recording of processing provenance and data lineage into a query-able provenance trail in Open Provenance Model (OPM) format, auto-captured by the workflow engine
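
    As a hedged illustration of the space/time granule search described above (the endpoint and parameter names are hypothetical; a real service advertises its own names in its OpenSearch description document), a query helper might look like this:

        # Illustrative OpenSearch-style space/time query; URL and parameter names are assumptions.
        import urllib.parse
        import urllib.request

        def search_granules(endpoint, bbox, start, end, keyword=""):
            """bbox is (west, south, east, north); start/end are ISO 8601 date strings."""
            params = urllib.parse.urlencode({
                "q": keyword,
                "bbox": ",".join(str(v) for v in bbox),
                "startTime": start,
                "endTime": end,
                "format": "atom",
            })
            with urllib.request.urlopen(f"{endpoint}?{params}") as resp:
                return resp.read().decode("utf-8")   # Atom feed of matching granules

        # feed = search_granules("https://data.example.org/opensearch",
        #                        (-130, 20, -60, 55), "2009-01-01", "2009-12-31", "water vapor")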

  4. Evolving Metadata in NASA Earth Science Data Systems

    Science.gov (United States)

    Mitchell, A.; Cechini, M. F.; Walter, J.

    2011-12-01

    NASA's Earth Observing System (EOS) is a coordinated series of satellites for long-term global observations. NASA's Earth Observing System Data and Information System (EOSDIS) is a petabyte-scale archive of environmental data that supports global climate change research by providing end-to-end services from EOS instrument data collection to science data processing to full access to EOS and other earth science data. On a daily basis, the EOSDIS ingests, processes, archives and distributes over 3 terabytes of data from NASA's Earth Science missions representing over 3500 data products spanning various science disciplines. EOSDIS currently comprises 12 discipline-specific data centers that are collocated with centers of science discipline expertise. Metadata is used in all aspects of NASA's Earth Science data lifecycle from the initial measurement gathering to the accessing of data products. Missions use metadata in their science data products when describing information such as the instrument/sensor, operational plan, and geographic region. Acting as the curator of the data products, data centers employ metadata for preservation, access and manipulation of data. EOSDIS provides a centralized metadata repository called the Earth Observing System (EOS) ClearingHouse (ECHO) for data discovery and access via a service-oriented architecture (SOA) between data centers and science data users. ECHO receives inventory metadata from data centers, which generate metadata files that comply with the ECHO Metadata Model. NASA's Earth Science Data and Information System (ESDIS) Project established a Tiger Team to study and make recommendations regarding the adoption of the international metadata standard ISO 19115 in EOSDIS. The result was a technical report recommending an evolution of NASA data systems towards a consistent application of ISO 19115 and related standards including the creation of a NASA-specific convention for core ISO 19115 elements. Part of

  5. Metadata

    CERN Document Server

    Zeng, Marcia Lei

    2016-01-01

    Metadata remains the solution for describing the explosively growing, complex world of digital information, and continues to be of paramount importance for information professionals. Providing a solid grounding in the variety and interrelationships among different metadata types, Zeng and Qin's thorough revision of their benchmark text offers a comprehensive look at the metadata schemas that exist in the world of library and information science and beyond, as well as the contexts in which they operate. Cementing its value as both an LIS text and a handy reference for professionals already in the field, this book: * Lays out the fundamentals of metadata, including principles of metadata, structures of metadata vocabularies, and metadata descriptions * Surveys metadata standards and their applications in distinct domains and for various communities of metadata practice * Examines metadata building blocks, from modelling to defining properties, and from designing application profiles to implementing value vocabu...

  6. Mercury- Distributed Metadata Management, Data Discovery and Access System

    Science.gov (United States)

    Palanisamy, Giri; Wilson, Bruce E.; Devarakonda, Ranjeet; Green, James M.

    2007-12-01

    Mercury is a federated metadata harvesting, search and retrieval tool based on both open source and ORNL-developed software. It was originally developed for NASA, and the Mercury development consortium now includes funding from NASA, USGS, and DOE. Mercury supports various metadata standards including XML, Z39.50, FGDC, Dublin-Core, Darwin-Core, EML, and ISO-19115 (under development). Mercury provides a single portal to information contained in disparate data management systems. It collects metadata and key data from contributing project servers distributed around the world and builds a centralized index. The Mercury search interfaces then allow the users to perform simple, fielded, spatial and temporal searches across these metadata sources. This centralized repository of metadata with distributed data sources provides extremely fast search results to the user, while allowing data providers to advertise the availability of their data and maintain complete control and ownership of that data. Mercury supports various projects including: ORNL DAAC, NBII, DADDI, LBA, NARSTO, CDIAC, OCEAN, I3N, IAI, ESIP and ARM. The new Mercury system is based on a Service Oriented Architecture and supports various services such as Thesaurus Service, Gazetteer Web Service and UDDI Directory Services. This system also provides various search services including: RSS, Geo-RSS, OpenSearch, Web Services and Portlets. Other features include: Filtering and dynamic sorting of search results, book-markable search results, save, retrieve, and modify search criteria.

  7. Digital Libraries that Demonstrate High Levels of Mutual Complementarity in Collection-level Metadata Give a Richer Representation of their Content and Improve Subject Access for Users

    Directory of Open Access Journals (Sweden)

    Aoife Lawton

    2014-12-01

    Full Text Available A Review of: Zavalina, O. L. (2013). Complementarity in subject metadata in large-scale digital libraries: A comparative analysis. Cataloging & Classification Quarterly, 52(1), 77-89. http://dx.doi.org/10.1080/01639374.2013.848316 Abstract Objective – To determine how well digital library content is represented through free-text and subject headings. Specifically, to examine whether a combination of free-text description data and controlled vocabulary is more comprehensive than free-text description data alone in describing digital collections. Design – Qualitative content analysis and complementarity comparison. Setting – Three large scale cultural heritage digital libraries: one in Europe and two in the United States of America. Methods – The researcher retrieved XML files of complete metadata records for two of the digital libraries, while the third library openly exposed its full metadata. The systematic samples obtained for all three libraries enabled qualitative content analysis to uncover how metadata values relate to each other at the collection level. The researcher retrieved 99 collection-level metadata records in total for analysis. The breakdown was 39, 33, and 27 records per digital library. When comparing metadata in the free-text Description metadata element with data in four controlled vocabulary elements, Subject, Geographic Coverage, Temporal Coverage and Object Type, the researcher observed three types of complementarity: one-way, two-way and multiple-complementarity. The author refers to complementarity as "describing a collection’s subject matter with mutually complementary data values in controlled vocabulary and free-text subject metadata elements" (Zavalina, 2013, p. 77). For example, within a Temporal Coverage metadata element the term "19th century" would complement a Description metadata element "1850–1899" in the same record. Main Results – The researcher found a high level of one

  8. The metadata manual a practical workbook

    CERN Document Server

    Lubas, Rebecca; Schneider, Ingrid

    2013-01-01

    Cultural heritage professionals have high levels of training in metadata. However, the institutions in which they practice often depend on support staff, volunteers, and students in order to function. With limited time and funding for training in metadata creation for digital collections, there are often many questions about metadata without a reliable, direct source for answers. The Metadata Manual provides such a resource, answering basic metadata questions that may appear, and exploring metadata from a beginner's perspective. This title covers metadata basics, XML basics, Dublin Core, VRA C

  9. NAIP National Metadata

    Data.gov (United States)

    Farm Service Agency, Department of Agriculture — The NAIP National Metadata Map contains USGS Quarter Quad and NAIP Seamline boundaries for every year NAIP imagery has been collected. Clicking on the map also makes...

  10. MMI's Metadata and Vocabulary Solutions: 10 Years and Growing

    Science.gov (United States)

    Graybeal, J.; Gayanilo, F.; Rueda-Velasquez, C. A.

    2014-12-01

    The Marine Metadata Interoperability project (http://marinemetadata.org) held its public opening at AGU's 2004 Fall Meeting. For 10 years since that debut, the MMI guidance and vocabulary sites have served over 100,000 visitors, with 525 community members and continuous Steering Committee leadership. Originally funded by the National Science Foundation, over the years multiple organizations have supported the MMI mission: "Our goal is to support collaborative research in the marine science domain, by simplifying the incredibly complex world of metadata into specific, straightforward guidance. MMI encourages scientists and data managers at all levels to apply good metadata practices from the start of a project, by providing the best guidance and resources for data management, and developing advanced metadata tools and services needed by the community." Now hosted by the Harte Research Institute at Texas A&M University at Corpus Christi, MMI continues to provide guidance and services to the community, and is planning for marine science and technology needs for the next 10 years. In this presentation we will highlight our major accomplishments, describe our recent achievements and imminent goals, and propose a vision for improving marine data interoperability for the next 10 years, including Ontology Registry and Repository (http://mmisw.org/orr) advancements and applications (http://mmisw.org/cfsn).

  11. Handbook of metadata, semantics and ontologies

    CERN Document Server

    Sicilia, Miguel-Angel

    2013-01-01

    Metadata research has emerged as a discipline cross-cutting many domains, focused on the provision of distributed descriptions (often called annotations) to Web resources or applications. Such associated descriptions are supposed to serve as a foundation for advanced services in many application areas, including search and location, personalization, federation of repositories and automated delivery of information. Indeed, the Semantic Web is in itself a concrete technological framework for ontology-based metadata. For example, Web-based social networking requires metadata describing people and

  12. The ATLAS Event Index: The Architecture of the Core Engine

    CERN Document Server

    Barberis, Dario; The ATLAS collaboration

    2017-01-01

    The global view of the ATLAS Event Index system was presented at the 17th ACAT Conference. This article concentrates on the architecture of the system's core component. This component handles the final stage of the event metadata import; it organizes its storage and provides fast and feature-rich access to all information. A user is able to interrogate the metadata in various ways, including by executing user-provided code on the data to make selections and to interpret the results. A wide spectrum of clients is available, from a set of Linux-like commands to an interactive graphical web service. The stored event metadata contain the basic description of the related events, the references to the experiment event storage and the full trigger record, and can be extended with other event characteristics. Derived collections of events can be created. Such collections can be annotated and tagged with further information.

  13. Deploying the ATLAS Metadata Interface (AMI) on the cloud with Jenkins

    Science.gov (United States)

    Lambert, F.; Odier, J.; Fulachier, J.; ATLAS Collaboration

    2017-10-01

    The ATLAS Metadata Interface (AMI) is a mature application with more than 15 years of existence. Mainly used by the ATLAS experiment at CERN, it consists of a very generic tool ecosystem for metadata aggregation and cataloguing. AMI is used by the ATLAS production system; therefore, the service must guarantee a high level of availability. We describe our monitoring and administration systems, and the Jenkins-based strategy used to dynamically test and deploy cloud OpenStack nodes on demand.

  14. Deploying the ATLAS Metadata Interface (AMI) on the cloud with Jenkins.

    CERN Document Server

    AUTHOR|(SzGeCERN)637120; The ATLAS collaboration; Odier, Jerome; Fulachier, Jerome

    2017-01-01

    The ATLAS Metadata Interface (AMI) is a mature application with more than 15 years of existence. Mainly used by the ATLAS experiment at CERN, it consists of a very generic tool ecosystem for metadata aggregation and cataloguing. AMI is used by the ATLAS production system; therefore, the service must guarantee a high level of availability. We describe our monitoring and administration systems, and the Jenkins-based strategy used to dynamically test and deploy cloud OpenStack nodes on demand.

  15. The ATLAS Event Index: The Architecture of the Core Engine

    CERN Document Server

    Barberis, Dario; The ATLAS collaboration; Hrivnac, Julius

    2017-01-01

    The global view of the ATLAS Event Index system was presented at the last ACAT. This talk will concentrate on the architecture of the system's core component. This component handles the final stage of the event metadata import; it organizes its storage and provides fast and feature-rich access to all information. A user is able to interrogate the metadata in various ways, including by executing user-provided code on the data to make selections and to interpret the results. A wide spectrum of clients is available, from a set of Linux-like commands to an interactive graphical web service. The stored event metadata contain the basic description of the related events, the references to the experiment event storage and the full trigger record, and can be extended with other event characteristics. Derived collections of events can be created. Such collections can be annotated and tagged with further information. This talk will describe all system sub-components and their development evolution, which led into the choi...

  16. ncISO Facilitating Metadata and Scientific Data Discovery

    Science.gov (United States)

    Neufeld, D.; Habermann, T.

    2011-12-01

    Increasing the usability and availability of climate and oceanographic datasets for environmental research requires improved metadata and tools to rapidly locate and access relevant information for an area of interest. Because of the distributed nature of most environmental geospatial data, a common approach is to use catalog services that support queries on metadata harvested from remote map and data services. A key component to effectively using these catalog services is the availability of high quality metadata associated with the underlying data sets. In this presentation, we examine the use of ncISO and Geoportal as open source tools that can be used to document and facilitate access to ocean and climate data available from Thematic Realtime Environmental Distributed Data Services (THREDDS) data services. Many atmospheric and oceanographic spatial data sets are stored in the Network Common Data Format (netCDF) and served through the Unidata THREDDS Data Server (TDS). NetCDF and THREDDS are becoming increasingly accepted in both the scientific and geographic research communities as demonstrated by the recent adoption of netCDF as an Open Geospatial Consortium (OGC) standard. One important source for ocean- and atmosphere-based data sets is NOAA's Unified Access Framework (UAF), which serves over 3000 gridded data sets from across NOAA and NOAA-affiliated partners. Due to the large number of datasets, browsing the data holdings to locate data is impractical. Working with Unidata, we have created a new service for the TDS called "ncISO", which allows automatic generation of ISO 19115-2 metadata from attributes and variables in TDS datasets. The ncISO metadata records can be harvested by catalog services such as ESSI-Lab's GI-Cat catalog service and ESRI's Geoportal, which supports querying through a number of services, including OpenSearch and Catalog Services for the Web (CSW). ESRI's Geoportal Server provides a number of user-friendly search capabilities for end users
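
    As a rough illustration of how a client might retrieve the ncISO output for a THREDDS dataset, the sketch below fetches an ISO record over HTTP. The host and the "/thredds/iso/" path pattern are assumptions about a typical TDS configuration, not a guaranteed layout.

        # Illustrative fetch of an ncISO-generated ISO 19115-2 record; host and path are assumptions.
        import urllib.request

        def fetch_iso_record(tds_host, dataset_path):
            url = f"{tds_host}/thredds/iso/{dataset_path}"
            with urllib.request.urlopen(url) as resp:
                return resp.read().decode("utf-8")   # ISO 19115-2 XML document

        # iso_xml = fetch_iso_record("https://tds.example.org", "grids/sst/analysis.nc")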

  17. Science friction: data, metadata, and collaboration.

    Science.gov (United States)

    Edwards, Paul N; Mayernik, Matthew S; Batcheller, Archer L; Bowker, Geoffrey C; Borgman, Christine L

    2011-10-01

    When scientists from two or more disciplines work together on related problems, they often face what we call 'science friction'. As science becomes more data-driven, collaborative, and interdisciplinary, demand increases for interoperability among data, tools, and services. Metadata--usually viewed simply as 'data about data', describing objects such as books, journal articles, or datasets--serve key roles in interoperability. Yet we find that metadata may be a source of friction between scientific collaborators, impeding data sharing. We propose an alternative view of metadata, focusing on its role in an ephemeral process of scientific communication, rather than as an enduring outcome or product. We report examples of highly useful, yet ad hoc, incomplete, loosely structured, and mutable, descriptions of data found in our ethnographic studies of several large projects in the environmental sciences. Based on this evidence, we argue that while metadata products can be powerful resources, usually they must be supplemented with metadata processes. Metadata-as-process suggests the very large role of the ad hoc, the incomplete, and the unfinished in everyday scientific work.

  18. ATLAS Metadata Infrastructure Evolution for Run 2 and Beyond

    CERN Document Server

    van Gemmeren, Peter; The ATLAS collaboration; Malon, David; Vaniachine, Alexandre

    2015-01-01

    ATLAS developed and employed for Run 1 of the Large Hadron Collider a sophisticated infrastructure for metadata handling in event processing jobs. This infrastructure profits from a rich feature set provided by the ATLAS execution control framework, including standardized interfaces and invocation mechanisms for tools and services, segregation of transient data stores with concomitant object lifetime management, and mechanisms for handling occurrences asynchronous to the control framework’s state machine transitions. This metadata infrastructure is evolving and being extended for Run 2 to allow its use and reuse in downstream physics analyses, analyses that may or may not utilize the ATLAS control framework. At the same time, multiprocessing versions of the control framework and the requirements of future multithreaded frameworks are leading to redesign of components that use an incident-handling approach to asynchrony. The increased use of scatter-gather architectures, both local and distributed, requires ...

  19. Metadata

    CERN Document Server

    Pomerantz, Jeffrey

    2015-01-01

    When "metadata" became breaking news, appearing in stories about surveillance by the National Security Agency, many members of the public encountered this once-obscure term from information science for the first time. Should people be reassured that the NSA was "only" collecting metadata about phone calls -- information about the caller, the recipient, the time, the duration, the location -- and not recordings of the conversations themselves? Or does phone call metadata reveal more than it seems? In this book, Jeffrey Pomerantz offers an accessible and concise introduction to metadata. In the era of ubiquitous computing, metadata has become infrastructural, like the electrical grid or the highway system. We interact with it or generate it every day. It is not, Pomerantz tell us, just "data about data." It is a means by which the complexity of an object is represented in a simpler form. For example, the title, the author, and the cover art are metadata about a book. When metadata does its job well, it fades i...

  20. Department of the Interior metadata implementation guide—Framework for developing the metadata component for data resource management

    Science.gov (United States)

    Obuch, Raymond C.; Carlino, Jennifer; Zhang, Lin; Blythe, Jonathan; Dietrich, Christopher; Hawkinson, Christine

    2018-04-12

    The Department of the Interior (DOI) is a Federal agency with over 90,000 employees across 10 bureaus and 8 agency offices. Its primary mission is to protect and manage the Nation’s natural resources and cultural heritage; provide scientific and other information about those resources; and honor its trust responsibilities or special commitments to American Indians, Alaska Natives, and affiliated island communities. Data and information are critical in day-to-day operational decision making and scientific research. DOI is committed to creating, documenting, managing, and sharing high-quality data and metadata in and across its various programs that support its mission. Documenting data through metadata is essential in realizing the value of data as an enterprise asset. The completeness, consistency, and timeliness of metadata affect users’ ability to search for and discover the most relevant data for the intended purpose, and facilitate the interoperability and usability of these data among DOI bureaus and offices. Fully documented metadata describe data usability, quality, accuracy, provenance, and meaning. Across DOI, there are different maturity levels and phases of information and metadata management implementations. The Department has organized a committee consisting of bureau-level points of contact to collaborate on the development of more consistent, standardized, and more effective metadata management practices and guidance to support this shared mission and the information needs of the Department. DOI’s metadata implementation plans establish key roles and responsibilities associated with metadata management processes, procedures, and a series of actions defined in three major metadata implementation phases including: (1) Getting started—Planning Phase, (2) Implementing and Maintaining Operational Metadata Management Phase, and (3) the Next Steps towards Improving Metadata Management Phase. DOI’s phased approach for metadata management addresses

  1. Deploying the ATLAS Metadata Interface (AMI) on the cloud with Jenkins

    CERN Document Server

    Lambert, Fabian; The ATLAS collaboration

    2016-01-01

    The ATLAS Metadata Interface (AMI) is a mature application with more than 15 years of existence. Mainly used by the ATLAS experiment at CERN, it consists of a very generic tool ecosystem for metadata aggregation and cataloguing. AMI is used by the ATLAS production system; therefore, the service must guarantee a high level of availability. We describe our monitoring system and the Jenkins-based strategy used to dynamically test and deploy cloud OpenStack nodes on demand. Moreover, we describe how to switch to a distant replica in case of downtime.

  2. Improving Metadata Compliance for Earth Science Data Records

    Science.gov (United States)

    Armstrong, E. M.; Chang, O.; Foster, D.

    2014-12-01

    One of the recurring challenges of creating earth science data records is to ensure a consistent level of metadata compliance at the granule level where important details of contents, provenance, producer, and data references are necessary to obtain a sufficient level of understanding. These details are important not just for individual data consumers but also for autonomous software systems. Two of the most popular metadata standards at the granule level are the Climate and Forecast (CF) Metadata Conventions and the Attribute Conventions for Dataset Discovery (ACDD). Many data producers have implemented one or both of these models including the Group for High Resolution Sea Surface Temperature (GHRSST) for their global SST products and the Ocean Biology Processing Group for NASA ocean color and SST products. While both the CF and ACDD models contain various levels of metadata richness, the actual "required" attributes are quite small in number. Metadata at the granule level becomes much more useful when recommended or optional attributes are implemented that document spatial and temporal ranges, lineage and provenance, sources, keywords, and references etc. In this presentation we report on a new open source tool to check the compliance of netCDF and HDF5 granules with the CF and ACDD metadata models. The tool, written in Python, was originally implemented to support metadata compliance for netCDF records as part of NOAA's Integrated Ocean Observing System. It outputs standardized scoring for metadata compliance for both CF and ACDD, produces an objective summary weight, and can be implemented for remote records via OPeNDAP calls. Originally a command-line tool, we have extended it to provide a user-friendly web interface. Reports on metadata testing are grouped in hierarchies that make it easier to track flaws and inconsistencies in the record. We have also extended it to support explicit metadata structures and semantic syntax for the GHRSST project that can be
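
    The idea behind granule-level scoring can be illustrated with a toy check of a few ACDD-recommended global attributes. This sketch is not the actual compliance-checker code, and the attribute list is only a sample:

        # Toy ACDD attribute check; not the actual compliance-checker implementation.
        from netCDF4 import Dataset

        ACDD_RECOMMENDED = ["title", "summary", "keywords", "license",
                            "time_coverage_start", "time_coverage_end",
                            "geospatial_lat_min", "geospatial_lat_max"]

        def acdd_score(path):
            """Return (fraction of sampled attributes present, list of missing attributes)."""
            ds = Dataset(path)
            try:
                attrs = set(ds.ncattrs())
            finally:
                ds.close()
            missing = [a for a in ACDD_RECOMMENDED if a not in attrs]
            return 1.0 - len(missing) / len(ACDD_RECOMMENDED), missing

        # score, missing = acdd_score("granule.nc")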

  3. Geo-metadata design for the GIS of the pre-selected site for China's high-level radioactive waste repository

    International Nuclear Information System (INIS)

    Zhong Xia; Wang Ju; Huang Shutao; Wang Shuhong; Gao Min

    2008-01-01

    The information system for the geological disposal of high-level radioactive waste aims at the integrated management and full application of multi-source information in the research for geological disposal of high-level radioactive waste. The establishment and operation of the system need the support of geo-metadata for this multi-source information. In this paper, on the basis of a geo-data analysis for the pre-selected site for disposal of high-level radioactive waste, we apply existing metadata standards and research and design the content, management pattern and application of geo-metadata for the multi-source information. (authors)

  4. Metadata in Scientific Dialects

    Science.gov (United States)

    Habermann, T.

    2011-12-01

    Discussions of standards in the scientific community have been compared to religious wars for many years. The only things scientists agree on in these battles are either "standards are not useful" or "everyone can benefit from using my standard". Instead of achieving the goal of facilitating interoperable communities, in many cases the standards have served to build yet another barrier between communities. Some important progress towards diminishing these obstacles has been made in the data layer with the merger of the NetCDF and HDF scientific data formats. The universal adoption of XML as the standard for representing metadata and the recent adoption of ISO metadata standards by many groups around the world suggests that similar convergence is underway in the metadata layer. At the same time, scientists and tools will likely need support for native tongues for some time. I will describe an approach that combines re-usable metadata "components" and restful web services that provide those components in many dialects. This approach uses advanced XML concepts of referencing and linking to construct complete records that include reusable components and builds on the ISO Standards as the "unabridged dictionary" that encompasses the content of many other dialects.

  5. The ATLAS EventIndex: data flow and inclusion of other metadata

    Science.gov (United States)

    Barberis, D.; Cárdenas Zárate, S. E.; Favareto, A.; Fernandez Casani, A.; Gallas, E. J.; Garcia Montoro, C.; Gonzalez de la Hoz, S.; Hrivnac, J.; Malon, D.; Prokoshin, F.; Salt, J.; Sanchez, J.; Toebbicke, R.; Yuan, R.; ATLAS Collaboration

    2016-10-01

    The ATLAS EventIndex is the catalogue of the event-related metadata for the information collected from the ATLAS detector. The basic unit of this information is the event record, containing the event identification parameters, pointers to the files containing this event as well as trigger decision information. The main use case for the EventIndex is event picking, as well as data consistency checks for large production campaigns. The EventIndex employs the Hadoop platform for data storage and handling, as well as a messaging system for the collection of information. The information for the EventIndex is collected both at Tier-0, when the data are first produced, and from the Grid, when various types of derived data are produced. The EventIndex uses various types of auxiliary information from other ATLAS sources for data collection and processing: trigger tables from the condition metadata database (COMA), dataset information from the data catalogue AMI and the Rucio data management system and information on production jobs from the ATLAS production system. The ATLAS production system is also used for the collection of event information from the Grid jobs. EventIndex developments started in 2012 and in the middle of 2015 the system was commissioned and started collecting event metadata, as a part of ATLAS Distributed Computing operations.

  6. Efficient processing of MPEG-21 metadata in the binary domain

    Science.gov (United States)

    Timmerer, Christian; Frank, Thomas; Hellwagner, Hermann; Heuer, Jörg; Hutter, Andreas

    2005-10-01

    XML-based metadata is widely adopted across the different communities and plenty of commercial and open source tools for processing and transforming it are available on the market. However, all of these tools have one thing in common: they operate on plain-text-encoded metadata, which may become a burden in constrained and streaming environments, i.e., when metadata needs to be processed together with multimedia content on the fly. In this paper we present an efficient approach for transforming such metadata, encoded using MPEG's Binary Format for Metadata (BiM), without additional en-/decoding overheads, i.e., within the binary domain. Therefore, we have developed an event-based push parser for BiM-encoded metadata, which transforms the metadata by a limited set of processing instructions - based on traditional XML transformation techniques - operating on bit patterns instead of cost-intensive string comparisons.
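
    The push-parsing style described here is the same one exposed by SAX parsers. As a hedged, text-domain analogue (plain XML rather than the binary BiM encoding), a streaming handler that extracts one field while the document flows past might look like this:

        # Text-domain analogue of event-based push parsing; illustrative only, not BiM code.
        import xml.sax

        class FieldCollector(xml.sax.ContentHandler):
            """Collect the text content of every element with a given tag name."""
            def __init__(self, wanted_tag):
                super().__init__()
                self.wanted_tag = wanted_tag
                self.inside = False
                self.values = []

            def startElement(self, name, attrs):
                if name == self.wanted_tag:
                    self.inside = True
                    self.values.append("")

            def characters(self, content):
                if self.inside:
                    self.values[-1] += content

            def endElement(self, name):
                if name == self.wanted_tag:
                    self.inside = False

        handler = FieldCollector("Title")
        xml.sax.parseString(b"<Mpeg7><Title>Clip A</Title><Title>Clip B</Title></Mpeg7>", handler)
        print(handler.values)   # ['Clip A', 'Clip B']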

  7. Prediction of Solar Eruptions Using Filament Metadata

    Science.gov (United States)

    Aggarwal, Ashna; Schanche, Nicole; Reeves, Katharine K.; Kempton, Dustin; Angryk, Rafal

    2018-05-01

    We perform a statistical analysis of erupting and non-erupting solar filaments to determine the properties related to the eruption potential. In order to perform this study, we correlate filament eruptions documented in the Heliophysics Event Knowledgebase (HEK) with HEK filaments that have been grouped together using a spatiotemporal tracking algorithm. The HEK provides metadata about each filament instance, including values for length, area, tilt, and chirality. We add additional metadata properties such as the distance from the nearest active region and the magnetic field decay index. We compare trends in the metadata from erupting and non-erupting filament tracks to discover which properties present signs of an eruption. We find that a change in filament length over time is the most important factor in discriminating between erupting and non-erupting filament tracks, with erupting tracks being more likely to have decreasing length. We attempt to find an ensemble of predictive filament metadata using a Random Forest Classifier approach, but find the probability of correctly predicting an eruption with the current metadata is only slightly better than chance.
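
    A hedged sketch of the classification setup described above, using scikit-learn's RandomForestClassifier on synthetic per-track features (the feature values and the labelling rule below are made up for illustration):

        # Synthetic Random Forest example; feature columns and labels are invented for illustration.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 5))          # per-track metadata features (synthetic)
        y = (X[:, 0] < -0.2).astype(int)       # 1 = erupting, 0 = non-erupting (toy rule)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

        print("accuracy:", clf.score(X_te, y_te))
        print("feature importances:", clf.feature_importances_)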

  8. Twenty-first century metadata operations challenges, opportunities, directions

    CERN Document Server

    Lee Eden, Bradford

    2014-01-01

    It has long been apparent to academic library administrators that the current technical services operations within libraries need to be redirected and refocused in terms of both format priorities and human resources. A number of developments and directions have made this reorganization imperative, many of which have been accelerated by the current economic crisis. All of the chapters detail some aspect of technical services reorganization due to downsizing and/or reallocation of human resources, retooling professional and support staff in higher level duties and/or non-MARC metadata, ""value-a

  9. Creating preservation metadata from XML-metadata profiles

    Science.gov (United States)

    Ulbricht, Damian; Bertelmann, Roland; Gebauer, Petra; Hasler, Tim; Klump, Jens; Kirchner, Ingo; Peters-Kottig, Wolfgang; Mettig, Nora; Rusch, Beate

    2014-05-01

    Registration of dataset DOIs at DataCite makes research data citable and comes with the obligation to keep data accessible in the future. In addition, many universities and research institutions measure data that is unique and not repeatable, like the data produced by an observational network, and they want to keep these data for future generations. In consequence, such data should be ingested into preservation systems that automatically care for file format changes. Open source preservation software developed along the definitions of the ISO OAIS reference model is available, but during ingest of data and metadata there are still problems to be solved. File format validation is difficult: format validators are not only remarkably slow, but due to the variety of file formats different validators also return conflicting identification profiles for identical data. These conflicts are hard to resolve. Preservation systems have a deficit in the support of custom metadata. Furthermore, data producers are sometimes not aware that quality metadata is a key issue for the re-use of data. In the project EWIG, a university institute and a research institute work together with Zuse-Institute Berlin, which acts as an infrastructure facility, to generate exemplary workflows for ingesting research data into OAIS-compliant archives, with emphasis on the geosciences. The Institute for Meteorology provides time series data from an urban monitoring network, whereas GFZ Potsdam delivers file-based data from research projects. To identify problems in existing preservation workflows, the technical work is complemented by interviews with data practitioners. Policies for handling data and metadata are developed. Furthermore, university teaching material is created to raise the future scientists' awareness of research data management. As a testbed for ingest workflows the digital preservation system Archivematica [1] is used. During the ingest process metadata is generated that is compliant to the

  10. ASDC Collaborations and Processes to Ensure Quality Metadata and Consistent Data Availability

    Science.gov (United States)

    Trapasso, T. J.

    2017-12-01

    With the introduction of new tools, faster computing, and less expensive storage, increased volumes of data are expected to be managed with existing or fewer resources. Metadata management is becoming a heightened challenge from the increase in data volume, resulting in more metadata records needing to be curated for each product. To address metadata availability and completeness, NASA ESDIS has taken significant strides with the creation of the Unified Metadata Model (UMM) and Common Metadata Repository (CMR). The UMM helps address hurdles arising from the increasing number of metadata dialects, and the CMR provides a primary repository for metadata so that required metadata fields can be served through a growing number of tools and services. However, metadata quality remains an issue, as metadata is not always inherent to the end-user. In response to these challenges, the NASA Atmospheric Science Data Center (ASDC) created the Collaboratory for quAlity Metadata Preservation (CAMP) and defined the Product Lifecycle Process (PLP) to work congruently. CAMP is unique in that it provides science team members a UI to directly supply metadata that is complete, compliant, and accurate for their data products. This replaces back-and-forth communication that often results in misinterpreted metadata. Upon review by ASDC staff, metadata is submitted to CMR for broader distribution through Earthdata. Further, approval of science team metadata in CAMP automatically triggers the ASDC PLP workflow to ensure appropriate services are applied throughout the product lifecycle. This presentation will review the design elements of CAMP and PLP as well as demonstrate interfaces to each. It will show the benefits that CAMP and PLP provide to the ASDC that could potentially benefit additional NASA Earth Science Data and Information System (ESDIS) Distributed Active Archive Centers (DAACs).

  11. Dyniqx: a novel meta-search engine for metadata based cross search

    OpenAIRE

    Zhu, Jianhan; Song, Dawei; Eisenstadt, Marc; Barladeanu, Cristi; Rüger, Stefan

    2008-01-01

    The effect of metadata in collection fusion has not been sufficiently studied. In response to this, we present a novel meta-search engine called Dyniqx for metadata-based cross search. Dyniqx exploits the availability of metadata in academic search services such as PubMed and Google Scholar for fusing search results from heterogeneous search engines. In addition, metadata from these search engines are used for generating dynamic query controls such as sliders and tick boxes, which are ...

  12. Metadata-Driven SOA-Based Application for Facilitation of Real-Time Data Warehousing

    Science.gov (United States)

    Pintar, Damir; Vranić, Mihaela; Skočir, Zoran

    Service-oriented architecture (SOA) has already been widely recognized as an effective paradigm for achieving integration of diverse information systems. SOA-based applications can cross boundaries of platforms, operating systems and proprietary data standards, commonly through the usage of Web Services technology. On the other hand, metadata is also commonly referred to as a potential integration tool given the fact that standardized metadata objects can provide useful information about specifics of unknown information systems with which one is interested in communicating, using an approach commonly called "model-based integration". This paper presents the result of research regarding possible synergy between those two integration facilitators. This is accomplished with a vertical example of a metadata-driven SOA-based business process that provides ETL (Extraction, Transformation and Loading) and metadata services to a data warehousing system in need of real-time ETL support.

  13. OAI-PMH repositories : quality issues regarding metadata and protocol compliance, tutorial 1

    CERN Multimedia

    CERN. Geneva; Cole, Tim

    2005-01-01

    This tutorial will provide an overview of emerging guidelines and best practices for OAI data providers and how they relate to expectations and needs of service providers. The audience should already be familiar with OAI protocol basics and have at least some experience with either data provider or service provider implementations. The speakers will present both protocol compliance best practices and general recommendations for creating and disseminating high-quality "shareable metadata". Protocol best practices discussion will include coverage of OAI identifiers, date-stamps, deleted records, sets, resumption tokens, about containers, branding, errors conditions, HTTP server issues, and repository lifecycle issues. Discussion of what makes for good, shareable metadata will cover topics including character encoding, namespace and XML schema issues, metadata crosswalk issues, support of multiple metadata formats, general metadata authoring recommendations, specific recommendations for use of Dublin Core elemen...
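
    A minimal harvesting loop using the standard OAI-PMH ListRecords verb, oai_dc metadata prefix and resumption tokens covered in the tutorial might look like the sketch below (the repository URL is hypothetical):

        # Minimal OAI-PMH harvesting loop; the repository base URL is hypothetical.
        import urllib.parse
        import urllib.request
        import xml.etree.ElementTree as ET

        OAI_NS = {"oai": "http://www.openarchives.org/OAI/2.0/"}

        def harvest(base_url, metadata_prefix="oai_dc"):
            """Yield <record> elements, following resumption tokens until exhausted."""
            params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
            while True:
                url = f"{base_url}?{urllib.parse.urlencode(params)}"
                with urllib.request.urlopen(url) as resp:
                    root = ET.fromstring(resp.read())
                yield from root.iter("{http://www.openarchives.org/OAI/2.0/}record")
                token = root.find(".//oai:resumptionToken", OAI_NS)
                if token is None or not (token.text or "").strip():
                    break
                params = {"verb": "ListRecords", "resumptionToken": token.text.strip()}

        # for rec in harvest("https://repository.example.org/oai"):
        #     ...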

  14. An Assessment of the Evolving Common Metadata Repository Standards for Airborne Field Campaigns

    Science.gov (United States)

    Northup, E. A.; Chen, G.; Early, A. B.; Beach, A. L., III; Walter, J.; Conover, H.

    2016-12-01

    The NASA Earth Venture Program has led to a dramatic increase in airborne observations, requiring updated data management practices with clearly defined data standards and protocols for metadata. While the current data management practices demonstrate some success in serving airborne science team data user needs, existing metadata models and standards such as NASA's Unified Metadata Model (UMM) for Collections (UMM-C) present challenges with respect to accommodating certain features of airborne science metadata. UMM is the model implemented in the Common Metadata Repository (CMR), which catalogs all metadata records for NASA's Earth Observing System Data and Information System (EOSDIS). One example of these challenges is with representation of spatial and temporal metadata. In addition, many airborne missions target a particular geophysical event, such as a developing hurricane. In such cases, metadata about the event is also important for understanding the data. While coverage of satellite missions is highly predictable based on orbit characteristics, airborne missions feature complicated flight patterns where measurements can be spatially and temporally discontinuous. Therefore, existing metadata models will need to be expanded for airborne measurements and sampling strategies. An Airborne Metadata Working Group was established under the auspices of NASA's Earth Science Data Systems Working Group (ESDSWG) to identify specific features of airborne metadata that cannot currently be represented in the UMM and to develop new recommendations. The group includes representation from airborne data users and providers. This presentation will discuss the challenges and recommendations in an effort to demonstrate how airborne metadata curation/management can be improved to streamline data ingest and discoverability to a broader user community.

  15. Metadata Dictionary Database: A Proposed Tool for Academic Library Metadata Management

    Science.gov (United States)

    Southwick, Silvia B.; Lampert, Cory

    2011-01-01

    This article proposes a metadata dictionary (MDD) be used as a tool for metadata management. The MDD is a repository of critical data necessary for managing metadata to create "shareable" digital collections. An operational definition of metadata management is provided. The authors explore activities involved in metadata management in…

  16. A Shared Infrastructure for Federated Search Across Distributed Scientific Metadata Catalogs

    Science.gov (United States)

    Reed, S. A.; Truslove, I.; Billingsley, B. W.; Grauch, A.; Harper, D.; Kovarik, J.; Lopez, L.; Liu, M.; Brandt, M.

    2013-12-01

    The vast amount of science metadata can be overwhelming and highly complex. Comprehensive analysis and sharing of metadata is difficult since institutions often publish to their own repositories. There are many disjoint standards used for publishing scientific data, making it difficult to discover and share information from different sources. Services that publish metadata catalogs often have different protocols, formats, and semantics. The research community is limited by the exclusivity of separate metadata catalogs and thus it is desirable to have federated search interfaces capable of unified search queries across multiple sources. Aggregation of metadata catalogs also enables users to critique metadata more rigorously. With these motivations in mind, the National Snow and Ice Data Center (NSIDC) and Advanced Cooperative Arctic Data and Information Service (ACADIS) implemented two search interfaces for the community. Both the NSIDC Search and ACADIS Arctic Data Explorer (ADE) use a common infrastructure which keeps maintenance costs low. The search clients are designed to make OpenSearch requests against Solr, an Open Source search platform. Solr applies indexes to specific fields of the metadata which in this instance optimizes queries containing keywords, spatial bounds and temporal ranges. NSIDC metadata is reused by both search interfaces but the ADE also brokers additional sources. Users can quickly find relevant metadata with minimal effort, which ultimately lowers costs for research. This presentation will highlight the reuse of data and code between NSIDC and ACADIS, discuss challenges and milestones for each project, and will identify creation and use of Open Source libraries.
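
    A hedged sketch of the kind of keyword-plus-temporal query such a search client might send to Solr (the core URL and field names are hypothetical; real deployments define their own schema):

        # Illustrative Solr query; core URL and field names are assumptions, not the NSIDC/ADE schema.
        import json
        import urllib.parse
        import urllib.request

        def solr_search(solr_core_url, keyword, start_date, end_date, rows=20):
            params = urllib.parse.urlencode({
                "q": keyword,
                "fq": f"temporal_start:[{start_date}T00:00:00Z TO {end_date}T23:59:59Z]",
                "rows": str(rows),
                "wt": "json",
            })
            with urllib.request.urlopen(f"{solr_core_url}/select?{params}") as resp:
                return json.load(resp)["response"]["docs"]

        # docs = solr_search("http://localhost:8983/solr/metadata", "sea ice", "2012-01-01", "2012-12-31")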

  17. Finding Atmospheric Composition (AC) Metadata

    Science.gov (United States)

    Strub, Richard F..; Falke, Stefan; Fiakowski, Ed; Kempler, Steve; Lynnes, Chris; Goussev, Oleg

    2015-01-01

    The Atmospheric Composition Portal (ACP) is an aggregator and curator of information related to remotely sensed atmospheric composition data and analysis. It uses existing tools and technologies and, where needed, enhances those capabilities to provide interoperable access, tools, and contextual guidance for scientists and value-adding organizations using remotely sensed atmospheric composition data. The initial focus is on Essential Climate Variables identified by the Global Climate Observing System: CH4, CO, CO2, NO2, O3, SO2 and aerosols. This poster addresses our efforts in building the ACP Data Table, an interface to help discover and understand remotely sensed data that are related to atmospheric composition science and applications. We harvested the GCMD, CWIC and GEOSS metadata catalogs using machine-to-machine technologies (OpenSearch, Web Services). We also manually investigated the plethora of CEOS data provider portals and other catalogs where those data might be aggregated. This poster presents our experience of the excellence, variety, and challenges we encountered. Conclusions: 1. The significant benefits that the major catalogs provide are their machine-to-machine tools like OpenSearch and Web Services rather than any GUI usability improvements, due to the large amount of data in their catalogs. 2. There is a trend at the large catalogs towards simulating small data provider portals through advanced services. 3. Populating metadata catalogs using ISO 19115 is too complex for users to do in a consistent way, difficult to parse visually or with XML libraries, and too complex for Java XML binders like CASTOR. 4. The ability to search for IDs first and then for data (GCMD and ECHO) is better for machine-to-machine operations, rather than the timeouts experienced when returning the entire metadata entry at once. 5. Metadata harvest and export activities between the major catalogs have led to a significant amount of duplication. (This is currently being addressed) 6. Most (if not all

  18. Development of RESTful services and map-based user interface tools for access and delivery of data and metadata from the Marine-Geo Digital Library

    Science.gov (United States)

    Morton, J. J.; Ferrini, V. L.

    2015-12-01

    The Marine Geoscience Data System (MGDS, www.marine-geo.org) operates an interactive digital data repository and metadata catalog that provides access to a variety of marine geology and geophysical data from throughout the global oceans. Its Marine-Geo Digital Library includes common marine geophysical data types and supporting data and metadata, as well as complementary long-tail data. The Digital Library also includes community data collections and custom data portals for the GeoPRISMS, MARGINS and Ridge2000 programs, for active source reflection data (Academic Seismic Portal), and for marine data acquired by the US Antarctic Program (Antarctic and Southern Ocean Data Portal). Ensuring that these data are discoverable not only through our own interfaces but also through standards-compliant web services is critical for enabling investigators to find data of interest. Over the past two years, MGDS has developed several new RESTful web services that enable programmatic access to metadata and data holdings. These web services are compliant with the EarthCube GeoWS Building Blocks specifications and are currently used to drive our own user interfaces. New web applications have also been deployed to provide a more intuitive user experience for searching, accessing and browsing metadata and data. Our new map-based search interface combines components of the Google Maps API with our web services for dynamic searching and exploration of geospatially constrained data sets. Direct introspection of nearly all data formats for hundreds of thousands of data files curated in the Marine-Geo Digital Library has allowed for precise geographic bounds, which allow geographic searches to an extent not previously possible. All MGDS map interfaces utilize the web services of the Global Multi-Resolution Topography (GMRT) synthesis for displaying global basemap imagery and for dynamically providing depth values at the cursor location.

  19. Large-Scale Data Collection Metadata Management at the National Computation Infrastructure

    Science.gov (United States)

    Wang, J.; Evans, B. J. K.; Bastrakova, I.; Ryder, G.; Martin, J.; Duursma, D.; Gohar, K.; Mackey, T.; Paget, M.; Siddeswara, G.

    2014-12-01

    Data Collection management has become an essential activity at the National Computation Infrastructure (NCI) in Australia. NCI's partners (CSIRO, Bureau of Meteorology, Australian National University, and Geoscience Australia), supported by the Australian Government and Research Data Storage Infrastructure (RDSI), have established a national data resource that is co-located with high-performance computing. This paper addresses the metadata management of these data assets over their lifetime. NCI manages 36 data collections (10+ PB) categorised as earth system sciences, climate and weather model data assets and products, earth and marine observations and products, geosciences, terrestrial ecosystem, water management and hydrology, astronomy, social science and biosciences. The data is largely sourced from NCI partners, the custodians of many of the national scientific records, and major research community organisations. The data is made available in an HPC and data-intensive environment - a ~56000 core supercomputer, virtual labs on a 3000 core cloud system, and data services. By assembling these large national assets, new opportunities have arisen to harmonise the data collections, making a powerful cross-disciplinary resource. To support the overall management, a Data Management Plan (DMP) has been developed to record the workflows, procedures, the key contacts and responsibilities. The DMP has fields that can be exported to the ISO 19115 schema and to the collection level catalogue of GeoNetwork. The subset or file level metadata catalogues are linked with the collection level through parent-child relationship definition using UUID. A number of tools have been developed that support interactive metadata management, bulk loading of data, and support for computational workflows or data pipelines. NCI creates persistent identifiers for each of the assets. The data collection is tracked over its lifetime, and the recognition of the data providers, data owners, data

  20. A data skimming service for locally resident analysis data

    International Nuclear Information System (INIS)

    Cranshaw, J; Gieraltowski, J; Malon, D; May, E; Gardner, R W; Mambelli, M

    2008-01-01

    A Data Skimming Service (DSS) is a site-level service for rapid event filtering and selection from locally resident datasets based on metadata queries to associated 'tag' databases. In US ATLAS, we expect most if not all of the AOD-based datasets to be replicated to each of the five Tier 2 regional facilities in the US Tier 1 'cloud' coordinated by Brookhaven National Laboratory. Entire datasets will consist of on the order of several terabytes of data, and providing easy, quick access to skimmed subsets of these data will be vital to physics working groups. Typically, physicists will be interested in portions of the complete datasets, selected according to event-level attributes (number of jets, missing Et, etc) and content (specific analysis objects for subsequent processing). In this paper we describe methods used to classify data (metadata tag generation) and to store these results in a local database. Next we discuss a general framework which includes methods for accessing this information, defining skims, specifying event output content, accessing locally available storage through a variety of interfaces (SRM, dCache/dccp, gridftp), accessing remote storage elements as specified, and user job submission tools through local or grid schedulers. The advantages of the DSS are the ability to quickly 'browse' datasets and design skims, for example, pre-adjusting cuts to get to a desired skim level with minimal use of compute resources, and to encode these analysis operations in a database for re-analysis and archival purposes. Additionally the framework has provisions to operate autonomously in the event that external, central resources are not available, and to provide, as a reduced package, a minimal skimming service tailored to the needs of small Tier 3 centres or individual users
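
    The tag-query-driven skim definition described above can be pictured with a small sketch, using SQLite as a stand-in for the local tag database. The table and attribute names (event_tags, njet, missing_et, file_guid) and the cut values are illustrative assumptions, not the actual DSS schema.

    # Sketch of selecting events for a skim from a local event-level tag database.
    import sqlite3

    conn = sqlite3.connect("local_tags.db")   # hypothetical site-level tag database
    conn.execute("""CREATE TABLE IF NOT EXISTS event_tags (
                        run INTEGER, event INTEGER,
                        njet INTEGER, missing_et REAL,
                        file_guid TEXT)""")

    def define_skim(min_jets, min_met):
        """Return (run, event, file_guid) for events passing the skim cuts."""
        cur = conn.execute(
            "SELECT run, event, file_guid FROM event_tags "
            "WHERE njet >= ? AND missing_et >= ?",
            (min_jets, min_met))
        return cur.fetchall()

    # Cuts can be pre-adjusted cheaply here before any event data are read.
    selected = define_skim(min_jets=2, min_met=40.0)
    print(f"{len(selected)} events selected for the skim")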

  1. THE NEW ONLINE METADATA EDITOR FOR GENERATING STRUCTURED METADATA

    Energy Technology Data Exchange (ETDEWEB)

    Devarakonda, Ranjeet [ORNL; Shrestha, Biva [ORNL; Palanisamy, Giri [ORNL; Hook, Leslie A [ORNL; Killeffer, Terri S [ORNL; Boden, Thomas A [ORNL; Cook, Robert B [ORNL; Zolly, Lisa [United States Geological Service (USGS); Hutchison, Viv [United States Geological Service (USGS); Frame, Mike [United States Geological Service (USGS); Cialella, Alice [Brookhaven National Laboratory (BNL); Lazer, Kathy [Brookhaven National Laboratory (BNL)

    2014-01-01

    Nobody is better suited to describe data than the scientist who created it. This description of a dataset is called metadata. In general terms, metadata represents the who, what, when, where, why, and how of the dataset [1]. eXtensible Markup Language (XML) is the preferred output format for metadata, as it makes the metadata portable and, more importantly, suitable for system discoverability. The newly developed ORNL Metadata Editor (OME) is a Web-based tool that allows users to create and maintain XML files containing key information, or metadata, about their research. Metadata include information about the specific projects, parameters, time periods, and locations associated with the data. Such information helps put the research findings in context. In addition, the metadata produced using OME will allow other researchers to find these data via metadata clearinghouses like Mercury [2][4]. OME is part of ORNL's Mercury software fleet [2][3]. It was jointly developed to support projects funded by the United States Geological Survey (USGS), U.S. Department of Energy (DOE), National Aeronautics and Space Administration (NASA), and National Oceanic and Atmospheric Administration (NOAA). OME's architecture provides a customizable interface to support project-specific requirements. Using this new architecture, the ORNL team developed OME instances for USGS's Core Science Analytics, Synthesis, and Libraries (CSAS&L), DOE's Next Generation Ecosystem Experiments (NGEE) and Atmospheric Radiation Measurement (ARM) Program, and the international Surface Ocean Carbon Dioxide ATlas (SOCAT). Researchers simply use the ORNL Metadata Editor to enter relevant metadata into a Web-based form. From the information on the form, the Metadata Editor can create an XML file on the server where the editor is installed or on the user's personal computer. Researchers can also use the ORNL Metadata Editor to modify existing XML metadata files. As an example, an NGEE Arctic scientist uses OME to register

  2. Evolution in Metadata Quality: Common Metadata Repository's Role in NASA Curation Efforts

    Science.gov (United States)

    Gilman, Jason; Shum, Dana; Baynes, Katie

    2016-01-01

    Metadata Quality is one of the chief drivers of discovery and use of NASA EOSDIS (Earth Observing System Data and Information System) data. Issues with metadata such as lack of completeness, inconsistency, and use of legacy terms directly hinder data use. As the central metadata repository for NASA Earth Science data, the Common Metadata Repository (CMR) has a responsibility to its users to ensure the quality of CMR search results. This poster covers how we use humanizers, a technique for dealing with the symptoms of metadata issues, as well as our plans for future metadata validation enhancements. The CMR currently indexes 35K collections and 300M granules.

  3. Distributed metadata servers for cluster file systems using shared low latency persistent key-value metadata store

    Science.gov (United States)

    Bent, John M.; Faibish, Sorin; Pedone, Jr., James M.; Tzelnic, Percy; Ting, Dennis P. J.; Ionkov, Latchesar A.; Grider, Gary

    2017-12-26

    A cluster file system is provided having a plurality of distributed metadata servers with shared access to one or more shared low latency persistent key-value metadata stores. A metadata server comprises an abstract storage interface comprising a software interface module that communicates with at least one shared persistent key-value metadata store providing a key-value interface for persistent storage of key-value metadata. The software interface module provides the key-value metadata to the at least one shared persistent key-value metadata store in a key-value format. The shared persistent key-value metadata store is accessed by a plurality of metadata servers. A metadata request can be processed by a given metadata server independently of other metadata servers in the cluster file system. A distributed metadata storage environment is also disclosed that comprises a plurality of metadata servers having an abstract storage interface to at least one shared persistent key-value metadata store.
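
    The abstract storage interface described here can be pictured as a thin key-value layer shared by all metadata servers. The following is a schematic sketch under assumed names, not the patented implementation: an in-memory dictionary stands in for the shared low-latency persistent store.

    # Schematic abstract storage interface for key-value metadata (illustrative only).
    from abc import ABC, abstractmethod

    class KeyValueMetadataStore(ABC):
        """Interface to a shared, low-latency persistent key-value metadata store."""

        @abstractmethod
        def put(self, key: bytes, value: bytes) -> None: ...

        @abstractmethod
        def get(self, key: bytes) -> bytes: ...

    class InMemoryStore(KeyValueMetadataStore):
        """Stand-in backend so the sketch runs; a real deployment would use a
        persistent store shared by every metadata server."""
        def __init__(self):
            self._data = {}
        def put(self, key, value):
            self._data[key] = value
        def get(self, key):
            return self._data[key]

    class MetadataServer:
        """Translates file-system metadata operations into key-value requests; servers
        can answer requests independently because the underlying store is shared."""
        def __init__(self, store: KeyValueMetadataStore):
            self.store = store
        def set_attr(self, path, attr, value):
            self.store.put(f"{path}#{attr}".encode(), value.encode())
        def get_attr(self, path, attr):
            return self.store.get(f"{path}#{attr}".encode()).decode()

    shared = InMemoryStore()
    server_a, server_b = MetadataServer(shared), MetadataServer(shared)
    server_a.set_attr("/data/run1.dat", "owner", "alice")
    print(server_b.get_attr("/data/run1.dat", "owner"))   # reads what server_a wrote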

  4. New developments in file-based infrastructure for ATLAS event selection

    Energy Technology Data Exchange (ETDEWEB)

    Gemmeren, P van; Malon, D M [Argonne National Laboratory, Argonne, Illinois 60439 (United States); Nowak, M, E-mail: gemmeren@anl.go [Brookhaven National Laboratory, Upton, NY 11973-5000 (United States)

    2010-04-01

    In ATLAS software, TAGs are event metadata records that can be stored in various technologies, including ROOT files and relational databases. TAGs are used to identify and extract events that satisfy certain selection predicates, which can be coded as SQL-style queries. TAG collection files support in-file metadata to store information describing all events in the collection. Event Selector functionality has been augmented to provide such collection-level metadata to subsequent algorithms. The ATLAS I/O framework has been extended to allow computational processing of TAG attributes to select or reject events without reading the event data. This capability enables physicists to use more detailed selection criteria than are feasible in an SQL query. For example, the TAGs contain enough information not only to check the number of electrons, but also to calculate their distance to the closest jet, a calculation that would be difficult to express in SQL. Another new development allows ATLAS to write TAGs directly into event data files. This feature can improve performance by supporting advanced event selection capabilities, including computational processing of TAG information, without the need for external TAG file or database access.
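
    The electron-to-closest-jet example can be sketched as an event-level predicate evaluated purely on TAG attributes. The attribute names (electron_eta, electron_phi, jet_eta, jet_phi) and the cut values are illustrative assumptions, not the actual ATLAS TAG schema.

    # Illustrative computational selection on TAG attributes (assumed attribute names).
    import math

    def delta_r(eta1, phi1, eta2, phi2):
        """Angular separation used to measure the electron-jet distance."""
        dphi = abs(phi1 - phi2)
        if dphi > math.pi:
            dphi = 2 * math.pi - dphi
        return math.hypot(eta1 - eta2, dphi)

    def select_event(tag, min_electrons=1, min_dr=0.4):
        """Accept the event if every electron is isolated from all jets."""
        electrons = list(zip(tag["electron_eta"], tag["electron_phi"]))
        jets = list(zip(tag["jet_eta"], tag["jet_phi"]))
        if len(electrons) < min_electrons:
            return False
        for e_eta, e_phi in electrons:
            closest = min((delta_r(e_eta, e_phi, j_eta, j_phi) for j_eta, j_phi in jets),
                          default=float("inf"))
            if closest < min_dr:
                return False
        return True

    example_tag = {"electron_eta": [0.5], "electron_phi": [1.2],
                   "jet_eta": [2.0, -1.1], "jet_phi": [0.3, 2.8]}
    print(select_event(example_tag))   # decided without reading the event data itself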

  5. Harvesting NASA's Common Metadata Repository

    Science.gov (United States)

    Shum, D.; Mitchell, A. E.; Durbin, C.; Norton, J.

    2017-12-01

    As part of NASA's Earth Observing System Data and Information System (EOSDIS), the Common Metadata Repository (CMR) stores metadata for over 30,000 datasets from both NASA and international providers along with over 300M granules. This metadata enables sub-second discovery and facilitates data access. While the CMR offers a robust temporal, spatial and keyword search functionality to the general public and international community, it is sometimes more desirable for international partners to harvest the CMR metadata and merge the CMR metadata into a partner's existing metadata repository. This poster will focus on best practices to follow when harvesting CMR metadata to ensure that any changes made to the CMR can also be updated in a partner's own repository. Additionally, since each partner has distinct metadata formats they are able to consume, the best practices will also include guidance on retrieving the metadata in the desired metadata format using CMR's Unified Metadata Model translation software.
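
    As an example of the programmatic harvesting discussed here, the sketch below pages through CMR collection search results in JSON. It assumes the public CMR search endpoint, commonly used parameters (keyword, page_size, page_num), and a feed/entry response layout; these should be verified against the current CMR API documentation.

    # Paged harvest of collection metadata from the CMR search API (details assumed).
    import json
    import urllib.parse
    import urllib.request

    CMR_COLLECTIONS = "https://cmr.earthdata.nasa.gov/search/collections.json"

    def harvest_collections(keyword, page_size=50, max_pages=5):
        """Yield (concept id, title) pairs for collections matching a keyword."""
        for page_num in range(1, max_pages + 1):
            params = urllib.parse.urlencode(
                {"keyword": keyword, "page_size": page_size, "page_num": page_num})
            with urllib.request.urlopen(f"{CMR_COLLECTIONS}?{params}") as resp:
                entries = json.loads(resp.read().decode("utf-8"))["feed"].get("entry", [])
            if not entries:
                break
            for entry in entries:
                yield entry.get("id"), entry.get("title")

    for concept_id, title in harvest_collections("sea surface temperature"):
        print(concept_id, title)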

  6. OlyMPUS - The Ontology-based Metadata Portal for Unified Semantics

    Science.gov (United States)

    Huffer, E.; Gleason, J. L.

    2015-12-01

    The Ontology-based Metadata Portal for Unified Semantics (OlyMPUS), funded by the NASA Earth Science Technology Office Advanced Information Systems Technology program, is an end-to-end system designed to support data consumers and data providers, enabling the latter to register their data sets and provision them with the semantically rich metadata that drives the Ontology-Driven Interactive Search Environment for Earth Sciences (ODISEES). OlyMPUS leverages the semantics and reasoning capabilities of ODISEES to provide data producers with a semi-automated interface for producing the semantically rich metadata needed to support ODISEES' data discovery and access services. It integrates the ODISEES metadata search system with multiple NASA data delivery tools to enable data consumers to create customized data sets for download to their computers, or for NASA Advanced Supercomputing (NAS) facility registered users, directly to NAS storage resources for access by applications running on NAS supercomputers. A core function of NASA's Earth Science Division is research and analysis that uses the full spectrum of data products available in NASA archives. Scientists need to perform complex analyses that identify correlations and non-obvious relationships across all types of Earth System phenomena. Comprehensive analytics are hindered, however, by the fact that many Earth science data products are disparate and hard to synthesize. Variations in how data are collected, processed, gridded, and stored, create challenges for data interoperability and synthesis, which are exacerbated by the sheer volume of available data. Robust, semantically rich metadata can support tools for data discovery and facilitate machine-to-machine transactions with services such as data subsetting, regridding, and reformatting. Such capabilities are critical to enabling the research activities integral to NASA's strategic plans. However, as metadata requirements increase and competing standards emerge

  7. Visual pattern discovery in timed event data

    Science.gov (United States)

    Schaefer, Matthias; Wanner, Franz; Mansmann, Florian; Scheible, Christian; Stennett, Verity; Hasselrot, Anders T.; Keim, Daniel A.

    2011-01-01

    Business processes have tremendously changed the way large companies conduct their business: The integration of information systems into the workflows of their employees ensures a high service level and thus high customer satisfaction. One core aspect of business process engineering is events that steer the workflows and trigger internal processes. Strict requirements on interval-scaled temporal patterns, which are common in time series, are thereby relaxed through the ordinal character of such events. It is this additional degree of freedom that opens unexplored possibilities for visualizing event data. In this paper, we present a flexible and novel system to find significant events, event clusters and event patterns. Each event is represented as a small rectangle, which is colored according to categorical, ordinal or interval-scaled metadata. Depending on the analysis task, different layout functions are used to highlight either the ordinal character of the data or temporal correlations. The system has built-in features for ordering customers or event groups according to the similarity of their event sequences, temporal gap alignment and stacking of co-occurring events. Two characteristically different case studies dealing with business process events and news articles demonstrate the capabilities of our system to explore event data.

  8. DESIGN OF A METADATA SYSTEM FOR A DATA WAREHOUSE WITH A REVENUE TRACKING CASE STUDY AT PT. TELKOM DIVRE V JAWA TIMUR

    Directory of Open Access Journals (Sweden)

    Yudhi Purwananto

    2004-07-01

    Full Text Available A data warehouse is an enterprise data store that gathers data from a variety of systems and can be used for purposes such as analysis and reporting. At PT Telkom Divre V Jawa Timur, a data warehouse called the Regional Database has been built. The Regional Database requires an essential data warehouse component, namely metadata. The simple definition of metadata is "data about data". In this study, a metadata system is designed, with Revenue Tracking as a case study, as the analysis and reporting component of the Regional Database. Metadata is indispensable for managing and providing information about the data warehouse. The processes within the data warehouse and the components related to it must be integrated with one another to realise the data warehouse characteristics of being subject-oriented, integrated, time-variant, and non-volatile. Metadata must therefore also be able to exchange information between the components of the data warehouse. Web services are used as this exchange mechanism. Web services communicate using XML technology and the HTTP protocol. With web services, each component

  9. Improvements to the Ontology-based Metadata Portal for Unified Semantics (OlyMPUS)

    Science.gov (United States)

    Linsinbigler, M. A.; Gleason, J. L.; Huffer, E.

    2016-12-01

    The Ontology-based Metadata Portal for Unified Semantics (OlyMPUS), funded by the NASA Earth Science Technology Office Advanced Information Systems Technology program, is an end-to-end system designed to support Earth Science data consumers and data providers, enabling the latter to register data sets and provision them with the semantically rich metadata that drives the Ontology-Driven Interactive Search Environment for Earth Sciences (ODISEES). OlyMPUS complements the ODISEES data discovery system with an intelligent tool to enable data producers to auto-generate semantically enhanced metadata and upload it to the metadata repository that drives ODISEES. Like ODISEES, the OlyMPUS metadata provisioning tool leverages robust semantics, a NoSQL database and query engine, an automated reasoning engine that performs first- and second-order deductive inferencing, and uses a controlled vocabulary to support data interoperability and automated analytics. The ODISEES data discovery portal leverages this metadata to provide a seamless data discovery and access experience for data consumers who are interested in comparing and contrasting the multiple Earth science data products available across NASA data centers. OlyMPUS will support scientists' services and tools for performing complex analyses and identifying correlations and non-obvious relationships across all types of Earth System phenomena using the full spectrum of NASA Earth Science data available. By providing an intelligent discovery portal that supplies users - both human users and machines - with detailed information about data products, their contents and their structure, ODISEES will reduce the level of effort required to identify and prepare large volumes of data for analysis. This poster will explain how OlyMPUS leverages deductive reasoning and other technologies to create an integrated environment for generating and exploiting semantically rich metadata.

  10. Making Interoperability Easier with the NASA Metadata Management Tool

    Science.gov (United States)

    Shum, D.; Reese, M.; Pilone, D.; Mitchell, A. E.

    2016-12-01

    ISO 19115 has enabled interoperability amongst tools, yet many users find it hard to build ISO metadata for their collections because it can be large and overly flexible for their needs. The Metadata Management Tool (MMT), part of NASA's Earth Observing System Data and Information System (EOSDIS), offers users a modern, easy-to-use, browser-based tool to develop ISO-compliant metadata. Through a simplified UI experience, metadata curators can create and edit collections without any understanding of the complex ISO-19115 format, while still generating compliant metadata. The MMT is also able to assess the completeness of collection level metadata by evaluating it against a variety of metadata standards. The tool provides users with clear guidance on how to change their metadata in order to improve its quality and compliance. It is based on NASA's Unified Metadata Model for Collections (UMM-C), a simpler metadata model that can be cleanly mapped to ISO 19115. This allows metadata authors and curators to meet ISO compliance requirements faster and more accurately. The MMT and UMM-C have been developed in an agile fashion, with recurring end user tests and reviews to continually refine the tool, the model and the ISO mappings. This process is allowing for continual improvement and evolution to meet the community's needs.

  11. Production experience with the ATLAS Event Service

    Science.gov (United States)

    Benjamin, D.; Calafiura, P.; Childers, T.; De, K.; Guan, W.; Maeno, T.; Nilsson, P.; Tsulaia, V.; Van Gemmeren, P.; Wenaus, T.; ATLAS Collaboration

    2017-10-01

    The ATLAS Event Service (AES) has been designed and implemented for efficient running of ATLAS production workflows on a variety of computing platforms, ranging from conventional Grid sites to opportunistic, often short-lived resources, such as spot market commercial clouds, supercomputers and volunteer computing. The Event Service architecture allows real time delivery of fine grained workloads to running payload applications which process dispatched events or event ranges and immediately stream the outputs to highly scalable Object Stores. Thanks to its agile and flexible architecture the AES is currently being used by grid sites for assigning low priority workloads to otherwise idle computing resources; similarly harvesting HPC resources in an efficient back-fill mode; and massively scaling out to the 50-100k concurrent core level on the Amazon spot market to efficiently utilize those transient resources for peak production needs. Platform ports in development include ATLAS@Home (BOINC) and the Google Compute Engine, and a growing number of HPC platforms. After briefly reviewing the concept and the architecture of the Event Service, we will report the status and experience gained in AES commissioning and production operations on supercomputers, and our plans for extending ES application beyond Geant4 simulation to other workflows, such as reconstruction and data analysis.
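
    The fine-grained, event-range workflow can be pictured with a deliberately simplified sketch: a worker repeatedly takes a small event range, processes one event at a time, and streams each output straight to an object store, so little work is lost if the resource vanishes. The queue, function names and payloads are invented for illustration and are not the AES interfaces.

    # Schematic event-service worker loop (names and payloads are illustrative only).
    import queue

    work_queue = queue.Queue()                     # stands in for the central dispatcher
    for start in range(0, 100, 10):
        work_queue.put((start, start + 10))        # fine-grained event ranges

    object_store = {}                              # stands in for a scalable object store

    def process_event(event_number):
        """Placeholder for the payload application processing a single event."""
        return f"output-of-event-{event_number}"

    while not work_queue.empty():
        first, last = work_queue.get()             # 1. fetch a dispatched event range
        for event in range(first, last):
            result = process_event(event)          # 2. process one event at a time
            # 3. stream the per-event output out immediately
            object_store[f"range-{first}-{last}/evt-{event}"] = result

    print(len(object_store), "per-event outputs stored")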

  12. Production experience with the ATLAS Event Service

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00066086; The ATLAS collaboration; Calafiura, Paolo; Childers, John Taylor; De, Kaushik; Guan, Wen; Maeno, Tadashi; Nilsson, Paul; Tsulaia, Vakhtang; van Gemmeren, Peter; Wenaus, Torre

    2017-01-01

    The ATLAS Event Service (AES) has been designed and implemented for efficient running of ATLAS production workflows on a variety of computing platforms, ranging from conventional Grid sites to opportunistic, often short-lived resources, such as spot market commercial clouds, supercomputers and volunteer computing. The Event Service architecture allows real time delivery of fine grained workloads to running payload applications which process dispatched events or event ranges and immediately stream the outputs to highly scalable Object Stores. Thanks to its agile and flexible architecture the AES is currently being used by grid sites for assigning low priority workloads to otherwise idle computing resources; similarly harvesting HPC resources in an efficient back-fill mode; and massively scaling out to the 50-100k concurrent core level on the Amazon spot market to efficiently utilize those transient resources for peak production needs. Platform ports in development include ATLAS@Home (BOINC) and the Google Comp...

  13. Production Experience with the ATLAS Event Service

    CERN Document Server

    Benjamin, Douglas; The ATLAS collaboration

    2016-01-01

    The ATLAS Event Service (ES) has been designed and implemented for efficient running of ATLAS production workflows on a variety of computing platforms, ranging from conventional Grid sites to opportunistic, often short-lived resources, such as spot market commercial clouds, supercomputers and volunteer computing. The Event Service architecture allows real time delivery of fine grained workloads to running payload applications which process dispatched events or event ranges and immediately stream the outputs to highly scalable Object Stores. Thanks to its agile and flexible architecture the ES is currently being used by grid sites for assigning low priority workloads to otherwise idle computing resources; similarly harvesting HPC resources in an efficient back-fill mode; and massively scaling out to the 50-100k concurrent core level on the Amazon spot market to efficiently utilize those transient resources for peak production needs. Platform ports in development include ATLAS@Home (BOINC) and the Google Comput...

  14. Metadata Realities for Cyberinfrastructure: Data Authors as Metadata Creators

    Science.gov (United States)

    Mayernik, Matthew Stephen

    2011-01-01

    As digital data creation technologies become more prevalent, data and metadata management are necessary to make data available, usable, sharable, and storable. Researchers in many scientific settings, however, have little experience or expertise in data and metadata management. In this dissertation, I explore the everyday data and metadata…

  15. Improving Earth Science Metadata: Modernizing ncISO

    Science.gov (United States)

    O'Brien, K.; Schweitzer, R.; Neufeld, D.; Burger, E. F.; Signell, R. P.; Arms, S. C.; Wilcox, K.

    2016-12-01

    ncISO is a package of tools developed at NOAA's National Centers for Environmental Information (NCEI) that facilitates the generation of ISO 19115-2 metadata from NetCDF data sources. The tool currently exists in two iterations: a command line utility and a web-accessible service within the THREDDS Data Server (TDS). Several projects, including NOAA's Unified Access Framework (UAF), depend upon ncISO to generate the ISO-compliant metadata from their data holdings and use the resulting information to populate discovery tools such as NCEI's ESRI Geoportal and NOAA's data.noaa.gov CKAN system. In addition to generating ISO 19115-2 metadata, the tool calculates a rubric score based on how well the dataset follows the Attribute Conventions for Dataset Discovery (ACDD). The result of this rubric calculation, along with information about what has been included and what is missing, is displayed in an HTML document generated by the ncISO software package. Recently, ncISO has fallen behind in supporting updates to conventions such as the ACDD. With the blessing of the original programmer, NOAA's UAF has been working to modernize the ncISO software base. In addition to upgrading ncISO to utilize version 1.3 of the ACDD, we have been working with partners at Unidata and IOOS to unify the tool's code base. In essence, we are merging the command line capabilities into the same software that will now be used by the TDS service, allowing easier updates when conventions such as ACDD are updated in the future. In this presentation, we will discuss the work the UAF project has done to support updated conventions within ncISO, as well as describe how the updated tool is helping to improve metadata throughout the earth and ocean sciences.
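
    The rubric idea can be illustrated with a small completeness check of ACDD-recommended global attributes on a NetCDF file, written with the netCDF4 Python library rather than the ncISO Java tooling itself. The attribute list is a partial, assumed subset of the convention and the file name is hypothetical.

    # Toy completeness score over a subset of ACDD global attributes (not the ncISO rubric).
    from netCDF4 import Dataset

    ACDD_SUBSET = ["title", "summary", "keywords", "Conventions", "id",
                   "naming_authority", "license", "creator_name",
                   "time_coverage_start", "time_coverage_end"]

    def acdd_score(path):
        """Report the fraction of recommended global attributes present in the file."""
        with Dataset(path) as ds:
            present = [a for a in ACDD_SUBSET if a in ds.ncattrs()]
        missing = [a for a in ACDD_SUBSET if a not in present]
        return len(present) / len(ACDD_SUBSET), missing

    score, missing = acdd_score("example.nc")   # hypothetical file
    print(f"completeness: {score:.0%}, missing: {missing}")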

  16. Improving Scientific Metadata Interoperability And Data Discoverability using OAI-PMH

    Science.gov (United States)

    Devarakonda, Ranjeet; Palanisamy, Giri; Green, James M.; Wilson, Bruce E.

    2010-12-01

    While general-purpose search engines (such as Google or Bing) are useful for finding many things on the Internet, they are often of limited usefulness for locating Earth Science data relevant (for example) to a specific spatiotemporal extent. By contrast, tools that search repositories of structured metadata can locate relevant datasets with fairly high precision, but the search is limited to that particular repository. Federated searches (such as Z39.50) have been used, but can be slow and the comprehensiveness can be limited by downtime in any search partner. An alternative approach to improve comprehensiveness is for a repository to harvest metadata from other repositories, possibly with limits based on subject matter or access permissions. Searches through harvested metadata can be extremely responsive, and the search tool can be customized with semantic augmentation appropriate to the community of practice being served. However, there are a number of different protocols for harvesting metadata, with some challenges for ensuring that updates are propagated and for collaborations with repositories using differing metadata standards. The Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) is a standard that is seeing increased use as a means for exchanging structured metadata. OAI-PMH implementations must support Dublin Core as a metadata standard, with other metadata formats as optional. We have developed tools which enable our structured search tool (Mercury; http://mercury.ornl.gov) to consume metadata from OAI-PMH services in any of the metadata formats we support (Dublin Core, Darwin Core, FGDC CSDGM, GCMD DIF, EML, and ISO 19115/19137). We are also making ORNL DAAC metadata available through OAI-PMH for other metadata tools to utilize, such as the NASA Global Change Master Directory (GCMD). This paper describes Mercury capabilities with multiple metadata formats, in general, and, more specifically, the results of our OAI-PMH implementations and
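
    For readers unfamiliar with the protocol, the harvesting side of OAI-PMH can be sketched in a few lines: issue a ListRecords request for Dublin Core records and follow resumption tokens until the response is exhausted. The base URL is a placeholder and error handling is omitted.

    # Minimal OAI-PMH ListRecords harvester (Dublin Core), following resumptionToken paging.
    import urllib.parse
    import urllib.request
    import xml.etree.ElementTree as ET

    OAI = "{http://www.openarchives.org/OAI/2.0/}"
    BASE_URL = "https://repository.example.org/oai"   # placeholder OAI-PMH endpoint

    def list_records(base_url, metadata_prefix="oai_dc"):
        """Yield <record> elements from every page of a ListRecords response."""
        params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
        while True:
            url = f"{base_url}?{urllib.parse.urlencode(params)}"
            with urllib.request.urlopen(url) as resp:
                root = ET.fromstring(resp.read())
            for record in root.iter(f"{OAI}record"):
                yield record
            token = root.find(f"{OAI}ListRecords/{OAI}resumptionToken")
            if token is None or not (token.text or "").strip():
                break
            params = {"verb": "ListRecords", "resumptionToken": token.text.strip()}

    for rec in list_records(BASE_URL):
        print(rec.findtext(f"{OAI}header/{OAI}identifier"))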

  17. A Metadata-Rich File System

    Energy Technology Data Exchange (ETDEWEB)

    Ames, S; Gokhale, M B; Maltzahn, C

    2009-01-07

    Despite continual improvements in the performance and reliability of large scale file systems, the management of file system metadata has changed little in the past decade. The mismatch between the size and complexity of large scale data stores and their ability to organize and query their metadata has led to a de facto standard in which raw data is stored in traditional file systems, while related, application-specific metadata is stored in relational databases. This separation of data and metadata requires considerable effort to maintain consistency and can result in complex, slow, and inflexible system operation. To address these problems, we have developed the Quasar File System (QFS), a metadata-rich file system in which files, metadata, and file relationships are all first class objects. In contrast to hierarchical file systems and relational databases, QFS defines a graph data model composed of files and their relationships. QFS includes Quasar, an XPATH-extended query language for searching the file system. Results from our QFS prototype show the effectiveness of this approach. Compared to the de facto standard, the QFS prototype shows superior ingest performance and comparable query performance on user metadata-intensive operations and superior performance on normal file metadata operations.

  18. Generation of Multiple Metadata Formats from a Geospatial Data Repository

    Science.gov (United States)

    Hudspeth, W. B.; Benedict, K. K.; Scott, S.

    2012-12-01

    The Earth Data Analysis Center (EDAC) at the University of New Mexico is partnering with the CYBERShARE and Environmental Health Group from the Center for Environmental Resource Management (CERM), located at the University of Texas, El Paso (UTEP), the Biodiversity Institute at the University of Kansas (KU), and the New Mexico Geo- Epidemiology Research Network (GERN) to provide a technical infrastructure that enables investigation of a variety of climate-driven human/environmental systems. Two significant goals of this NASA-funded project are: a) to increase the use of NASA Earth observational data at EDAC by various modeling communities through enabling better discovery, access, and use of relevant information, and b) to expose these communities to the benefits of provenance for improving understanding and usability of heterogeneous data sources and derived model products. To realize these goals, EDAC has leveraged the core capabilities of its Geographic Storage, Transformation, and Retrieval Engine (Gstore) platform, developed with support of the NSF EPSCoR Program. The Gstore geospatial services platform provides general purpose web services based upon the REST service model, and is capable of data discovery, access, and publication functions, metadata delivery functions, data transformation, and auto-generated OGC services for those data products that can support those services. Central to the NASA ACCESS project is the delivery of geospatial metadata in a variety of formats, including ISO 19115-2/19139, FGDC CSDGM, and the Proof Markup Language (PML). This presentation details the extraction and persistence of relevant metadata in the Gstore data store, and their transformation into multiple metadata formats that are increasingly utilized by the geospatial community to document not only core library catalog elements (e.g. title, abstract, publication data, geographic extent, projection information, and database elements), but also the processing steps used to

  19. METADATA, ITS DESCRIPTION AND ACCESS POINTS, AND INDOMARC

    Directory of Open Access Journals (Sweden)

    Sulistiyo Basuki

    2012-07-01

    Full Text Available The term metadata began to appear frequently in the literature on database management systems (DBMS) in the 1980s. The term is used to describe the information needed to record the characteristics of the information held in a database. Many sources define the term metadata. Metadata can be understood as identifying a source, indicating the location of a document, and providing the summary needed to make use of it. In general, three parts are involved in creating metadata: the information package itself, the encoding of the description of the information package, and the provision of access to that description. This paper describes the concept of data in relation to libraries. The discussion covers the definition of metadata; the functions of metadata; encoding standards; bibliographic records, surrogates, and metadata; the creation of surrogate record content; approaches to metadata formats; and metadata and metadata standards.

  20. Big Data tools as applied to ATLAS event data

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00225336; The ATLAS collaboration; Gardner, Robert; Bryant, Lincoln

    2017-01-01

    Big Data technologies have proven to be very useful for storage, processing and visualization of derived metrics associated with ATLAS distributed computing (ADC) services. Logfiles, database records, and metadata from a diversity of systems have been aggregated and indexed to create an analytics platform for ATLAS ADC operations analysis. Dashboards, wide area data access cost metrics, user analysis patterns, and resource utilization efficiency charts are produced flexibly through queries against a powerful analytics cluster. Here we explore whether these techniques and associated analytics ecosystem can be applied to add new modes of open, quick, and pervasive access to ATLAS event data. Such modes would simplify access and broaden the reach of ATLAS public data to new communities of users. An ability to efficiently store, filter, search and deliver ATLAS data at the event and/or sub-event level in a widely supported format would enable or significantly simplify usage of machine learning environments and to...

  1. Mercury Toolset for Spatiotemporal Metadata

    Science.gov (United States)

    Devarakonda, Ranjeet; Palanisamy, Giri; Green, James; Wilson, Bruce; Rhyne, B. Timothy; Lindsley, Chris

    2010-06-01

    Mercury (http://mercury.ornl.gov) is a set of tools for federated harvesting, searching, and retrieving metadata, particularly spatiotemporal metadata. Version 3.0 of the Mercury toolset provides orders of magnitude improvements in search speed, support for additional metadata formats, integration with Google Maps for spatial queries, facetted type search, support for RSS (Really Simple Syndication) delivery of search results, and enhanced customization to meet the needs of the multiple projects that use Mercury. It provides a single portal to very quickly search for data and information contained in disparate data management systems, each of which may use different metadata formats. Mercury harvests metadata and key data from contributing project servers distributed around the world and builds a centralized index. The search interfaces then allow the users to perform a variety of fielded, spatial, and temporal searches across these metadata sources. This centralized repository of metadata with distributed data sources provides extremely fast search results to the user, while allowing data providers to advertise the availability of their data and maintain complete control and ownership of that data. Mercury periodically (typically daily) harvests metadata sources through a collection of interfaces and re-indexes these metadata to provide extremely rapid search capabilities, even over collections with tens of millions of metadata records. A number of both graphical and application interfaces have been constructed within Mercury, to enable both human users and other computer programs to perform queries. Mercury was also designed to support multiple different projects, so that the particular fields that can be queried and used with search filters are easy to configure for each different project.

  2. Mercury Toolset for Spatiotemporal Metadata

    Science.gov (United States)

    Wilson, Bruce E.; Palanisamy, Giri; Devarakonda, Ranjeet; Rhyne, B. Timothy; Lindsley, Chris; Green, James

    2010-01-01

    Mercury (http://mercury.ornl.gov) is a set of tools for federated harvesting, searching, and retrieving metadata, particularly spatiotemporal metadata. Version 3.0 of the Mercury toolset provides orders of magnitude improvements in search speed, support for additional metadata formats, integration with Google Maps for spatial queries, facetted type search, support for RSS (Really Simple Syndication) delivery of search results, and enhanced customization to meet the needs of the multiple projects that use Mercury. It provides a single portal to very quickly search for data and information contained in disparate data management systems, each of which may use different metadata formats. Mercury harvests metadata and key data from contributing project servers distributed around the world and builds a centralized index. The search interfaces then allow the users to perform a variety of fielded, spatial, and temporal searches across these metadata sources. This centralized repository of metadata with distributed data sources provides extremely fast search results to the user, while allowing data providers to advertise the availability of their data and maintain complete control and ownership of that data. Mercury periodically (typically daily) harvests metadata sources through a collection of interfaces and re-indexes these metadata to provide extremely rapid search capabilities, even over collections with tens of millions of metadata records. A number of both graphical and application interfaces have been constructed within Mercury, to enable both human users and other computer programs to perform queries. Mercury was also designed to support multiple different projects, so that the particular fields that can be queried and used with search filters are easy to configure for each different project.

  3. Creating context for the experiment record. User-defined metadata: investigations into metadata usage in the LabTrove ELN.

    Science.gov (United States)

    Willoughby, Cerys; Bird, Colin L; Coles, Simon J; Frey, Jeremy G

    2014-12-22

    The drive toward more transparency in research, the growing willingness to make data openly available, and the reuse of data to maximize the return on research investment all increase the importance of being able to find information and make links to the underlying data. The use of metadata in Electronic Laboratory Notebooks (ELNs) to curate experiment data is an essential ingredient for facilitating discovery. The University of Southampton has developed a Web browser-based ELN that enables users to add their own metadata to notebook entries. A survey of these notebooks was completed to assess user behavior and patterns of metadata usage within ELNs, while user perceptions and expectations were gathered through interviews and user-testing activities within the community. The findings indicate that while some groups are comfortable with metadata and are able to design a metadata structure that works effectively, many users make little attempt to use it, thereby endangering their ability to recover data in the future. A survey of patterns of metadata use in these notebooks, together with feedback from the user community, indicated that while a few groups are comfortable with metadata and are able to design a metadata structure that works effectively, many users adopt a "minimum required" approach to metadata. To investigate whether the patterns of metadata use in LabTrove were unusual, a series of surveys was undertaken to investigate metadata usage in a variety of platforms supporting user-defined metadata. These surveys also provided the opportunity to investigate whether interface designs in these other environments might inform strategies for encouraging metadata creation and more effective use of metadata in LabTrove.

  4. An Oracle-based event index for ATLAS

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00083337; The ATLAS collaboration; Dimitrov, Gancho

    2017-01-01

    The ATLAS EventIndex System has amassed a set of key quantities for a large number of ATLAS events into a Hadoop-based infrastructure for the purpose of providing the experiment with a number of event-wise services. Collecting this data in one place provides the opportunity to investigate various storage formats and technologies and assess which best serve the various use cases as well as consider what other benefits alternative storage systems provide. In this presentation we describe how the data are imported into an Oracle RDBMS (relational database management system), the services we have built based on this architecture, and our experience with it. We have indexed about 26 billion real data events thus far and have designed the system to accommodate future data which has expected rates of 5 and 20 billion events per year. We have found this system offers outstanding performance for some fundamental use cases. In addition, profiting from the co-location of this data with other complementary metadata in AT...
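
    To make the event-wise lookup use case concrete, here is a schematic sketch of an event index table and an event-picking query, with SQLite standing in for the Oracle RDBMS. The columns (run_number, event_number, guid) are illustrative and do not reproduce the actual EventIndex schema.

    # Schematic event index: map (run, event) to the GUID of the file holding that event.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("""CREATE TABLE event_index (
                      run_number   INTEGER,
                      event_number INTEGER,
                      guid         TEXT,
                      PRIMARY KEY (run_number, event_number))""")

    db.executemany("INSERT INTO event_index VALUES (?, ?, ?)",
                   [(300000, 1, "FILE-GUID-A"),
                    (300000, 2, "FILE-GUID-A"),
                    (300001, 7, "FILE-GUID-B")])

    def locate(run, event):
        """Event picking: find which file GUID contains a given (run, event) pair."""
        row = db.execute("SELECT guid FROM event_index "
                         "WHERE run_number = ? AND event_number = ?",
                         (run, event)).fetchone()
        return row[0] if row else None

    print(locate(300000, 2))   # -> FILE-GUID-A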

  5. Viewing and Editing Earth Science Metadata MOBE: Metadata Object Browser and Editor in Java

    Science.gov (United States)

    Chase, A.; Helly, J.

    2002-12-01

    Metadata is an important, yet often neglected aspect of successful archival efforts. However, to generate robust, useful metadata is often a time-consuming and tedious task. We have been approaching this problem from two directions: first by automating metadata creation, pulling from known sources of data, and in addition, what this (paper/poster?) details, developing friendly software for human interaction with the metadata. MOBE and COBE (Metadata Object Browser and Editor, and Canonical Object Browser and Editor, respectively) are Java applications for editing and viewing metadata and digital objects. MOBE has already been designed and deployed, currently being integrated into other areas of the SIOExplorer project. COBE is in the design and development stage, being created with the same considerations in mind as those for MOBE. Metadata creation, viewing, data object creation, and data object viewing, when taken on a small scale, are all relatively simple tasks. Computer science, however, has an infamous reputation for transforming the simple into the complex. As a system scales upwards to become more robust, new features arise and additional functionality is added to the software being written to manage the system. The software that emerges from such an evolution, though powerful, is often complex and difficult to use. With MOBE, the focus is on a tool that does a small number of tasks very well. The result has been an application that enables users to manipulate metadata in an intuitive and effective way. This allows for a tool that serves its purpose without introducing additional cognitive load onto the user, an end goal we continue to pursue.

  6. Latest developments for the IAGOS database: Interoperability and metadata

    Science.gov (United States)

    Boulanger, Damien; Gautron, Benoit; Thouret, Valérie; Schultz, Martin; van Velthoven, Peter; Broetz, Bjoern; Rauthe-Schöch, Armin; Brissebrat, Guillaume

    2014-05-01

    In-service Aircraft for a Global Observing System (IAGOS, http://www.iagos.org) aims at the provision of long-term, frequent, regular, accurate, and spatially resolved in situ observations of the atmospheric composition. IAGOS observation systems are deployed on a fleet of commercial aircraft. The IAGOS database is an essential part of the global atmospheric monitoring network. Data access is handled by open access policy based on the submission of research requests which are reviewed by the PIs. Users can access the data through the following web sites: http://www.iagos.fr or http://www.pole-ether.fr as the IAGOS database is part of the French atmospheric chemistry data centre ETHER (CNES and CNRS). The database is in continuous development and improvement. In the framework of the IGAS project (IAGOS for GMES/COPERNICUS Atmospheric Service), major achievements will be reached, such as metadata and format standardisation in order to interoperate with international portals and other databases, QA/QC procedures and traceability, CARIBIC (Civil Aircraft for the Regular Investigation of the Atmosphere Based on an Instrument Container) data integration within the central database, and the real-time data transmission. IGAS work package 2 aims at providing the IAGOS data to users in a standardized format including the necessary metadata and information on data processing, data quality and uncertainties. We are currently redefining and standardizing the IAGOS metadata for interoperable use within GMES/Copernicus. The metadata are compliant with the ISO 19115, INSPIRE and NetCDF-CF conventions. IAGOS data will be provided to users in NetCDF or NASA Ames format. We also are implementing interoperability between all the involved IAGOS data services, including the central IAGOS database, the former MOZAIC and CARIBIC databases, Aircraft Research DLR database and the Jülich WCS web application JOIN (Jülich OWS Interface) which combines model outputs with in situ data for
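
    As a small illustration of what delivering data "in NetCDF including the necessary metadata" can look like, the sketch below writes a toy time series with CF-style variable and global attributes using the netCDF4 Python library. The attribute values are invented examples; the actual IAGOS/IGAS metadata profile is defined by the project.

    # Writing a time series with CF-style variable and global metadata (illustrative values).
    from netCDF4 import Dataset

    with Dataset("iagos_example.nc", "w") as ds:      # hypothetical output file
        ds.Conventions = "CF-1.6"
        ds.title = "Example in-situ ozone time series"
        ds.institution = "Example institute"
        ds.source = "in-situ aircraft observation"

        ds.createDimension("time", None)
        time = ds.createVariable("time", "f8", ("time",))
        time.units = "seconds since 2014-01-01 00:00:00"
        time.standard_name = "time"

        ozone = ds.createVariable("ozone", "f4", ("time",))
        ozone.units = "1e-9"                          # mole fraction in ppb
        ozone.standard_name = "mole_fraction_of_ozone_in_air"
        ozone.long_name = "Ozone mole fraction"

        time[:] = [0.0, 60.0, 120.0]
        ozone[:] = [35.2, 36.0, 34.7]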

  7. Big Data Analytics Tools as Applied to ATLAS Event Data

    CERN Document Server

    Vukotic, Ilija; The ATLAS collaboration

    2016-01-01

    Big Data technologies have proven to be very useful for storage, processing and visualization of derived metrics associated with ATLAS distributed computing (ADC) services. Log file data and database records, and metadata from a diversity of systems have been aggregated and indexed to create an analytics platform for ATLAS ADC operations analysis. Dashboards, wide area data access cost metrics, user analysis patterns, and resource utilization efficiency charts are produced flexibly through queries against a powerful analytics cluster. Here we explore whether these techniques and analytics ecosystem can be applied to add new modes of open, quick, and pervasive access to ATLAS event data so as to simplify access and broaden the reach of ATLAS public data to new communities of users. An ability to efficiently store, filter, search and deliver ATLAS data at the event and/or sub-event level in a widely supported format would enable or significantly simplify usage of big data, statistical and machine learning tools...

  8. Multi-facetted Metadata - Describing datasets with different metadata schemas at the same time

    Science.gov (United States)

    Ulbricht, Damian; Klump, Jens; Bertelmann, Roland

    2013-04-01

    Inspired by the wish to re-use research data, a lot of work has been done to bring together data systems across the Earth sciences. Discovery metadata is disseminated to data portals to allow building of customized indexes of catalogued dataset items. Data that were once acquired in the context of a scientific project are open for reappraisal and can now be used by scientists who were not part of the original research team. To make data re-use easier, measurement methods and measurement parameters must be documented in an application metadata schema and described in a written publication. Linking datasets to publications - as DataCite [1] does - requires again a specific metadata schema, and every new use context of the measured data may require yet another metadata schema sharing only a subset of information with the meta information already present. To cope with the problem of metadata schema diversity in our common data repository at GFZ Potsdam, we established a solution to store file-based research data and describe these with an arbitrary number of metadata schemas. The core component of the data repository is an eSciDoc infrastructure that provides versioned container objects, called eSciDoc [2] "items". The eSciDoc content model allows assigning files to "items" and adding any number of metadata records to these "items". The eSciDoc items can be submitted, revised, and finally published, which makes the data and metadata available through the internet worldwide. GFZ Potsdam uses eSciDoc to support its scientific publishing workflow, including mechanisms for data review in peer review processes by providing temporary web links for external reviewers who do not have credentials to access the data. Based on the eSciDoc API, panMetaDocs [3] provides a web portal for data management in research projects. PanMetaDocs, which is based on panMetaWorks [4], is a PHP based web application that allows data to be described with any XML-based schema. It uses the eSciDoc infrastructures

  9. The ATLAS Event Service: A New Approach to Event Processing

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00070566; De, Kaushik; Guan, Wen; Maeno, Tadashi; Nilsson, Paul; Oleynik, Danila; Panitkin, Sergey; Tsulaia, Vakhtang; van Gemmeren, Peter; Wenaus, Torre

    2015-01-01

    The ATLAS Event Service (ES) implements a new fine grained approach to HEP event processing, designed to be agile and efficient in exploiting transient, short-lived resources such as HPC hole-filling, spot market commercial clouds, and volunteer computing. Input and output control and data flows, bookkeeping, monitoring, and data storage are all managed at the event level in an implementation capable of supporting ATLAS-scale distributed processing throughputs (about 4M CPU-hours/day). Input data flows utilize remote data repositories with no data locality or pre-staging requirements, minimizing the use of costly storage in favor of strongly leveraging powerful networks. Object stores provide a highly scalable means of remotely storing the quasi-continuous, fine grained outputs that give ES based applications a very light data footprint on a processing resource, and ensure negligible losses should the resource suddenly vanish. We will describe the motivations for the ES system, its unique features and capabi...

  10. Tethys Acoustic Metadata Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Tethys database houses the metadata associated with the acoustic data collection efforts by the Passive Acoustic Group. These metadata include dates, locations...

  11. The Genomic Observatories Metadatabase (GeOMe): A new repository for field and sampling event metadata associated with genetic samples

    Science.gov (United States)

    Deck, John; Gaither, Michelle R.; Ewing, Rodney; Bird, Christopher E.; Davies, Neil; Meyer, Christopher; Riginos, Cynthia; Toonen, Robert J.; Crandall, Eric D.

    2017-01-01

    The Genomic Observatories Metadatabase (GeOMe, http://www.geome-db.org/) is an open access repository for geographic and ecological metadata associated with biosamples and genetic data. Whereas public databases have served as vital repositories for nucleotide sequences, they do not accession all the metadata required for ecological or evolutionary analyses. GeOMe fills this need, providing a user-friendly, web-based interface for both data contributors and data recipients. The interface allows data contributors to create a customized yet standard-compliant spreadsheet that captures the temporal and geospatial context of each biosample. These metadata are then validated and permanently linked to archived genetic data stored in the National Center for Biotechnology Information’s (NCBI’s) Sequence Read Archive (SRA) via unique persistent identifiers. By linking ecologically and evolutionarily relevant metadata with publicly archived sequence data in a structured manner, GeOMe sets a gold standard for data management in biodiversity science. PMID:28771471

  12. The Genomic Observatories Metadatabase (GeOMe): A new repository for field and sampling event metadata associated with genetic samples.

    Directory of Open Access Journals (Sweden)

    John Deck

    2017-08-01

    Full Text Available The Genomic Observatories Metadatabase (GeOMe, http://www.geome-db.org/) is an open access repository for geographic and ecological metadata associated with biosamples and genetic data. Whereas public databases have served as vital repositories for nucleotide sequences, they do not accession all the metadata required for ecological or evolutionary analyses. GeOMe fills this need, providing a user-friendly, web-based interface for both data contributors and data recipients. The interface allows data contributors to create a customized yet standard-compliant spreadsheet that captures the temporal and geospatial context of each biosample. These metadata are then validated and permanently linked to archived genetic data stored in the National Center for Biotechnology Information's (NCBI's) Sequence Read Archive (SRA) via unique persistent identifiers. By linking ecologically and evolutionarily relevant metadata with publicly archived sequence data in a structured manner, GeOMe sets a gold standard for data management in biodiversity science.

  13. NCI's national environmental research data collection: metadata management built on standards and preparing for the semantic web

    Science.gov (United States)

    Wang, Jingbo; Bastrakova, Irina; Evans, Ben; Gohar, Kashif; Santana, Fabiana; Wyborn, Lesley

    2015-04-01

    National Computational Infrastructure (NCI) manages national environmental research data collections (10+ PB) as part of its specialized high performance data node of the Research Data Storage Infrastructure (RDSI) program. We manage 40+ data collections using NCI's Data Management Plan (DMP), which is compatible with the ISO 19100 metadata standards. We utilize ISO standards to make sure our metadata is transferable and interoperable for sharing and harvesting. The DMP is used, along with metadata from the data itself, to create a hierarchy of data collection, dataset and time series catalogues that is then exposed through GeoNetwork for standard discoverability. These hierarchical catalogues are linked using a parent-child relationship. The hierarchical infrastructure of our GeoNetwork catalogues system aims to address both discoverability and in-house administrative use-cases. At NCI, we are currently improving the metadata interoperability in our catalogue by linking with standardized community vocabulary services. These emerging vocabulary services are being established to help harmonise data from different national and international scientific communities. One such vocabulary service is currently being established by the Australian National Data Services (ANDS). Data citation is another important aspect of the NCI data infrastructure, which allows tracking of data usage and infrastructure investment, encourages data sharing, and increases trust in research that is reliant on these data collections. We incorporate the standard vocabularies into the data citation metadata so that the data citations become machine readable and semantically friendly for web-search purposes as well. By standardizing our metadata structure across our entire data corpus, we are laying the foundation to enable the application of appropriate semantic mechanisms to enhance discovery and analysis of NCI's national environmental research data information. We expect that this will further
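
    The parent-child linking between collection-level and dataset-level records can be pictured with a small sketch that gives every record a UUID and stores the parent's UUID on each child. The field names echo ISO 19115/19139 elements (fileIdentifier, parentIdentifier, hierarchyLevel), but this is a schematic illustration, not the GeoNetwork or ISO encoding itself.

    # Schematic parent-child linkage between collection- and dataset-level records via UUIDs.
    import uuid

    def make_record(title, level, parent_uuid=None):
        """Create a minimal catalogue record; parentIdentifier points at the parent record."""
        return {"fileIdentifier": str(uuid.uuid4()),
                "title": title,
                "hierarchyLevel": level,
                "parentIdentifier": parent_uuid}

    collection = make_record("Climate model output collection", "series")
    dataset = make_record("Monthly surface temperature, 1990-2000", "dataset",
                          parent_uuid=collection["fileIdentifier"])

    # A harvester can walk from any dataset record up to its parent collection record.
    assert dataset["parentIdentifier"] == collection["fileIdentifier"]
    print(dataset)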

  14. Metadata Life Cycles, Use Cases and Hierarchies

    Directory of Open Access Journals (Sweden)

    Ted Habermann

    2018-05-01

    The historic view of metadata as “data about data” is expanding to include data about other items that must be created, used, and understood throughout the data and project life cycles. In this context, metadata might better be defined as the structured and standard part of documentation, and the metadata life cycle can be described as the metadata content that is required for documentation in each phase of the project and data life cycles. This incremental approach to metadata creation is similar to the spiral model used in software development. Each phase also has distinct users and specific questions to which they need answers. In many cases, the metadata life cycle involves hierarchies where later phases have increased numbers of items. The relationships between metadata in different phases can be captured through structure in the metadata standard, or through conventions for identifiers. Metadata creation and management can be streamlined and simplified by re-using metadata across many records. Many of these ideas have been developed to various degrees in several Geoscience disciplines and are being used in metadata for documenting the integrated life cycle of environmental research in the Arctic, including projects, collection sites, and datasets.

  15. Active Marine Station Metadata

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Active Marine Station Metadata is a daily metadata report for active marine buoy and C-MAN (Coastal Marine Automated Network) platforms from the National Data...

  16. Critical Metadata for Spectroscopy Field Campaigns

    Directory of Open Access Journals (Sweden)

    Barbara A. Rasaiah

    2014-04-01

    A field spectroscopy metadata standard is defined as those data elements that explicitly document the spectroscopy dataset and field protocols, sampling strategies, instrument properties, and environmental and logistical variables. Standards for field spectroscopy metadata affect the quality, completeness, reliability, and usability of datasets created in situ. Currently there is no standardized methodology for documentation of in situ spectroscopy data or metadata. This paper presents results of an international experiment comprising a web-based survey and expert panel evaluation that investigated critical metadata in field spectroscopy. The survey participants were a diverse group of scientists experienced in gathering spectroscopy data across a wide range of disciplines. Overall, respondents were in agreement about a core metadata set for generic campaign metadata, allowing a prioritized list of critical metadata elements to be proposed, including those relating to viewing geometry, location, general target and sampling properties, illumination, instrument properties, reference standards, calibration, hyperspectral signal properties, atmospheric conditions, and general project details. Consensus was greatest among individual expert groups in specific application domains. The results allow the identification of a core set of metadata fields that support long-term data storage and serve as a foundation for a metadata standard. This paper is part one in a series about the core elements of a robust and flexible field spectroscopy metadata standard.
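
    The core categories listed above lend themselves to a simple structured record. The sketch below shows one possible shape; the specific field names and types are assumptions for illustration, not the proposed standard itself.

        # Illustrative record covering the core campaign-metadata categories named
        # in the survey; field names and example values are assumptions.
        from dataclasses import dataclass, asdict

        @dataclass
        class SpectroscopyCampaignMetadata:
            project_name: str            # general project details
            target_description: str      # general target and sampling properties
            latitude_deg: float          # location
            longitude_deg: float
            view_zenith_deg: float       # viewing geometry
            view_azimuth_deg: float
            illumination_source: str     # illumination
            instrument_model: str        # instrument properties
            reference_standard: str      # reference standards
            calibration_date: str        # calibration
            atmospheric_conditions: str  # atmospheric conditions

        record = SpectroscopyCampaignMetadata(
            project_name="Example field campaign",
            target_description="grassland canopy",
            latitude_deg=-37.8, longitude_deg=144.9,
            view_zenith_deg=0.0, view_azimuth_deg=180.0,
            illumination_source="solar",
            instrument_model="example spectroradiometer",
            reference_standard="white reference panel SN-1234 (example)",
            calibration_date="2013-11-02",
            atmospheric_conditions="clear sky",
        )
        print(asdict(record))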

  17. A New Look at Data Usage by Using Metadata Attributes as Indicators of Data Quality

    Science.gov (United States)

    Won, Y. I.; Wanchoo, L.; Behnke, J.

    2016-12-01

    NASA's Earth Observing System Data and Information System (EOSDIS) stores and distributes data from EOS satellites, as well as ancillary, airborne, in-situ, and socio-economic data. Twelve EOSDIS data centers support different scientific disciplines by providing products and services tailored to specific science communities. Although discipline oriented, these data centers provide common data management functions of ingest, archive, and distribution, as well as documentation of their data and services on their websites. The Earth Science Data and Information System (ESDIS) Project collects metrics from the EOSDIS data centers on a daily basis through a tool called the ESDIS Metrics System (EMS); these metrics are used in this study. The implementation of the Earthdata Login - formerly known as the User Registration System (URS) - across the various NASA data centers provides the EMS with additional information about users obtaining data products from EOSDIS data centers. These additional user attributes collected by the Earthdata Login, such as the user's primary area of study, can augment the understanding of data usage, which in turn can help the EOSDIS program better understand users' needs. This study will review the key metrics (users, distributed volume, and files) in multiple ways to gain an understanding of the significance of the metadata. Characterizing the usability of data by key metadata elements such as discipline and study area will assist in understanding how the users have evolved over time. The data usage pattern based on version numbers may also provide some insight into the level of data quality. In addition, the data metrics by various services, such as the Open-source Project for a Network Data Access Protocol (OPeNDAP), Web Map Service (WMS), Web Coverage Service (WCS), and subsets, will address how these services have extended the usage of data. Overall, this study will present the usage of data and metadata by metrics analyses and will
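
    As a hedged illustration of the kind of metrics analysis described above, the sketch below groups distribution metrics by discipline and product version; the column names and numbers are invented for illustration and are not actual EMS fields.

        # Aggregate hypothetical distribution metrics by discipline and version.
        import pandas as pd

        metrics = pd.DataFrame([
            {"discipline": "Atmosphere", "product_version": "6",  "files": 1200, "volume_gb": 340.0},
            {"discipline": "Atmosphere", "product_version": "61", "files": 4100, "volume_gb": 990.5},
            {"discipline": "Land",       "product_version": "6",  "files": 800,  "volume_gb": 120.3},
            {"discipline": "Cryosphere", "product_version": "6",  "files": 150,  "volume_gb": 40.1},
        ])

        summary = (metrics
                   .groupby(["discipline", "product_version"], as_index=False)[["files", "volume_gb"]]
                   .sum()
                   .sort_values("volume_gb", ascending=False))
        print(summary)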

  18. Analytics Platform for ATLAS Computing Services

    CERN Document Server

    Vukotic, Ilija; The ATLAS collaboration; Bryant, Lincoln

    2016-01-01

    Big Data technologies have proven to be very useful for storage, processing and visualization of derived metrics associated with ATLAS distributed computing (ADC) services. Log file data, database records, and metadata from a diversity of systems have been aggregated and indexed to create an analytics platform for ATLAS ADC operations analysis. Dashboards, wide area data access cost metrics, user analysis patterns, and resource utilization efficiency charts are produced flexibly through queries against a powerful analytics cluster. Here we explore whether these techniques and this analytics ecosystem can be applied to add new modes of open, quick, and pervasive access to ATLAS event data so as to simplify access and broaden the reach of ATLAS public data to new communities of users. An ability to efficiently store, filter, search and deliver ATLAS data at the event and/or sub-event level in a widely supported format would enable or significantly simplify usage of machine learning tools like Spark, Jupyter, R, S...

  19. Towards Precise Metadata-set for Discovering 3D Geospatial Models in Geo-portals

    Science.gov (United States)

    Zamyadi, A.; Pouliot, J.; Bédard, Y.

    2013-09-01

    Accessing 3D geospatial models, ideally at no cost and for unrestricted use, is certainly an important issue as they become popular among participatory communities, consultants, and officials. Various geo-portals, mainly established for 2D resources, have tried to provide access to existing 3D resources such as digital elevation models, LIDAR, or classic topographic data. As descriptions of data content, metadata are a key component of data discovery in geo-portals. An inventory of seven online geo-portals and commercial catalogues shows that the metadata referring to 3D information differ greatly from one geo-portal to another, as well as between similar 3D resources within the same geo-portal. The inventory considered 971 data resources affiliated with elevation. 51% of them were from three geo-portals running at Canadian federal and municipal levels whose metadata resources did not consider 3D models by any definition. Regarding the remaining 49%, which refer to 3D models, different definitions of terms and metadata were found, resulting in confusion and misinterpretation. The overall assessment of these geo-portals clearly shows that the provided metadata do not integrate specific and common information about 3D geospatial models. Accordingly, the main objective of this research is to improve 3D geospatial model discovery in geo-portals by adding a specific metadata-set. Based on knowledge and current practices in 3D modeling and 3D data acquisition and management, a set of metadata is proposed to increase its suitability for 3D geospatial models. This metadata-set enables the definition of genuine classes, fields, and code-lists for a 3D metadata profile. The main structure of the proposal contains 21 metadata classes. These classes are grouped into three packages: General and Complementary, covering contextual and structural information, and Availability, covering the transition from storage to delivery format. The proposed metadata set is compared with Canadian Geospatial

  20. Overview of long-term field experiments in Germany - metadata visualization

    Science.gov (United States)

    Muqit Zoarder, Md Abdul; Heinrich, Uwe; Svoboda, Nikolai; Grosse, Meike; Hierold, Wilfried

    2017-04-01

    BonaRes ("soil as a sustainable resource for the bioeconomy") is conducting to collect data and metadata of agricultural long-term field experiments (LTFE) of Germany. It is funded by the German Federal Ministry of Education and Research (BMBF) under the umbrella of the National Research Strategy BioEconomy 2030. BonaRes consists of ten interdisciplinary research project consortia and the 'BonaRes - Centre for Soil Research'. BonaRes Data Centre is responsible for collecting all LTFE data and regarding metadata into an enterprise database upon higher level of security and visualization of the data and metadata through data portal. In the frame of the BonaRes project, we are compiling an overview of long-term field experiments in Germany that is based on a literature review, the results of the online survey and direct contacts with LTFE operators. Information about research topic, contact person, website, experiment setup and analyzed parameters are collected. Based on the collected LTFE data, an enterprise geodatabase is developed and a GIS-based web-information system about LTFE in Germany is also settled. Various aspects of the LTFE, like experiment type, land-use type, agricultural category and duration of experiment, are presented in thematic maps. This information system is dynamically linked to the database, which means changes in the data directly affect the presentation. An easy data searching option using LTFE name, -location or -operators and the dynamic layer selection ensure a user-friendly web application. Dispersion and visualization of the overlapping LTFE points on the overview map are also challenging and we make it automatized at very zoom level which is also a consistent part of this application. The application provides both, spatial location and meta-information of LTFEs, which is backed-up by an enterprise geodatabase, GIS server for hosting map services and Java script API for web application development.

  1. Metabolonote: A wiki-based database for managing hierarchical metadata of metabolome analyses

    Directory of Open Access Journals (Sweden)

    Takeshi eAra

    2015-04-01

    Metabolomics—technology for comprehensive detection of small molecules in an organism—lags behind the other omics in terms of publication and dissemination of experimental data. Among the reasons for this are the difficulty of precisely recording information about complicated analytical experiments (metadata), the existence of various databases with their own metadata descriptions, and the low reusability of the published data, resulting in submitters (the researchers who generate the data) being insufficiently motivated. To tackle these issues, we developed Metabolonote, a Semantic MediaWiki-based database designed specifically for managing metabolomic metadata. We also defined a metadata and data description format, called TogoMD, with an ID system that is required for unique access to each level of the tree-structured metadata, such as study purpose, sample, analytical method, and data analysis. Separation of the management of metadata from that of data, and permission to attach related information to the metadata, provide advantages for submitters, readers, and database developers. The metadata are enriched with information such as links to comparable data, thereby functioning as a hub of related data resources. They also enhance not only readers' understanding and use of data, but also submitters' motivation to publish the data. The metadata are computationally shared among other systems via APIs, which facilitates the construction of novel databases by database developers. A permission system that allows publication of immature metadata and feedback from readers also helps submitters to improve their metadata. Hence, this aspect of Metabolonote, as a metadata preparation tool, is complementary to high-quality and persistent data repositories such as MetaboLights. A total of 808 metadata records for analyzed data obtained from 35 biological species are currently published. Metabolonote and related tools are available free of cost at http://metabolonote.kazusa.or.jp/.
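
    The tree-structured ID system mentioned above can be sketched as follows; the "SE1_S1_M1_D1"-style pattern and the helper function are assumptions for illustration, not necessarily the actual TogoMD identifier syntax.

        # Build identifiers where each level (study, sample, analytical method,
        # data analysis) extends the identifier of its parent; prefixes are assumed.
        def make_id(study: int, sample: int = None, method: int = None,
                    analysis: int = None) -> str:
            parts = [f"SE{study}"]
            if sample is not None:
                parts.append(f"S{sample}")
            if method is not None:
                parts.append(f"M{method}")
            if analysis is not None:
                parts.append(f"D{analysis}")
            return "_".join(parts)

        print(make_id(1))            # SE1          -> study-level metadata
        print(make_id(1, 2))         # SE1_S2       -> a sample within that study
        print(make_id(1, 2, 1))      # SE1_S2_M1    -> an analytical method applied to it
        print(make_id(1, 2, 1, 1))   # SE1_S2_M1_D1 -> a data analysis of that run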

  2. XML for catalogers and metadata librarians

    CERN Document Server

    Cole, Timothy W

    2013-01-01

    How are today's librarians to manage and describe the ever-expanding volumes of resources, in both digital and print formats? The use of XML in cataloging and metadata workflows can improve metadata quality, the consistency of cataloging workflows, and adherence to standards. This book is intended to enable current and future catalogers and metadata librarians to progress beyond a bare surface-level acquaintance with XML, thereby enabling them to integrate XML technologies more fully into their cataloging workflows. Building on the wealth of work on library descriptive practices, cataloging, and metadata, XML for Catalogers and Metadata Librarians explores the use of XML to serialize, process, share, and manage library catalog and metadata records. The authors' expert treatment of the topic is written to be accessible to those with little or no prior practical knowledge of or experience with how XML is used. Readers will gain an educated appreciation of the nuances of XML and grasp the benefit of more advanced ...

  3. Security in a Replicated Metadata Catalogue

    CERN Document Server

    Koblitz, B

    2007-01-01

    The gLite-AMGA metadata catalogue has been developed by NA4 to provide simple relational metadata access for the EGEE user community. Among its advanced features, which will be the focus of this presentation, AMGA provides very fine-grained security, also in connection with its built-in support for replication and federation of metadata. AMGA is extensively used by the biomedical community to store medical image metadata, by digital libraries, in HEP for logging and bookkeeping data, and in the climate community. The biomedical community intends to deploy a distributed metadata system for medical images consisting of various sites, which range from hospitals to computing centres. Only the safe sharing of highly sensitive metadata, as provided in AMGA, makes such a scenario possible. Other scenarios are digital libraries, which federate copyright-protected (meta-)data into a common catalogue. The biomedical and digital library deployments have used a centralized structure for some time; they now intend to decentralize ...

  4. An Intelligent Web Digital Image Metadata Service Platform for Social Curation Commerce Environment

    Directory of Open Access Journals (Sweden)

    Seong-Yong Hong

    2015-01-01

    Information management includes multimedia data management, knowledge management, collaboration, and agents, all of which are supporting technologies for XML. XML technologies have an impact on multimedia databases as well as on collaborative technologies and knowledge management. For example, e-commerce documents are encoded in XML and are gaining much popularity for business-to-business or business-to-consumer transactions. Recently, internet sites such as e-commerce and shopping mall sites have come to handle a large amount of image and multimedia information. This paper proposes an intelligent web digital image information retrieval platform that adopts XML technology for a social curation commerce environment. To support object-based content retrieval on product catalog images containing multiple objects, we describe multilevel metadata structures representing the local features, global features, and semantics of image data. To enable semantic-based and content-based retrieval on such image data, we design an XML Schema for the proposed metadata. We also describe how to automatically transform the retrieval results into forms suitable for various user environments, such as web browsers or mobile devices, using XSLT. The proposed scheme can be used to enable efficient e-catalog metadata sharing between systems, and it will contribute to improved retrieval correctness and user satisfaction in semantic-based web digital image information retrieval.
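
    The XSLT step mentioned above can be illustrated with a short sketch; the element names (image, title, object) are invented for illustration and do not come from the paper's XML Schema.

        # Transform a small piece of catalog-image metadata into an HTML fragment
        # for a web browser using XSLT (via lxml); element names are assumptions.
        from lxml import etree

        metadata = etree.XML(
            '<image id="cat-001">'
            '  <title>Product catalog photo</title>'
            '  <object label="shoe"/>'
            '  <object label="box"/>'
            '</image>')

        stylesheet = etree.XML(
            '<xsl:stylesheet version="1.0" '
            '    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">'
            '  <xsl:template match="/image">'
            '    <ul>'
            '      <li><xsl:value-of select="title"/></li>'
            '      <xsl:for-each select="object">'
            '        <li>object: <xsl:value-of select="@label"/></li>'
            '      </xsl:for-each>'
            '    </ul>'
            '  </xsl:template>'
            '</xsl:stylesheet>')

        transform = etree.XSLT(stylesheet)
        print(str(transform(metadata)))  # an HTML <ul> fragment suitable for a browser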

  5. Mdmap: A Tool for Metadata Collection and Matching

    Directory of Open Access Journals (Sweden)

    Rico Simke

    2014-10-01

    This paper describes a front-end for the semi-automatic collection, matching, and generation of bibliographic metadata obtained from different sources for use within a digitization architecture. The Library of a Billion Words project is building an infrastructure for digitizing text that requires high-quality bibliographic metadata, but currently only sparse metadata from digitized editions is available. The project’s approach is to collect metadata for each digitized item from as many sources as possible. An expert user can then use an intuitive front-end tool to choose matching metadata. The collected metadata are centrally displayed in an interactive grid view. The user can choose which metadata they want to assign to a certain edition, and export these data as MARCXML. This paper presents a new approach to bibliographic work and metadata correction. We try to achieve a high quality of the metadata by generating a large amount of metadata to choose from, as well as by giving librarians an intuitive tool to manage their data.

  6. The National Extreme Events Data and Research Center (NEED)

    Science.gov (United States)

    Gulledge, J.; Kaiser, D. P.; Wilbanks, T. J.; Boden, T.; Devarakonda, R.

    2014-12-01

    The Climate Change Science Institute at Oak Ridge National Laboratory (ORNL) is establishing the National Extreme Events Data and Research Center (NEED), with the goal of transforming how the United States studies and prepares for extreme weather events in the context of a changing climate. NEED will encourage the myriad, distributed extreme events research communities to move toward the adoption of common practices and will develop a new database compiling global historical data on weather- and climate-related extreme events (e.g., heat waves, droughts, hurricanes) and related information about impacts, costs, recovery, and available research. Currently, extreme event information is not easy to access and is largely incompatible and inconsistent across websites. NEED's database development will take into account differences in time frames, spatial scales, treatments of uncertainty, and other parameters and variables, and leverage informatics tools developed at ORNL (i.e., the Metadata Editor [1] and Mercury [2]) to generate standardized, robust documentation for each database along with a web-searchable catalog. In addition, NEED will facilitate convergence on commonly accepted definitions and standards for extreme events data and will enable integrated analyses of coupled threats, such as hurricanes/sea-level rise/flooding and droughts/wildfires. Our goal and vision is that NEED will become the premier integrated resource for the general study of extreme events. References: [1] Devarakonda, Ranjeet, et al. "OME: Tool for generating and managing metadata to handle BigData." Big Data (Big Data), 2014 IEEE International Conference on. IEEE, 2014. [2] Devarakonda, Ranjeet, et al. "Mercury: reusable metadata management, data discovery and access system." Earth Science Informatics 3.1-2 (2010): 87-94.

  7. The essential guide to metadata for books

    CERN Document Server

    Register, Renee

    2013-01-01

    In The Essential Guide to Metadata for Books, you will learn exactly what you need to know to effectively generate, handle and disseminate metadata for books and ebooks. This comprehensive but digestible document will explain the life-cycle of book metadata, industry standards, XML, ONIX and the essential elements of metadata. It will also show you how effective, well-organized metadata can improve your efforts to sell a book, especially when it comes to marketing, discoverability and converting at the point of sale. This information-packed document also includes a glossary of terms

  8. Metadata Effectiveness in Internet Discovery: An Analysis of Digital Collection Metadata Elements and Internet Search Engine Keywords

    Science.gov (United States)

    Yang, Le

    2016-01-01

    This study analyzed digital item metadata and keywords from Internet search engines to learn what metadata elements actually facilitate discovery of digital collections through Internet keyword searching and how significantly each metadata element affects the discovery of items in a digital repository. The study found that keywords from Internet…

  9. An on-line Integrated Bookkeeping: electronic run log book and Meta-Data Repository for ATLAS

    CERN Document Server

    Barczyc, M.; Caprini, M.; Da Silva Conceicao, J.; Dobson, M.; Flammer, J.; Burckhart-Chromek, D.; Jones, R.; Kazarov, A.; Kolos, S.; Liko, D.; Mapelli, L.; Soloviev, I.; Hart, R.; Amorim, A.; Klose, D.; Lima, J.; Lucio, L.; Pedro, L.; Wolters, H.; Badescu, E.; Alexandrov, I.; Kotov, V.; Mineev, M.; Ryabov, Yu.

    2003-01-01

    In the context of the ATLAS experiment there is growing evidence of the importance of different kinds of Meta-data, including all the important details of the detector and data acquisition that are vital for the analysis of the acquired data. The Online BookKeeper (OBK) is a component of ATLAS online software that stores all information collected while running the experiment, including the Meta-data associated with the event acquisition, triggering and storage. The facilities for acquisition of control data within the on-line software framework, together with a fully functional Web interface, make the OBK a powerful tool containing all information needed for event analysis, including an electronic log book. In this paper we explain how the OBK plays a role as one of the main collectors and managers of Meta-data produced on-line, and we also focus on the Web facilities already available. The usage of the web interface as an electronic run logbook is also explained, together with the future extensions. We describe...

  10. DEVELOPMENT OF A METADATA MANAGEMENT SYSTEM FOR AN INTERDISCIPLINARY RESEARCH PROJECT

    Directory of Open Access Journals (Sweden)

    C. Curdt

    2012-07-01

    In every interdisciplinary, long-term research project it is essential to manage and archive all heterogeneous research data produced by the project participants during the project's funding period. This has to include sustainable storage, description with metadata, easy and secure provision, backup, and visualisation of all data. To ensure the accurate description of all project data with corresponding metadata, the design and implementation of a metadata management system is a significant duty. Thus, the sustainable use and searchability of all research results during and after the end of the project are particularly dependent on the implementation of a metadata management system. This paper therefore describes the practical experiences gained during the development of a scientific research data management system (called the TR32DB), including the corresponding metadata management system, for the multidisciplinary research project Transregional Collaborative Research Centre 32 (CRC/TR32) 'Patterns in Soil-Vegetation-Atmosphere Systems'. The entire system was developed according to the requirements of the funding agency, the user and project requirements, as well as recent standards and principles. The TR32DB is basically a combination of data storage, database, and web interface. The metadata management system was designed, realized, and implemented to describe and access all project data via accurate metadata. Since the quantity and kind of descriptive metadata depend on the type of data, a user-friendly multi-level approach was chosen to cover these requirements. On this basis, the self-developed CRC/TR32 metadata framework was designed. It is a combination of general, CRC/TR32-specific, as well as data-type-specific properties.

  11. Service Processes as a Sequence of Events

    NARCIS (Netherlands)

    P.C. Verhoef (Peter); G. Antonides (Gerrit); A.N. de Hoog

    2002-01-01

    In this paper the service process is considered as a sequence of events. Using theory from economics and psychology, a model is formulated that explains how the utility of each event affects the overall evaluation of the service process. In this model we especially account for the
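
    As a hedged illustration only, one simple way such an event-level aggregation could be written is the additive form below; the weights and the peak- and final-event terms are assumptions for exposition, not the authors' specification.

        % Illustrative aggregation of event utilities into an overall evaluation;
        % the weights w_t and the peak/end terms are assumed, not taken from the paper.
        \[
          U_{\mathrm{process}} \;=\; \sum_{t=1}^{T} w_t\, u(e_t)
          \;+\; \gamma \max_{1 \le t \le T} u(e_t)
          \;+\; \delta\, u(e_T),
          \qquad \sum_{t=1}^{T} w_t = 1,
        \]
        % where u(e_t) is the utility of event e_t in a process of T events, and the
        % last two terms let the peak event and the final event carry extra weight.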

  12. DataNet: A flexible metadata overlay over file resources

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    Managing and sharing data stored in files is a challenge due to the data volumes produced by various scientific experiments [1]. While solutions such as Globus Online [2] focus on file transfer and synchronization, in this work we propose an additional layer of metadata over file resources which helps to categorize and structure the data, as well as to make it efficient in integration with web-based research gateways. A basic concept of the proposed solution [3] is a data model consisting of entities built from primitive types such as numbers and texts, as well as from files and relationships among different entities. This allows for building complex data structure definitions and mixing metadata and file data into a single model tailored for a given scientific field. A data model becomes actionable after being deployed as a data repository, which is done automatically by the proposed framework using one of the available PaaS (platform-as-a-service) platforms, and is exposed to the world as a REST service, which...

  13. CMO: Cruise Metadata Organizer for JAMSTEC Research Cruises

    Science.gov (United States)

    Fukuda, K.; Saito, H.; Hanafusa, Y.; Vanroosebeke, A.; Kitayama, T.

    2011-12-01

    JAMSTEC's Data Research Center for Marine-Earth Sciences manages and distributes a wide variety of observational data and samples obtained from JAMSTEC research vessels and deep sea submersibles. Generally, metadata are essential to identify how data and samples were obtained. In JAMSTEC, cruise metadata include cruise information such as cruise ID, name of vessel, and research theme, and diving information such as dive number, name of submersible, and position of diving point. They are submitted by the chief scientists of research cruises in Microsoft Excel spreadsheet format, and registered into a data management database to confirm receipt of observational data files, cruise summaries, and cruise reports. The cruise metadata are also published via the "JAMSTEC Data Site for Research Cruises" within two months after the end of a cruise. Furthermore, these metadata are distributed with observational data, images, and samples via several data and sample distribution websites after a publication moratorium period. However, there are two operational issues in the metadata publishing process. One is duplicated effort and asynchronous metadata across multiple distribution websites, due to manual metadata entry into individual websites by administrators. The other is differing data types and representations of metadata on each website. To solve these problems, we have developed a cruise metadata organizer (CMO) which allows cruise metadata to be connected from the data management database to several distribution websites. CMO comprises three components: an Extensible Markup Language (XML) database, Enterprise Application Integration (EAI) software, and a web-based interface. An XML database is used because of its flexibility regarding any change of metadata. Daily differential uptake of metadata from the data management database to the XML database is automatically processed via the EAI software. Some metadata are entered into the XML database using the web

  14. Optimising metadata workflows in a distributed information environment

    OpenAIRE

    Robertson, R. John; Barton, Jane

    2005-01-01

    The different purposes present within a distributed information environment create the potential for repositories to enhance their metadata by capitalising on the diversity of metadata available for any given object. This paper presents three conceptual reference models required to achieve this optimisation of metadata workflow: the ecology of repositories, the object lifecycle model, and the metadata lifecycle model. It suggests a methodology for developing the metadata lifecycle model, and ...

  15. An Oracle-based event index for ATLAS

    Science.gov (United States)

    Gallas, E. J.; Dimitrov, G.; Vasileva, P.; Baranowski, Z.; Canali, L.; Dumitru, A.; Formica, A.; ATLAS Collaboration

    2017-10-01

    The ATLAS EventIndex system has amassed a set of key quantities for a large number of ATLAS events into a Hadoop-based infrastructure for the purpose of providing the experiment with a number of event-wise services. Collecting this data in one place provides the opportunity to investigate various storage formats and technologies, assess which best serve the various use cases, and consider what other benefits alternative storage systems provide. In this presentation we describe how the data are imported into an Oracle RDBMS (relational database management system), the services we have built based on this architecture, and our experience with it. We have indexed about 26 billion real data events thus far and have designed the system to accommodate future data with expected rates of 5 and 20 billion events per year. We have found this system offers outstanding performance for some fundamental use cases. In addition, profiting from the co-location of this data with other complementary metadata in ATLAS, the system has been easily extended to perform essential assessments of data integrity and completeness and to identify event duplication, including at what step in processing the duplication occurred.
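
    A duplicate-event check of the kind described above can be sketched as a simple aggregation query; the table and column names are illustrative assumptions rather than the actual EventIndex schema, and SQLite stands in here for the Oracle back end.

        # Find events that appear more than once within the same processing step.
        import sqlite3  # stand-in for an Oracle connection

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
        CREATE TABLE EVENT_INDEX (RUN_NUMBER INT, EVENT_NUMBER INT, PROC_STEP TEXT, GUID TEXT);
        INSERT INTO EVENT_INDEX VALUES
          (358031, 17, 'RAW->ESD', 'a'),
          (358031, 17, 'RAW->ESD', 'b'),
          (358031, 42, 'RAW->ESD', 'c');
        """)

        duplicates = conn.execute("""
            SELECT RUN_NUMBER, EVENT_NUMBER, PROC_STEP, COUNT(*) AS n
            FROM EVENT_INDEX
            GROUP BY RUN_NUMBER, EVENT_NUMBER, PROC_STEP
            HAVING COUNT(*) > 1
        """).fetchall()
        print(duplicates)  # [(358031, 17, 'RAW->ESD', 2)]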

  16. Metadata and network API aspects of a framework for storing and retrieving civil infrastructure monitoring data

    Science.gov (United States)

    Wong, John-Michael; Stojadinovic, Bozidar

    2005-05-01

    A framework has been defined for storing and retrieving civil infrastructure monitoring data over a network. The framework consists of two primary components: metadata and network communications. The metadata component provides the descriptions and data definitions necessary for cataloging and searching monitoring data. The communications component provides Java classes for remotely accessing the data. Packages of Enterprise JavaBeans and data handling utility classes are written to use the underlying metadata information to build real-time monitoring applications. The utility of the framework was evaluated using wireless accelerometers on a shaking table earthquake simulation test of a reinforced concrete bridge column. The NEESgrid data and metadata repository services were used as a backend storage implementation. A web interface was created to demonstrate the utility of the data model and provides an example health monitoring application.

  17. Web Services and Data Enhancements at the Northern California Earthquake Data Center

    Science.gov (United States)

    Neuhauser, D. S.; Zuzlewski, S.; Lombard, P. N.; Allen, R. M.

    2013-12-01

    The Northern California Earthquake Data Center (NCEDC) provides data archive and distribution services for seismological and geophysical data sets that encompass northern California. The NCEDC is enhancing its ability to deliver rapid information through Web Services. NCEDC Web Services use well-established web server and client protocols and REST software architecture to allow users to easily make queries using web browsers or simple program interfaces and to receive the requested data in real-time rather than through batch or email-based requests. Data are returned to the user in the appropriate format such as XML, RESP, simple text, or MiniSEED depending on the service and selected output format. The NCEDC offers the following web services that are compliant with the International Federation of Digital Seismograph Networks (FDSN) web services specifications: (1) fdsn-dataselect: time series data delivered in MiniSEED format, (2) fdsn-station: station and channel metadata and time series availability delivered in StationXML format, (3) fdsn-event: earthquake event information delivered in QuakeML format. In addition, the NCEDC offers the following IRIS-compatible web services: (1) sacpz: provide channel gains, poles, and zeros in SAC format, (2) resp: provide channel response information in RESP format, (3) dataless: provide station and channel metadata in Dataless SEED format. The NCEDC is also developing a web service to deliver time series from pre-assembled event waveform gathers. The NCEDC has waveform gathers for ~750,000 northern and central California events from 1984 to the present, many of which were created by the USGS NCSN prior to the establishment of the joint NCSS (Northern California Seismic System). We are currently adding waveforms to these older event gathers with time series from the UCB networks and other networks with waveforms archived at the NCEDC, and ensuring that the waveform for each channel in the event gathers have the highest
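
    The FDSN-style event service described above can be queried with the standard fdsnws parameters, as in the sketch below; the base URL is an assumption for illustration, so the actual NCEDC endpoint should be taken from its documentation.

        # Request QuakeML for one day of M>=3 events from an assumed FDSN event endpoint.
        import urllib.parse
        import urllib.request

        base = "https://service.ncedc.org/fdsnws/event/1/query"  # assumed endpoint
        params = {
            "starttime": "2014-08-24T00:00:00",
            "endtime": "2014-08-25T00:00:00",
            "minmagnitude": "3.0",
            "format": "xml",  # QuakeML
        }
        url = base + "?" + urllib.parse.urlencode(params)
        with urllib.request.urlopen(url) as response:
            quakeml = response.read().decode("utf-8")
        print(quakeml[:200])  # beginning of the QuakeML document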

  18. ORGANIZATION OF DIGITAL RESOURCES IN REPEC THROUGH REDIF METADATA

    Directory of Open Access Journals (Sweden)

    Salvador Enrique Vazquez Moctezuma

    2018-06-01

    Introduction: The disciplinary repository RePEc (Research Papers in Economics) provides access to a wide range of preprints, journal articles, books, book chapters, and software about the economic and administrative sciences. This repository aggregates bibliographic records produced by different universities, institutes, editors, and authors that work collaboratively following the norms of documentary organization. Objective: In this paper we identify and analyze the functioning of RePEc, including the organization of its files, which is characterized by the use of the Guildford protocol and ReDIF (Research Documentation Information Format) metadata templates of its own for the documentary description. Methodology: Part of this research was studied theoretically in the literature; another part was carried out by observing a series of features visible on the RePEc website and in the archives of a journal that collaborates in this repository. Results: The repository is a decentralized collaborative project, and it also provides several services derived from the metadata analysis. Conclusions: We conclude that the ReDIF templates and the Guildford communication protocol are key elements for organizing records in RePEc, and there is a similarity with the Dublin Core metadata
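
    A ReDIF record of the kind mentioned above can be sketched as a plain set of key-value fields; the field names follow the ReDIF paper template as commonly documented, but both they and the example handle should be treated as assumptions for illustration.

        # Compose a minimal ReDIF-Paper-style template as plain text; values are invented.
        fields = [
            ("Template-Type", "ReDIF-Paper 1.0"),
            ("Author-Name", "Jane Doe"),
            ("Title", "An Example Working Paper"),
            ("Abstract", "Illustrative abstract text."),
            ("Creation-Date", "2018-06"),
            ("File-URL", "https://example.org/papers/example.pdf"),
            ("File-Format", "Application/pdf"),
            ("Handle", "RePEc:zzz:wpaper:0001"),
        ]
        redif_record = "\n".join(f"{key}: {value}" for key, value in fields)
        print(redif_record)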

  19. NCPP's Use of Standard Metadata to Promote Open and Transparent Climate Modeling

    Science.gov (United States)

    Treshansky, A.; Barsugli, J. J.; Guentchev, G.; Rood, R. B.; DeLuca, C.

    2012-12-01

    The National Climate Predictions and Projections (NCPP) Platform is developing comprehensive regional and local information about the evolving climate to inform decision making and adaptation planning. This includes both creating and providing tools to create metadata about the models and processes used to create its derived data products. NCPP is using the Common Information Model (CIM), an ontology developed by a broad set of international partners in climate research, as its metadata language. This use of a standard ensures interoperability within the climate community as well as permitting access to the ecosystem of tools and services emerging alongside the CIM. The CIM itself is divided into a general-purpose (UML & XML) schema which structures metadata documents, and a project- or community-specific (XML) Controlled Vocabulary (CV) which constrains the content of metadata documents. NCPP has already modified the CIM Schema to accommodate downscaling models, simulations, and experiments. NCPP is currently developing a CV for use by the downscaling community. Incorporating downscaling into the CIM will lead to several benefits: easy access to the existing CIM Documents describing CMIP5 models and simulations that are being downscaled, access to software tools that have been developed in order to search, manipulate, and visualize CIM metadata, and coordination with national and international efforts such as ES-DOC that are working to make climate model descriptions and datasets interoperable. Providing detailed metadata descriptions which include the full provenance of derived data products will contribute to making that data (and the models and processes which generated that data) more open and transparent to the user community.

  20. Utility of collecting metadata to manage a large scale conditions database in ATLAS

    International Nuclear Information System (INIS)

    Gallas, E J; Albrand, S; Borodin, M; Formica, A

    2014-01-01

    The ATLAS Conditions Database, based on the LCG Conditions Database infrastructure, contains a wide variety of information needed in online data taking and offline analysis. The total volume of ATLAS conditions data is in the multi-Terabyte range. Internally, the active data is divided into 65 separate schemas (each with hundreds of underlying tables) according to overall data taking type, detector subsystem, and whether the data is used offline or strictly online. While each schema has a common infrastructure, each schema's data is entirely independent of other schemas, except at the highest level, where sets of conditions from each subsystem are tagged globally for ATLAS event data reconstruction and reprocessing. The partitioned nature of the conditions infrastructure works well for most purposes, but metadata about each schema is problematic to collect in global tools from such a system because it is only accessible via LCG tools schema by schema. This makes it difficult to get an overview of all schemas, collect interesting and useful descriptive and structural metadata for the overall system, and connect it with other ATLAS systems. This type of global information is needed for time-critical data preparation tasks for data processing and has become more critical as the system has grown in size and diversity. Therefore, a new system has been developed to collect metadata for the management of the ATLAS Conditions Database. The structure and implementation of this metadata repository will be described. In addition, we will report its usage since its inception during LHC Run 1, how it has been exploited in the process of conditions data evolution during LS1 (the current LHC long shutdown) in preparation for Run 2, and long term plans to incorporate more of its information into future ATLAS Conditions Database tools and the overall ATLAS information infrastructure.

  1. INSPIRE: Managing Metadata in a Global Digital Library for High-Energy Physics

    CERN Document Server

    Martin Montull, Javier

    2011-01-01

    Four leading laboratories in the High-Energy Physics (HEP) field are collaborating to roll out the next-generation scientific information portal: INSPIRE. The goal of this project is to replace the popular 40-year-old SPIRES database. INSPIRE already provides access to about 1 million records and includes services such as fulltext search, automatic keyword assignment, ingestion and automatic display of LaTeX, citation analysis, automatic author disambiguation, metadata harvesting, extraction of figures from fulltext and search in figure captions. In order to achieve high-quality metadata, both automatic processing and manual curation are needed. The different tools available in the system use modern web technologies to provide the curators with maximum efficiency, while dealing with the MARC standard format. The project is under heavy development in order to provide new features including semantic analysis, crowdsourcing of metadata curation, user tagging, recommender systems, integration of OAIS standards a...

  2. Metadata Wizard: an easy-to-use tool for creating FGDC-CSDGM metadata for geospatial datasets in ESRI ArcGIS Desktop

    Science.gov (United States)

    Ignizio, Drew A.; O'Donnell, Michael S.; Talbert, Colin B.

    2014-01-01

    Creating compliant metadata for scientific data products is mandated for all federal Geographic Information Systems professionals and is a best practice for members of the geospatial data community. However, the complexity of the Federal Geographic Data Committee's Content Standards for Digital Geospatial Metadata, the limited availability of easy-to-use tools, and recent changes in the ESRI software environment continue to make metadata creation a challenge. Staff at the U.S. Geological Survey Fort Collins Science Center have developed a Python toolbox for ESRI ArcDesktop to facilitate a semi-automated workflow to create and update metadata records in ESRI's 10.x software. The U.S. Geological Survey Metadata Wizard tool automatically populates several metadata elements: the spatial reference, spatial extent, geospatial presentation format, vector feature count or raster column/row count, native system/processing environment, and the metadata creation date. Once the software auto-populates these elements, users can easily add attribute definitions and other relevant information in a simple Graphical User Interface. The tool, which offers a simple design free of esoteric metadata language, has the potential to save many government and non-government organizations a significant amount of time and cost by facilitating the development of metadata compliant with the Federal Geographic Data Committee's Content Standards for Digital Geospatial Metadata for ESRI software users. A working version of the tool is now available for ESRI ArcDesktop, versions 10.0, 10.1, and 10.2 (downloadable at http://www.sciencebase.gov/metadatawizard).

  3. The Machinic Temporality of Metadata

    Directory of Open Access Journals (Sweden)

    Claudio Celis

    2015-03-01

    In 1990 Deleuze introduced the hypothesis that disciplinary societies are gradually being replaced by a new logic of power: control. Accordingly, Matteo Pasquinelli has recently argued that we are moving towards societies of metadata, which correspond to a new stage of what Deleuze called control societies. Societies of metadata are characterised by the central role that meta-information acquires both as a source of surplus value and as an apparatus of social control. The aim of this article is to develop Pasquinelli’s thesis by examining the temporal scope of these emerging societies of metadata. In particular, this article employs Guattari’s distinction between human and machinic times. Through these two concepts, this article attempts to show how societies of metadata combine the two poles of capitalist power formations as identified by Deleuze and Guattari, i.e. social subjection and machinic enslavement. It begins by presenting the notion of metadata in order to identify some of the defining traits of contemporary capitalism. It then examines Berardi’s account of the temporality of the attention economy from the perspective of the asymmetric relation between cyber-time and human time. The third section challenges Berardi’s definition of the temporality of the attention economy by using Guattari’s notions of human and machinic times. Parts four and five fall back upon Deleuze and Guattari’s notions of machinic surplus labour and machinic enslavement, respectively. The concluding section tries to show that machinic and human times constitute two poles of contemporary power formations that articulate the temporal dimension of societies of metadata.

  4. A Flexible Online Metadata Editing and Management System

    Energy Technology Data Exchange (ETDEWEB)

    Aguilar, Raul [Arizona State University; Pan, Jerry Yun [ORNL; Gries, Corinna [Arizona State University; Inigo, Gil San [University of New Mexico, Albuquerque; Palanisamy, Giri [ORNL

    2010-01-01

    A metadata editing and management system is being developed employing state-of-the-art XML technologies. A modular and distributed design was chosen for scalability, flexibility, options for customization, and the possibility to add more functionality at a later stage. The system consists of a desktop design tool, or schema walker, used to generate code for the actual online editor, a native XML database, and an online user access management application. The design tool is a Java Swing application that reads an XML schema and provides the designer with options to combine input fields into online forms and give the fields user-friendly tags. Based on design decisions, the tool generates code for the online metadata editor. The code generated is an implementation of the XForms standard using the Orbeon Framework. The design tool fulfills two requirements: first, data entry forms based on one schema may be customized at design time; and second, data entry applications may be generated for any valid XML schema without relying on custom information in the schema. However, the customized information generated at design time is saved in a configuration file which may be re-used and changed again in the design tool. Future developments will add functionality to the design tool to integrate help text, tool tips, project-specific keyword lists, and thesaurus services. Additional styling of the finished editor is accomplished via cascading style sheets, which may be further customized, and different look-and-feels may be accumulated through the community process. The customized editor produces XML files in compliance with the original schema; however, data from the current page is saved into a native XML database whenever the user moves to the next screen or pushes the save button, independently of validity. Currently the system uses the open source XML database eXist for storage and management, which comes with third-party online and desktop management tools. However, access to

  5. Incorporating ISO Metadata Using HDF Product Designer

    Science.gov (United States)

    Jelenak, Aleksandar; Kozimor, John; Habermann, Ted

    2016-01-01

    The need to store increasing amounts of metadata of varying complexity in HDF5 files is greatly outstripping the capabilities of the Earth science metadata conventions currently in use. Until now, data producers have had little choice but to come up with ad hoc solutions to this challenge. Such solutions, in turn, pose a wide range of issues for data managers, distributors, and, ultimately, data users. The HDF Group is experimenting with a novel approach of using ISO 19115 metadata objects as a catch-all container for all the metadata that cannot be fitted into the current Earth science data conventions. This presentation will showcase how the HDF Product Designer software can be utilized to help data producers include various ISO metadata objects in their products.

  6. Evaluating the privacy properties of telephone metadata

    Science.gov (United States)

    Mayer, Jonathan; Mutchler, Patrick; Mitchell, John C.

    2016-01-01

    Since 2013, a stream of disclosures has prompted reconsideration of surveillance law and policy. One of the most controversial principles, both in the United States and abroad, is that communications metadata receives substantially less protection than communications content. Several nations currently collect telephone metadata in bulk, including on their own citizens. In this paper, we attempt to shed light on the privacy properties of telephone metadata. Using a crowdsourcing methodology, we demonstrate that telephone metadata is densely interconnected, can trivially be reidentified, and can be used to draw sensitive inferences. PMID:27185922

  7. From European Standard to User Service

    DEFF Research Database (Denmark)

    Jacobi, Ole Illum; Lind, Morten

    1997-01-01

    Today’s public administration and planning need access to proper spatial information. The tremendous growth in the area of maps and other geographically referenced databases increases the needs of the user as well as the supplier of information for an overview of the jungle of spatial data.... The answer to this need is a metadata service that gives relevant and up-to-date, at-your-fingertips information on available geographical datasets. As a result of the work in the standardization organizations, we are now, luckily, able to take the first steps towards an implementation of metadata services...... in the design of the next generation of metadata services. On the basis of recent Danish experiences with implementation of the CEN/TC 287 standard into a WWW Geographical Information metadata service, we will present and discuss some general issues: the conceptual strategy, the implementation of dataset...

  8. U.S. EPA Metadata Editor (EME)

    Data.gov (United States)

    U.S. Environmental Protection Agency — The EPA Metadata Editor (EME) allows users to create geospatial metadata that meets EPA's requirements. The tool has been developed as a desktop application that...

  9. The PDS4 Metadata Management System

    Science.gov (United States)

    Raugh, A. C.; Hughes, J. S.

    2018-04-01

    We present the key features of the Planetary Data System (PDS) PDS4 Information Model as an extendable metadata management system for planetary metadata related to data structure, analysis/interpretation, and provenance.

  10. NERIES: Seismic Data Gateways and User Composed Datasets Metadata Management

    Science.gov (United States)

    Spinuso, Alessandro; Trani, Luca; Kamb, Linus; Frobert, Laurent

    2010-05-01

    One of the main objectives of the NERIES EC project is to establish and improve the networking of seismic waveform data exchange and access among four main data centers in Europe: INGV, GFZ, ORFEUS and IPGP. Besides the implementation of the data backbone, several investigations and developments have been conducted in order to offer users the data available from this network, either programmatically or interactively. One of the challenges is to understand how to enable users' activities such as discovering, aggregating, describing and sharing datasets, so as to reduce the replication of similar data queries towards the network and exempt the data centers from having to guess at and create useful pre-packaged products. We have started to transfer this task more and more towards the user community, where the users' composed data products can be extensively re-used. The main link to the data is represented by a centralized webservice (SeismoLink) acting as a single access point to the whole data network. Users can download either waveform data or seismic station inventories directly from their own software routines by connecting to this webservice, which routes the request to the data centers. The provenance of the data is maintained and transferred to the users in the form of URIs, which identify the dataset and implicitly refer to the data provider. SeismoLink, combined with other webservices (e.g. the EMSC QuakeML earthquake catalog service), is used from a community gateway such as the NERIES web portal (http://www.seismicportal.eu). Here the user interacts with a map-based portlet which allows the dynamic composition of a data product, binding seismic event parameters with a set of seismic stations. The requested data are collected by the back-end processes of the portal, preserved, and offered to the user in a personal data cart, where metadata can be generated interactively on demand. The metadata, expressed in RDF, can also be remotely ingested. They offer rating

  11. Data, Metadata, and Ted

    OpenAIRE

    Borgman, Christine L.

    2014-01-01

    Ted Nelson coined the term “hypertext” and developed Xanadu in a universe parallel to the one in which librarians, archivists, and documentalists were creating metadata to establish cross-connections among the myriad topics of this world. When these universes collided, comets exploded as ontologies proliferated. Black holes were formed as data disappeared through lack of description. Today these universes coexist, each informing the other, if not always happily: the formal rules of metadata, ...

  12. The Value of Data and Metadata Standardization for Interoperability in Giovanni

    Science.gov (United States)

    Smit, C.; Hegde, M.; Strub, R. F.; Bryant, K.; Li, A.; Petrenko, M.

    2017-12-01

    Giovanni (https://giovanni.gsfc.nasa.gov/giovanni/) is a data exploration and visualization tool at the NASA Goddard Earth Sciences Data Information Services Center (GES DISC). It has been around in one form or another for more than 15 years. Giovanni calculates simple statistics and produces 22 different visualizations for more than 1600 geophysical parameters from more than 90 satellite and model products. Giovanni relies on external data format standards to ensure interoperability, including the NetCDF CF Metadata Conventions. Unfortunately, these standards were insufficient to make Giovanni's internal data representation truly simple to use. Finding and working with dimensions can be convoluted with the CF Conventions. Furthermore, the CF Conventions are silent on machine-friendly descriptive metadata such as the parameter's source product and product version. In order to simplify analyzing disparate earth science data parameters in a unified way, we developed Giovanni's internal standard. First, the format standardizes parameter dimensions and variables so they can be easily found. Second, the format adds all the machine-friendly metadata Giovanni needs to present our parameters to users in a consistent and clear manner. At a glance, users can grasp all the pertinent information about parameters both during parameter selection and after visualization. This poster gives examples of how our metadata and data standards, both external and internal, have both simplified our code base and improved our users' experiences.
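
    To make the role of CF attributes concrete, the sketch below writes a small file whose coordinate variables carry the CF units and standard names that make dimensions findable, plus two extra provenance attributes; the variable names and the provenance attribute names are assumptions for illustration, not Giovanni's actual internal convention.

        # Write a tiny NetCDF file with CF-style coordinates and extra provenance attributes.
        import numpy as np
        from netCDF4 import Dataset

        ds = Dataset("example.nc", "w")
        ds.createDimension("lat", 3)
        ds.createDimension("lon", 4)

        lat = ds.createVariable("lat", "f4", ("lat",))
        lat.units = "degrees_north"      # CF units make the latitude dimension findable
        lat.standard_name = "latitude"
        lat[:] = [-10.0, 0.0, 10.0]

        lon = ds.createVariable("lon", "f4", ("lon",))
        lon.units = "degrees_east"
        lon.standard_name = "longitude"
        lon[:] = [0.0, 90.0, 180.0, 270.0]

        temp = ds.createVariable("surface_temperature", "f4", ("lat", "lon"))
        temp.units = "K"
        temp.long_name = "Surface Temperature"
        # Machine-friendly provenance that the CF Conventions do not prescribe
        # (attribute names assumed for illustration):
        temp.setncattr("product_short_name", "EXAMPLE_PRODUCT")
        temp.setncattr("product_version", "7")
        temp[:] = np.full((3, 4), 288.0, dtype="f4")

        ds.close()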

  13. CMR Catalog Service for the Web

    Science.gov (United States)

    Newman, Doug; Mitchell, Andrew

    2016-01-01

    With the impending retirement of the Global Change Master Directory (GCMD) Application Programming Interfaces (APIs), the Common Metadata Repository (CMR) was charged with providing a collection-level Catalog Service for the Web (CSW) that provides the same level of functionality as GCMD. This talk describes the capabilities of the CMR CSW API, with particular reference to the support of the Committee on Earth Observation Satellites (CEOS) Working Group on Information Systems and Services (WGISS) Integrated Catalog (CWIC).

  14. Development of a Metadata Generator Application for Cultural Heritage Collections

    Directory of Open Access Journals (Sweden)

    Wimba Agra Wicesa

    2017-03-01

    Cultural heritage is an important asset used as a source of information in the study of history. Managing cultural heritage data is something that must be attended to in order to preserve the integrity of cultural heritage data in the future. Creating metadata for cultural heritage is one step that can be taken to preserve the value of an artifact. By using the metadata concept, information about each cultural heritage object becomes easy to read, manage, and retrieve, even after being stored for a long time. In addition, with the metadata concept, information about cultural heritage can be used by many systems. Cultural heritage metadata are quite large, so building them takes considerable time; human error can also hamper the process of building cultural heritage metadata. Generating cultural heritage metadata through the Metadata Generator application becomes faster and easier because it is performed automatically by the system. The application also reduces human error, making the generation process more efficient.

  15. QualityML: a dictionary for quality metadata encoding

    Science.gov (United States)

    Ninyerola, Miquel; Sevillano, Eva; Serral, Ivette; Pons, Xavier; Zabala, Alaitz; Bastin, Lucy; Masó, Joan

    2014-05-01

    The scenario of rapidly growing geodata catalogues requires tools that help users choose products. Having quality fields populated in metadata allows users to rank and then select the best fit-for-purpose products. In this direction, we have developed QualityML (http://qualityml.geoviqua.org), a dictionary that contains hierarchically structured concepts to precisely define and relate quality levels: from quality classes to quality measurements. Generically, a quality element is the path that goes from the highest level (quality class) to the lowest levels (statistics or quality metrics). This path is used to encode the quality of datasets in the corresponding metadata schemas. The benefits of encoded quality, in the case of data producers, are improved product discovery and better transmission of product characteristics. Data users, particularly decision-makers, would find quality and uncertainty measures that support sound decisions as well as dataset intercomparison. It also allows other components (such as visualization, discovery, or comparison tools) to be quality-aware and interoperable. On one hand, QualityML is a profile of the ISO geospatial metadata standards providing a set of rules, structured in 6 levels, for precisely documenting quality indicator parameters. On the other hand, QualityML includes semantics and vocabularies for the quality concepts. Whenever possible, it uses statistical expressions from the UncertML dictionary (http://www.uncertml.org); however, it also extends UncertML to provide a list of alternative metrics that are commonly used to quantify quality. A specific example, based on a temperature dataset, is shown below. The annual mean temperature map has been validated with independent in-situ measurements to obtain a global error of 0.5 °C. Level 0: Quality class (e.g., Thematic accuracy) Level 1: Quality indicator (e.g., Quantitative
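
    The level structure can be pictured with a small, purely illustrative Python dictionary (field names and nesting are not the actual QualityML encoding, which is ISO-metadata based):

        # Illustrative only: the path from quality class (Level 0) down to a metric.
        quality_element = {
            "quality_class": "Thematic accuracy",                    # Level 0
            "quality_indicator": "Quantitative attribute accuracy",  # Level 1
            "measurement": {                                         # lower levels
                "metric": "Root Mean Square Error",
                "value": 0.5,
                "unit": "degC",
                "evaluation": "independent in-situ measurements",
            },
        }

        path = " > ".join([
            quality_element["quality_class"],
            quality_element["quality_indicator"],
            quality_element["measurement"]["metric"],
        ])
        print(path)  # class > indicator > metric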

  16. Handling multiple metadata streams regarding digital learning material

    NARCIS (Netherlands)

    Roes, J.B.M.; Vuuren, J. van; Verbeij, N.; Nijstad, H.

    2010-01-01

    This paper presents the outcome of a study performed in the Netherlands on handling multiple metadata streams regarding digital learning material. The paper describes the present metadata architecture in the Netherlands, the present suppliers and users of metadata and digital learning materials. It

  17. MetaRNA-Seq: An Interactive Tool to Browse and Annotate Metadata from RNA-Seq Studies

    Directory of Open Access Journals (Sweden)

    Pankaj Kumar

    2015-01-01

    Full Text Available The number of RNA-Seq studies has grown in recent years. The design of RNA-Seq studies varies from very simple (e.g., two-condition case-control) to very complicated (e.g., time series involving multiple samples at each time point with separate drug treatments). Most of these publicly available RNA-Seq studies are deposited in NCBI databases, but their metadata are scattered throughout four different databases: Sequence Read Archive (SRA), Biosample, Bioprojects, and Gene Expression Omnibus (GEO). Although the NCBI web interface is able to provide all of the metadata information, it often requires significant effort to retrieve study- or project-level information by traversing through multiple hyperlinks and going to another page. Moreover, project- and study-level metadata lack manual or automatic curation by categories, such as disease type, time series, case-control, or replicate type, which are vital to comprehending any RNA-Seq study. Here we describe “MetaRNA-Seq,” a new tool for interactively browsing, searching, and annotating RNA-Seq metadata with the capability of semiautomatic curation at the study level.

  18. Using a linked data approach to aid development of a metadata portal to support Marine Strategy Framework Directive (MSFD) implementation

    Science.gov (United States)

    Wood, Chris

    2016-04-01

    Under the Marine Strategy Framework Directive (MSFD), EU Member States are mandated to achieve or maintain 'Good Environmental Status' (GES) in their marine areas by 2020, through a series of Programmes of Measures (PoMs). The Celtic Seas Partnership (CSP), an EU LIFE+ project, aims to support policy makers, special-interest groups, users of the marine environment, and other interested stakeholders on MSFD implementation in the Celtic Seas geographical area. As part of this support, a metadata portal has been built to provide a signposting service to datasets that are relevant to MSFD within the Celtic Seas. To ensure that the metadata have the widest possible reach, a linked data approach was employed to construct the database. Although the metadata are stored in a traditional relational database, they are exposed as linked data via the D2RQ platform, allowing virtual RDF graphs to be generated. SPARQL queries can be executed against the endpoint, allowing any user to manipulate the metadata. D2RQ's mapping language, based on Turtle, was used to map a wide range of relevant ontologies to the metadata (e.g. The Provenance Ontology (prov-o), Ocean Data Ontology (odo), Dublin Core Elements and Terms (dc & dcterms), Friend of a Friend (foaf), and Geospatial ontologies (geo)), allowing users to browse the metadata, either via SPARQL queries or by using D2RQ's HTML interface. The metadata were further enhanced by mapping relevant parameters to the NERC Vocabulary Server, itself built on a SPARQL endpoint. Additionally, a custom web front-end was built to enable users to browse the metadata and express queries through an intuitive graphical user interface that requires no prior knowledge of SPARQL. As well as providing means to browse the data via MSFD-related parameters (Descriptor, Criteria, and Indicator), the metadata records include the dataset's country of origin, the list of organisations involved in the management of the data, and links to any relevant INSPIRE
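
    A minimal sketch of querying such an endpoint from Python (using the SPARQLWrapper package; the endpoint URL, the dcterms predicates and the descriptor filter are assumptions for illustration, not the portal's actual vocabulary):

        # Hypothetical query against a D2RQ-style SPARQL endpoint for dataset titles.
        from SPARQLWrapper import SPARQLWrapper, JSON

        endpoint = SPARQLWrapper("http://example.org/msfd-metadata/sparql")  # hypothetical URL
        endpoint.setQuery("""
            PREFIX dcterms: <http://purl.org/dc/terms/>
            SELECT ?dataset ?title WHERE {
                ?dataset dcterms:title ?title ;
                         dcterms:subject ?descriptor .
                FILTER(CONTAINS(STR(?descriptor), "Descriptor"))
            } LIMIT 10
        """)
        endpoint.setReturnFormat(JSON)
        for row in endpoint.query().convert()["results"]["bindings"]:
            print(row["dataset"]["value"], "-", row["title"]["value"])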

  19. On the Origin of Metadata

    Directory of Open Access Journals (Sweden)

    Sam Coppens

    2012-12-01

    Full Text Available Metadata has been around and has evolved for centuries, albeit not recognized as such. Medieval manuscripts typically had illuminations at the start of each chapter, being both a kind of signature for the author writing the script and a pictorial chapter anchor for the illiterates at the time. Nowadays, there is so much fragmented information on the Internet that users sometimes fail to distinguish the real facts from some bended truth, let alone being able to interconnect different facts. Here, metadata can both act as a noise reducer for detailed recommendations to end users and be the catalyst that interconnects related information. Over time, metadata thus not only has had different modes of information, but furthermore, metadata’s relation of information to meaning, i.e., “semantics”, evolved. Darwin’s evolutionary propositions, from “species have an unlimited reproductive capacity”, over “natural selection”, to “the cooperation of mutations leads to adaptation to the environment”, show remarkable parallels to both metadata’s different modes of information and to its relation of information to meaning over time. In this paper, we will show that the evolution of the use of (meta)data can be mapped to Darwin’s nine evolutionary propositions. As mankind and its behavior are products of an evolutionary process, the evolutionary process of metadata with its different modes of information is on the verge of a new, semantic, era.

  20. ONEMercury: Towards Automatic Annotation of Earth Science Metadata

    Science.gov (United States)

    Tuarob, S.; Pouchard, L. C.; Noy, N.; Horsburgh, J. S.; Palanisamy, G.

    2012-12-01

    Earth sciences have become more data-intensive, requiring access to heterogeneous data collected from multiple places, times, and thematic scales. For example, research on climate change may involve exploring and analyzing observational data such as the migration of animals and temperature shifts across the earth, as well as various model-observation inter-comparison studies. Recently, DataONE, a federated data network built to facilitate access to and preservation of environmental and ecological data, has come to exist. ONEMercury has recently been implemented as part of the DataONE project to serve as a portal for discovering and accessing environmental and observational data across the globe. ONEMercury harvests metadata from the data hosted by multiple data repositories and makes it searchable via a common search interface built upon cutting edge search engine technology, allowing users to interact with the system, intelligently filter the search results on the fly, and fetch the data from distributed data sources. Linking data from heterogeneous sources always has a cost. A problem that ONEMercury faces is the different levels of annotation in the harvested metadata records. Poorly annotated records tend to be missed during the search process as they lack meaningful keywords. Furthermore, such records would not be compatible with the advanced search functionality offered by ONEMercury as the interface requires a metadata record be semantically annotated. The explosion of the number of metadata records harvested from an increasing number of data repositories makes it impossible to annotate the harvested records manually, urging the need for a tool capable of automatically annotating poorly curated metadata records. In this paper, we propose a topic-model (TM) based approach for automatic metadata annotation. Our approach mines topics in the set of well annotated records and suggests keywords for poorly annotated records based on topic similarity. We utilize the
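
    A toy sketch of the general idea (fit a topic model on well-annotated records, then suggest keywords for a sparse record from its dominant topic), using scikit-learn; the documents and parameters are made up and this is not the authors' pipeline:

        # Toy topic-model-based keyword suggestion; not the ONEMercury implementation.
        import numpy as np
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.decomposition import LatentDirichletAllocation

        well_annotated = [
            "soil moisture observations from ground sensors in a forest watershed",
            "sea surface temperature satellite retrievals and ocean climatology",
            "streamflow discharge gauges and precipitation in a river basin",
        ]
        poorly_annotated = "daily discharge measurements"

        vectorizer = CountVectorizer(stop_words="english")
        X = vectorizer.fit_transform(well_annotated)
        lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

        topic_mix = lda.transform(vectorizer.transform([poorly_annotated]))[0]
        dominant = int(np.argmax(topic_mix))
        terms = np.array(vectorizer.get_feature_names_out())
        suggested = terms[np.argsort(lda.components_[dominant])[::-1][:5]]
        print("Suggested keywords:", list(suggested))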

  1. Data Discovery and Access via the Heliophysics Events Knowledgebase (HEK)

    Science.gov (United States)

    Somani, A.; Hurlburt, N. E.; Schrijver, C. J.; Cheung, M.; Freeland, S.; Slater, G. L.; Seguin, R.; Timmons, R.; Green, S.; Chang, L.; Kobashi, A.; Jaffey, A.

    2011-12-01

    The HEK is an integrated system which helps direct scientists to solar events and data from a variety of providers. The system is fully operational and adoption of HEK has been growing since the launch of NASA's SDO mission. In this presentation we describe the different components that comprise HEK. The Heliophysics Events Registry (HER) and Heliophysics Coverage Registry (HCR) form the two major databases behind the system. The HCR allows the user to search on coverage event metadata for a variety of instruments. The HER allows the user to search on annotated event metadata for a variety of instruments. Both the HCR and HER are accessible via a web API which can return search results in machine readable formats (e.g. XML and JSON). A variety of SolarSoft services are also provided to allow users to search the HEK as well as obtain and manipulate data. Other components include: the Event Detection System (EDS), which continually runs feature finding algorithms on SDO data to populate the HER with relevant events; a web form for users to request SDO data cutouts for multiple AIA channels as well as HMI line-of-sight magnetograms; iSolSearch, which allows a user to browse events in the HER and search for specific events over a specific time interval, all within a graphical web page; Panorama, the software tool used for rapid visualization of large volumes of solar image data in multiple channels/wavelengths, from which the user can also easily create WYSIWYG movies and launch the Annotator tool to describe events and features; and EVACS, which provides a JOGL-powered client for the HER and HCR, displaying the searched-for events on a full disk magnetogram of the Sun along with more detailed information for each event.

  2. From CLARIN Component Metadata to Linked Open Data

    NARCIS (Netherlands)

    Durco, M.; Windhouwer, Menzo

    2014-01-01

    In the European CLARIN infrastructure a growing number of resources are described with Component Metadata. In this paper we describe a transformation to make this metadata available as linked data. After this first step it becomes possible to connect the CLARIN Component Metadata with other valuable

  3. Collection Metadata Solutions for Digital Library Applications

    Science.gov (United States)

    Hill, Linda L.; Janee, Greg; Dolin, Ron; Frew, James; Larsgaard, Mary

    1999-01-01

    Within a digital library, collections may range from an ad hoc set of objects that serve a temporary purpose to established library collections intended to persist through time. The objects in these collections vary widely, from library and data center holdings to pointers to real-world objects, such as geographic places, and the various metadata schemas that describe them. The key to integrated use of such a variety of collections in a digital library is collection metadata that represents the inherent and contextual characteristics of a collection. The Alexandria Digital Library (ADL) Project has designed and implemented collection metadata for several purposes: in XML form, the collection metadata "registers" the collection with the user interface client; in HTML form, it is used for user documentation; eventually, it will be used to describe the collection to network search agents; and it is used for internal collection management, including mapping the object metadata attributes to the common search parameters of the system.

  4. NASA Reverb: Standards-Driven Earth Science Data and Service Discovery

    Science.gov (United States)

    Cechini, M. F.; Mitchell, A.; Pilone, D.

    2011-12-01

    NASA's Earth Observing System Data and Information System (EOSDIS) is a core capability in NASA's Earth Science Data Systems Program. NASA's EOS ClearingHOuse (ECHO) is a metadata catalog for the EOSDIS, providing a centralized catalog of data products and registry of related data services. Working closely with the EOSDIS community, the ECHO team identified a need to develop the next generation EOS data and service discovery tool. This development effort relied on the following principles: + Metadata Driven User Interface - Users should be presented with data and service discovery capabilities based on dynamic processing of metadata describing the targeted data. + Integrated Data & Service Discovery - Users should be able to discover data and associated data services that facilitate their research objectives. + Leverage Common Standards - Users should be able to discover and invoke services that utilize common interface standards. Metadata plays a vital role facilitating data discovery and access. As data providers enhance their metadata, more advanced search capabilities become available enriching a user's search experience. Maturing metadata formats such as ISO 19115 provide the necessary depth of metadata that facilitates advanced data discovery capabilities. Data discovery and access is not limited to simply the retrieval of data granules, but is growing into the more complex discovery of data services. These services include, but are not limited to, services facilitating additional data discovery, subsetting, reformatting, and re-projecting. The discovery and invocation of these data services is made significantly simpler through the use of consistent and interoperable standards. By adopting a common standard, standard-specific adapters can be developed to communicate with multiple services implementing a specific protocol. The emergence of metadata standards such as ISO 19119 plays a similarly important role in discovery as the 19115 standard

  5. Provenance metadata gathering and cataloguing of EFIT++ code execution

    Energy Technology Data Exchange (ETDEWEB)

    Lupelli, I., E-mail: ivan.lupelli@ccfe.ac.uk [CCFE, Culham Science Centre, Abingdon, Oxon OX14 3DB (United Kingdom); Muir, D.G.; Appel, L.; Akers, R.; Carr, M. [CCFE, Culham Science Centre, Abingdon, Oxon OX14 3DB (United Kingdom); Abreu, P. [Instituto de Plasmas e Fusão Nuclear, Instituto Superior Técnico, Universidade de Lisboa, 1049-001 Lisboa (Portugal)

    2015-10-15

    Highlights: • An approach for automatic gathering of provenance metadata has been presented. • A provenance metadata catalogue has been created. • The overhead in the code runtime is less than 10%. • The metadata/data size ratio is about ∼20%. • A visualization interface based on Gephi, has been presented. - Abstract: Journal publications, as the final product of research activity, are the result of an extensive complex modeling and data analysis effort. It is of paramount importance, therefore, to capture the origins and derivation of the published data in order to achieve high levels of scientific reproducibility, transparency, internal and external data reuse and dissemination. The consequence of the modern research paradigm is that high performance computing and data management systems, together with metadata cataloguing, have become crucial elements within the nuclear fusion scientific data lifecycle. This paper describes an approach to the task of automatically gathering and cataloguing provenance metadata, currently under development and testing at Culham Center for Fusion Energy. The approach is being applied to a machine-agnostic code that calculates the axisymmetric equilibrium force balance in tokamaks, EFIT++, as a proof of principle test. The proposed approach avoids any code instrumentation or modification. It is based on the observation and monitoring of input preparation, workflow and code execution, system calls, log file data collection and interaction with the version control system. Pre-processing, post-processing, and data export and storage are monitored during the code runtime. Input data signals are captured using a data distribution platform called IDAM. The final objective of the catalogue is to create a complete description of the modeling activity, including user comments, and the relationship between data output, the main experimental database and the execution environment. For an intershot or post-pulse analysis (∼1000

  6. Provenance metadata gathering and cataloguing of EFIT++ code execution

    International Nuclear Information System (INIS)

    Lupelli, I.; Muir, D.G.; Appel, L.; Akers, R.; Carr, M.; Abreu, P.

    2015-01-01

    Highlights: • An approach for automatic gathering of provenance metadata has been presented. • A provenance metadata catalogue has been created. • The overhead in the code runtime is less than 10%. • The metadata/data size ratio is about ∼20%. • A visualization interface based on Gephi, has been presented. - Abstract: Journal publications, as the final product of research activity, are the result of an extensive complex modeling and data analysis effort. It is of paramount importance, therefore, to capture the origins and derivation of the published data in order to achieve high levels of scientific reproducibility, transparency, internal and external data reuse and dissemination. The consequence of the modern research paradigm is that high performance computing and data management systems, together with metadata cataloguing, have become crucial elements within the nuclear fusion scientific data lifecycle. This paper describes an approach to the task of automatically gathering and cataloguing provenance metadata, currently under development and testing at Culham Center for Fusion Energy. The approach is being applied to a machine-agnostic code that calculates the axisymmetric equilibrium force balance in tokamaks, EFIT++, as a proof of principle test. The proposed approach avoids any code instrumentation or modification. It is based on the observation and monitoring of input preparation, workflow and code execution, system calls, log file data collection and interaction with the version control system. Pre-processing, post-processing, and data export and storage are monitored during the code runtime. Input data signals are captured using a data distribution platform called IDAM. The final objective of the catalogue is to create a complete description of the modeling activity, including user comments, and the relationship between data output, the main experimental database and the execution environment. For an intershot or post-pulse analysis (∼1000

  7. Grid production with the ATLAS Event Service

    CERN Document Server

    Benjamin, Douglas; The ATLAS collaboration

    2018-01-01

    ATLAS has developed and previously presented a new computing architecture, the Event Service, that allows real time delivery of fine grained workloads which process dispatched events (or event ranges) and immediately stream outputs. The principal aim was to profit from opportunistic resources such as commercial cloud, supercomputing, and volunteer computing, and otherwise unused cycles on clusters and grids. During the development and deployment phase, its utility for exploiting otherwise unused cycles on the grid and conventional clusters also became apparent. Here we describe our experience commissioning the Event Service on the grid in the ATLAS production system. We study the performance compared with standard simulation production. We describe the integration with the ATLAS data management system to ensure scalability and compatibility with object stores. Finally, we outline the remaining steps towards a fully commissioned system.

  8. Metadata to Support Data Warehouse Evolution

    Science.gov (United States)

    Solodovnikova, Darja

    The focus of this chapter is metadata necessary to support data warehouse evolution. We present the data warehouse framework that is able to track evolution process and adapt data warehouse schemata and data extraction, transformation, and loading (ETL) processes. We discuss the significant part of the framework, the metadata repository that stores information about the data warehouse, logical and physical schemata and their versions. We propose the physical implementation of multiversion data warehouse in a relational DBMS. For each modification of a data warehouse schema, we outline the changes that need to be made to the repository metadata and in the database.

  9. Streamlining geospatial metadata in the Semantic Web

    Science.gov (United States)

    Fugazza, Cristiano; Pepe, Monica; Oggioni, Alessandro; Tagliolato, Paolo; Carrara, Paola

    2016-04-01

    In the geospatial realm, data annotation and discovery rely on a number of ad-hoc formats and protocols. These have been created to enable domain-specific use cases for which generalized search is not feasible. Metadata are at the heart of the discovery process; nevertheless, they are often neglected or encoded in formats that are either not aimed at efficient retrieval of resources or plainly outdated. In particular, the quantum leap represented by the Linked Open Data (LOD) movement has so far not induced a consistent, interlinked baseline in the geospatial domain. In a nutshell, datasets, the scientific literature related to them, and ultimately the researchers behind these products are only loosely connected; the corresponding metadata are intelligible only to humans, duplicated across different systems, and seldom consistent. Instead, our workflow for metadata management envisages i) editing via customizable web-based forms, ii) encoding of records in any XML application profile, iii) translation into RDF (involving the semantic lift of metadata records), and finally iv) storage of the metadata as RDF and back-translation into the original XML format with added semantics-aware features. Phase iii) hinges on relating resource metadata to RDF data structures that represent keywords from code lists and controlled vocabularies, toponyms, researchers, institutes, and virtually any description one can retrieve (or directly publish) in the LOD Cloud. In the context of a distributed Spatial Data Infrastructure (SDI) built on free and open-source software, we detail phases iii) and iv) of our workflow for the semantics-aware management of geospatial metadata.

  10. The Global Streamflow Indices and Metadata Archive (GSIM) - Part 1: The production of a daily streamflow archive and metadata

    Science.gov (United States)

    Do, Hong Xuan; Gudmundsson, Lukas; Leonard, Michael; Westra, Seth

    2018-04-01

    This is the first part of a two-paper series presenting the Global Streamflow Indices and Metadata archive (GSIM), a worldwide collection of metadata and indices derived from more than 35 000 daily streamflow time series. This paper focuses on the compilation of the daily streamflow time series based on 12 free-to-access streamflow databases (seven national databases and five international collections). It also describes the development of three metadata products (freely available at https://doi.pangaea.de/10.1594/PANGAEA.887477): (1) a GSIM catalogue collating basic metadata associated with each time series, (2) catchment boundaries for the contributing area of each gauge, and (3) catchment metadata extracted from 12 gridded global data products representing essential properties such as land cover type, soil type, and climate and topographic characteristics. The quality of the delineated catchment boundary is also made available and should be consulted in GSIM application. The second paper in the series then explores production and analysis of streamflow indices. Having collated an unprecedented number of stations and associated metadata, GSIM can be used to advance large-scale hydrological research and improve understanding of the global water cycle.

  11. The Global Streamflow Indices and Metadata Archive (GSIM) – Part 1: The production of a daily streamflow archive and metadata

    Directory of Open Access Journals (Sweden)

    H. X. Do

    2018-04-01

    Full Text Available This is the first part of a two-paper series presenting the Global Streamflow Indices and Metadata archive (GSIM), a worldwide collection of metadata and indices derived from more than 35 000 daily streamflow time series. This paper focuses on the compilation of the daily streamflow time series based on 12 free-to-access streamflow databases (seven national databases and five international collections). It also describes the development of three metadata products (freely available at https://doi.pangaea.de/10.1594/PANGAEA.887477): (1) a GSIM catalogue collating basic metadata associated with each time series, (2) catchment boundaries for the contributing area of each gauge, and (3) catchment metadata extracted from 12 gridded global data products representing essential properties such as land cover type, soil type, and climate and topographic characteristics. The quality of the delineated catchment boundary is also made available and should be consulted in GSIM application. The second paper in the series then explores production and analysis of streamflow indices. Having collated an unprecedented number of stations and associated metadata, GSIM can be used to advance large-scale hydrological research and improve understanding of the global water cycle.
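
    A generic sketch of deriving simple per-year indices from a daily streamflow series with pandas (the series is synthetic and the index choices are illustrative, not the exact GSIM definitions from Part 2):

        # Illustrative streamflow indices from a synthetic daily series.
        import numpy as np
        import pandas as pd

        days = pd.date_range("2000-01-01", "2004-12-31", freq="D")
        flow = pd.Series(np.random.default_rng(0).gamma(2.0, 5.0, len(days)), index=days)

        annual_stats = flow.groupby(flow.index.year).agg(["mean", "max", "min"])
        low_flow_q05 = flow.groupby(flow.index.year).quantile(0.05)   # low-flow index
        availability = flow.notna().groupby(flow.index.year).mean()   # data completeness

        print(annual_stats)
        print(low_flow_q05)
        print(availability)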

  12. EPOS Data and Service Provision

    Science.gov (United States)

    Bailo, Daniele; Jeffery, Keith G.; Atakan, Kuvvet; Harrison, Matt

    2017-04-01

    EPOS is now in IP (implementation phase) after a successful PP (preparatory phase). EPOS consists of essentially two components: the ICS (Integrated Core Services), representing the integrating ICT (Information and Communication Technology), and many TCS (Thematic Core Services), representing the scientific domains. The architecture developed, demonstrated and agreed within the project during the PP is now being developed utilising co-design with the TCS teams and agile, spiral methods within the ICS team. The 'heart' of EPOS is the metadata catalog. This provides for the ICS a digital representation of the TCS assets (services, data, software, equipment, expertise…), thus facilitating access, interoperation and (re-)use. A major part of the work has been interactions with the TCS. The original intention to harvest information from the TCS required (and still requires) discussions to understand fully the TCS organisational structures linked with rights, security and privacy; their (meta)data syntax (structure) and semantics (meaning); their workflows and methods of working; and the services offered. To complicate matters further, the TCS are each at varying stages of development and the ICS design has to accommodate pre-existing, developing and expected future standards for metadata, data, software and processes. Through information documents, questionnaires and interviews/meetings the EPOS ICS team has collected DDSS (Data, Data Products, Software and Services) information from the TCS. The ICS team developed a simplified metadata model for presentation to the TCS, and will perform the mapping and conversion from this model to the internal detailed technical metadata model using CERIF (an EU recommendation to Member States, maintained, developed and promoted by euroCRIS, www.eurocris.org). At the time of writing the final modifications of the EPOS metadata model are being made, and the mappings to CERIF designed, prior to the main phase of (meta)data

  13. A network analysis using metadata to investigate innovation in clean-tech – Implications for energy policy

    International Nuclear Information System (INIS)

    Marra, Alessandro; Antonelli, Paola; Dell’Anna, Luca; Pozzi, Cesare

    2015-01-01

    Clean-technology (clean-tech) is a large and increasing sector. Research and development (R&D) is the lifeline of the industry and innovation is fostered by a plethora of high-tech start-ups and small and medium-sized enterprises (SMEs). Any empirically based attempt to detect the pattern of technological innovation in the industry is challenging. This paper proposes an investigation of innovation in clean-tech using metadata provided by CrunchBase. Metadata reveal information on markets, products, services and technologies driving innovation in the clean-tech industry worldwide and for San Francisco, the leader in clean-tech innovation with more than two hundred specialised companies. A network analysis using metadata is the employed methodology and the main metrics of the resulting networks are discussed from an economic point of view. The purpose of the paper is to understand the specializations and technological complementarities underlying innovative companies, detect emerging industrial clusters at the global and local/metropolitan level and, finally, suggest a way to assess whether observed start-ups, SMEs and clusters follow a technological path of complementary innovation and market opportunity or, instead, present a risk of lock-in. The discussion of the results of the network analysis shows interesting implications for energy policy, particularly useful from an operational point of view. - Highlights: • Metadata provide information on companies' products and technologies. • A network analysis enables detection of specializations and complementarities. • An investigation of the network allows identification of emerging industrial clusters. • Metrics help to appreciate complementary innovation and market opportunity. • Results of the network analysis show interesting policy implications.
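
    A toy sketch of the underlying idea (a company-technology network built from metadata-like records, with simple structural metrics), using networkx; the companies and tags are invented:

        # Toy company-technology network from metadata-like records; names are made up.
        import networkx as nx

        records = {
            "SolarCo":   ["photovoltaics", "energy storage"],
            "GridSoft":  ["smart grid", "energy storage"],
            "WindWorks": ["wind turbines", "smart grid"],
        }

        G = nx.Graph()
        for company, technologies in records.items():
            for tech in technologies:
                G.add_edge(company, tech)

        # Degree of a technology node ~ how many companies share that specialization.
        print(sorted(G.degree(), key=lambda kv: kv[1], reverse=True))
        # Connected components hint at (toy) clusters of complementary activity.
        print([sorted(c) for c in nx.connected_components(G)])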

  14. A web-based, dynamic metadata interface to MDSplus

    International Nuclear Information System (INIS)

    Gardner, Henry J.; Karia, Raju; Manduchi, Gabriele

    2008-01-01

    We introduce the concept of a Fusion Data Grid and discuss the management of metadata within such a Grid. We describe a prototype application which serves fusion data over the internet together with metadata information which can be flexibly created and modified over time. The application interfaces with the MDSplus data acquisition system and it has been designed to capture metadata which is generated by scientists from the post-processing of experimental data. The implementation of dynamic metadata tables using the Java programming language together with an object-relational mapping system, Hibernate, is described in the Appendix

  15. ATLAS File and Dataset Metadata Collection and Use

    CERN Document Server

    Albrand, S; The ATLAS collaboration; Lambert, F; Gallas, E J

    2012-01-01

    The ATLAS Metadata Interface (“AMI”) was designed as a generic cataloguing system, and as such it has found many uses in the experiment including software release management, tracking of reconstructed event sizes and control of dataset nomenclature. The primary use of AMI is to provide a catalogue of datasets (file collections) which is searchable using physics criteria. In this paper we discuss the various mechanisms used for filling the AMI dataset and file catalogues. By correlating information from different sources we can derive aggregate information which is important for physics analysis; for example the total number of events contained in a dataset, and possible reasons for missing events such as a lost file. Finally we will describe some specialized interfaces which were developed for the Data Preparation and reprocessing coordinators. These interfaces manipulate information from both the dataset domain held in AMI, and the run-indexed information held in the ATLAS COMA application (Conditions and ...

  16. Creating metadata that work for digital libraries and Google

    OpenAIRE

    Dawson, Alan

    2004-01-01

    For many years metadata has been recognised as a significant component of the digital information environment. Substantial work has gone into creating complex metadata schemes for describing digital content. Yet increasingly Web search engines, and Google in particular, are the primary means of discovering and selecting digital resources, although they make little use of metadata. This article considers how digital libraries can gain more value from their metadata by adapting it for Google us...

  17. Technologies for metadata management in scientific a

    OpenAIRE

    Castro-Romero, Alexander; González-Sanabria, Juan S.; Ballesteros-Ricaurte, Javier A.

    2015-01-01

    The use of Semantic Web technologies has been increasing, and it is now common to use them in different ways. This article evaluates how these technologies can contribute to improving the indexing of articles in scientific journals. It begins with a conceptual review of metadata, then studies the most important technologies for the use of metadata on the Web and chooses one of them to apply to the case study of scientific article indexing, in order to determine the metadata ...

  18. The role of metadata in managing large environmental science datasets. Proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Melton, R.B.; DeVaney, D.M. [eds.] [Pacific Northwest Lab., Richland, WA (United States); French, J. C. [Univ. of Virginia, (United States)

    1995-06-01

    The purpose of this workshop was to bring together computer science researchers and environmental sciences data management practitioners to consider the role of metadata in managing large environmental sciences datasets. The objectives included: establishing a common definition of metadata; identifying categories of metadata; defining problems in managing metadata; and defining problems related to linking metadata with primary data.

  19. The Genomes OnLine Database (GOLD) v.5: a metadata management system based on a four level (meta)genome project classification

    Science.gov (United States)

    Reddy, T.B.K.; Thomas, Alex D.; Stamatis, Dimitri; Bertsch, Jon; Isbandi, Michelle; Jansson, Jakob; Mallajosyula, Jyothi; Pagani, Ioanna; Lobos, Elizabeth A.; Kyrpides, Nikos C.

    2015-01-01

    The Genomes OnLine Database (GOLD; http://www.genomesonline.org) is a comprehensive online resource to catalog and monitor genetic studies worldwide. GOLD provides up-to-date status on complete and ongoing sequencing projects along with a broad array of curated metadata. Here we report version 5 (v.5) of the database. The newly designed database schema and web user interface supports several new features including the implementation of a four level (meta)genome project classification system and a simplified intuitive web interface to access reports and launch search tools. The database currently hosts information for about 19 200 studies, 56 000 Biosamples, 56 000 sequencing projects and 39 400 analysis projects. More than just a catalog of worldwide genome projects, GOLD is a manually curated, quality-controlled metadata warehouse. The problems encountered in integrating disparate and varying quality data into GOLD are briefly highlighted. GOLD fully supports and follows the Genomic Standards Consortium (GSC) Minimum Information standards. PMID:25348402

  20. The Genomes OnLine Database (GOLD) v.5: a metadata management system based on a four level (meta)genome project classification

    Energy Technology Data Exchange (ETDEWEB)

    Reddy, Tatiparthi B. K. [USDOE Joint Genome Institute (JGI), Walnut Creek, CA (United States); Thomas, Alex D. [USDOE Joint Genome Institute (JGI), Walnut Creek, CA (United States); Stamatis, Dimitri [USDOE Joint Genome Institute (JGI), Walnut Creek, CA (United States); Bertsch, Jon [USDOE Joint Genome Institute (JGI), Walnut Creek, CA (United States); Isbandi, Michelle [USDOE Joint Genome Institute (JGI), Walnut Creek, CA (United States); Jansson, Jakob [USDOE Joint Genome Institute (JGI), Walnut Creek, CA (United States); Mallajosyula, Jyothi [USDOE Joint Genome Institute (JGI), Walnut Creek, CA (United States); Pagani, Ioanna [USDOE Joint Genome Institute (JGI), Walnut Creek, CA (United States); Lobos, Elizabeth A. [USDOE Joint Genome Institute (JGI), Walnut Creek, CA (United States); Kyrpides, Nikos C. [USDOE Joint Genome Institute (JGI), Walnut Creek, CA (United States); King Abdulaziz Univ., Jeddah (Saudi Arabia)

    2014-10-27

    The Genomes OnLine Database (GOLD; http://www.genomesonline.org) is a comprehensive online resource to catalog and monitor genetic studies worldwide. GOLD provides up-to-date status on complete and ongoing sequencing projects along with a broad array of curated metadata. Within this paper, we report version 5 (v.5) of the database. The newly designed database schema and web user interface supports several new features including the implementation of a four level (meta)genome project classification system and a simplified intuitive web interface to access reports and launch search tools. The database currently hosts information for about 19 200 studies, 56 000 Biosamples, 56 000 sequencing projects and 39 400 analysis projects. More than just a catalog of worldwide genome projects, GOLD is a manually curated, quality-controlled metadata warehouse. The problems encountered in integrating disparate and varying quality data into GOLD are briefly highlighted. Lastly, GOLD fully supports and follows the Genomic Standards Consortium (GSC) Minimum Information standards.

  1. Mining Building Metadata by Data Stream Comparison

    DEFF Research Database (Denmark)

    Holmegaard, Emil; Kjærgaard, Mikkel Baun

    2016-01-01

    ... ways to annotate sensor and actuation points. This makes it difficult to create intuitive queries for retrieving data streams from points. Another problem is the amount of insufficient or missing metadata. We introduce Metafier, a tool for extracting metadata from comparing data streams. Metafier enables a semi-automatic labeling of metadata to building instrumentation. Metafier annotates points with metadata by comparing the data from a set of validated points with unvalidated points. Metafier has three different algorithms to compare points with based on their data. The three algorithms ... to handle data streams with only slightly similar patterns. We have evaluated Metafier with points and data from one building located in Denmark. We have evaluated Metafier with 903 points, and the overall accuracy, with only 3 known examples, was 94.71%. Furthermore we found that using DTW for mining ...
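
    One plausible reading of the comparison idea is sketched below: a plain-Python dynamic time warping (DTW) distance and labeling of an unvalidated point by its nearest validated point. The data and point names are invented and this is not the Metafier code:

        # Label an unvalidated point by DTW similarity to validated points (toy data).
        import numpy as np

        def dtw_distance(a, b):
            # Classic O(n*m) DTW with absolute-difference local cost.
            n, m = len(a), len(b)
            cost = np.full((n + 1, m + 1), np.inf)
            cost[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    d = abs(a[i - 1] - b[j - 1])
                    cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
            return cost[n, m]

        validated = {                      # point name -> (metadata label, data stream)
            "room1_temp": ("temperature", np.array([20.1, 20.3, 21.0, 22.4, 21.8])),
            "room1_co2":  ("co2",         np.array([410, 430, 520, 640, 600], float)),
        }
        unvalidated = np.array([19.8, 20.0, 20.9, 22.1, 21.5])

        label, _ = min(validated.values(), key=lambda v: dtw_distance(unvalidated, v[1]))
        print("Suggested metadata label:", label)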

  2. Advancements in Large-Scale Data/Metadata Management for Scientific Data.

    Science.gov (United States)

    Guntupally, K.; Devarakonda, R.; Palanisamy, G.; Frame, M. T.

    2017-12-01

    Scientific data often comes with complex and diverse metadata which are critical for data discovery and users. The Online Metadata Editor (OME) tool, which was developed by an Oak Ridge National Laboratory team, effectively manages diverse scientific datasets across several federal data centers, such as DOE's Atmospheric Radiation Measurement (ARM) Data Center and USGS's Core Science Analytics, Synthesis, and Libraries (CSAS&L) project. This presentation will focus mainly on recent developments and future strategies for refining OME tool within these centers. The ARM OME is a standard based tool (https://www.archive.arm.gov/armome) that allows scientists to create and maintain metadata about their data products. The tool has been improved with new workflows that help metadata coordinators and submitting investigators to submit and review their data more efficiently. The ARM Data Center's newly upgraded Data Discovery Tool (http://www.archive.arm.gov/discovery) uses rich metadata generated by the OME to enable search and discovery of thousands of datasets, while also providing a citation generator and modern order-delivery techniques like Globus (using GridFTP), Dropbox and THREDDS. The Data Discovery Tool also supports incremental indexing, which allows users to find new data as and when they are added. The USGS CSAS&L search catalog employs a custom version of the OME (https://www1.usgs.gov/csas/ome), which has been upgraded with high-level Federal Geographic Data Committee (FGDC) validations and the ability to reserve and mint Digital Object Identifiers (DOIs). The USGS's Science Data Catalog (SDC) (https://data.usgs.gov/datacatalog) allows users to discover a myriad of science data holdings through a web portal. Recent major upgrades to the SDC and ARM Data Discovery Tool include improved harvesting performance and migration using new search software, such as Apache Solr 6.0 for serving up data/metadata to scientific communities. Our presentation will highlight

  3. Development of an open metadata schema for prospective clinical research (openPCR) in China.

    Science.gov (United States)

    Xu, W; Guan, Z; Sun, J; Wang, Z; Geng, Y

    2014-01-01

    In China, deployment of electronic data capture (EDC) and clinical data management systems (CDMS) for clinical research (CR) is in its very early stage, and about 90% of clinical studies collected and submitted clinical data manually. This work aims to build an open metadata schema for Prospective Clinical Research (openPCR) in China based on openEHR archetypes, in order to help Chinese researchers easily create specific data entry templates for registration, study design and clinical data collection. The Singapore Framework for Dublin Core Application Profiles (DCAP) is used to develop openPCR, following four steps: defining the core functional requirements and deducing the core metadata items; developing archetype models; defining metadata terms and creating archetype records; and finally developing the implementation syntax. The core functional requirements are divided into three categories: requirements for research registration, requirements for trial design, and requirements for case report forms (CRF). 74 metadata items are identified and their Chinese authority names are created. The minimum metadata set of openPCR includes 3 documents, 6 sections, 26 top level data groups, 32 lower data groups and 74 data elements. The top level container in openPCR is composed of public document, internal document and clinical document archetypes. A hierarchical structure of openPCR is established according to the Data Structure of Electronic Health Record Architecture and Data Standard of China (Chinese EHR Standard). Metadata attributes are grouped into six parts: identification, definition, representation, relation, usage guides, and administration. OpenPCR is an open metadata schema based on research registration standards, standards of the Clinical Data Interchange Standards Consortium (CDISC) and Chinese healthcare related standards, and is to be publicly available throughout China. It considers future integration of EHR and CR by adopting data structure and data

  4. Handling Metadata in a Neurophysiology Laboratory

    Directory of Open Access Journals (Sweden)

    Lyuba Zehl

    2016-07-01

    Full Text Available To date, non-reproducibility of neurophysiological research is a matter of intense discussion in the scientific community. A crucial component to enhance reproducibility is to comprehensively collect and store metadata, that is all information about the experiment, the data, and the applied preprocessing steps on the data, such that they can be accessed and shared in a consistent and simple manner. However, the complexity of experiments, the highly specialized analysis workflows and a lack of knowledge on how to make use of supporting software tools often overburden researchers to perform such a detailed documentation. For this reason, the collected metadata are often incomplete, incomprehensible for outsiders or ambiguous. Based on our research experience in dealing with diverse datasets, we here provide conceptual and technical guidance to overcome the challenges associated with the collection, organization, and storage of metadata in a neurophysiology laboratory. Through the concrete example of managing the metadata of a complex experiment that yields multi-channel recordings from monkeys performing a behavioral motor task, we practically demonstrate the implementation of these approaches and solutions with the intention that they may be generalized to a specific project at hand. Moreover, we detail five use cases that demonstrate the resulting benefits of constructing a well-organized metadata collection when processing or analyzing the recorded data, in particular when these are shared between laboratories in a modern scientific collaboration. Finally, we suggest an adaptable workflow to accumulate, structure and store metadata from different sources using, by way of example, the odML metadata framework.
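
    A minimal illustration of the kind of hierarchical, key-value metadata organization discussed (the structure and field names below are invented; the actual work uses the odML framework rather than plain JSON):

        # Invented example of hierarchically organized session metadata, saved as JSON.
        import json

        metadata = {
            "Subject": {"species": "Macaca mulatta", "id": "monkey_01"},
            "Task": {"type": "delayed reach-to-grasp", "n_trials": 420},
            "Recording": {
                "system": "multi-channel array",
                "n_channels": 96,
                "sampling_rate_hz": 30000,
            },
            "Preprocessing": {"filter": "bandpass 0.3-7500 Hz", "spike_sorting": "manual"},
        }

        with open("session_metadata.json", "w") as fh:
            json.dump(metadata, fh, indent=2)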

  5. Integrating Semantic Information in Metadata Descriptions for a Geoscience-wide Resource Inventory.

    Science.gov (United States)

    Zaslavsky, I.; Richard, S. M.; Gupta, A.; Valentine, D.; Whitenack, T.; Ozyurt, I. B.; Grethe, J. S.; Schachne, A.

    2016-12-01

    Integrating semantic information into legacy metadata catalogs is a challenging issue and so far has been mostly done on a limited scale. We present experience of CINERGI (Community Inventory of Earthcube Resources for Geoscience Interoperability), an NSF Earthcube Building Block project, in creating a large cross-disciplinary catalog of geoscience information resources to enable cross-domain discovery. The project developed a pipeline for automatically augmenting resource metadata, in particular generating keywords that describe metadata documents harvested from multiple geoscience information repositories or contributed by geoscientists through various channels including surveys and domain resource inventories. The pipeline examines available metadata descriptions using text parsing, vocabulary management and semantic annotation and graph navigation services of GeoSciGraph. GeoSciGraph, in turn, relies on a large cross-domain ontology of geoscience terms, which bridges several independently developed ontologies or taxonomies including SWEET, ENVO, YAGO, GeoSciML, GCMD, SWO, and CHEBI. The ontology content enables automatic extraction of keywords reflecting science domains, equipment used, geospatial features, measured properties, methods, processes, etc. We specifically focus on issues of cross-domain geoscience ontology creation, resolving several types of semantic conflicts among component ontologies or vocabularies, and constructing and managing facets for improved data discovery and navigation. The ontology and keyword generation rules are iteratively improved as pipeline results are presented to data managers for selective manual curation via a CINERGI Annotator user interface. We present lessons learned from applying CINERGI metadata augmentation pipeline to a number of federal agency and academic data registries, in the context of several use cases that require data discovery and integration across multiple earth science data catalogs of varying quality

  6. Revision of IRIS/IDA Seismic Station Metadata

    Science.gov (United States)

    Xu, W.; Davis, P.; Auerbach, D.; Klimczak, E.

    2017-12-01

    Trustworthy data quality assurance has always been one of the goals of seismic network operators and data management centers. This task is considerably complex and evolving due to the huge quantities as well as the rapidly changing characteristics and complexities of seismic data. Published metadata usually reflect instrument response characteristics and their accuracies, which include zero frequency sensitivity for both seismometer and data logger as well as other, frequency-dependent elements. In this work, we focus mainly on studying the variation of seismometer sensitivity with time for IRIS/IDA seismic recording systems, with the goal of improving metadata accuracy over the history of the network. There are several ways to measure the accuracy of seismometer sensitivity for seismic stations in service. An effective practice recently developed is to collocate a reference seismometer in proximity to verify the in-situ sensors' calibration. For those stations with a secondary broadband seismometer, IRIS' MUSTANG metric computation system introduced a transfer function metric to reflect the two sensors' gain ratios in the microseism frequency band. In addition, a simulation approach based on M2 tidal measurements has been proposed and proven to be effective. In this work, we compare and analyze the results from the three different methods, and conclude that the collocated-sensor method is the most stable and reliable, with the minimum uncertainties throughout. However, for epochs without both a collocated sensor and a secondary seismometer, we rely on the analysis results from the tide method. For the data since 1992 on IDA stations, we computed over 600 revised seismometer sensitivities for all the IRIS/IDA network calibration epochs. Further revision procedures should help to guarantee that the data are accurately described by the metadata of these stations.

  7. An emergent theory of digital library metadata enrich then filter

    CERN Document Server

    Stevens, Brett

    2015-01-01

    An Emergent Theory of Digital Library Metadata is a reaction to the current digital library landscape that is being challenged with growing online collections and changing user expectations. The theory provides the conceptual underpinnings for a new approach which moves away from expert defined standardised metadata to a user driven approach with users as metadata co-creators. Moving away from definitive, authoritative, metadata to a system that reflects the diversity of users’ terminologies, it changes the current focus on metadata simplicity and efficiency to one of metadata enriching, which is a continuous and evolving process of data linking. From predefined description to information conceptualised, contextualised and filtered at the point of delivery. By presenting this shift, this book provides a coherent structure in which future technological developments can be considered.

  8. Design of a Scalable Event Notification Service: Interface and Architecture

    National Research Council Canada - National Science Library

    Carzaniga, Antonio; Rosenblum, David S; Wolf, Alexander L

    1998-01-01

    Event-based distributed systems are programmed to operate in response to events. An event notification service is an application-independent infrastructure that supports the construction of event-based systems...

  9. Design and Implementation of a Metadata-rich File System

    Energy Technology Data Exchange (ETDEWEB)

    Ames, S; Gokhale, M B; Maltzahn, C

    2010-01-19

    Despite continual improvements in the performance and reliability of large scale file systems, the management of user-defined file system metadata has changed little in the past decade. The mismatch between the size and complexity of large scale data stores and their ability to organize and query their metadata has led to a de facto standard in which raw data is stored in traditional file systems, while related, application-specific metadata is stored in relational databases. This separation of data and semantic metadata requires considerable effort to maintain consistency and can result in complex, slow, and inflexible system operation. To address these problems, we have developed the Quasar File System (QFS), a metadata-rich file system in which files, user-defined attributes, and file relationships are all first class objects. In contrast to hierarchical file systems and relational databases, QFS defines a graph data model composed of files and their relationships. QFS incorporates Quasar, an XPATH-extended query language for searching the file system. Results from our QFS prototype show the effectiveness of this approach. Compared to the de facto standard, the QFS prototype shows superior ingest performance and comparable query performance on user metadata-intensive operations and superior performance on normal file metadata operations.
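
    A rough analogue of such a graph data model, using networkx (file names, attributes and relationship types are invented; the real QFS query language, Quasar, is XPath-based rather than the ad hoc traversal shown here):

        # Toy graph of files with user-defined attributes and typed relationships.
        import networkx as nx

        fs = nx.DiGraph()
        fs.add_node("raw/run042.dat", kind="file", instrument="detectorA")
        fs.add_node("calib/gains.cfg", kind="file")
        fs.add_node("derived/run042_summary.csv", kind="file", quality="good")
        fs.add_edge("derived/run042_summary.csv", "raw/run042.dat", rel="derived_from")
        fs.add_edge("derived/run042_summary.csv", "calib/gains.cfg", rel="calibrated_with")

        # "Find files derived from anything recorded with detectorA"
        hits = [
            src
            for src, dst, attrs in fs.edges(data=True)
            if attrs["rel"] == "derived_from"
            and fs.nodes[dst].get("instrument") == "detectorA"
        ]
        print(hits)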

  10. Optimal Guaranteed Service Time and Service Level Decision with Time and Service Level Sensitive Demand

    Directory of Open Access Journals (Sweden)

    Sangjun Park

    2014-01-01

    Full Text Available We consider a two-stage supply chain with one supplier and one retailer. The retailer sells a product to customers and the supplier provides the product in a make-to-order mode. In this case, the supplier's decisions on service time and service level and the retailer's decision on retail price all affect customer demand. We develop optimization models to determine the optimal retail price, the optimal guaranteed service time, the optimal service level, and the optimal capacity to maximize the expected profit of the whole supply chain. The results of numerical experiments show that it is more profitable to determine the optimal price, the optimal guaranteed service time, and the optimal service level simultaneously, and that the proposed model is more profitable in a service-level-sensitive market.
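
    A numerical sketch of the kind of joint optimization described, using scipy; the demand and cost functions and all coefficients below are invented stand-ins, not the paper's model:

        # Toy joint choice of price p, guaranteed service time t and service level s.
        from scipy.optimize import minimize

        def expected_profit(x):
            p, t, s = x
            demand = 100 - 2.0 * p - 5.0 * t + 40.0 * s      # toy linear demand
            capacity_cost = 20.0 * s / max(t, 1e-6)          # faster + safer = costlier
            return demand * (p - 3.0) - capacity_cost        # unit cost 3.0

        res = minimize(lambda x: -expected_profit(x), x0=[10.0, 2.0, 0.9],
                       bounds=[(3.0, 50.0), (0.5, 10.0), (0.5, 0.99)])
        p_opt, t_opt, s_opt = res.x
        print(f"price={p_opt:.2f}, service time={t_opt:.2f}, service level={s_opt:.2f}")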

  11. The ATLAS Event Service: A new approach to event processing

    Science.gov (United States)

    Calafiura, P.; De, K.; Guan, W.; Maeno, T.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Tsulaia, V.; Van Gemmeren, P.; Wenaus, T.

    2015-12-01

    The ATLAS Event Service (ES) implements a new fine grained approach to HEP event processing, designed to be agile and efficient in exploiting transient, short-lived resources such as HPC hole-filling, spot market commercial clouds, and volunteer computing. Input and output control and data flows, bookkeeping, monitoring, and data storage are all managed at the event level in an implementation capable of supporting ATLAS-scale distributed processing throughputs (about 4M CPU-hours/day). Input data flows utilize remote data repositories with no data locality or pre-staging requirements, minimizing the use of costly storage in favor of strongly leveraging powerful networks. Object stores provide a highly scalable means of remotely storing the quasi-continuous, fine grained outputs that give ES based applications a very light data footprint on a processing resource, and ensure negligible losses should the resource suddenly vanish. We will describe the motivations for the ES system, its unique features and capabilities, its architecture and the highly scalable tools and technologies employed in its implementation, and its applications in ATLAS processing on HPCs, commercial cloud resources, volunteer computing, and grid resources.
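
    The fine-grained pattern can be caricatured as follows (the queue, the per-event work and the "object store" are stand-ins; this is purely schematic, not the ATLAS implementation):

        # Schematic worker loop: process dispatched event ranges, upload each output
        # immediately under an object-store-style key, keep a light local footprint.
        from collections import deque

        event_ranges = deque([("run100.file1", 0, 49), ("run100.file1", 50, 99)])
        object_store = {}          # stand-in for a remote object store

        def process(events):
            # placeholder for per-event simulation/reconstruction
            return [e * 2 for e in events]

        while event_ranges:
            source, first, last = event_ranges.popleft()      # dispatched range
            events = list(range(first, last + 1))             # stand-in for remote read
            output = process(events)
            key = f"{source}.events_{first}_{last}"           # fine-grained output object
            object_store[key] = output                        # "upload" immediately

        print(sorted(object_store))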

  12. Improving Access to NASA Earth Science Data through Collaborative Metadata Curation

    Science.gov (United States)

    Sisco, A. W.; Bugbee, K.; Shum, D.; Baynes, K.; Dixon, V.; Ramachandran, R.

    2017-12-01

    The NASA-developed Common Metadata Repository (CMR) is a high-performance metadata system that currently catalogs over 375 million Earth science metadata records. It serves as the authoritative metadata management system of NASA's Earth Observing System Data and Information System (EOSDIS), enabling NASA Earth science data to be discovered and accessed by a worldwide user community. The size of the EOSDIS data archive is steadily increasing, and the ability to manage and query this archive depends on the input of high quality metadata to the CMR. Metadata that does not provide adequate descriptive information diminishes the CMR's ability to effectively find and serve data to users. To address this issue, an innovative and collaborative review process is underway to systematically improve the completeness, consistency, and accuracy of metadata for approximately 7,000 data sets archived by NASA's twelve EOSDIS data centers, or Distributed Active Archive Centers (DAACs). The process involves automated and manual metadata assessment of both collection and granule records by a team of Earth science data specialists at NASA Marshall Space Flight Center. The team communicates results to DAAC personnel, who then make revisions and reingest improved metadata into the CMR. Implementation of this process relies on a network of interdisciplinary collaborators leveraging a variety of communication platforms and long-range planning strategies. Curating metadata at this scale and resolving metadata issues through community consensus improves the CMR's ability to serve current and future users and also introduces best practices for stewarding the next generation of Earth Observing System data. This presentation will detail the metadata curation process, its outcomes thus far, and also share the status of ongoing curation activities.
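
    One ingredient of the review process described above is an automated assessment of metadata completeness. A minimal sketch of such a check is shown below; the required-field list and the record layout are simplified assumptions, not the actual ARC rubric or the UMM schema.

```python
REQUIRED_FIELDS = ["ShortName", "Version", "Abstract", "TemporalExtents",
                   "SpatialExtent", "DataCenters", "DOI"]    # assumed subset

def assess_collection(record: dict) -> dict:
    """Flag missing or empty fields in a collection-level metadata record."""
    issues = {}
    for field in REQUIRED_FIELDS:
        value = record.get(field)
        if value in (None, "", [], {}):
            issues[field] = "missing or empty"
    score = 1.0 - len(issues) / len(REQUIRED_FIELDS)
    return {"completeness": round(score, 2), "issues": issues}

example = {"ShortName": "MOD09GA", "Version": "6.1", "Abstract": "Surface reflectance..."}
print(assess_collection(example))
```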

  13. Towards an Interoperable Field Spectroscopy Metadata Standard with Extended Support for Marine Specific Applications

    Directory of Open Access Journals (Sweden)

    Barbara A. Rasaiah

    2015-11-01

    Full Text Available This paper presents an approach to developing robust metadata standards for specific applications that serves to ensure a high level of reliability and interoperability for a spectroscopy dataset. The challenges of designing a metadata standard that meets the unique requirements of specific user communities are examined, including in situ measurement of reflectance underwater, using coral as a case in point. Metadata schema mappings from seven existing metadata standards demonstrate that they consistently fail to meet the needs of field spectroscopy scientists for general and specific applications (μ = 22%, σ = 32% conformance with the core metadata requirements, and μ = 19%, σ = 18% for the special case of a benthic (e.g., coral) reflectance metadataset). Issues such as field measurement methods, instrument calibration, and data representativeness for marine field spectroscopy campaigns are investigated within the context of submerged benthic measurements. The implications of semantics and syntax for a robust and flexible metadata standard are also considered. A hybrid standard that serves as a “best of breed” incorporating useful modules and parameters within the standards is proposed. This paper is Part 3 in a series of papers in this journal, examining the issues central to a metadata standard for field spectroscopy datasets. The results presented in this paper are an important step towards field spectroscopy metadata standards that address the specific needs of field spectroscopy data stakeholders while facilitating dataset documentation, quality assurance, discoverability and data exchange within large-scale information sharing platforms.

  14. Making the Case for Embedded Metadata in Digital Images

    DEFF Research Database (Denmark)

    Smith, Kari R.; Saunders, Sarah; Kejser, U.B.

    2014-01-01

    This paper discusses the standards, methods, use cases, and opportunities for using embedded metadata in digital images. In this paper we explain the past and current work engaged with developing specifications, standards for embedding metadata of different types, and the practicalities of data exchange in heritage institutions and the culture sector. Our examples and findings support the case for embedded metadata in digital images and the opportunities for such use more broadly in non-heritage sectors as well. We encourage the adoption of embedded metadata by digital image content creators and curators as well as those developing software and hardware that support the creation or re-use of digital images. We conclude that the usability of born digital images as well as physical objects that are digitized can be extended and the files preserved more readily with embedded metadata.
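
    As a concrete example of embedded image metadata, the sketch below reads EXIF tags from a JPEG with the Pillow library. The file path is a placeholder, and real heritage workflows would typically also handle IPTC/XMP blocks, which Pillow alone does not expose.

```python
from PIL import Image
from PIL.ExifTags import TAGS

def read_exif(path):
    """Return embedded EXIF metadata as a {tag name: value} dict."""
    with Image.open(path) as img:
        exif = img.getexif()
        return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

if __name__ == "__main__":
    for name, value in read_exif("example.jpg").items():   # placeholder path
        print(f"{name}: {value}")
```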

  15. Interpreting the ASTM 'content standard for digital geospatial metadata'

    Science.gov (United States)

    Nebert, Douglas D.

    1996-01-01

    ASTM and the Federal Geographic Data Committee have developed a content standard for spatial metadata to facilitate documentation, discovery, and retrieval of digital spatial data using vendor-independent terminology. Spatial metadata elements are identifiable quality and content characteristics of a data set that can be tied to a geographic location or area. Several Office of Management and Budget Circulars and initiatives have been issued that specify improved cataloguing of and accessibility to federal data holdings. An Executive Order further requires the use of the metadata content standard to document digital spatial data sets. Collection and reporting of spatial metadata for field investigations performed for the federal government is an anticipated requirement. This paper provides an overview of the draft spatial metadata content standard and a description of how the standard could be applied to investigations collecting spatially-referenced field data.

  17. A Novel Architecture of Metadata Management System Based on Intelligent Cache

    Institute of Scientific and Technical Information of China (English)

    SONG Baoyan; ZHAO Hongwei; WANG Yan; GAO Nan; XU Jin

    2006-01-01

    This paper introduces a novel architecture for a metadata management system based on an intelligent cache, called the Metadata Intelligent Cache Controller (MICC). By using an intelligent cache to control the metadata system, MICC can deal with different scenarios, such as splitting and merging queries into sub-queries for metadata sets available locally, in order to reduce the access time of remote queries. Applications can find partial results in the local cache, while the remaining portion of the metadata is fetched from remote locations. Using the existing metadata, it can not only enhance the fault tolerance and load balancing of the system effectively, but also improve the efficiency of access while ensuring the access quality.
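
    The query splitting and merging idea can be sketched as a cache layer that answers part of a metadata query locally and fetches only the remainder remotely. The interfaces below are hypothetical and much simpler than MICC itself.

```python
def split_query(keys, local_cache):
    """Split requested metadata keys into locally cached hits and remote sub-queries."""
    hits = {k: local_cache[k] for k in keys if k in local_cache}
    misses = [k for k in keys if k not in local_cache]
    return hits, misses

def fetch_remote(keys):
    # Placeholder for a call to a remote metadata server.
    return {k: f"remote-value-of-{k}" for k in keys}

def query(keys, local_cache):
    hits, misses = split_query(keys, local_cache)
    if misses:
        fetched = fetch_remote(misses)
        local_cache.update(fetched)      # warm the cache for later queries
        hits.update(fetched)
    return hits

cache = {"size": 1024, "owner": "alice"}
print(query(["size", "owner", "mtime"], cache))
```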

  18. A Metadata Schema for Geospatial Resource Discovery Use Cases

    Directory of Open Access Journals (Sweden)

    Darren Hardy

    2014-07-01

    Full Text Available We introduce a metadata schema that focuses on GIS discovery use cases for patrons in a research library setting. Text search, faceted refinement, and spatial search and relevancy are among GeoBlacklight's primary use cases for federated geospatial holdings. The schema supports a variety of GIS data types and enables contextual, collection-oriented discovery applications as well as traditional portal applications. One key limitation of GIS resource discovery is the general lack of normative metadata practices, which has led to a proliferation of metadata schemas and duplicate records. The ISO 19115/19139 and FGDC standards specify metadata formats, but are intricate, lengthy, and not focused on discovery. Moreover, they require sophisticated authoring environments and cataloging expertise. Geographic metadata standards target preservation and quality measure use cases, but they do not provide for simple inter-institutional sharing of metadata for discovery use cases. To this end, our schema reuses elements from Dublin Core and GeoRSS to leverage their normative semantics, community best practices, open-source software implementations, and extensive examples already deployed in discovery contexts such as web search and mapping. Finally, we discuss a Solr implementation of the schema using a "geo" extension to MODS.
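
    To illustrate a discovery-oriented record of the kind described, the snippet below builds a small Solr-style document mixing Dublin Core-like and bounding-box fields. The field names and values are illustrative only and should not be read as the actual GeoBlacklight schema.

```python
import json

record = {
    "dc_title_s": "Hydrography, San Francisco Bay Area",    # illustrative field names
    "dc_creator_sm": ["Example GIS Lab"],
    "dc_format_s": "Shapefile",
    "dc_rights_s": "Public",
    "dct_temporal_sm": ["2010"],
    "layer_geom_type_s": "Line",
    # Solr spatial envelope: ENVELOPE(minX, maxX, maxY, minY) in WGS84
    "solr_geom": "ENVELOPE(-123.0, -121.5, 38.5, 37.0)",
}

print(json.dumps(record, indent=2))
```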

  19. Managing ebook metadata in academic libraries taming the tiger

    CERN Document Server

    Frederick, Donna E

    2016-01-01

    Managing ebook Metadata in Academic Libraries: Taming the Tiger tackles the topic of ebooks in academic libraries, a trend that has been welcomed by students, faculty, researchers, and library staff. However, at the same time, the reality of acquiring ebooks, making them discoverable, and managing them presents library staff with many new challenges. Traditional methods of cataloging and managing library resources are no longer relevant where the purchasing of ebooks in packages and demand driven acquisitions are the predominant models for acquiring new content. Most academic libraries have a complex metadata environment wherein multiple systems draw upon the same metadata for different purposes. This complexity makes the need for standards-based interoperable metadata more important than ever. In addition to complexity, the nature of the metadata environment itself typically varies slightly from library to library making it difficult to recommend a single set of practices and procedures which would be releva...

  20. Human-assisted sound event recognition for home service robots.

    Science.gov (United States)

    Do, Ha Manh; Sheng, Weihua; Liu, Meiqin

    This paper proposes and implements an open framework of active auditory learning for a home service robot to serve the elderly living alone at home. The framework was developed to realize various auditory perception capabilities while enabling a remote human operator to be involved in the sound event recognition process for elderly care. The home service robot is able to estimate the sound source position and collaborate with the human operator in sound event recognition while protecting the privacy of the elderly. Our experimental results validated the proposed framework and evaluated auditory perception capabilities and human-robot collaboration in sound event recognition.

  1. International Metadata Initiatives: Lessons in Bibliographic Control.

    Science.gov (United States)

    Caplan, Priscilla

    This paper looks at a subset of metadata schemes, including the Text Encoding Initiative (TEI) header, the Encoded Archival Description (EAD), the Dublin Core Metadata Element Set (DCMES), and the Visual Resources Association (VRA) Core Categories for visual resources. It examines why they developed as they did, major point of difference from…

  2. Building a Disciplinary Metadata Standards Directory

    Directory of Open Access Journals (Sweden)

    Alexander Ball

    2014-07-01

    Full Text Available The Research Data Alliance (RDA) Metadata Standards Directory Working Group (MSDWG) is building a directory of descriptive, discipline-specific metadata standards. The purpose of the directory is to promote the discovery, access and use of such standards, thereby improving the state of research data interoperability and reducing duplicative standards development work. This work builds upon the UK Digital Curation Centre's Disciplinary Metadata Catalogue, a resource created with much the same aim in mind. The first stage of the MSDWG's work was to update and extend the information contained in the catalogue. In the current, second stage, a new platform is being developed in order to extend the functionality of the directory beyond that of the catalogue, and to make it easier to maintain and sustain. Future work will include making the directory more amenable to use by automated tools.

  3. Treating metadata as annotations: separating the content markup from the content

    Directory of Open Access Journals (Sweden)

    Fredrik Paulsson

    2007-11-01

    Full Text Available The use of digital learning resources creates an increasing need for semantic metadata, describing the whole resource, as well as parts of resources. Traditionally, schemas such as the Text Encoding Initiative (TEI) have been used to add semantic markup for parts of resources. This is not sufficient for use in a “metadata ecology”, where metadata is distributed, coherent to different Application Profiles, and added by different actors. A new methodology is proposed, in which metadata is “pointed in” as annotations using XPointers and RDF. A suggestion for how such an infrastructure can be implemented, using existing open standards for metadata and for the web, is presented. We argue that such a methodology and infrastructure are necessary to realize the decentralized metadata infrastructure needed for a “metadata ecology”.
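
    A minimal sketch of the "metadata as annotations" idea is shown below: rdflib is used to attach Dublin Core statements to a fragment of a learning resource addressed via an XPointer-style fragment identifier. The URIs and the annotation vocabulary are illustrative assumptions, not the infrastructure proposed in the paper.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS

# The annotation target: part of a resource, addressed with an XPointer fragment.
target = URIRef("http://example.org/lessons/optics.xml#xpointer(//section[2])")

EX = Namespace("http://example.org/anno/")
g = Graph()
g.bind("dcterms", DCTERMS)

g.add((EX.annotation1, EX.annotates, target))                  # metadata "pointed in"
g.add((EX.annotation1, DCTERMS.subject, Literal("refraction")))
g.add((EX.annotation1, DCTERMS.creator, Literal("Course author")))

print(g.serialize(format="turtle"))
```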

  4. A Generic Metadata Editor Supporting System Using Drupal CMS

    Science.gov (United States)

    Pan, J.; Banks, N. G.; Leggott, M.

    2011-12-01

    Metadata handling is a key factor in preserving and reusing scientific data. In recent years, standardized structural metadata has become widely used in Geoscience communities. However, there exist many different standards in Geosciences, such as the current version of the Federal Geographic Data Committee's Content Standard for Digital Geospatial Metadata (FGDC CSDGM), the Ecological Markup Language (EML), the Geography Markup Language (GML), and the emerging ISO 19115 and related standards. In addition, there are many different subsets within the Geoscience subdomain such as the Biological Profile of the FGDC (CSDGM), or for geopolitical regions, such as the European Profile or the North American Profile in the ISO standards. It is therefore desirable to have a software foundation to support metadata creation and editing for multiple standards and profiles, without re-inventing the wheel. We have developed a software module as a generic, flexible software system to do just that: to facilitate the support for multiple metadata standards and profiles. The software consists of a set of modules for the Drupal Content Management System (CMS), with minimal inter-dependencies to other Drupal modules. There are two steps in using the system's metadata functions. First, an administrator can use the system to design a user form, based on an XML schema and its instances. The form definition is named and stored in the Drupal database as XML blob content. Second, users in an editor role can then use the persisted XML definition to render an actual metadata entry form, for creating or editing a metadata record. Behind the scenes, the form definition XML is transformed into a PHP array, which is then rendered via the Drupal Form API. When the form is submitted, the posted values are used to modify a metadata record. Drupal hooks can be used to perform custom processing on the metadata record before and after submission. It is trivial to store the metadata record as an actual XML file

  5. Distributed metadata in a high performance computing environment

    Science.gov (United States)

    Bent, John M.; Faibish, Sorin; Zhang, Zhenhua; Liu, Xuezhao; Tang, Haiying

    2017-07-11

    A computer-executable method, system, and computer program product for managing metadata in a distributed storage system, wherein the distributed storage system includes one or more burst buffers enabled to operate with a distributed key-value store, the computer-executable method, system, and computer program product comprising receiving a request for metadata associated with a block of data stored in a first burst buffer of the one or more burst buffers in the distributed storage system, wherein the metadata is associated with a key-value, determining which of the one or more burst buffers stores the requested metadata, and upon determination that a first burst buffer of the one or more burst buffers stores the requested metadata, locating the key-value in a portion of the distributed key-value store accessible from the first burst buffer.
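
    The claimed method can be pictured as a routing layer that decides which burst buffer's key-value store holds the metadata for a given data block. The hash-based placement and dictionary-backed stores below are assumptions made for illustration, not the patented implementation.

```python
import hashlib

class BurstBuffer:
    def __init__(self, name):
        self.name = name
        self.kv = {}                      # stand-in for a local key-value store

    def put(self, key, value):
        self.kv[key] = value

    def get(self, key):
        return self.kv.get(key)

def owner(buffers, key):
    """Deterministically map a metadata key to the burst buffer that stores it."""
    idx = int(hashlib.md5(key.encode()).hexdigest(), 16) % len(buffers)
    return buffers[idx]

buffers = [BurstBuffer(f"bb{i}") for i in range(4)]
key = "block:/scratch/sim/output.0042"
owner(buffers, key).put(key, {"size": 8 << 20, "checksum": "abc123"})

bb = owner(buffers, key)                  # the same key always routes to the same buffer
print(bb.name, bb.get(key))
```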

  6. Treatment of complementary events in event trees in constructing linked fault trees for level 1 and level 2 PRA

    International Nuclear Information System (INIS)

    Jo, Y. G.

    2008-01-01

    Complementary events in the event trees for a PRA model should be treated properly in order to evaluate plant risk correctly. In this study, the characteristics of the following three different cut-set generation methods were investigated first in order to find the best practical way for treating complementary events: 1) exact method which treats complementary events logically, 2) no-delete term method which does not treat complementary events at all, and 3) delete term method which treats complementary events by deleting nonsense cut-sets which are generated as a result of ignoring complementary events. Then, practical methods for treating complementary events in constructing linked fault trees for level 1 and level 2 PRA in EPRI R and R workstation software environment, where CAFTA is the fault tree editor and FORTE is the cut-set engine, were suggested and demonstrated. The suggested methods deal with the following selected four typical cases: Case 1: an event tree event (E) is represented by a fault tree gate whose inputs consist of only fault tree gates, Case 2: E is represented by a single basic event, Case 3: E is represented by an OR fault tree gate which has a single basic event and a fault tree gate as inputs, and Case 4: E is represented by an AND fault tree gate which has a single basic event and a fault tree gate as inputs. In the suggested methods, first the high level logic structures of event tree events are examined and restructured, if needed. Then, the delete term method, the exact method, and the combination of the two methods are applied to Case 1, Case 2, and Cases 3 and 4, respectively. Also, it is recommended to treat complementary events, using the suggested methods, before level 1 and level 2 PRA fault trees are coupled. It should be noted that the selected four typical cases may not cover all different cases encountered in level 1 and level 2 PRA modeling. However, a process similar to the one suggested in this study may be used to find
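
    As an illustration of the delete-term idea, the sketch below first generates sequence cut sets while ignoring a complemented (success) branch, and then deletes any cut set that contains a minimal cut set of the complemented system, since such a cut set would contradict that system's success. The mini fault trees are invented examples, not taken from the study.

```python
from itertools import product

def cut_sets(tree):
    """Minimal cut sets of a tiny fault tree given as nested ('AND'|'OR', children)."""
    if isinstance(tree, str):
        return [frozenset([tree])]
    op, children = tree
    child_sets = [cut_sets(c) for c in children]
    if op == "OR":
        sets = [s for cs in child_sets for s in cs]
    else:  # AND
        sets = [frozenset().union(*combo) for combo in product(*child_sets)]
    return [s for s in sets if not any(o < s for o in sets)]   # keep only minimal sets

top_failure = ("AND", ["IE", ("OR", ["P1", "P2"])])     # sequence logic, success branch ignored
success_sys = ("OR", ["P1", ("AND", ["P2", "CCF"])])    # system whose success is required

seq = cut_sets(top_failure)
succ = cut_sets(success_sys)
# Delete-term step: drop sequence cut sets that imply failure of the "successful" system.
kept = [s for s in seq if not any(m <= s for m in succ)]
print("before:", seq)
print("after :", kept)
```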

  7. Metadata Design in the New PDS4 Standards - Something for Everybody

    Science.gov (United States)

    Raugh, Anne C.; Hughes, John S.

    2015-11-01

    The Planetary Data System (PDS) archives, supports, and distributes data of diverse targets, from diverse sources, to diverse users. One of the core problems addressed by the PDS4 data standard redesign was that of metadata - how to accommodate the increasingly sophisticated demands of search interfaces, analytical software, and observational documentation into label standards without imposing limits and constraints that would impinge on the quality or quantity of metadata that any particular observer or team could supply. And yet, as an archive, PDS must have detailed documentation for the metadata in the labels it supports, or the institutional knowledge encoded into those attributes will be lost - putting the data at risk. The PDS4 metadata solution is based on a three-step approach. First, it is built on two key ISO standards: ISO 11179 "Information Technology - Metadata Registries", which provides a common framework and vocabulary for defining metadata attributes; and ISO 14721 "Space Data and Information Transfer Systems - Open Archival Information System (OAIS) Reference Model", which provides the framework for the information architecture that enforces the object-oriented paradigm for metadata modeling. Second, PDS has defined a hierarchical system that allows it to divide its metadata universe into namespaces ("data dictionaries", conceptually), and more importantly to delegate stewardship for a single namespace to a local authority. This means that a mission can develop its own data model with a high degree of autonomy and effectively extend the PDS model to accommodate its own metadata needs within the common ISO 11179 framework. Finally, within a single namespace - even the core PDS namespace - existing metadata structures can be extended and new structures added to the model as new needs are identified. This poster illustrates the PDS4 approach to metadata management and highlights the expected return on the development investment for PDS, users and data

  8. Impact of Service Quality of House Wedding Events on the Service Value and Satisfaction

    Directory of Open Access Journals (Sweden)

    Kim Soungyeon

    2014-01-01

    Full Text Available With the current popularity of the newly emerged house wedding events (HWE), this study aims to investigate the impact of the service quality of house wedding events on service value and customer satisfaction. Based on a literature review, two main components of HWE service quality attributes were selected: physical service and human service. For service value, three components were chosen: social, aesthetic, and monetary value. The study conducted a self-administered questionnaire survey from July 1st to August 31st 2013, targeting 300 males and females over age 20 with experience of participating in HWEs held in the Seoul metropolitan area. The final valid samples were 267 copies. Frequency analysis, exploratory factor analysis, reliability analysis and regression analysis were conducted on the collected data. The results of this study showed a significant positive impact of HWE’s service quality on service value, which in turn influences satisfaction. In addition, the study found a direct impact of HWE’s service quality on customer satisfaction. Based on the results, this study also discussed practical implications.

  9. Forensic devices for activism: Metadata tracking and public proof

    Directory of Open Access Journals (Sweden)

    Lonneke van der Velden

    2015-10-01

    Full Text Available The central topic of this paper is a mobile phone application, ‘InformaCam’, which turns metadata from a surveillance risk into a method for the production of public proof. InformaCam allows one to manage and delete metadata from images and videos in order to diminish surveillance risks related to online tracking. Furthermore, it structures and stores the metadata in such a way that the documentary material becomes better accommodated to evidentiary settings, if needed. In this paper I propose InformaCam should be interpreted as a ‘forensic device’. By using the conceptualization of forensics and work on socio-technical devices the paper discusses how InformaCam, through a range of interventions, rearranges metadata into a technology of evidence. InformaCam explicitly recognizes mobile phones as context aware, uses their sensors, and structures metadata in order to facilitate data analysis after images are captured. Through these modifications it invents a form of ‘sensory data forensics’. By treating data in this particular way, surveillance resistance does more than seek awareness. It becomes engaged with investigatory practices. Considering the extent to which states conduct metadata surveillance, the project can be seen as a timely response to the unequal distribution of power over data.

  10. PROGRAM SYSTEM AND INFORMATION METADATA BANK OF TERTIARY PROTEIN STRUCTURES

    Directory of Open Access Journals (Sweden)

    T. A. Nikitin

    2013-01-01

    Full Text Available The article deals with the architecture of a metadata storage model for the check results of three-dimensional protein structures. A conceptual database model was built. The service and procedure for database updates, as well as data transformation algorithms for protein structures and their quality, are presented. The most important information about entries and their submission forms for storage, access, and delivery to users is highlighted. A software suite was developed for the implementation of the functional tasks using the Java programming language in the NetBeans v.7.0 environment, with JQL used to query and interact with the JavaDB database. The service was tested, and the results show the effectiveness of the system for protein structure filtration.

  11. Metadata Creation, Management and Search System for your Scientific Data

    Science.gov (United States)

    Devarakonda, R.; Palanisamy, G.

    2012-12-01

    Mercury Search Systems is a set of tools for creating, searching, and retrieving biogeochemical metadata. The Mercury toolset provides orders of magnitude improvements in search speed, support for any metadata format, integration with Google Maps for spatial queries, multi-facetted type search, search suggestions, support for RSS (Really Simple Syndication) delivery of search results, and enhanced customization to meet the needs of the multiple projects that use Mercury. Mercury's metadata editor provides an easy way to create metadata, and Mercury's search interface provides a single portal to search for data and information contained in disparate data management systems, each of which may use any metadata format including FGDC, ISO-19115, Dublin-Core, Darwin-Core, DIF, ECHO, and EML. Mercury harvests metadata and key data from contributing project servers distributed around the world and builds a centralized index. The search interfaces then allow the users to perform a variety of fielded, spatial, and temporal searches across these metadata sources. This centralized repository of metadata with distributed data sources provides extremely fast search results to the user, while allowing data providers to advertise the availability of their data and maintain complete control and ownership of that data. Mercury is being used by more than 14 different projects across 4 federal agencies. It was originally developed for NASA, with continuing development funded by NASA, USGS, and DOE for a consortium of projects. Mercury won NASA's Earth Science Data Systems Software Reuse Award in 2008. References: R. Devarakonda, G. Palanisamy, B.E. Wilson, and J.M. Green, "Mercury: reusable metadata management data discovery and access system", Earth Science Informatics, vol. 3, no. 1, pp. 87-94, May 2010. R. Devarakonda, G. Palanisamy, J.M. Green, B.E. Wilson, "Data sharing and retrieval using OAI-PMH", Earth Science Informatics DOI: 10.1007/s12145-010-0073-0, (2010);
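
    The OAI-PMH harvesting referenced above can be illustrated with a bare-bones harvester that issues a ListRecords request and walks resumption tokens. The endpoint URL is a placeholder; production harvesters such as Mercury's do considerably more (scheduling, validation, incremental updates).

```python
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

OAI = "{http://www.openarchives.org/OAI/2.0/}"
ENDPOINT = "https://example.org/oai"          # placeholder repository endpoint

def harvest(endpoint, metadata_prefix="oai_dc"):
    """Yield <record> elements from an OAI-PMH ListRecords response, page by page."""
    params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
    while True:
        url = endpoint + "?" + urllib.parse.urlencode(params)
        with urllib.request.urlopen(url) as resp:
            root = ET.fromstring(resp.read())
        for rec in root.iter(OAI + "record"):
            yield rec
        token = root.find(f".//{OAI}resumptionToken")
        if token is None or not (token.text or "").strip():
            break
        params = {"verb": "ListRecords", "resumptionToken": token.text.strip()}

# Example usage (requires a real OAI-PMH endpoint):
# for record in harvest(ENDPOINT):
#     print(record.find(f".//{OAI}identifier").text)
```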

  12. Survey data and metadata modelling using document-oriented NoSQL

    Science.gov (United States)

    Rahmatuti Maghfiroh, Lutfi; Gusti Bagus Baskara Nugraha, I.

    2018-03-01

    Survey data that are collected from year to year undergo metadata changes. Nevertheless, the data need to be stored in an integrated way so that statistical data can be obtained faster and more easily. A data warehouse (DW) can be used to address this limitation. However, variables change in every period in ways that cannot be accommodated by a DW. A traditional DW cannot handle variable changes via Slowly Changing Dimensions (SCD). Previous research handled the change of variables in a DW and managed metadata by using a multiversion DW (MVDW). The MVDW is designed using a relational model. Other research has also found that a nonrelational model in a NoSQL database offers faster read times than a relational model. Therefore, we propose changes to metadata management by using NoSQL. This study proposes a DW model to manage change and algorithms to retrieve data with metadata changes. Evaluation of the proposed models and algorithms shows that a database with the proposed design can retrieve data with metadata changes properly. This paper contributes to comprehensive data analysis with metadata changes (especially for survey data) in integrated storage.
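
    The document-oriented approach can be sketched by storing each survey wave as one document that embeds its own variable definitions, so that metadata changes between waves are preserved alongside the data. The document layout and the lookup helper below are illustrative assumptions, not the paper's schema.

```python
# One document per survey wave; the embedded "variables" block is that wave's metadata.
waves = [
    {"wave": 2016, "variables": {"inc": {"label": "monthly income", "unit": "USD"}},
     "records": [{"inc": 950}, {"inc": 1200}]},
    {"wave": 2017, "variables": {"inc_net": {"label": "net monthly income", "unit": "USD"}},
     "records": [{"inc_net": 1010}]},
]

def values_for(label_fragment):
    """Retrieve values across waves even when the variable name and metadata changed."""
    out = {}
    for doc in waves:
        for var, meta in doc["variables"].items():
            if label_fragment in meta["label"]:
                out[doc["wave"]] = [r.get(var) for r in doc["records"]]
    return out

print(values_for("income"))     # {2016: [950, 1200], 2017: [1010]}
```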

  13. Using Metadata to Build Geographic Information Sharing Environment on Internet

    Directory of Open Access Journals (Sweden)

    Chih-hong Sun

    1999-12-01

    Full Text Available The Internet provides a convenient environment to share geographic information. Web GIS (Geographic Information System) even provides users with a direct access environment to geographic databases through the Internet. However, the complexity of geographic data makes it difficult for users to understand the real content and the limitations of geographic information. In some cases, users may misuse the geographic data and make wrong decisions. Meanwhile, geographic data are distributed across various government agencies, academic institutes, and private organizations, which makes it even more difficult for users to fully understand the content of these complex data. To overcome these difficulties, this research uses metadata as a guiding mechanism for users to fully understand the content and the limitations of geographic data. We introduce three metadata standards commonly used for geographic data and metadata authoring tools available in the US. We also review the current development of geographic metadata standards in Taiwan. Two metadata authoring tools are developed in this research, which will enable users to build their own geographic metadata easily. [Article content in Chinese]

  14. Development of health information search engine based on metadata and ontology.

    Science.gov (United States)

    Song, Tae-Min; Park, Hyeoun-Ae; Jin, Dal-Lae

    2014-04-01

    The aim of the study was to develop a metadata and ontology-based health information search engine ensuring semantic interoperability to collect and provide health information using different application programs. Health information metadata ontology was developed using a distributed semantic Web content publishing model based on vocabularies used to index the contents generated by the information producers as well as those used to search the contents by the users. Vocabulary for health information ontology was mapped to the Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT), and a list of about 1,500 terms was proposed. The metadata schema used in this study was developed by adding an element describing the target audience to the Dublin Core Metadata Element Set. A metadata schema and an ontology ensuring interoperability of health information available on the internet were developed. The metadata and ontology-based health information search engine developed in this study produced a better search result compared to existing search engines. Health information search engine based on metadata and ontology will provide reliable health information to both information producer and information consumers.
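
    A record in the style described, i.e. Dublin Core-like elements plus a target-audience element and a controlled subject term, might look like the dictionary below. The field values and the placeholder code are illustrative; they are not taken from the study's actual vocabulary list or from SNOMED CT.

```python
record = {
    # Dublin Core-style elements
    "title": "Managing type 2 diabetes at home",
    "creator": "Example Health Agency",
    "language": "en",
    "subject": {"term": "Diabetes mellitus type 2",
                "scheme": "SNOMED CT", "code": "XXXXXXX"},   # placeholder code
    # Element added to the Dublin Core set in the study: target audience
    "audience": "patients and caregivers",
}

def matches(rec, audience=None, subject=None):
    """Very small search filter over audience and subject term."""
    ok_aud = audience is None or audience in rec["audience"]
    ok_sub = subject is None or subject.lower() in rec["subject"]["term"].lower()
    return ok_aud and ok_sub

print(matches(record, audience="patients", subject="diabetes"))
```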

  15. The CBM first-level event selector

    Energy Technology Data Exchange (ETDEWEB)

    Cuveland, Jan de; Lindenstruth, Volker [Frankfurt Institute for Advanced Studies, Goethe University, Frankfurt (Germany)

    2015-07-01

    The CBM experiment currently under construction at GSI/FAIR is designed to study QCD predictions at high baryon densities. The CBM First-Level Event Selector (FLES) is the central event selection system of the experiment. Designed as a high-performance computer cluster, its task is an online analysis of the physics data including full event reconstruction at an incoming data rate exceeding 1 TByte/s. The CBM detector systems are free-running and self-triggered, delivering time-stamped data streams. As there is no inherent event separation, traditional approaches for global event building and event selection are not directly applicable. Instead of event building, the FLES combines the data from approximately 1000 input links to self-contained, overlapping processing intervals and distributes them to compute nodes. It employs a high-bandwidth InfiniBand network as well as dedicated custom FPGA input boards providing time-addressed access to buffered data. Subsequently, specialized event selection algorithms analyze these processing intervals in 4-D, identify events, and select those relevant for storage depending on the chosen CBM setup and selection scenario. This presentation outlines the design of the CBM First-level Event Selector and summarizes the results from first prototype systems.

  16. Collaborative Metadata Curation in Support of NASA Earth Science Data Stewardship

    Science.gov (United States)

    Sisco, Adam W.; Bugbee, Kaylin; le Roux, Jeanne; Staton, Patrick; Freitag, Brian; Dixon, Valerie

    2018-01-01

    A growing collection of NASA Earth science data is archived and distributed by EOSDIS’s 12 Distributed Active Archive Centers (DAACs). Each collection and granule is described by a metadata record housed in the Common Metadata Repository (CMR). Multiple metadata standards are in use, and core elements of each are mapped to and from a common model – the Unified Metadata Model (UMM). This work was done by the Analysis and Review of CMR (ARC) team.

  17. Rising cyclin-CDK levels order cell cycle events.

    Directory of Open Access Journals (Sweden)

    Catherine Oikonomou

    Full Text Available Diverse mitotic events can be triggered in the correct order and time by a single cyclin-CDK. A single regulator could confer order and timing on multiple events if later events require higher cyclin-CDK than earlier events, so that gradually rising cyclin-CDK levels can sequentially trigger responsive events: the "quantitative model" of ordering.This 'quantitative model' makes predictions for the effect of locking cyclin at fixed levels for a protracted period: at low cyclin levels, early events should occur rapidly, while late events should be slow, defective, or highly variable (depending on threshold mechanism. We titrated the budding yeast mitotic cyclin Clb2 within its endogenous expression range to a stable, fixed level and measured time to occurrence of three mitotic events: growth depolarization, spindle formation, and spindle elongation, as a function of fixed Clb2 level. These events require increasingly more Clb2 according to their normal order of occurrence. Events occur efficiently and with low variability at fixed Clb2 levels similar to those observed when the events normally occur. A second prediction of the model is that increasing the rate of cyclin accumulation should globally advance timing of all events. Moderate (<2-fold overexpression of Clb2 accelerates all events of mitosis, resulting in consistently rapid sequential cell cycles. However, this moderate overexpression also causes a significant frequency of premature mitoses leading to inviability, suggesting that Clb2 expression level is optimized to balance the fitness costs of variability and catastrophe.We conclude that mitotic events are regulated by discrete cyclin-CDK thresholds. These thresholds are sequentially triggered as cyclin increases, yielding reliable order and timing. In many biological processes a graded input must be translated into discrete outputs. In such systems, expression of the central regulator is likely to be tuned to an optimum level, as we

  18. Content-aware network storage system supporting metadata retrieval

    Science.gov (United States)

    Liu, Ke; Qin, Leihua; Zhou, Jingli; Nie, Xuejun

    2008-12-01

    Nowadays, content-based network storage has become a hot research topic in academia and industry [1]. In order to solve the problem of hit-rate decline caused by migration and to achieve content-based queries, we developed a new content-aware storage system that supports metadata retrieval to improve query performance. Firstly, we extend the SCSI command descriptor block to enable the system to understand self-defined query requests. Secondly, the extracted metadata is encoded in the Extensible Markup Language (XML) to improve universality. Thirdly, according to the demands of information lifecycle management (ILM), we store data at different storage levels and use corresponding query strategies to retrieve it. Fourthly, as the file content identifier plays an important role in locating data and calculating block correlation, we use it to fetch files and sort query results through a friendly user interface. Finally, the experiments indicate that the retrieval strategy and sort algorithm enhance retrieval efficiency and precision.

  19. Conditions and configuration metadata for the ATLAS experiment

    International Nuclear Information System (INIS)

    Gallas, E J; Pachal, K E; Tseng, J C L; Albrand, S; Fulachier, J; Lambert, F; Zhang, Q

    2012-01-01

    In the ATLAS experiment, a system called COMA (Conditions/Configuration Metadata for ATLAS), has been developed to make globally important run-level metadata more readily accessible. It is based on a relational database storing directly extracted, refined, reduced, and derived information from system-specific database sources as well as information from non-database sources. This information facilitates a variety of unique dynamic interfaces and provides information to enhance the functionality of other systems. This presentation will give an overview of the components of the COMA system, enumerate its diverse data sources, and give examples of some of the interfaces it facilitates. We list important principles behind COMA schema and interface design, and how features of these principles create coherence and eliminate redundancy among the components of the overall system. In addition, we elucidate how interface logging data has been used to refine COMA content and improve the value and performance of end-user reports and browsers.

  20. Conditions and configuration metadata for the ATLAS experiment

    CERN Document Server

    Gallas, E J; Albrand, S; Fulachier, J; Lambert, F; Pachal, K E; Tseng, J C L; Zhang, Q

    2012-01-01

    In the ATLAS experiment, a system called COMA (Conditions/Configuration Metadata for ATLAS), has been developed to make globally important run-level metadata more readily accessible. It is based on a relational database storing directly extracted, refined, reduced, and derived information from system-specific database sources as well as information from non-database sources. This information facilitates a variety of unique dynamic interfaces and provides information to enhance the functionality of other systems. This presentation will give an overview of the components of the COMA system, enumerate its diverse data sources, and give examples of some of the interfaces it facilitates. We list important principles behind COMA schema and interface design, and how features of these principles create coherence and eliminate redundancy among the components of the overall system. In addition, we elucidate how interface logging data has been used to refine COMA content and improve the value and performance of end-user...

  1. EPA Metadata Style Guide Keywords and EPA Organization Names

    Science.gov (United States)

    The keywords and EPA organization names listed below, along with EPA’s Metadata Style Guide, are intended to provide suggestions and guidance to assist with the standardization of metadata records.

  2. A Metadata Standard for Hydroinformatic Data Conforming to International Standards

    Science.gov (United States)

    Notay, Vikram; Carstens, Georg; Lehfeldt, Rainer

    2017-04-01

    worldwide, the profile provides a means to describe hydroinformatic data that conforms to existing metadata standards. Additionally, EU and German national standards, INSPIRE and GDI-DE have been considered to ensure interoperability on an international and national level. Finally, elements of the GovData profile of the Federal Government of Germany have been integrated to be able to participate in its Open Data initiative. All these factors make the metadata profile developed at BAW highly suitable for describing hydroinformatic data in particular and physical state variables in general. Further details about this metadata profile will be presented at the conference. Acknowledgements: The authors would like to thank Christoph Wosniok and Peter Schade for their contributions towards the development of this metadata standard.

  3. Reproductive Health Services Discrete-Event Simulation

    OpenAIRE

    Lee, Sungjoo; Giles, Denise F.; Goldsman, David; Cook, Douglas A.; Mishra, Ninad; McCarthy, Brian

    2006-01-01

    Low-resource healthcare environments are often characterized by patient flow patterns with varying patient risks, extensive patient waiting times, uneven workload distributions, and inefficient service delivery. Models from industrial and systems engineering allow for a closer examination of processes by applying discrete-event computer simulation techniques to evaluate and optimize hospital performance.
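
    A bare-bones example of the discrete-event technique, a single-provider clinic queue driven by an event list, is sketched below with assumed arrival and service rates. Real models of reproductive health services would include patient risk classes, multiple resources, and validated input distributions.

```python
import heapq, random

random.seed(1)
ARRIVAL_MEAN, SERVICE_MEAN, HORIZON = 12.0, 9.0, 8 * 60   # minutes (assumed rates)

events = [(random.expovariate(1 / ARRIVAL_MEAN), "arrival")]
queue, busy_until, waits = [], 0.0, []

while events:
    t, kind = heapq.heappop(events)
    if t > HORIZON:
        break
    if kind == "arrival":
        queue.append(t)
        heapq.heappush(events, (t + random.expovariate(1 / ARRIVAL_MEAN), "arrival"))
    # Start service whenever the single provider is free and someone is waiting.
    if queue and busy_until <= t:
        arrived = queue.pop(0)
        waits.append(t - arrived)
        busy_until = t + random.expovariate(1 / SERVICE_MEAN)
        heapq.heappush(events, (busy_until, "departure"))

print(f"patients served: {len(waits)}, mean wait: {sum(waits)/len(waits):.1f} min")
```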

  4. Enriching The Metadata On CDS

    CERN Document Server

    Chhibber, Nalin

    2014-01-01

    The project report revolves around the open source software package called Invenio. It provides the tools for managing digital assets in a repository and drives the CERN Document Server. The primary objective is to enhance the existing metadata in CDS with data from other libraries. An implicit part of this task is to manage disambiguation (within incoming data), remove duplicate entries, and handle replication between new and existing records. All such elements and their corresponding changes are integrated within Invenio to make the upgraded metadata available on the CDS. The latter part of the report discusses some changes related to the Invenio code-base itself.

  5. EU Law and Mass Internet Metadata Surveillance in the Post-Snowden Era

    Directory of Open Access Journals (Sweden)

    Nora Ni Loideain

    2015-09-01

    Full Text Available Legal frameworks exist within democracies to prevent the misuse and abuse of personal data that law enforcement authorities obtain from private communication service providers. The fundamental rights to respect for private life and the protection of personal data underpin this framework within the European Union. Accordingly, the protection of the principles and safeguards required by these rights is key to ensuring that the oversight of State surveillance powers is robust and transparent. Furthermore, without the robust scrutiny of independent judicial review, the principles and safeguards guaranteed by these rights may become more illusory than real. Following the Edward Snowden revelations, major concerns have been raised worldwide regarding the legality, necessity and proportionality standards governing these laws. In 2014, the highest court in the EU struck down the legal framework that imposed a mandatory duty on communication service providers to undertake the mass retention of metadata for secret intelligence and law enforcement authorities across the EU. This article considers the influence of the Snowden revelations on this landmark judgment. Subsequently, the analysis explores the significance of this ruling for the future reform of EU law governing metadata surveillance and its contribution to the worldwide debate on indiscriminate and covert monitoring in the post-Snowden era.

  6. The Theory and Implementation for Metadata in Digital Library/Museum

    Directory of Open Access Journals (Sweden)

    Hsueh-hua Chen

    1998-12-01

    Full Text Available Digital Libraries and Museums (DL/M) have become one of the important research issues in Library and Information Science as well as other related fields. This paper describes the basic concepts of DL/M and briefly introduces the development of the Taiwan Digital Museum Project. Based on the features of various collections, we discuss how to maintain, manage, and exchange metadata, especially from the viewpoint of users. We propose a draft metadata scheme, MICI (Metadata Interchange for Chinese Information), developed by the ROSS (Resources Organization and Searching Specification) team. Finally, current problems and the future development of metadata are touched upon. [Article content in Chinese]

  7. Web Approach for Ontology-Based Classification, Integration, and Interdisciplinary Usage of Geoscience Metadata

    Directory of Open Access Journals (Sweden)

    B Ritschel

    2012-10-01

    Full Text Available The Semantic Web is a W3C approach that integrates the different sources of semantics within documents and services using ontology-based techniques. The main objective of this approach in the geoscience domain is the improvement of understanding, integration, and usage of Earth and space science related web content in terms of data, information, and knowledge for machines and people. The modeling and representation of semantic attributes and relations within and among documents can be realized by human readable concept maps and machine readable OWL documents. The objectives for the usage of the Semantic Web approach in the GFZ data center ISDC project are the design of an extended classification of metadata documents for product types related to instruments, platforms, and projects as well as the integration of different types of metadata related to data product providers, users, and data centers. Sources of content and semantics for the description of Earth and space science product types and related classes are standardized metadata documents (e.g., DIF documents, publications, grey literature, and Web pages. Other sources are information provided by users, such as tagging data and social navigation information. The integration of controlled vocabularies as well as folksonomies plays an important role in the design of well formed ontologies.

  8. Metadata Exporter for Scientific Photography Management

    Science.gov (United States)

    Staudigel, D.; English, B.; Delaney, R.; Staudigel, H.; Koppers, A.; Hart, S.

    2005-12-01

    Photographs have become an increasingly important medium, especially with the advent of digital cameras. It has become inexpensive to take photographs and quickly post them on a website. However informative photos may be, they still need to be displayed in a convenient way, and be cataloged in such a manner that makes them easily locatable. Managing the great number of photographs that digital cameras allow and creating a format for efficient dissemination of the information related to the photos is a tedious task. Products such as Apple's iPhoto have greatly eased the task of managing photographs. However, they often have limitations. Un-customizable metadata fields and poor metadata extraction tools limit their scientific usefulness. A solution to this persistent problem is a customizable metadata exporter. On the ALIA expedition, we successfully managed the thousands of digital photos we took. We did this with iPhoto and a version of the exporter that is now available to the public under the name "CustomHTMLExport" (http://www.versiontracker.com/dyn/moreinfo/macosx/27777), currently undergoing formal beta testing. This software allows the use of customized metadata fields (including description, time, date, GPS data, etc.), which are exported along with the photo. It can also produce webpages with this data straight from iPhoto, in a much more flexible way than is already allowed. With this tool it becomes very easy to manage and distribute scientific photos.

  9. Using Google Tag Manager and Google Analytics to track DSpace metadata fields as custom dimensions

    Directory of Open Access Journals (Sweden)

    Suzanna Conrad

    2015-01-01

    Full Text Available DSpace can be problematic for those interested in tracking download and pageview statistics granularly. Some libraries have implemented code to track events on websites and some have experimented with using Google Tag Manager to automate event tagging in DSpace. While these approaches make it possible to track download statistics, granular details such as authors, content types, titles, advisors, and other fields for which metadata exist are generally not tracked in DSpace or Google Analytics without coding. Moreover, it can be time consuming to track and assess pageview data and relate that data back to particular metadata fields. This article will detail the learning process of incorporating custom dimensions for tracking these detailed fields including trial and error attempts to use the data import function manually in Google Analytics, to automate the data import using Google APIs, and finally to automate the collection of dimension data in Google Tag Manager by mimicking SEO practices for capturing meta tags. This specific case study refers to using Google Tag Manager and Google Analytics with DSpace; however, this method may also be applied to other types of websites or systems.

  10. INSPIRE: Managing Metadata in a Global Digital Library for High-Energy Physics

    OpenAIRE

    Martin Montull, Javier

    2011-01-01

    Four leading laboratories in the High-Energy Physics (HEP) field are collaborating to roll-out the next-generation scientific information portal: INSPIRE. The goal of this project is to replace the popular 40 year-old SPIRES database. INSPIRE already provides access to about 1 million records and includes services such as fulltext search, automatic keyword assignment, ingestion and automatic display of LaTeX, citation analysis, automatic author disambiguation, metadata harvesting, extraction ...

  11. Automated Feature and Event Detection with SDO AIA and HMI Data

    Science.gov (United States)

    Davey, Alisdair; Martens, P. C. H.; Attrill, G. D. R.; Engell, A.; Farid, S.; Grigis, P. C.; Kasper, J.; Korreck, K.; Saar, S. H.; Su, Y.; Testa, P.; Wills-Davey, M.; Savcheva, A.; Bernasconi, P. N.; Raouafi, N.-E.; Delouille, V. A.; Hochedez, J. F..; Cirtain, J. W.; Deforest, C. E.; Angryk, R. A.; de Moortel, I.; Wiegelmann, T.; Georgouli, M. K.; McAteer, R. T. J.; Hurlburt, N.; Timmons, R.

    The Solar Dynamics Observatory (SDO) represents a new frontier in quantity and quality of solar data. At about 1.5 TB/day, the data will not be easily digestible by solar physicists using the same methods that have been employed for images from previous missions. In order for solar scientists to use the SDO data effectively they need meta-data that will allow them to identify and retrieve data sets that address their particular science questions. We are building a comprehensive computer vision pipeline for SDO, abstracting complete metadata on many of the features and events detectable on the Sun without human intervention. Our project unites more than a dozen individual, existing codes into a systematic tool that can be used by the entire solar community. The feature finding codes will run as part of the SDO Event Detection System (EDS) at the Joint Science Operations Center (JSOC; joint between Stanford and LMSAL). The metadata produced will be stored in the Heliophysics Event Knowledgebase (HEK), which will be accessible on-line for the rest of the world directly or via the Virtual Solar Observatory (VSO) . Solar scientists will be able to use the HEK to select event and feature data to download for science studies.

  12. Languages for Metadata

    NARCIS (Netherlands)

    Brussee, R.; Veenstra, M.; Blanken, Henk; de Vries, A.P.; Blok, H.E.; Feng, L.

    2007-01-01

    The term meta originates from the Greek word μετά, meaning after. The word Metaphysics is the title of Aristotle’s book coming after his book on nature called Physics. This has given meta the modern connotation of a nature of a higher order or of a more fundamental kind [1]. Literally, metadata is

  13. Metadata capture in an electronic notebook: How to make it as simple as possible?

    Directory of Open Access Journals (Sweden)

    Menzel, Julia

    2015-09-01

    Full Text Available In the last few years, electronic laboratory notebooks (ELNs) have become popular. ELNs offer the great possibility of capturing metadata automatically. Due to the high documentation effort, metadata documentation is neglected in science. To close the gap between good data documentation and the high documentation effort for scientists, a first user-friendly solution to capture metadata in an easy way was developed. At first, different protocols for the Western Blot were collected within the Collaborative Research Center 1002 and analyzed. Together with existing metadata standards identified in a literature search, a first version of the metadata scheme was developed. Secondly, the metadata scheme was customized for future users, including the implementation of default values for automated metadata documentation. Twelve protocols for the Western Blot were used to construct one standard protocol with ten different experimental steps. Three already existing metadata standards were used as models to construct the first version of the metadata scheme, consisting of 133 data fields in ten experimental steps. Through a revision with future users, the final metadata scheme was shortened to 90 items in three experimental steps. Using individualized default values, 51.1% of the metadata can be captured with present values in the ELN. This lowers the data documentation effort. At the same time, researchers could benefit by providing standardized metadata for data sharing and re-use.

  14. Evolution of the ATLAS Metadata Interface (AMI)

    CERN Document Server

    Odier, Jerome; The ATLAS collaboration; Fulachier, Jerome; Lambert, Fabian

    2015-01-01

    The ATLAS Metadata Interface (AMI) can be considered to be a mature application because it has existed for at least 10 years. Over the years, the number of users and the number of functions provided for these users have increased. It has been necessary to adapt the hardware infrastructure in a seamless way so that the Quality of Service remains high. We will describe the evolution of the application from the initial one, using a single server with a MySQL backend database, to the current state, where we use a cluster of Virtual Machines on the French Tier 1 Cloud at Lyon, an ORACLE database backend also at Lyon, with replication to CERN using ORACLE streams behind a back-up server.

  15. The future of event-level information repositories, indexing, and selection in ATLAS

    International Nuclear Information System (INIS)

    Barberis, D; Cranshaw, J; Malon, D; Gemmeren, P Van; Zhang, Q; Dimitrov, G; Nairz, A; Sorokoletov, R; Doherty, T; Quilty, D; Gallas, E J; Hrivnac, J; Nowak, M

    2014-01-01

    ATLAS maintains a rich corpus of event-by-event information that provides a global view of the billions of events the collaboration has measured or simulated, along with sufficient auxiliary information to navigate to and retrieve data for any event at any production processing stage. This unique resource has been employed for a range of purposes, from monitoring, statistics, anomaly detection, and integrity checking, to event picking, subset selection, and sample extraction. Recent years of data-taking provide a foundation for assessment of how this resource has and has not been used in practice, of the uses for which it should be optimized, of how it should be deployed and provisioned for scalability to future data volumes, and of the areas in which enhancements to functionality would be most valuable. This paper describes how ATLAS event-level information repositories and selection infrastructure are evolving in light of this experience, and in view of their expected roles both in wide-area event delivery services and in an evolving ATLAS analysis model in which the importance of efficient selective access to data can only grow.

  16. Metadata Schema Used in OCLC Sampled Web Pages

    Directory of Open Access Journals (Sweden)

    Fei Yu

    2005-12-01

    Full Text Available The tremendous growth of Web resources has made information organization and retrieval more and more difficult. As one approach to this problem, metadata schemas have been developed to characterize Web resources. However, many questions have been raised about the use of metadata schemas, such as: which metadata schemas have been used on the Web? How did they describe Web-accessible information? What is the distribution of these metadata schemas among Web pages? Do certain schemas dominate the others? To address these issues, this study analyzed 16,383 Web pages with meta tags extracted from 200,000 OCLC sampled Web pages in 2000. It found that only 8.19% of Web pages used meta tags; description tags, keyword tags, and Dublin Core tags were the only three schemas used in the Web pages. This article revealed the use of meta tags in terms of their function distribution, syntax characteristics, granularity of the Web pages, and the length distribution and word number distribution of both description and keywords tags.
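    As a rough sketch of the kind of harvesting such a study relies on, the snippet below uses Python's standard html.parser to count meta-tag schemas (description, keywords, Dublin Core) on a page; it is illustrative only and not the tooling used in the article.

        from collections import Counter
        from html.parser import HTMLParser

        class MetaTagCounter(HTMLParser):
            """Count meta tags by schema: description, keywords, or Dublin Core (DC.*)."""
            def __init__(self):
                super().__init__()
                self.schemes = Counter()

            def handle_starttag(self, tag, attrs):
                if tag != "meta":
                    return
                name = (dict(attrs).get("name") or "").lower()
                if name.startswith("dc."):
                    self.schemes["dublin_core"] += 1
                elif name in ("description", "keywords"):
                    self.schemes[name] += 1

        page = ('<html><head><meta name="keywords" content="metadata, web">'
                '<meta name="DC.Title" content="Example"></head></html>')
        counter = MetaTagCounter()
        counter.feed(page)
        print(counter.schemes)   # Counter({'keywords': 1, 'dublin_core': 1})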

  17. Developing a Metadata Infrastructure to facilitate data driven science gateway and to provide Inspire/GEMINI compliance for CLIPC

    Science.gov (United States)

    Mihajlovski, Andrej; Plieger, Maarten; Som de Cerff, Wim; Page, Christian

    2016-04-01

    indicators. Key is the availability of standardized metadata describing indicator data and services. This will enable standardization and interoperability between the different distributed services of CLIPC. To disseminate CLIPC indicator data, transformed data products enabling impact assessments, and climate change impact indicators, a standardized metadata infrastructure is provided. The challenge is that the compliance of existing metadata with the INSPIRE ISO standards and GEMINI standards needs to be extended to further allow the web portal to be generated from the available metadata blueprint. The information provided in the headers of netCDF files available through multiple catalogues allows us to generate ISO-compliant metadata, which is in turn used to generate web-based interface content, as well as OGC-compliant web services such as WCS and WMS for the front end and WPS interactions for the scientific users to combine and generate new datasets. The goal of the metadata infrastructure is to provide a blueprint for creating a data-driven science portal, generated from the underlying GIS data, web services and processing infrastructure. In the presentation we will present the results and lessons learned.
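    As a simplified illustration of the netCDF-header-to-ISO idea described above, the sketch below (assuming the netCDF4 Python library) maps a few CF-style global attributes to ISO-like field paths; the mapping is invented for illustration and is far smaller than the actual INSPIRE/GEMINI profile.

        from netCDF4 import Dataset

        # Illustrative mapping from CF-style global attributes to ISO 19115-like fields;
        # the real CLIPC metadata profile is much richer than this.
        NC_TO_ISO = {
            "title": "identificationInfo/citation/title",
            "institution": "contact/organisationName",
            "summary": "identificationInfo/abstract",
        }

        def harvest(nc_path):
            """Read the netCDF header and return a small ISO-style metadata dict."""
            ds = Dataset(nc_path)
            attrs = {name: ds.getncattr(name) for name in ds.ncattrs()}
            ds.close()
            return {iso: attrs[nc] for nc, iso in NC_TO_ISO.items() if nc in attrs}

        # record = harvest("indicator.nc")   # e.g. {'identificationInfo/citation/title': '...'}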

  18. CHARMe Commentary metadata for Climate Science: collecting, linking and sharing user feedback on climate datasets

    Science.gov (United States)

    Blower, Jon; Lawrence, Bryan; Kershaw, Philip; Nagni, Maurizio

    2014-05-01

    The research process can be thought of as an iterative activity, initiated based on prior domain knowledge as well as on a number of external inputs, and producing a range of outputs including datasets, studies and peer-reviewed publications. These outputs may describe the problem under study, the methodology used, the results obtained, etc. In any new publication, the author may cite or comment on other papers or datasets in order to support their research hypothesis. However, as their work progresses, the researcher may draw from many other latent channels of information. These could include, for example, a private conversation following a lecture or during a social dinner, or an opinion expressed concerning some significant event such as an earthquake or a satellite failure. In addition, other public sources of grey literature are important, such as informal papers (for example arXiv deposits), reports and studies. The climate science community is no exception to this pattern; the CHARMe project, funded under the European FP7 framework, is developing an online system for collecting and sharing user feedback on climate datasets, to help users judge how suitable such climate data are for an intended application. The user feedback could be comments about assessments, citations, or provenance of the dataset, or other information such as descriptions of uncertainty or data quality. We define this as a distinct category of metadata called Commentary or C-metadata. We link C-metadata with target climate datasets using a Linked Data approach via the Open Annotation data model. In the context of Linked Data, C-metadata plays the role of a resource which, depending on its nature, may be accessed as simple text or as more structured content. The project is implementing a range of software tools to create, search or visualize C-metadata, including a JavaScript plugin enabling this functionality to be integrated in situ with data provider portals.
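    A minimal sketch of what one C-metadata record could look like when expressed along the lines of the Open Annotation data model (the comment is the body, the climate dataset is the target); all identifiers below are placeholders, not actual CHARMe URIs.

        import json

        annotation = {
            "@context": "http://www.w3.org/ns/oa.jsonld",
            "@type": "oa:Annotation",
            "oa:motivatedBy": "oa:commenting",
            # The commentary itself (C-metadata).
            "oa:hasBody": {
                "@type": "cnt:ContentAsText",
                "cnt:chars": "Values before 1982 carry larger uncertainty; see the cited reprocessing note.",
            },
            # The climate dataset the comment is about (placeholder identifier).
            "oa:hasTarget": {"@id": "http://example.org/datasets/sst-climatology-v2"},
        }

        print(json.dumps(annotation, indent=2))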

  19. Describing Geospatial Assets in the Web of Data: A Metadata Management Scenario

    Directory of Open Access Journals (Sweden)

    Cristiano Fugazza

    2016-12-01

    Full Text Available Metadata management is an essential enabling factor for geospatial assets because discovery, retrieval, and actual usage of the latter are tightly bound to the quality of these descriptions. Unfortunately, the multi-faceted landscape of metadata formats, requirements, and conventions makes it difficult to identify editing tools that can be easily tailored to the specificities of a given project, workgroup, and Community of Practice. Our solution is a template-driven metadata editing tool that can be customised to any XML-based schema. Its output consists of standards-compliant metadata records that also have a semantics-aware counterpart enabling novel exploitation techniques. Moreover, external data sources can easily be plugged in to provide autocompletion functionalities on the basis of the data structures made available on the Web of Data. Besides presenting the essentials of customisation of the editor by means of two use cases, we extend the methodology to the whole life cycle of geospatial metadata. We demonstrate the novel capabilities enabled by RDF-based metadata representation with respect to traditional metadata management in the geospatial domain.

  20. Building a High Performance Metadata Broker using Clojure, NoSQL and Message Queues

    Science.gov (United States)

    Truslove, I.; Reed, S.

    2013-12-01

    In practice, Earth and Space Science Informatics often relies on getting more done with less: fewer hardware resources, less IT staff, fewer lines of code. As a capacity-building exercise focused on rapid development of high-performance geoinformatics software, the National Snow and Ice Data Center (NSIDC) built a prototype metadata brokering system using a new JVM language, modern database engines and virtualized or cloud computing resources. The metadata brokering system was developed with the overarching goals of (i) demonstrating a technically viable product with as little development effort as possible, (ii) using very new yet very popular tools and technologies in order to get the most value from the least legacy-encumbered code bases, and (iii) being a high-performance system by using scalable subcomponents, and implementation patterns typically used in web architectures. We implemented the system using the Clojure programming language (an interactive, dynamic, Lisp-like JVM language), Redis (a fast in-memory key-value store) as both the data store for original XML metadata content and as the provider for the message queueing service, and ElasticSearch for its search and indexing capabilities to generate search results. On evaluating the results of the prototyping process, we believe that the technical choices did in fact allow us to do more for less, due to the expressive nature of the Clojure programming language and its easy interoperability with Java libraries, and the successful reuse or re-application of high performance products or designs. This presentation will describe the architecture of the metadata brokering system, cover the tools and techniques used, and describe lessons learned, conclusions, and potential next steps.
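    The prototype itself is written in Clojure; purely to illustrate the same architectural pattern (original XML kept verbatim in a key-value store, searchable fields pushed to an index), here is a Python stand-in assuming the redis and elasticsearch client libraries and locally running services; the index and field names are invented.

        import redis
        from elasticsearch import Elasticsearch

        kv = redis.Redis(host="localhost", port=6379)
        es = Elasticsearch("http://localhost:9200")

        def ingest(record_id, xml, title, abstract):
            """Store the original XML verbatim and index only the searchable fields."""
            kv.set(f"metadata:{record_id}", xml)                  # source of truth
            es.index(index="metadata", id=record_id,
                     document={"title": title, "abstract": abstract})

        def fetch(record_id):
            """Return the original XML exactly as it was ingested."""
            return kv.get(f"metadata:{record_id}")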

  1. A Case for Data and Service Fusions

    Science.gov (United States)

    Huang, T.; Boening, C.; Quach, N. T.; Gill, K.; Zlotnicki, V.; Moore, B.; Tsontos, V. M.

    2015-12-01

    In this distributed, data-intensive era, developing any solution that requires multi-disciplinary data and services requires careful review of the interfaces with data and service providers. Information is stored in many different locations and data services are distributed across the Internet. In the design and development of mash-up heterogeneous data systems, the challenge is not entirely technological; it is our ability to document the external interface specifications and to create a coherent environment for our users. While it is impressive to present a complex web of data, the true measure of our success is the quality of the data we are serving, the throughput of what we build, and the user experience. The presentation covers two currently funded NASA projects that require integration of heterogeneous data and services that reside in different locations. The NASA Sea Level Change Portal is designed as a "one-stop" source for current sea level change information. Behind this portal is an architecture that integrates data and services from various sources, including PI-generated products, satellite products from the DAACs, metadata from the ESDIS Common Metadata Repository (CMR) and other sources, and services residing in the data centers, universities, and ESDIS. The recently funded Distributed Oceanographic Matchup Service (DOMS) project is a project under the NASA Advanced Information Systems Technology (AIST) program. DOMS will integrate with satellite products managed by the NASA Physical Oceanography Distributed Active Archive Center (PO.DAAC) and three different in-situ projects that are located in different parts of the U.S. These projects are good examples of delivering content-rich solutions through the mash-up of heterogeneous data and systems.

  2. The Earthscope USArray Array Network Facility (ANF): Metadata, Network and Data Monitoring, Quality Assurance During the Second Year of Operations

    Science.gov (United States)

    Eakins, J. A.; Vernon, F. L.; Martynov, V.; Newman, R. L.; Cox, T. A.; Lindquist, K. L.; Hindley, A.; Foley, S.

    2005-12-01

    The Array Network Facility (ANF) for the Earthscope USArray Transportable Array seismic network is responsible for: the delivery of all Transportable Array stations (400 at full deployment) and telemetered Flexible Array stations (up to 200) to the IRIS Data Management Center; station command and control; verification and distribution of metadata; providing useful remotely accessible world wide web interfaces for personnel at the Array Operations Facility (AOF) to access state of health information; and quality control for all data. To meet these goals, we use the Antelope software package to facilitate data collection and transfer, generation and merging of the metadata, real-time monitoring of dataloggers, generation of station noise spectra, and analyst review of individual events. Recently, an Antelope extension to the PHP scripting language has been implemented which facilitates the dynamic presentation of the real-time data to local web pages. Metadata transfers have been simplified by the use of orb transfer technologies at the ANF and receiver end points. Web services are being investigated as a means to make a potentially complicated set of operations easy to follow and reproduce for each newly installed or decommissioned station. As part of the quality control process, daily analyst review has highlighted areas where neither the regional network bulletins nor the USGS global bulletin have published solutions. Currently four regional networks (Anza, BDSN, SCSN, and UNR) contribute data to the Transportable Array with additional contributors expected. The first 100 stations (42 new Earthscope stations) were operational by September 2005 with all but one of the California stations installed. By year's end, weather permitting, the total number of stations deployed is expected to be around 145. Visit http://anf.ucsd.edu for more information on the project and current status.

  3. Improving linear accelerator service response with a real- time electronic event reporting system.

    Science.gov (United States)

    Hoisak, Jeremy D P; Pawlicki, Todd; Kim, Gwe-Ya; Fletcher, Richard; Moore, Kevin L

    2014-09-08

    To track linear accelerator performance issues, an online event recording system was developed in-house for use by therapists and physicists to log the details of technical problems arising on our institution's four linear accelerators. In use since October 2010, the system was designed so that all clinical physicists would receive email notification when an event was logged. Starting in October 2012, we initiated a pilot project in collaboration with our linear accelerator vendor to explore a new model of service and support, in which event notifications were also sent electronically directly to dedicated engineers at the vendor's technical help desk, who then initiated a response to technical issues. Previously, technical issues were reported by telephone to the vendor's call center, which then disseminated information and coordinated a response with the Technical Support help desk and local service engineers. The purpose of this work was to investigate the improvements to clinical operations resulting from this new service model. The new and old service models were quantitatively compared by reviewing event logs and the oncology information system database in the nine months prior to and after initiation of the project. Here, we focus on events that resulted in an inoperative linear accelerator ("down" machine). Machine downtime, vendor response time, treatment cancellations, and event resolution were evaluated and compared over two equivalent time periods. In 389 clinical days, there were 119 machine-down events: 59 events before and 60 after introduction of the new model. In the new model, median time to service response decreased from 45 to 8 min, service engineer dispatch time decreased 44%, downtime per event decreased from 45 to 20 min, and treatment cancellations decreased 68%. The decreased vendor response time and reduced number of on-site visits by a service engineer resulted in decreased downtime and decreased patient treatment cancellations.

  4. A document centric metadata registration tool constructing earth environmental data infrastructure

    Science.gov (United States)

    Ichino, M.; Kinutani, H.; Ono, M.; Shimizu, T.; Yoshikawa, M.; Masuda, K.; Fukuda, K.; Kawamoto, H.

    2009-12-01

    DIAS (Data Integration and Analysis System) is one of the GEOSS activities in Japan. It is also a leading part of the GEOSS task of the same name defined in the GEOSS Ten-Year Implementation Plan. The main mission of DIAS is to construct a data infrastructure that can effectively integrate earth environmental data such as observation data, numerical model outputs, and socio-economic data provided from the fields of climate, water cycle, ecosystem, ocean, biodiversity and agriculture. Some of DIAS's data products are available at http://www.jamstec.go.jp/e/medid/dias. Most earth environmental data commonly have spatial and temporal attributes such as the geographic scope covered or the date of creation. The metadata standards covering these common attributes are published by the geographic information technical committee (TC211) of ISO (the International Organization for Standardization) as specifications ISO 19115:2003 and ISO 19139:2007. Accordingly, DIAS metadata is developed based on the ISO/TC211 metadata standards. From the viewpoint of data users, metadata is useful not only for data retrieval and analysis but also for interoperability and information sharing among experts, beginners and nonprofessionals. On the other hand, from the viewpoint of data providers, two problems were pointed out after discussions. One is that data providers prefer to minimize additional tasks and the time spent creating metadata. Another is that data providers want to manage and publish documents that explain their data sets more comprehensively. To solve these problems, we have been developing a document-centric metadata registration tool. The features of our tool are that the generated documents are available instantly and there is no extra cost for data providers to generate metadata. Also, the tool is developed as a Web application, so it does not demand any additional software from data providers as long as they have a web browser. The interface of the tool

  5. Automated Metadata Extraction

    Science.gov (United States)

    2008-06-01

    Store [4]. The files purchased from the iTunes Music Store include the following metadata: Name, Email address of purchaser, Year, Album. The remainder of this record excerpt consists of table-of-contents and glossary fragments from the source report, covering Music (MP3 and AAC), the Tagged Image File Format, the Moving Picture Experts Group (MPEG) set of standards for music encoding, and the Open Document Format (ODF), an open, license-free, and clearly documented file format.

  6. Learning Object Metadata in a Web-Based Learning Environment

    NARCIS (Netherlands)

    Avgeriou, Paris; Koutoumanos, Anastasios; Retalis, Symeon; Papaspyrou, Nikolaos

    2000-01-01

    The plethora and variance of learning resources embedded in modern web-based learning environments require a mechanism to enable their structured administration. This goal can be achieved by defining metadata on them and constructing a system that manages the metadata in the context of the learning

  7. Automating the Extraction of Metadata from Archaeological Data Using iRods Rules

    Directory of Open Access Journals (Sweden)

    David Walling

    2011-10-01

    Full Text Available The Texas Advanced Computing Center and the Institute for Classical Archaeology at the University of Texas at Austin developed a method that uses iRods rules and a Jython script to automate the extraction of metadata from digital archaeological data. The first step was to create a record-keeping system to classify the data. The record-keeping system employs file and directory hierarchy naming conventions designed specifically to maintain the relationship between the data objects and map the archaeological documentation process. The metadata implicit in the record-keeping system is automatically extracted upon ingest, combined with additional sources of metadata, and stored alongside the data in the iRods preservation environment. This method enables a more organized workflow for the researchers, helps them archive their data close to the moment of data creation, and avoids error prone manual metadata input. We describe the types of metadata extracted and provide technical details of the extraction process and storage of the data and metadata.
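    The extraction in the article runs as iRods rules invoking a Jython script; as a plain-Python sketch of the core idea, the snippet below derives metadata from a path that follows a naming convention. The convention shown is invented for illustration and is not the project's actual record-keeping system.

        import re

        # Hypothetical convention: <project>/<season>/<trench>/<artifact-id>_<material>.tif
        PATH_PATTERN = re.compile(
            r"(?P<project>[^/]+)/(?P<season>\d{4})/(?P<trench>[^/]+)/"
            r"(?P<artifact_id>[A-Z0-9]+)_(?P<material>[a-z]+)\.tif$"
        )

        def extract_metadata(path):
            """Turn a convention-following path into a metadata dict (empty if it does not match)."""
            match = PATH_PATTERN.search(path)
            return match.groupdict() if match else {}

        print(extract_metadata("survey2008/2008/T15/AR0042_ceramic.tif"))
        # {'project': 'survey2008', 'season': '2008', 'trench': 'T15',
        #  'artifact_id': 'AR0042', 'material': 'ceramic'}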

  8. Metadata Quality in Institutional Repositories May be Improved by Addressing Staffing Issues

    Directory of Open Access Journals (Sweden)

    Elizabeth Stovold

    2016-09-01

    Full Text Available A Review of: Moulaison, S. H., & Dykas, F. (2016). High-quality metadata and repository staffing: Perceptions of United States–based OpenDOAR participants. Cataloging & Classification Quarterly, 54(2), 101-116. http://dx.doi.org/10.1080/01639374.2015.1116480 Objective – To investigate the quality of institutional repository metadata, metadata practices, and identify barriers to quality. Design – Survey questionnaire. Setting – The OpenDOAR online registry of worldwide repositories. Subjects – A random sample of 50 from 358 administrators of institutional repositories in the United States of America listed in the OpenDOAR registry. Methods – The authors surveyed a random sample of administrators of American institutional repositories included in the OpenDOAR registry. The survey was distributed electronically. Recipients were asked to forward the email if they felt someone else was better suited to respond. There were questions about the demographics of the repository, the metadata creation environment, metadata quality, standards and practices, and obstacles to quality. Results were analyzed in Excel, and qualitative responses were coded by two researchers together. Main results – There was a 42% (n=21) response rate to the section on metadata quality, a 40% (n=20) response rate to the metadata creation section, and 40% (n=20) to the section on obstacles to quality. The majority of respondents rated their metadata quality as average (65%, n=13) or above average (30%, n=5). No one rated the quality as high or poor, while 10% (n=2) rated the quality as below average. The survey found that the majority of descriptive metadata was created by professional (84%, n=16) or paraprofessional (53%, n=10) library staff. Professional staff were commonly involved in creating administrative metadata, reviewing the metadata, and selecting standards and documentation. Department heads and advisory committees were also involved in standards and documentation

  9. Studies of Big Data metadata segmentation between relational and non-relational databases

    Science.gov (United States)

    Golosova, M. V.; Grigorieva, M. A.; Klimentov, A. A.; Ryabinkin, E. A.; Dimitrov, G.; Potekhin, M.

    2015-12-01

    In recent years the concept of Big Data has become well established in IT. Systems managing large data volumes produce metadata that describe data and workflows. These metadata are used to obtain information about the current system state and for statistical and trend analysis of the processes these systems drive. Over time the amount of stored metadata can grow dramatically. In this article we present our studies to demonstrate how metadata storage scalability and performance can be improved by using a hybrid RDBMS/NoSQL architecture.
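    A toy sketch of the segmentation idea: stable, frequently queried attributes go into a relational table while the variable remainder is kept as a schema-free document. Here sqlite3 stands in for the RDBMS and a plain dict of JSON blobs for the NoSQL side; the attribute names are invented, not the schema studied in the paper.

        import json
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE task (task_id INTEGER PRIMARY KEY, status TEXT, ts TEXT)")
        document_store = {}   # stand-in for a NoSQL document store

        def store(task_id, status, ts, extra):
            # Stable, indexable attributes go to the relational side...
            conn.execute("INSERT INTO task VALUES (?, ?, ?)", (task_id, status, ts))
            # ...while free-form, evolving attributes go to the document side.
            document_store[task_id] = json.dumps(extra)

        store(42, "done", "2015-06-01T12:00:00", {"site": "EXAMPLE-SITE", "attempts": 3})
        status = conn.execute("SELECT status FROM task WHERE task_id = 42").fetchone()[0]
        print(status, json.loads(document_store[42])["site"])   # done EXAMPLE-SITE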

  10. Studies of Big Data metadata segmentation between relational and non-relational databases

    CERN Document Server

    Golosova, M V; Klimentov, A A; Ryabinkin, E A; Dimitrov, G; Potekhin, M

    2015-01-01

    In recent years the concept of Big Data has become well established in IT. Systems managing large data volumes produce metadata that describe data and workflows. These metadata are used to obtain information about the current system state and for statistical and trend analysis of the processes these systems drive. Over time the amount of stored metadata can grow dramatically. In this article we present our studies to demonstrate how metadata storage scalability and performance can be improved by using a hybrid RDBMS/NoSQL architecture.

  11. Migration of the ATLAS Metadata Interface (AMI) to Web 2.0 and cloud

    Science.gov (United States)

    Odier, J.; Albrand, S.; Fulachier, J.; Lambert, F.

    2015-12-01

    The ATLAS Metadata Interface (AMI), a mature application of more than 10 years of existence, is currently under adaptation to some recently available technologies. The web interfaces, which previously manipulated XML documents using XSL transformations, are being migrated to Asynchronous JavaScript (AJAX). Web development is considerably simplified by the introduction of a framework based on JQuery and Twitter Bootstrap. Finally, the AMI services are being migrated to an OpenStack cloud infrastructure.

  12. Integrated Array/Metadata Analytics

    Science.gov (United States)

    Misev, Dimitar; Baumann, Peter

    2015-04-01

    Data comes in various forms and types, and integration usually presents a problem that is often simply ignored and solved with ad-hoc solutions. Multidimensional arrays are a ubiquitous data type that we find at the core of virtually all science and engineering domains, as sensor, model, image, and statistics data. Naturally, arrays are richly described by and intertwined with additional metadata (alphanumeric relational data, XML, JSON, etc). Database systems, however, a fundamental building block of what we call "Big Data", lack adequate support for modelling and expressing these array data/metadata relationships. Array analytics is hence quite primitive, or non-existent, in modern relational DBMSs. Recognizing this, we extended SQL with a new SQL/MDA part seamlessly integrating multidimensional array analytics into the standard database query language. We demonstrate the benefits of SQL/MDA with real-world examples executed in ASQLDB, an open-source mediator system based on HSQLDB and rasdaman, that already implements SQL/MDA.

  13. Metadata and Ontologies in Learning Resources Design

    Science.gov (United States)

    Vidal C., Christian; Segura Navarrete, Alejandra; Menéndez D., Víctor; Zapata Gonzalez, Alfredo; Prieto M., Manuel

    Resource design and development requires knowledge about educational goals, instructional context and learners' characteristics, among others. An important source of this knowledge is metadata. However, metadata by themselves do not provide all the information needed for resource design. Here we argue the need to use different data and knowledge models to improve understanding of the complex processes related to e-learning resources and their management. This paper presents the use of semantic web technologies, such as ontologies, to support the search and selection of resources used in design. Classification is done, based on instructional criteria derived from a knowledge acquisition process, using information provided by the IEEE-LOM metadata standard. The knowledge obtained is represented in an ontology using OWL and SWRL. In this work we give evidence of the implementation of a Learning Object Classifier based on an ontology. We demonstrate that the use of ontologies can support the design activities in e-learning.

  14. A case for user-generated sensor metadata

    Science.gov (United States)

    Nüst, Daniel

    2015-04-01

    Cheap and easy-to-use sensing technology and new developments in ICT towards a global network of sensors and actuators promise previously unthought-of changes for our understanding of the environment. Large professional as well as amateur sensor networks exist, and they are used for specific yet diverse applications across domains such as hydrology, meteorology or early warning systems. However, the impact this "abundance of sensors" has had so far is somewhat disappointing. There is a gap between (community-driven) sensor networks that could provide very useful data and the users of the data. In our presentation, we argue this is due to a lack of metadata that allows determining the fitness for use of a dataset. Syntactic and semantic interoperability for sensor webs have made great progress and continue to be an active field of research, yet they are often quite complex, which is of course due to the complexity of the problem at hand. But still, we see the most generic information for determining fitness for use to be a dataset's provenance, because it allows users to make up their own minds independently from existing classification schemes for data quality. In this work we make the case that curated, user-contributed metadata has the potential to improve this situation. This especially applies to scenarios in which an observed property is applicable in different domains, and to set-ups where the understanding of metadata concepts and (meta-)data quality differs between data provider and user. On the one hand, a citizen does not understand the ISO provenance metadata. On the other hand, a researcher might find issues in publicly accessible time series published by citizens, which the latter might not be aware of or care about. Because users will have to determine fitness for use for each application on their own anyway, we suggest an online collaboration platform for user-generated metadata based on an extremely simplified data model. In the most basic fashion
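    What such an "extremely simplified data model" might look like is sketched below; the fields are our assumption for illustration, not the model actually proposed in the presentation.

        from dataclasses import dataclass
        from datetime import datetime

        @dataclass
        class ProvenanceNote:
            """One user-contributed statement about a sensor time series."""
            dataset_id: str      # identifier of the observed time series
            author: str          # contributor, citizen or researcher
            created: datetime
            text: str            # free-text observation about provenance or quality

        notes = [
            ProvenanceNote("rain-gauge-17", "j.doe", datetime(2015, 4, 1),
                           "Gauge relocated by 2 m; values before April are not comparable."),
        ]
        print(len(notes), "note(s) attached to", notes[0].dataset_id)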

  15. Service Level Status

    CERN Multimedia

    Lopienski, S

    2007-01-01

    Nowadays, IT departments provide, and people use, computing services of an increasingly heterogeneous nature. There is thus a growing need for a status display that groups these different services and reports status and availability in a uniform way. The Service Level Status (SLS) system addresses these needs by providing a web-based display that dynamically shows availability, basic information and statistics about various IT services, as well as the dependencies between them.

  16. FSA 2002 Digital Orthophoto Metadata

    Data.gov (United States)

    Minnesota Department of Natural Resources — Metadata for the 2002 FSA Color Orthophotos Layer. Each orthophoto is represented by a Quarter 24k Quad tile polygon. The polygon attributes contain the quarter-quad...

  17. Evolution of the architecture of the ATLAS Metadata Interface (AMI)

    Science.gov (United States)

    Odier, J.; Aidel, O.; Albrand, S.; Fulachier, J.; Lambert, F.

    2015-12-01

    The ATLAS Metadata Interface (AMI) is now a mature application. Over the years, the number of users and the number of provided functions has dramatically increased. It is necessary to adapt the hardware infrastructure in a seamless way so that the quality of service remains high. We describe the evolution of AMI from its beginning, served by a single MySQL backend database server, to the current state, with a cluster of virtual machines at the French Tier 1, an Oracle database at Lyon with complementary replication to the Oracle DB at CERN, and an AMI back-up server.

  18. Metadata as a means for correspondence on digital media

    NARCIS (Netherlands)

    Stouffs, R.; Kooistra, J.; Tuncer, B.

    2004-01-01

    Metadata derive their action from their association to data and from the relationship they maintain with this data. An interpretation of this action is that the metadata lays claim to the data collection to which it is associated, where the claim is successful if the data collection gains quality as

  19. Separation of metadata and pixel data to speed DICOM tag morphing.

    Science.gov (United States)

    Ismail, Mahmoud; Philbin, James

    2013-01-01

    The DICOM information model combines pixel data and metadata in a single DICOM object. It is not possible to access the metadata separately from the pixel data, yet there are use cases where only the metadata is accessed. The current DICOM object format increases the running time of those use cases. Tag morphing is one of them: it includes deletion, insertion or manipulation of one or more of the metadata attributes. It is typically used for order reconciliation on study acquisition, or to localize the issuer of patient ID (IPID) and the patient ID attributes when data from one domain is transferred to a different domain. In this work, we propose using Multi-Series DICOM (MSD) objects, which separate metadata from pixel data and remove duplicate attributes, to reduce the time required for tag morphing. The time required to update a set of study attributes in each format is compared. The results show that the MSD format significantly reduces the time required for tag morphing.
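    To make the cost concrete, here is a sketch of a typical tag-morphing task (a simple de-identification) written against the pydicom library; in the conventional single-object format the whole file, pixel data included, must be read and rewritten just to change these two attributes, which is the overhead MSD is designed to avoid. The file paths and attribute choices are illustrative.

        import pydicom

        def deidentify(in_path, out_path):
            # Reading the object also pulls in the pixel data, even though only
            # metadata attributes are being morphed.
            ds = pydicom.dcmread(in_path)
            ds.PatientName = "ANONYMOUS"
            ds.PatientID = "000000"
            ds.save_as(out_path)

        # deidentify("study/series1/img0001.dcm", "deid/img0001.dcm")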

  20. Cytometry metadata in XML

    Science.gov (United States)

    Leif, Robert C.; Leif, Stephanie H.

    2016-04-01

    Introduction: The International Society for Advancement of Cytometry (ISAC) has created a standard for the Minimum Information about a Flow Cytometry Experiment (MIFlowCyt 1.0). CytometryML will serve as a common metadata standard for flow and image cytometry (digital microscopy). Methods: The MIFlowCyt data-types were created, as is the rest of CytometryML, in the XML Schema Definition Language (XSD 1.1). The datatypes are primarily based on the Flow Cytometry and the Digital Imaging and Communications in Medicine (DICOM) standards. A small section of the code was formatted with standard HTML formatting elements (p, h1, h2, etc.). Results: 1) The part of MIFlowCyt that describes the Experimental Overview, including the specimen and substantial parts of several other major elements, has been implemented as CytometryML XML schemas (www.cytometryml.org). 2) The feasibility of using MIFlowCyt to provide the combination of an overview, table of contents, and/or an index of a scientific paper or a report has been demonstrated. Previously, a sample electronic publication, EPUB, was created that could contain both MIFlowCyt metadata as well as the binary data. Conclusions: The use of CytometryML technology together with XHTML5 and CSS permits the metadata to be directly formatted and, together with the binary data, to be stored in an EPUB container. This will facilitate formatting, data-mining, presentation, data verification, and inclusion in structured research, clinical, and regulatory documents, as well as demonstrating a publication's adherence to the MIFlowCyt standard and promoting interoperability; it should also result in the textual and numeric data being published using web technology without any change in composition.

  1. Definition of a CDI metadata profile and its ISO 19139 based encoding

    Science.gov (United States)

    Boldrini, Enrico; de Korte, Arjen; Santoro, Mattia; Schaap, Dick M. A.; Nativi, Stefano; Manzella, Giuseppe

    2010-05-01

    The Common Data Index (CDI) is the middleware service adopted by SeaDataNet for discovery and query. The primary goal of the EU-funded project SeaDataNet is to develop a system which provides transparent access to marine data sets and data products from 36 countries in and around Europe. The European context of SeaDataNet requires that the developed system complies with the European Directive INSPIRE. In order to assure the required conformity, a GI-cat based solution is proposed. GI-cat is a broker service able to mediate between different metadata sources and publish them through a consistent and unified interface. In this case GI-cat is used as a front end to the SeaDataNet portal, publishing the original data, based on the CDI v.1 XML schema, through an ISO 19139 application profile catalog interface (OGC CSW AP ISO). The choice of ISO 19139 is supported and driven by the INSPIRE Implementing Rules, which have been used as a reference through the whole development process. A mapping from the CDI data model to ISO 19139 was hence to be implemented in GI-cat, and a first draft was quickly developed, as both CDI v.1 and ISO 19139 happen to be XML implementations based on the same abstract data model (standard ISO 19115 - metadata about geographic information). This first draft mapping pointed out the CDI metadata model differences with respect to ISO 19115, as it was not possible to accommodate all the information contained in CDI v.1 into ISO 19139. Moreover, some modifications were needed in order to reach INSPIRE compliance. The subsequent work consisted of the definition of the CDI metadata model as a profile of ISO 19115. This included checking all the metadata elements present in CDI and their cardinality. A comparison was made with respect to ISO 19115 and possible extensions were identified. ISO 19139 was then chosen as a natural XML implementation of this new CDI metadata profile. The mapping and the profile definition processes were iteratively refined leading up to a

  2. Big Data Discovery and Access Services through NOAA OneStop

    Science.gov (United States)

    Casey, K. S.; Neufeld, D.; Ritchey, N. A.; Relph, J.; Fischman, D.; Baldwin, R.

    2017-12-01

    The NOAA OneStop Project was created as a pathfinder effort to improve the discovery of, access to, and usability of NOAA's vast and diverse collection of big data. OneStop is led by the NOAA/NESDIS National Centers for Environmental Information (NCEI), and is seen as a key NESDIS contribution to NOAA's open data and data stewardship efforts. OneStop consists of an entire framework of services, from storage and interoperable access services at the base, through metadata and catalog services in the middle, to a modern user interface experience at the top. Importantly, it is an open framework where external tools and services can connect at whichever level is most appropriate. Since the beta release of the OneStop user interface at the 2016 Fall AGU meeting, significant progress has been made improving and modernizing many NOAA data collections to optimize their use within the framework. In addition, OneStop has made progress implementing robust metadata management and catalog systems at the collection and granule level and improving the user experience with the web interface. This progress will be summarized and the results of extensive user testing, including professional usability studies, will be reviewed. Key big data technologies supporting the framework will be presented and community input sought on the future directions of the OneStop Project.

  3. PhysDoc: A Distributed Network of Physics Institutions: Collecting, Indexing, and Searching High Quality Documents by Using Harvest; The Dublin Core Metadata Initiative: Mission, Current Activities, and Future Directions; Information Services for Higher Education: A New Competitive Space; Intellectual Property Conservancies.

    Science.gov (United States)

    Severiens, Thomas; Hohlfeld, Michael; Zimmermann, Kerstin; Hilf, Eberhard R.; von Ossietzky, Carl; Weibel, Stuart L.; Koch, Traugott; Hughes, Carol Ann; Bearman, David

    2000-01-01

    Includes four articles that discuss a variety of topics, including a distributed network of physics institution documents called PhysDoc, which harvests information from the local Web servers of professional physics institutions; the Dublin Core metadata initiative; information services for higher education in a competitive environment; and…

  4. Accounting for costs, QALYs, and capacity constraints: using discrete-event simulation to evaluate alternative service delivery and organizational scenarios for hospital-based glaucoma services.

    Science.gov (United States)

    Crane, Glenis J; Kymes, Steven M; Hiller, Janet E; Casson, Robert; Martin, Adam; Karnon, Jonathan D

    2013-11-01

    Decision-analytic models are routinely used as a framework for cost-effectiveness analyses of health care services and technologies; however, these models mostly ignore resource constraints. In this study, we use a discrete-event simulation model to inform a cost-effectiveness analysis of alternative options for the organization and delivery of clinical services in the ophthalmology department of a public hospital. The model is novel, given that it represents both disease outcomes and resource constraints in a routine clinical setting. A 5-year discrete-event simulation model representing glaucoma patient services at the Royal Adelaide Hospital (RAH) was implemented and calibrated to patient-level data. The data were sourced from routinely collected waiting and appointment lists, patient record data, and the published literature. Patient-level costs and quality-adjusted life years were estimated for a range of alternative scenarios, including combinations of alternate follow-up times, booking cycles, and treatment pathways. The model shows that a) extending booking cycle length from 4 to 6 months, b) extending follow-up visit times by 2 to 3 months, and c) using laser in preference to medication are more cost-effective than current practice at the RAH eye clinic. The current simulation model provides a useful tool for informing improvements in the organization and delivery of glaucoma services at a local level (e.g., within a hospital), on the basis of expected effects on costs and health outcomes while accounting for current capacity constraints. Our model may be adapted to represent glaucoma services at other hospitals, whereas the general modeling approach could be applied to many other clinical service areas.
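    Not the authors' RAH model, but for readers unfamiliar with the technique, a discrete-event simulation reduces to a priority queue of future events ordered by time; a generic skeleton is sketched below, with the clinic-specific logic (capacity checks, booking cycles, treatment pathways) reduced to a single illustrative scheduling rule.

        import heapq

        def run(seed_events, horizon):
            """seed_events: list of (time, label) tuples; horizon: stop time."""
            queue = list(seed_events)
            heapq.heapify(queue)
            while queue:
                time, label = heapq.heappop(queue)
                if time > horizon:
                    break
                print(f"t={time:6.1f}  {label}")
                # A clinic model would check slot capacity here and book accordingly;
                # this toy rule just schedules a follow-up visit roughly 3 months later.
                if label == "arrival":
                    heapq.heappush(queue, (time + 90.0, "follow-up"))

        run([(0.0, "arrival"), (10.0, "arrival")], horizon=200.0)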

  5. A message-queuing framework for STAR's online monitoring and metadata collection

    International Nuclear Information System (INIS)

    Arkhipkin, D; Lauret, J; Betts, W

    2011-01-01

    We report our experience in migrating STAR's Online Services (Run Control System, Data Acquisition System, Slow Control System and Subsystem Monitoring) from direct read/write database accesses to a modern non-blocking message-oriented infrastructure. Based on the Advanced Message Queuing Protocol (AMQP) standard, this novel approach does not prescribe the message data structure, allowing great flexibility in its use. After careful consideration, we chose Google Protocol Buffers as our primary (de)serialization format for structured data exchange. This migration allows us to reduce the overall system complexity and greatly improve the reliability of the metadata collection and the performance of our online services in general. We present this new framework through an overview of its software architecture, providing details about our staged and non-disruptive migration process as well as the implementation of pluggable components that allow future improvements without compromising the stability and availability of services.
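    A minimal sketch of the publish side of such a message flow, assuming the pika AMQP client and a locally running broker; STAR serializes with Google Protocol Buffers, for which JSON stands in here so the example stays self-contained, and the queue name and payload fields are invented.

        import json
        import pika

        connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
        channel = connection.channel()
        channel.queue_declare(queue="slow_controls", durable=True)

        # Serialize the monitoring reading (Protocol Buffers in the real system).
        payload = json.dumps({"subsystem": "TPC", "voltage": 1390.5, "ts": 1300000000})
        channel.basic_publish(exchange="", routing_key="slow_controls", body=payload)
        connection.close()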

  6. Utility of collecting metadata to manage a large scale conditions database in ATLAS

    CERN Document Server

    Gallas, EJ; The ATLAS collaboration; Borodin, M; Formica, A

    2014-01-01

    The ATLAS Conditions Database, based on the LCG Conditions Database infrastructure, contains a wide variety of information needed in online data taking and offline analysis. The total volume of ATLAS conditions data is in the multi-Terabyte range. Internally, the active data is divided into 65 separate schemas (each with hundreds of underlying tables) according to overall data taking type, detector subsystem, and whether the data is used offline or strictly online. While each schema has a common infrastructure, each schema's data is entirely independent of other schemas, except at the highest level, where sets of conditions from each subsystem are tagged globally for ATLAS event data reconstruction and reprocessing. The partitioned nature of the conditions infrastructure works well for most purposes, but metadata about each schema is problematic to collect in global tools from such a system because it is only accessible via LCG tools schema by schema. This makes it difficult to get an overview of all schemas,...

  7. Engineering the ATLAS TAG Browser

    CERN Document Server

    Zhang, Q; The ATLAS collaboration

    2011-01-01

    ELSSI is a web-based event metadata (TAG) browser and event-level selection service for ATLAS. TAGs from all ATLAS physics and Monte Carlo data sets are routinely loaded into Oracle databases as an integral part of event processing. As data volumes increase, more and more sites are joining the distributed TAG data hosting topology. Meanwhile, TAG content and database schemata continue to evolve as new user requirements and additional sources of metadata emerge. All of this has posed many challenges to the development of ELSSI, which must support vast amounts of TAG data while source, content, geographic locations, and user query patterns may change over time. In this paper, we describe some of the challenges encountered in the process of developing ELSSI, and the software engineering strategies adopted to address those challenges. Approaches to management of access to data, browsing, data rendering, query building, query validation, execution, connection management, and communication with auxiliary services a...

  8. Engineering the ATLAS TAG Browser

    CERN Document Server

    Zhang, Q; The ATLAS collaboration

    2011-01-01

    ELSSI is a web-based event metadata (TAG) browser and event-level selection service for ATLAS. TAGs from all ATLAS physics and Monte Carlo data sets are routinely loaded into Oracle databases as an integral part of event processing. As data volumes increase, more and more sites are joining the distributed TAG data hosting topology[1]. Meanwhile, TAG content and database schemata continue to evolve as new user requirements and additional sources of metadata emerge. All of this has posed many challenges to the development of ELSSI, which must support vast amounts of TAG data while source, content, geographic locations, and user query patterns may change over time. In this paper, we describe some of the challenges encountered in the process of developing ELSSI, and the software engineering strategies adopted to address those challenges. Approaches to management of access to data, browsing, data rendering, query building, query validation, execution, connection management, and communication with auxiliary service...

  9. Developing a new service for organizing social events in Finland

    OpenAIRE

    Mohanty, Titikshya

    2017-01-01

    The purpose of this thesis is to develop a new service concept for people living in Finland by using service design methods. In this busy world, people frequently have no time to organize events on their own. Therefore, service design tools and methodology will be used as foundational pillars for the case company in this thesis in developing solutions for this issue, as well as bringing customer satisfaction from service delivery. This service concept is designed to save time and simplify the...

  10. Fast processing of digital imaging and communications in medicine (DICOM) metadata using multiseries DICOM format.

    Science.gov (United States)

    Ismail, Mahmoud; Philbin, James

    2015-04-01

    The digital imaging and communications in medicine (DICOM) information model combines pixel data and its metadata in a single object. There are user scenarios that only need metadata manipulation, such as deidentification and study migration. Most picture archiving and communication systems use a database to store and update the metadata rather than updating the raw DICOM files themselves. The multiseries DICOM (MSD) format separates metadata from pixel data and eliminates duplicate attributes. This work promotes storing DICOM studies in MSD format to reduce the metadata processing time. A set of experiments is performed that updates the metadata of a set of DICOM studies for deidentification and migration. The studies are stored in both the traditional single frame DICOM (SFD) format and the MSD format. The results show that it is faster to update studies' metadata in MSD format than in SFD format because the bulk data is separated in MSD and is not retrieved from the storage system. In addition, it is space-efficient to store the deidentified studies in MSD format as they share the same bulk data object with the original study. In summary, separation of metadata from pixel data using the MSD format provides fast metadata access and speeds up applications that process only the metadata.

  11. The Use of Metadata Visualisation Assist Information Retrieval

    Science.gov (United States)

    2007-10-01

    album title, the track length and the genre of music . Again, any of these pieces of information can be used to quickly search and locate specific...that person. Music files also have metadata tags, in a format called ID3. This usually contains information such as the artist, the song title, the...tracks, to provide more information about the entire music collection, or to find similar or diverse tracks within the collection. Metadata is

  12. Politeness Strategies and Levels In Tourism-Service Language in Surakarta Residency

    Directory of Open Access Journals (Sweden)

    Budi Purnomo

    2016-07-01

    Full Text Available In the tourism industry, tourists act as guests and tourism industry practitioners act as hosts. Typically, tourism industry practitioners will try to act politely and follow politeness strategies as well as possible when serving tourists to ensure their satisfaction. Levels of satisfaction could be determined by the politeness of the hosts' behaviour towards their guests, including the politeness levels of their tourism-service language. This research was done in Surakarta Residency, the main tourist destination in Central Java. Data sources of this research came from (1) informants and (2) events. The data were analyzed using Brown and Levinson's politeness strategies (1987). The results of this research show that the tourism industry practitioners in Surakarta Residency use various politeness strategies and levels in tourism-service language to serve their guests.

  13. Linked data for libraries, archives and museums how to clean, link and publish your metadata

    CERN Document Server

    Hooland, Seth van

    2014-01-01

    This highly practical handbook teaches you how to unlock the value of your existing metadata through cleaning, reconciliation, enrichment and linking and how to streamline the process of new metadata creation. Libraries, archives and museums are facing up to the challenge of providing access to fast growing collections whilst managing cuts to budgets. Key to this is the creation, linking and publishing of good quality metadata as Linked Data that will allow their collections to be discovered, accessed and disseminated in a sustainable manner. Metadata experts Seth van Hooland and Ruben Verborgh introduce the key concepts of metadata standards and Linked Data and how they can be practically applied to existing metadata, giving readers the tools and understanding to achieve maximum results with limited re...

  14. The Geodetic Seamless Archive Centers Service Layer: A System Architecture for Federating Geodesy Data Repositories

    Science.gov (United States)

    McWhirter, J.; Boler, F. M.; Bock, Y.; Jamason, P.; Squibb, M. B.; Noll, C. E.; Blewitt, G.; Kreemer, C. W.

    2010-12-01

    Three geodesy Archive Centers, the Scripps Orbit and Permanent Array Center (SOPAC), NASA's Crustal Dynamics Data Information System (CDDIS) and UNAVCO, are engaged in a joint effort to define and develop a common Web Service Application Programming Interface (API) for accessing geodetic data holdings. This effort is funded by the NASA ROSES ACCESS Program to modernize the original GPS Seamless Archive Centers (GSAC) technology which was developed in the 1990s. A new web service interface, the GSAC-WS, is being developed to provide uniform and expanded mechanisms through which users can access our data repositories. In total, our respective archives hold tens of millions of files and contain a rich collection of site/station metadata. Though we serve similar user communities, we currently provide a range of different access methods, query services and metadata formats. This leads to a lack of consistency in the user's experience and a duplication of engineering efforts. The GSAC-WS API and its reference implementation in an underlying Java-based GSAC Service Layer (GSL) support metadata and data queries into site/station-oriented data archives. The general nature of this API makes it applicable to a broad range of data systems. The overall goals of this project include providing consistent and rich query interfaces for end users and client programs, developing enabling technology to help third-party repositories build these web service capabilities, and enabling data queries across a collection of federated GSAC-WS-enabled repositories. A fundamental challenge faced in this project is to provide a common suite of query services across a heterogeneous collection of data while enabling each repository to expose its specific metadata holdings. To address this challenge we are developing a "capabilities"-based service where a repository can describe its specific query and metadata capabilities. Furthermore, the architecture of
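    One way such a "capabilities" response can be used on the federating side is sketched below: each repository declares what it can answer, and a query fans out only to those that declare the needed capability. The capability names are invented for illustration.

        # Hypothetical capability declarations from three GSAC-style repositories.
        repositories = {
            "UNAVCO": {"site.search", "site.bbox", "file.search"},
            "SOPAC":  {"site.search", "file.search"},
            "CDDIS":  {"file.search"},
        }

        def who_can(capability):
            """Repositories a federated query for `capability` should fan out to."""
            return [name for name, caps in repositories.items() if capability in caps]

        print(who_can("site.bbox"))    # ['UNAVCO']
        print(who_can("file.search"))  # ['UNAVCO', 'SOPAC', 'CDDIS']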

  15. Normalized Metadata Generation for Human Retrieval Using Multiple Video Surveillance Cameras

    Directory of Open Access Journals (Sweden)

    Jaehoon Jung

    2016-06-01

    Full Text Available Since it is impossible for surveillance personnel to keep monitoring videos from a multiple camera-based surveillance system, an efficient technique is needed to help recognize important situations by retrieving the metadata of an object-of-interest. In a multiple camera-based surveillance system, an object detected in a camera has a different shape in another camera, which is a critical issue of wide-range, real-time surveillance systems. In order to address the problem, this paper presents an object retrieval method by extracting the normalized metadata of an object-of-interest from multiple, heterogeneous cameras. The proposed metadata generation algorithm consists of three steps: (i) generation of a three-dimensional (3D) human model; (ii) human object-based automatic scene calibration; and (iii) metadata generation. More specifically, an appropriately-generated 3D human model provides the foot-to-head direction information that is used as the input of the automatic calibration of each camera. The normalized object information is used to retrieve an object-of-interest in a wide-range, multiple-camera surveillance system in the form of metadata. Experimental results show that the 3D human model matches the ground truth, and automatic calibration-based normalization of metadata enables a successful retrieval and tracking of a human object in the multiple-camera video surveillance system.

  16. Autonomous Underwater Vehicle Data Management and Metadata Interoperability for Coastal Ocean Studies

    Science.gov (United States)

    McCann, M. P.; Ryan, J. P.; Chavez, F. P.; Rienecker, E.

    2004-12-01

    Data from over 1000 km of Autonomous Underwater Vehicle (AUV) surveys of Monterey Bay have been collected and cataloged in an ocean observatory data management system. The Monterey Bay Aquarium Research Institute's AUV is equipped with a suite of instruments that include a conductivity, temperature, depth (CTD) instrument, transmissometers, a fluorometer, a nitrate sensor, and an inertial navigation system. Data are logged on the vehicle and, upon completion of a survey, XML descriptions of the data are submitted to the Shore Side Data System (SSDS). Instrument data are then processed on shore to apply calibrations and produce scientifically useful data products. The SSDS employs a data model that tracks data from the instrument that created it through all the consuming processes that generate derived products. SSDS employs OPeNDAP and netCDF to provide data set interoperability at the data level. The core of SSDS is the metadata that forms the catalog of these data sets and their relation to all other relevant data. The metadata is managed in a relational database and governed by an Enterprise Java Bean (EJB) server application. Cross-platform Java applications have been written to manage and visualize these data. A Java Swing application - the Hierarchical Ocean Observatory Visualization and Editing System (HOOVES) - has been developed to provide visualization of data set pedigree and data set variables. Because the SSDS data model is generalized according to "Data Producers" and "Data Containers", many different types of data can be represented in SSDS, allowing for interoperability at a metadata level. Comparisons of appropriate data sets, whether they are from an autonomous underwater vehicle or from a fixed mooring, are easily made using SSDS. The authors will present the SSDS data model and show examples of how the model helps organize data set metadata, allowing for data discovery and interoperability. With improved discovery and interoperability the system is helping us

  17. An Observation Capability Metadata Model for EO Sensor Discovery in Sensor Web Enablement Environments

    Directory of Open Access Journals (Sweden)

    Chuli Hu

    2014-10-01

    Full Text Available Accurate and fine-grained discovery by diverse Earth observation (EO) sensors ensures a comprehensive response to collaborative observation-required emergency tasks. This discovery remains a challenge in an EO sensor web environment. In this study, we propose an EO sensor observation capability metadata model that reuses and extends the existing sensor observation-related metadata standards to enable the accurate and fine-grained discovery of EO sensors. The proposed model is composed of five sub-modules, namely, ObservationBreadth, ObservationDepth, ObservationFrequency, ObservationQuality and ObservationData. The model is applied to different types of EO sensors and is formalized by the Open Geospatial Consortium Sensor Model Language 1.0. The GeosensorQuery prototype retrieves the qualified EO sensors based on the provided geo-event. An actual application to flood emergency observation in the Yangtze River Basin in China is conducted, and the results indicate that sensor inquiry can accurately achieve fine-grained discovery of qualified EO sensors and obtain enriched observation capability information. In summary, the proposed model enables an efficient encoding system that ensures minimum unification to represent the observation capabilities of EO sensors. The model functions as a foundation for the efficient discovery of EO sensors. In addition, the definition and development of this proposed EO sensor observation capability metadata model is a helpful step in extending the Sensor Model Language (SensorML) 2.0 Profile for the description of the observation capabilities of EO sensors.
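    The five sub-modules can be mirrored roughly as below to give a feel for the structure; the attribute names and values are illustrative only, since the actual model is formalized in SensorML, not in code.

        from dataclasses import dataclass
        from typing import List

        @dataclass
        class ObservationBreadth:
            swath_width_km: float

        @dataclass
        class ObservationDepth:
            spectral_bands: List[str]

        @dataclass
        class ObservationFrequency:
            revisit_days: float

        @dataclass
        class ObservationQuality:
            spatial_resolution_m: float

        @dataclass
        class ObservationData:
            data_format: str

        @dataclass
        class ObservationCapability:
            sensor_id: str
            breadth: ObservationBreadth
            depth: ObservationDepth
            frequency: ObservationFrequency
            quality: ObservationQuality
            data: ObservationData

        cap = ObservationCapability(
            "example-optical-sensor",
            ObservationBreadth(800.0),
            ObservationDepth(["blue", "green", "red", "nir"]),
            ObservationFrequency(4.0),
            ObservationQuality(16.0),
            ObservationData("GeoTIFF"),
        )
        print(cap.sensor_id, cap.frequency.revisit_days)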

  18. Shared Geospatial Metadata Repository for Ontario University Libraries: Collaborative Approaches

    Science.gov (United States)

    Forward, Erin; Leahey, Amber; Trimble, Leanne

    2015-01-01

    Successfully providing access to special collections of digital geospatial data in academic libraries relies upon complete and accurate metadata. Creating and maintaining metadata using specialized standards is a formidable challenge for libraries. The Ontario Council of University Libraries' Scholars GeoPortal project, which created a shared…

  19. Statistical Data Processing with R – Metadata Driven Approach

    Directory of Open Access Journals (Sweden)

    Rudi SELJAK

    2016-06-01

    Full Text Available In recent years the Statistical Office of the Republic of Slovenia has put a lot of effort into re-designing its statistical process. We replaced the classical stove-pipe oriented production system with general software solutions, based on the metadata-driven approach. This means that one general program code, which is parametrized with process metadata, is used for data processing for a particular survey. Currently, the general program code is entirely based on SAS macros, but in the future we would like to explore how successfully the statistical software R can be used for this approach. The paper describes the metadata-driven principle for data validation, the generic software solution, and the main issues connected with the use of the statistical software R for this approach.

  20. High-Level Location Based Search Services That Improve Discoverability of Geophysical Data in the Virtual ITM Observatory

    Science.gov (United States)

    Schaefer, R. K.; Morrison, D.; Potter, M.; Barnes, R. J.; Nylund, S. R.; Patrone, D.; Aiello, J.; Talaat, E. R.; Sarris, T.

    2015-12-01

    The great promise of Virtual Observatories is the ability to perform complex search operations across the metadata of a large variety of different data sets. This allows the researcher to isolate and select the relevant measurements for their topic of study. The Virtual ITM Observatory (VITMO) has many diverse geophysical datasets that cover a large temporal and spatial range that present a unique search problem. VITMO provides many methods by which the user can search for and select data of interest including restricting selections based on geophysical conditions (solar wind speed, Kp, etc) as well as finding those datasets that overlap in time. One of the key challenges in improving discoverability is the ability to identify portions of datasets that overlap in time and in location. The difficulty is that location data is not contained in the metadata for datasets produced by satellites and would be extremely large in volume if it were available, making searching for overlapping data very time consuming. To solve this problem we have developed a series of light-weight web services that can provide a new data search capability for VITMO and others. The services consist of a database of spacecraft ephemerides and instrument fields of view; an overlap calculator to find times when the fields of view of different instruments intersect; and a magnetic field line tracing service that maps in situ and ground based measurements to the equatorial plane in magnetic coordinates for a number of field models and geophysical conditions. These services run in real-time when the user queries for data. These services will allow the non-specialist user to select data that they were previously unable to locate, opening up analysis opportunities beyond the instrument teams and specialists, making it easier for future students who come into the field.
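
    The core of the overlap calculator described above is an interval-intersection problem: given the time windows during which each instrument's field of view covers a region of interest, find the windows common to both. A minimal sketch of that step only; the real services also fold in spacecraft ephemerides and field-of-view geometry, which are omitted here:

        # Minimal sketch of finding times when two instruments' coverage windows overlap.
        # Each window is a (start, end) pair; ephemerides and field-of-view geometry,
        # which the VITMO services use to derive these windows, are not modeled here.
        from datetime import datetime

        def overlaps(windows_a, windows_b):
            """Return the intersection of two sorted lists of (start, end) windows."""
            result, i, j = [], 0, 0
            while i < len(windows_a) and j < len(windows_b):
                start = max(windows_a[i][0], windows_b[j][0])
                end = min(windows_a[i][1], windows_b[j][1])
                if start < end:                      # non-empty overlap
                    result.append((start, end))
                # advance whichever window ends first
                if windows_a[i][1] < windows_b[j][1]:
                    i += 1
                else:
                    j += 1
            return result

        a = [(datetime(2015, 3, 1, 0), datetime(2015, 3, 1, 6))]
        b = [(datetime(2015, 3, 1, 4), datetime(2015, 3, 1, 10))]
        print(overlaps(a, b))   # one overlap window: 04:00 to 06:00 on 2015-03-01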

  1. Evolution of the Architecture of the ATLAS Metadata Interface (AMI)

    CERN Document Server

    Odier, Jerome; The ATLAS collaboration; Fulachier, Jerome; Lambert, Fabian

    2015-01-01

    The ATLAS Metadata Interface (AMI) is now a mature application. Over the years, the number of users and the number of provided functions has dramatically increased. It is necessary to adapt the hardware infrastructure in a seamless way so that the quality of service remains high. We describe the evolution from the beginning of the application life, using one server with a MySQL backend database, to the current state in which a cluster of virtual machines on the French Tier 1 cloud at Lyon, an Oracle database also at Lyon, with replication to Oracle at CERN and a back-up server are used.

  2. Distributed Data Collection For Next Generation ATLAS EventIndex Project

    CERN Document Server

    Fernandez Casani, Alvaro; The ATLAS collaboration

    2018-01-01

    The ATLAS EventIndex currently runs in production in order to build a complete catalogue of events for experiments with large amounts of data. The current approach is to index all final produced data files at CERN Tier0, and at hundreds of grid sites, with a distributed data collection architecture using Object Stores to temporarily maintain the conveyed information, with references to them sent with a Messaging System. The final backend of all the indexed data is a central Hadoop infrastructure at CERN; an Oracle relational database is used for faster access to a subset of this information. In the future of ATLAS, instead of files, the event should be the atomic information unit for metadata. This motivation arises in order to accommodate future data processing and storage technologies. Files will no longer be static quantities, possibly dynamically aggregating data, and also allowing event-level granularity processing in heavily parallel computing environments. It also simplifies the handling of loss and or e...

  3. A Comparative Study on Metadata Scheme of Chinese and American Open Data Platforms

    Directory of Open Access Journals (Sweden)

    Yang Sinan

    2018-01-01

    Full Text Available [Purpose/significance] Open government data is conducive to the rational development and utilization of data resources. It can encourage social innovation and promote economic development. Besides, high-quality metadata schemes are necessary to ensure the effective utilization and social increment of open government data. [Method/process] Firstly, this paper analyzed the related research of open government data at home and abroad. Then, it investigated the open metadata schemes of some of China's main local government data platforms, and made a comparison with the metadata standard of American open government data. [Result/conclusion] This paper reveals several disadvantages of Chinese local government open data that affect its usability: different governments use different metadata schemes, the descriptions of data sets are too simple for further utilization, and data are usually presented in HTML web page format with low machine-readability. Therefore, the government should develop a standardized metadata scheme by drawing on mature and effective international metadata standards, to meet the social need for high-quality and high-value data.

  4. Using JavaScript and the FDSN web service to create an interactive earthquake information system

    Science.gov (United States)

    Fischer, Kasper D.

    2015-04-01

    The FDSN web service provides a web interface to access earthquake metadata (e.g. event or station information) and waveform data over the internet. Requests are sent to a server as URLs and the output is either XML or miniSEED. This makes it hard to read by humans but easy to process with different software. Different data centers already support the FDSN web service, e.g. USGS, IRIS, ORFEUS. The FDSN web service is also part of the Seiscomp3 (http://www.seiscomp3.org) software. The Seismological Observatory of the Ruhr-University switched to Seiscomp3 as the standard software for the analysis of mining-induced earthquakes at the beginning of 2014. This made it necessary to create a new web-based earthquake information service for the publication of results to the general public. This has been done by processing the output of an FDSN web service query with JavaScript running in a standard browser. The result is a single web page presenting the observed events and further information on events and stations, both as a table and on an interactive map. In addition the user can download event information, waveform data and station data in different formats like miniSEED, quakeML or FDSNxml. The developed code and all used libraries are open source and freely available.
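
    Because the FDSN event web service is a plain HTTP query interface, the request the JavaScript front end issues can be reproduced from any language. A small sketch against a public FDSN endpoint; the host and query values are examples, not the observatory's own service:

        # Sketch of an FDSN event web service query; endpoint and parameters are examples.
        import requests

        params = {
            "starttime": "2014-01-01",
            "endtime": "2014-12-31",
            "minmagnitude": 5.0,
            "format": "text",          # plain-text output; "xml" would return QuakeML
        }
        resp = requests.get("https://service.iris.edu/fdsnws/event/1/query", params=params)
        resp.raise_for_status()

        # Each non-comment line describes one event: id|time|lat|lon|depth|...|magnitude|...
        for line in resp.text.splitlines():
            if not line.startswith("#"):
                print(line.split("|")[1:4])   # origin time, latitude, longitude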

  5. FSA 2003-2004 Digital Orthophoto Metadata

    Data.gov (United States)

    Minnesota Department of Natural Resources — Metadata for the 2003-2004 FSA Color Orthophotos Layer. Each orthophoto is represented by a Quarter 24k Quad tile polygon. The polygon attributes contain the...

  6. USGS Digital Orthophoto Quad (DOQ) Metadata

    Data.gov (United States)

    Minnesota Department of Natural Resources — Metadata for the USGS DOQ Orthophoto Layer. Each orthophoto is represented by a Quarter 24k Quad tile polygon. The polygon attributes contain the quarter-quad tile...

  7. Extraction of CT dose information from DICOM metadata: automated Matlab-based approach.

    Science.gov (United States)

    Dave, Jaydev K; Gingold, Eric L

    2013-01-01

    The purpose of this study was to extract exposure parameters and dose-relevant indexes of CT examinations from information embedded in DICOM metadata. DICOM dose report files were identified and retrieved from a PACS. An automated software program was used to extract from these files information from the structured elements in the DICOM metadata relevant to exposure. Extracting information from DICOM metadata eliminated potential errors inherent in techniques based on optical character recognition, yielding 100% accuracy.
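
    The general idea, walking the structured content of a DICOM dose report rather than scraping a screen-capture image with optical character recognition, can be sketched with the pydicom library. The study used an automated Matlab program; the attribute names below follow the DICOM structured report content model, and the file path is a placeholder:

        # Sketch of pulling numeric dose values out of a DICOM dose report's structured content.
        # The file path is a placeholder; the original study used an automated Matlab program.
        import pydicom

        ds = pydicom.dcmread("dose_report.dcm")   # a dose report object retrieved from PACS

        def walk(items, depth=0):
            """Recursively print coded concept names and any numeric measured values."""
            for item in items:
                name = item.ConceptNameCodeSequence[0].CodeMeaning
                if item.ValueType == "NUM":
                    mv = item.MeasuredValueSequence[0]
                    unit = mv.MeasurementUnitsCodeSequence[0].CodeValue
                    print("  " * depth, name, "=", mv.NumericValue, unit)
                else:
                    print("  " * depth, name)
                # descend into nested content items (e.g., per-irradiation-event data)
                if "ContentSequence" in item:
                    walk(item.ContentSequence, depth + 1)

        walk(ds.ContentSequence)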

  8. Drivers of flood damage on event level

    DEFF Research Database (Denmark)

    Kreibich, H.; Aerts, J. C. J. H.; Apel, H.

    2016-01-01

    Flood risk is dynamic and influenced by many processes related to hazard, exposure and vulnerability. Flood damage has increased significantly over the past decades; however, the resulting overall economic loss per event is an aggregated indicator, and it is difficult to attribute causes to this increasing...... trend. Much has been learned about damaging processes during floods at the micro-scale, e.g. building level. However, little is known about the main factors determining the amount of flood damage on event level. Thus, we analyse and compare paired flood events, i.e. consecutive, similar damaging floods...... example are the 2002 and 2013 floods in the Elbe and Danube catchments in Germany. The 2002 flood caused the highest economic damage (EUR 11600 million) due to a natural hazard event in Germany. Damage was so high due to extreme flood hazard triggered by extreme precipitation and a high number...

  9. Treatment of complementary events in constructing the linked Level 1 and Level 2 fault trees

    International Nuclear Information System (INIS)

    Jo, Young G.; Ahn, Kwang-Il

    2009-01-01

    Complementary events in the event trees for a PRA model should be treated properly in order to evaluate plant risk correctly. In this paper, the characteristics of the following three different cutset generation methods were investigated first in order to find the best practical way for treating complementary events: (1) the exact method, which treats complementary events logically, (2) the no-delete term method, which does not treat complementary events at all, and (3) the delete term method, which treats complementary events by deleting nonsense cutsets which are generated as a result of ignoring complementary events. Then, practical methods for treating complementary events in constructing linked fault trees for Level 1 and Level 2 PRA were suggested and demonstrated. The suggested methods deal with the following four typical cases: (1) Case 1-an event tree event (E) is represented by a fault tree gate whose inputs consist of only fault tree gates, (2) Case 2-E is represented by a single basic event, (3) Case 3-E is represented by an OR fault tree gate which has a single basic event and a fault tree gate as inputs, and (4) Case 4-E is represented by an AND fault tree gate which has a single basic event and a fault tree gate as inputs. In the suggested methods, first the high level logic structures of event tree events are examined and restructured, if needed. Then, the delete term method, the exact method, and a combination of the two methods are applied to Case 1 through Case 4, respectively. As a result, it is recommended to treat complementary events, using the suggested methods, before Level 1 and Level 2 PRA fault trees are coupled.
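
    For illustration only, the delete term treatment can be reduced to a set operation on minimal cutsets: cutsets generated while ignoring a complementary (success) event are discarded if they contain a complete minimal cutset of the corresponding failure logic, since such cutsets are the "nonsense" combinations. A minimal sketch of that filtering step, under the simplifying assumption that cutsets are represented as sets of basic-event names:

        # Sketch of the delete term treatment of a complementary event.
        # Cutsets are sets of basic-event names; probabilities and truncation are ignored.

        def delete_term(sequence_cutsets, negated_gate_cutsets):
            """Drop sequence cutsets that contain a full minimal cutset of the negated gate."""
            return [cs for cs in sequence_cutsets
                    if not any(mcs <= cs for mcs in negated_gate_cutsets)]

        # Accident-sequence cutsets obtained while ignoring the complement of gate G
        sequence_cutsets = [{"A", "B"}, {"A", "C"}, {"D"}]
        # Minimal cutsets of gate G, whose complement (success) appears in the sequence
        negated_gate_cutsets = [{"A", "B"}]

        print(delete_term(sequence_cutsets, negated_gate_cutsets))
        # -> [{'A', 'C'}, {'D'}]  ({'A', 'B'} is a nonsense cutset and is deleted)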

  10. Fast processing of digital imaging and communications in medicine (DICOM) metadata using multiseries DICOM format

    OpenAIRE

    Ismail, Mahmoud; Philbin, James

    2015-01-01

    The digital imaging and communications in medicine (DICOM) information model combines pixel data and its metadata in a single object. There are user scenarios that only need metadata manipulation, such as deidentification and study migration. Most picture archiving and communication system use a database to store and update the metadata rather than updating the raw DICOM files themselves. The multiseries DICOM (MSD) format separates metadata from pixel data and eliminates duplicate attributes...

  11. Sustained Assessment Metadata as a Pathway to Trustworthiness of Climate Science Information

    Science.gov (United States)

    Champion, S. M.; Kunkel, K.

    2017-12-01

    The Sustained Assessment process has produced a suite of climate change reports: The Third National Climate Assessment (NCA3), Regional Surface Climate Conditions in CMIP3 and CMIP5 for the United States: Differences, Similarities, and Implications for the U.S. National Climate Assessment, Impacts of Climate Change on Human Health in the United States: A Scientific Assessment, The State Climate Summaries, as well as the anticipated Climate Science Special Report and Fourth National Climate Assessment. Not only are these groundbreaking reports of climate change science, they are also the first suite of climate science reports to provide access to complex metadata directly connected to the report figures and graphics products. While the basic metadata documentation requirement is federally mandated through a series of federal guidelines as a part of the Information Quality Act, Sustained Assessment products are also deemed Highly Influential Scientific Assessments, which further requires demonstration of the transparency and reproducibility of the content. To meet these requirements, the Technical Support Unit (TSU) for the Sustained Assessment embarked on building a system for not only collecting and documenting metadata to the required standards, but one that also provides consumers unprecedented access to the underlying data and methods. As our process and documentation have evolved, the value of both continues to grow in parallel with the consumer expectation of quality, accessible climate science information. This presentation will detail how the TSU accomplishes the mandated requirements with its metadata collection and documentation process, as well as the technical solution designed to demonstrate compliance while also providing access to the content for the general public. We will also illustrate how our accessibility platforms guide consumers through the Assessment science at a level of transparency that builds trust and confidence in the report

  12. Migration of the ATLAS Metadata Interface (AMI) to Web 2.0 and cloud

    CERN Document Server

    Odier, Jerome; The ATLAS collaboration; Fulachier, Jerome; Lambert, Fabian

    2015-01-01

    The ATLAS Metadata Interface (AMI) can be considered to be a mature application because it has existed for at least 10 years. Over the last year, we have been adapting the application to some recently available technologies. The web interface, which previously manipulated XML documents using XSL transformations, has been migrated to Asynchronous JavaScript and XML (AJAX). Web development has been considerably simplified by the development of a framework for AMI based on jQuery and Twitter Bootstrap. Finally, there has been a major upgrade of the Python web service client.

  13. A Services-Oriented Architecture for Water Observations Data

    Science.gov (United States)

    Maidment, D. R.; Zaslavsky, I.; Valentine, D.; Tarboton, D. G.; Whitenack, T.; Whiteaker, T.; Hooper, R.; Kirschtel, D.

    2009-04-01

    Water observations data are time series of measurements made at point locations of water level, flow, and quality and corresponding data for climatic observations at point locations such as gaged precipitation and weather variables. A services-oriented architecture has been built for such information for the United States that has three components: hydrologic information servers, hydrologic information clients, and a centralized metadata cataloging system. These are connected using web services for observations data and metadata defined by an XML-based language called WaterML. A Hydrologic Information Server can be built by storing observations data in a relational database schema in the CUAHSI Observations Data Model, in which case web services access to the data and metadata is automatically provided by query functions for WaterML that are wrapped around the relational database within a web server. A Hydrologic Information Server can also be constructed by custom-programming an interface to an existing water agency web site so that it responds to the same queries by producing data in WaterML, as do the CUAHSI Observations Data Model based servers. A Hydrologic Information Client is one which can interpret and ingest WaterML metadata and data. We have two client applications for Excel and ArcGIS and have shown how WaterML web services can be ingested into programming environments such as Matlab and Visual Basic. HIS Central, maintained at the San Diego Supercomputer Center, is a repository of observational metadata for WaterML web services which presently indexes 342 million data values measured at 1.75 million locations. This is the largest catalog of water observational data for the United States presently in existence. As more observation networks join what we term the "CUAHSI Water Data Federation", and the system accommodates a growing number of sites, measured parameters, applications, and users, rapid and reliable access to large heterogeneous hydrologic data repositories

  14. Predicting age groups of Twitter users based on language and metadata features.

    Directory of Open Access Journals (Sweden)

    Antonio A Morgan-Lopez

    Full Text Available Health organizations are increasingly using social media, such as Twitter, to disseminate health messages to target audiences. Determining the extent to which the target audience (e.g., age groups) was reached is critical to evaluating the impact of social media education campaigns. The main objective of this study was to examine the separate and joint predictive validity of linguistic and metadata features in predicting the age of Twitter users. We created a labeled dataset of Twitter users across different age groups (youth, young adults, adults) by collecting publicly available birthday announcement tweets using the Twitter Search application programming interface. We manually reviewed results and, for each age-labeled handle, collected the 200 most recent publicly available tweets and user handles' metadata. The labeled data were split into training and test datasets. We created separate models to examine the predictive validity of language features only, metadata features only, language and metadata features, and words/phrases from another age-validated dataset. We estimated accuracy, precision, recall, and F1 metrics for each model. An L1-regularized logistic regression model was conducted for each age group, and predicted probabilities between the training and test sets were compared for each age group. Cohen's d effect sizes were calculated to examine the relative importance of significant features. Models containing both Tweet language features and metadata features performed the best (74% precision, 74% recall, 74% F1), while the model containing only Twitter metadata features was least accurate (58% precision, 60% recall, and 57% F1 score). Top predictive features included use of terms such as "school" for youth and "college" for young adults. Overall, it was more challenging to predict older adults accurately. These results suggest that examining linguistic and Twitter metadata features to predict youth and young adult Twitter users may
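
    A hedged sketch of the modeling setup described above: tweet text turned into sparse language features, combined with numeric account metadata, and fed to an L1-regularized logistic regression. The tiny example data and the metadata column names are invented placeholders; the study's actual feature engineering is considerably richer:

        # Sketch of combining language and metadata features in an L1-regularized model.
        # The example rows and metadata feature names are placeholders, not the study's data.
        import pandas as pd
        from sklearn.compose import ColumnTransformer
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import Pipeline

        df = pd.DataFrame({
            "text": ["starting high school tomorrow", "college midterms this week",
                     "my kids start school today"],
            "followers": [120, 430, 220],
            "account_age_days": [300, 1500, 4000],
            "label": ["youth", "young_adult", "adult"],
        })

        features = ColumnTransformer([
            ("language", TfidfVectorizer(ngram_range=(1, 2)), "text"),     # linguistic features
            ("metadata", "passthrough", ["followers", "account_age_days"]),  # metadata features
        ])

        model = Pipeline([
            ("features", features),
            ("clf", LogisticRegression(penalty="l1", solver="liblinear")),
        ])
        model.fit(df[["text", "followers", "account_age_days"]], df["label"])
        print(model.predict(df[["text", "followers", "account_age_days"]]))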

  15. Playing the Metadata Game: Technologies and Strategies Used by Climate Diagnostics Center for Cataloging and Distributing Climate Data.

    Science.gov (United States)

    Schweitzer, R. H.

    2001-05-01

    The Climate Diagnostics Center maintains a collection of gridded climate data primarily for use by local researchers. Because this data is available on fast digital storage and because it has been converted to netCDF using a standard metadata convention (called COARDS), we recognize that this data collection is also useful to the community at large. At CDC we try to use technology and metadata standards to reduce our costs associated with making these data available to the public. The World Wide Web has been an excellent technology platform for meeting that goal. Specifically we have developed Web-based user interfaces that allow users to search, plot and download subsets from the data collection. We have also been exploring use of the Pacific Marine Environment Laboratory's Live Access Server (LAS) as an engine for this task. This would result in further savings by allowing us to concentrate on customizing the LAS where needed, rather than developing and maintaining our own system. One such customization currently under development is the use of Java Servlets and JavaServer pages in conjunction with a metadata database to produce a hierarchical user interface to LAS. In addition to these Web-based user interfaces all of our data are available via the Distributed Oceanographic Data System (DODS). This allows other sites using LAS and individuals using DODS-enabled clients to use our data as if it were a local file. All of these technology systems are driven by metadata. When we began to create netCDF files, we collaborated with several other agencies to develop a netCDF convention (COARDS) for metadata. At CDC we have extended that convention to incorporate additional metadata elements to make the netCDF files as self-describing as possible. Part of the local metadata is a set of controlled names for the variable, level in the atmosphere and ocean, statistic and data set for each netCDF file. To allow searching and easy reorganization of these metadata, we loaded
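
    The practical effect of converting data to netCDF with a convention like COARDS is that every file carries its own description. A minimal sketch of writing such a self-describing file with the netCDF4 library; the variable names and attribute values here are generic examples, not CDC's exact controlled vocabulary:

        # Sketch of a self-describing netCDF file using COARDS-style attributes.
        # Variable names and attribute values are generic examples.
        import numpy as np
        from netCDF4 import Dataset

        ds = Dataset("air_temperature.nc", "w")
        ds.Conventions = "COARDS"
        ds.title = "Example gridded air temperature"

        ds.createDimension("time", None)
        ds.createDimension("lat", 73)
        ds.createDimension("lon", 144)

        time = ds.createVariable("time", "f8", ("time",))
        time.units = "hours since 1800-01-01 00:00:00"

        lat = ds.createVariable("lat", "f4", ("lat",))
        lat.units = "degrees_north"
        lon = ds.createVariable("lon", "f4", ("lon",))
        lon.units = "degrees_east"

        air = ds.createVariable("air", "f4", ("time", "lat", "lon"))
        air.units = "degK"
        air.long_name = "Air temperature"

        lat[:] = np.linspace(90, -90, 73)
        lon[:] = np.linspace(0, 357.5, 144)
        ds.close()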

  16. Document Classification in Support of Automated Metadata Extraction Form Heterogeneous Collections

    Science.gov (United States)

    Flynn, Paul K.

    2014-01-01

    A number of federal agencies, universities, laboratories, and companies are placing their documents online and making them searchable via metadata fields such as author, title, and publishing organization. To enable this, every document in the collection must be catalogued using the metadata fields. Though time consuming, the task of identifying…

  17. An Assistant for Loading Learning Object Metadata: An Ontology Based Approach

    Science.gov (United States)

    Casali, Ana; Deco, Claudia; Romano, Agustín; Tomé, Guillermo

    2013-01-01

    In the last years, the development of different Repositories of Learning Objects has been increased. Users can retrieve these resources for reuse and personalization through searches in web repositories. The importance of high quality metadata is key for a successful retrieval. Learning Objects are described with metadata usually in the standard…

  18. A metadata schema for data objects in clinical research.

    Science.gov (United States)

    Canham, Steve; Ohmann, Christian

    2016-11-24

    A large number of stakeholders have accepted the need for greater transparency in clinical research and, in the context of various initiatives and systems, have developed a diverse and expanding number of repositories for storing the data and documents created by clinical studies (collectively known as data objects). To make the best use of such resources, we assert that it is also necessary for stakeholders to agree and deploy a simple, consistent metadata scheme. The relevant data objects and their likely storage are described, and the requirements for metadata to support data sharing in clinical research are identified. Issues concerning persistent identifiers, for both studies and data objects, are explored. A scheme is proposed that is based on the DataCite standard, with extensions to cover the needs of clinical researchers, specifically to provide (a) study identification data, including links to clinical trial registries; (b) data object characteristics and identifiers; and (c) data covering location, ownership and access to the data object. The components of the metadata scheme are described. The metadata schema is proposed as a natural extension of a widely agreed standard to fill a gap not tackled by other standards related to clinical research (e.g., Clinical Data Interchange Standards Consortium, Biomedical Research Integrated Domain Group). The proposal could be integrated with, but is not dependent on, other moves to better structure data in clinical research.
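
    By way of illustration only, a record following the proposed approach might couple core DataCite-style descriptive fields with the three clinical extensions described above: study identification, data object characteristics, and location/ownership/access. The field names below are a hypothetical rendering, not the published schema:

        # Hypothetical illustration of a DataCite-style record extended for clinical data objects.
        # Field names are illustrative assumptions; they are not the schema as published.
        import json

        record = {
            # DataCite-like descriptive core
            "identifier": {"value": "10.1234/example-doi", "type": "DOI"},
            "creators": [{"name": "Example Clinical Studies Group"}],
            "title": "Statistical analysis plan, ExampleTrial-01",
            "publicationYear": 2016,
            "resourceType": "study document",
            # (a) study identification data, with links to clinical trial registries
            "study": {"title": "ExampleTrial-01", "registry": "ClinicalTrials.gov",
                      "registryId": "NCT00000000"},
            # (b) data object characteristics and identifiers
            "object": {"class": "data object", "format": "PDF", "version": "1.2"},
            # (c) location, ownership and access to the data object
            "access": {"repositoryUrl": "https://repository.example.org/objects/42",
                       "owner": "Example Sponsor", "accessType": "restricted, on request"},
        }
        print(json.dumps(record, indent=2))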

  19. Ontology-based Metadata Portal for Unified Semantics

    Data.gov (United States)

    National Aeronautics and Space Administration — The Ontology-based Metadata Portal for Unified Semantics (OlyMPUS) will extend the prototype Ontology-Driven Interactive Search Environment for Earth Sciences...

  20. Big Data Tools as Applied to ATLAS Event Data

    Science.gov (United States)

    Vukotic, I.; Gardner, R. W.; Bryant, L. A.

    2017-10-01

    Big Data technologies have proven to be very useful for storage, processing and visualization of derived metrics associated with ATLAS distributed computing (ADC) services. Logfiles, database records, and metadata from a diversity of systems have been aggregated and indexed to create an analytics platform for ATLAS ADC operations analysis. Dashboards, wide area data access cost metrics, user analysis patterns, and resource utilization efficiency charts are produced flexibly through queries against a powerful analytics cluster. Here we explore whether these techniques and the associated analytics ecosystem can be applied to add new modes of open, quick, and pervasive access to ATLAS event data. Such modes would simplify access and broaden the reach of ATLAS public data to new communities of users. An ability to efficiently store, filter, search and deliver ATLAS data at the event and/or sub-event level in a widely supported format would enable or significantly simplify usage of machine learning environments and tools like Spark, Jupyter, R, SciPy, Caffe, TensorFlow, etc. Machine learning challenges such as the Higgs Boson Machine Learning Challenge, the Tracking challenge, Event viewers (VP1, ATLANTIS, ATLASrift), and still to be developed educational and outreach tools would be able to access the data through a simple REST API. In this preliminary investigation we focus on derived xAOD data sets. These are much smaller than the primary xAODs, having only the containers, variables, and events of interest to a particular analysis. Encouraged by the performance of Elasticsearch for the ADC analytics platform, we developed an algorithm for indexing derived xAOD event data. We have made an appropriate document mapping and have imported a full set of standard model W/Z datasets. We compare the disk space efficiency of this approach to that of standard ROOT files and the performance in simple cut-flow data analysis, and will present preliminary results on its scaling
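
    The indexing step itself is conventional Elasticsearch usage: each event becomes a JSON document whose fields mirror the variables kept in the derived xAOD. A hedged sketch with the elasticsearch Python client (8.x-style calls); the index name, document fields, and values are invented for illustration and do not reproduce the ATLAS document mapping:

        # Sketch of bulk-indexing event-level records into Elasticsearch (elasticsearch-py 8.x style).
        # Index name and event fields are invented; the actual ATLAS document mapping differs.
        from elasticsearch import Elasticsearch, helpers

        es = Elasticsearch("http://localhost:9200")   # assumed local test cluster

        def event_actions(events):
            for ev in events:
                yield {
                    "_index": "derived_xaod_wz",      # hypothetical index name
                    "_id": f"{ev['run']}-{ev['event']}",
                    "_source": ev,
                }

        events = [
            {"run": 204158, "event": 1234567, "n_jets": 3, "met": 41.2, "lead_lep_pt": 55.8},
            {"run": 204158, "event": 1234901, "n_jets": 1, "met": 12.7, "lead_lep_pt": 33.1},
        ]
        helpers.bulk(es, event_actions(events))
        es.indices.refresh(index="derived_xaod_wz")

        # A simple cut-flow style query: count events with missing ET above a threshold.
        resp = es.count(index="derived_xaod_wz", query={"range": {"met": {"gt": 30}}})
        print(resp["count"])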

  1. Radiological dose and metadata management

    International Nuclear Information System (INIS)

    Walz, M.; Madsack, B.; Kolodziej, M.

    2016-01-01

    This article describes the features of management systems currently available in Germany for extraction, registration and evaluation of metadata from radiological examinations, particularly in the digital imaging and communications in medicine (DICOM) environment. In addition, the probable relevant developments in this area concerning radiation protection legislation, terminology, standardization and information technology are presented. (orig.) [de

  2. Meta-Data Objects as the Basis for System Evolution

    CERN Document Server

    Estrella, Florida; Tóth, N; Kovács, Z; Le Goff, J M; Clatchey, Richard Mc; Toth, Norbert; Kovacs, Zsolt; Goff, Jean-Marie Le

    2001-01-01

    One of the main factors driving object-oriented software development in the Web- age is the need for systems to evolve as user requirements change. A crucial factor in the creation of adaptable systems dealing with changing requirements is the suitability of the underlying technology in allowing the evolution of the system. A reflective system utilizes an open architecture where implicit system aspects are reified to become explicit first-class (meta-data) objects. These implicit system aspects are often fundamental structures which are inaccessible and immutable, and their reification as meta-data objects can serve as the basis for changes and extensions to the system, making it self- describing. To address the evolvability issue, this paper proposes a reflective architecture based on two orthogonal abstractions - model abstraction and information abstraction. In this architecture the modeling abstractions allow for the separation of the description meta-data from the system aspects they represent so that th...

  3. Factors associated with emergency medical services scope of practice for acute cardiovascular events.

    Science.gov (United States)

    Williams, Ishmael; Valderrama, Amy L; Bolton, Patricia; Greek, April; Greer, Sophia; Patterson, Davis G; Zhang, Zefeng

    2012-01-01

    To examine prehospital emergency medical services (EMS) scope of practice for acute cardiovascular events and characteristics that may affect scope of practice; and to describe variations in EMS scope of practice for these events and the characteristics associated with that variability. In 2008, we conducted a telephone survey of 1,939 eligible EMS providers in nine states to measure EMS agency characteristics, medical director involvement, and 18 interventions authorized for prehospital care of acute cardiovascular events by three levels of emergency medical technician (EMT) personnel. A total of 1,292 providers responded to the survey, for a response rate of 67%. EMS scope of practice interventions varied by EMT personnel level, with the proportion of authorized interventions increasing as expected from EMT-Basic to EMT-Paramedic. Seven of eight statistically significant associations indicated that EMS agencies in urban settings were less likely to authorize interventions (odds ratios department-based EMS agencies were two to three times more likely to authorize interventions for EMT-Intermediate personnel. Volunteer EMS agencies were more than twice as likely as nonvolunteer agencies to authorize interventions for EMT-Basic and EMT-Intermediate personnel but were less likely to authorize any one of the 11 interventions for EMT-Paramedics. Greater medical director involvement was associated with greater likelihood of authorization of seven of the 18 interventions for EMT-Basic and EMT-Paramedic personnel but had no association with EMT-Intermediate personnel. We noted statistically significant variations in scope of practice by rural vs. urban setting, medical director involvement, and type of EMS service (fire department-based/non-fire department-based; volunteer/paid). These variations highlight local differences in the composition and capacity of EMS providers and offer important information for the transition towards the implementation of a national scope of

  4. Batch metadata assignment to archival photograph collections using facial recognition software

    Directory of Open Access Journals (Sweden)

    Kyle Banerjee

    2013-07-01

    Full Text Available Useful metadata is essential to giving individual photographs meaning and value within the context of a greater image collection, as well as making them more discoverable. However, often little information is available about the photos themselves, so adding consistent metadata to large collections of digital and digitized photographs is a time-consuming process requiring highly experienced staff. By using facial recognition software, staff can identify individuals more quickly and reliably. Knowledge of the individuals in photos helps staff determine when and where photos were taken and also improves understanding of the subject matter. This article demonstrates simple techniques for using facial recognition software and command line tools to assign, modify, and read metadata for large archival photograph collections.
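
    A hedged sketch of the kind of pipeline the article describes: known face encodings identify who appears in each photograph, and the resulting names are written into the file's metadata. It assumes the open-source face_recognition library and the exiftool command-line tool are installed; the keyword field used here (XMP dc:subject) is one common choice, not necessarily the article's, and the paths are placeholders:

        # Sketch: identify known individuals in photos and write their names as metadata keywords.
        # Assumes the face_recognition library and exiftool are installed; paths are placeholders.
        import subprocess
        import face_recognition

        # Build one reference encoding per known person from labeled example photos.
        known = {
            "Jane Smith": face_recognition.face_encodings(
                face_recognition.load_image_file("refs/jane_smith.jpg"))[0],
        }

        def tag_photo(path):
            image = face_recognition.load_image_file(path)
            found = []
            for encoding in face_recognition.face_encodings(image):
                for name, ref in known.items():
                    if face_recognition.compare_faces([ref], encoding)[0]:
                        found.append(name)
            for name in found:
                # Append the name to the XMP dc:subject keyword list in place.
                subprocess.run(
                    ["exiftool", "-overwrite_original", f"-XMP-dc:Subject+={name}", path],
                    check=True)
            return found

        print(tag_photo("archive/event_1972_001.jpg"))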

  5. phosphorus retention data and metadata

    Science.gov (United States)

    phosphorus retention in wetlands data and metadata. This dataset is associated with the following publication: Lane, C., and B. Autrey. Phosphorus retention of forested and emergent marsh depressional wetlands in differing land uses in Florida, USA. Wetlands Ecology and Management. Springer Science and Business Media B.V. (formerly Kluwer Academic Publishers B.V.), Germany, 24(1): 45-60, (2016).

  6. VT Wireless Internet Service Providers 2006

    Data.gov (United States)

    Vermont Center for Geographic Information — (Link to Metadata) The VT Wireless Internet Service Provider (ISP) dataset (WISP2006) includes polygons depicting the extent of Vermont's WISP broadband system as of...

  7. FPGA Acceleration of Information Management Services

    National Research Council Canada - National Science Library

    Linderman, Richard W; Linderman, Mark H; Lin, Chun-Shin

    2005-01-01

    .... The specific core service accelerated by FPGAs is the brokering of XML metadata of publications against the XPATH logical predicates expressing the types of publications that the subscribers wish to receive...
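
    In software terms, the brokering operation being accelerated is the evaluation of subscriber XPath predicates against each publication's XML metadata. A small pure-software sketch of that matching step with lxml; the metadata fields and predicates are invented examples, and the FPGA work implements the equivalent matching in hardware:

        # Pure-software sketch of XPath brokering: match a publication's XML metadata
        # against subscriber predicates. Fields and predicates are invented examples.
        from lxml import etree

        publication = etree.fromstring(
            "<publication>"
            "  <topic>track</topic>"
            "  <region>north</region>"
            "  <priority>7</priority>"
            "</publication>"
        )

        subscriptions = {
            "analyst_1": "/publication[topic='track' and priority > 5]",
            "analyst_2": "/publication[region='south']",
        }

        for subscriber, predicate in subscriptions.items():
            if publication.xpath(predicate):          # non-empty node-set means a match
                print("deliver to", subscriber)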

  8. VT Wireless Internet Service Providers 2007

    Data.gov (United States)

    Vermont Center for Geographic Information — (Link to Metadata) The VT Wireless Internet Service Provider (ISP) dataset (WISP2007) includes polygons depicting the extent of Vermont's WISP broadband system as of...

  9. The TDR: A Repository for Long Term Storage of Geophysical Data and Metadata

    Science.gov (United States)

    Wilson, A.; Baltzer, T.; Caron, J.

    2006-12-01

    For many years Unidata has provided easy, low cost data access to universities and research labs. Historically Unidata technology provided access to data in near real time. In recent years Unidata has additionally turned to providing middleware to serve longer term data and associated metadata via its THREDDS technology, the most recent offering being the THREDDS Data Server (TDS). The TDS provides middleware for metadata access and management, OPeNDAP data access, and integration with the Unidata Integrated Data Viewer (IDV), among other benefits. The TDS was designed to support rolling archives of data, that is, data that exist only for a relatively short, predefined time window. Now we are creating an addition to the TDS, called the THREDDS Data Repository (TDR), which allows users to store and retrieve data and other objects for an arbitrarily long time period. Data in the TDR can also be served by the TDS. The TDR performs important functions of locating storage for the data, moving the data to and from the repository, assigning unique identifiers, and generating metadata. The TDR framework supports pluggable components that allow tailoring an implementation for a particular application. The Linked Environments for Atmospheric Discovery (LEAD) project provides an excellent use case for the TDR. LEAD is a multi-institutional Large Information Technology Research project funded by the National Science Foundation (NSF). The goal of LEAD is to create a framework based on Grid and Web Services to support mesoscale meteorology research and education. This includes capabilities such as launching forecast models, mining data for meteorological phenomena, and dynamic workflows that are automatically reconfigurable in response to changing weather. LEAD presents unique challenges in managing and storing large data volumes from real-time observational systems as well as data that are dynamically created during the execution of adaptive workflows. For example, in order to

  10. CERN database services for the LHC computing grid

    International Nuclear Information System (INIS)

    Girone, M

    2008-01-01

    Physics meta-data stored in relational databases play a crucial role in the Large Hadron Collider (LHC) experiments and also in the operation of the Worldwide LHC Computing Grid (WLCG) services. A large proportion of non-event data such as detector conditions, calibration, geometry and production bookkeeping relies heavily on databases. Also, the core Grid services that catalogue and distribute LHC data cannot operate without a reliable database infrastructure at CERN and elsewhere. The Physics Services and Support group at CERN provides database services for the physics community. With an installed base of several TB-sized database clusters, the service is designed to accommodate growth for data processing generated by the LHC experiments and LCG services. During the last year, the physics database services went through a major preparation phase for LHC start-up and are now fully based on Oracle clusters on Intel/Linux. Over 100 database server nodes are deployed today in some 15 clusters serving almost 2 million database sessions per week. This paper will detail the architecture currently deployed in production and the results achieved in the areas of high availability, consolidation and scalability. Service evolution plans for the LHC start-up will also be discussed

  11. CERN database services for the LHC computing grid

    Energy Technology Data Exchange (ETDEWEB)

    Girone, M [CERN IT Department, CH-1211 Geneva 23 (Switzerland)], E-mail: maria.girone@cern.ch

    2008-07-15

    Physics meta-data stored in relational databases play a crucial role in the Large Hadron Collider (LHC) experiments and also in the operation of the Worldwide LHC Computing Grid (WLCG) services. A large proportion of non-event data such as detector conditions, calibration, geometry and production bookkeeping relies heavily on databases. Also, the core Grid services that catalogue and distribute LHC data cannot operate without a reliable database infrastructure at CERN and elsewhere. The Physics Services and Support group at CERN provides database services for the physics community. With an installed base of several TB-sized database clusters, the service is designed to accommodate growth for data processing generated by the LHC experiments and LCG services. During the last year, the physics database services went through a major preparation phase for LHC start-up and are now fully based on Oracle clusters on Intel/Linux. Over 100 database server nodes are deployed today in some 15 clusters serving almost 2 million database sessions per week. This paper will detail the architecture currently deployed in production and the results achieved in the areas of high availability, consolidation and scalability. Service evolution plans for the LHC start-up will also be discussed.

  12. Data catalog project—A browsable, searchable, metadata system

    International Nuclear Information System (INIS)

    Stillerman, Joshua; Fredian, Thomas; Greenwald, Martin; Manduchi, Gabriele

    2016-01-01

    Modern experiments are typically conducted by large, extended groups, where researchers rely on other team members to produce much of the data they use. The experiments record very large numbers of measurements that can be difficult for users to find, access and understand. We are developing a system for users to annotate their data products with structured metadata, providing data consumers with a discoverable, browsable data index. Machine understandable metadata captures the underlying semantics of the recorded data, which can then be consumed by both programs, and interactively by users. Collaborators can use these metadata to select and understand recorded measurements. The data catalog project is a data dictionary and index which enables users to record general descriptive metadata, use cases and rendering information as well as providing them a transparent data access mechanism (URI). Users describe their diagnostic including references, text descriptions, units, labels, example data instances, author contact information and data access URIs. The list of possible attribute labels is extensible, but limiting the vocabulary of names increases the utility of the system. The data catalog is focused on the data products and complements process-based systems like the Metadata Ontology Provenance project [Greenwald, 2012; Schissel, 2015]. This system can be coupled with MDSplus to provide a simple platform for data driven display and analysis programs. Sites which use MDSplus can describe tree branches, and if desired create ‘processed data trees’ with homogeneous node structures for measurements. Sites not currently using MDSplus can either use the database to reference local data stores, or construct an MDSplus tree whose leaves reference the local data store. A data catalog system can provide a useful roadmap of data acquired from experiments or simulations making it easier for researchers to find and access important data and understand the meaning of the

  13. Data catalog project—A browsable, searchable, metadata system

    Energy Technology Data Exchange (ETDEWEB)

    Stillerman, Joshua, E-mail: jas@psfc.mit.edu [MIT Plasma Science and Fusion Center, Cambridge, MA (United States); Fredian, Thomas; Greenwald, Martin [MIT Plasma Science and Fusion Center, Cambridge, MA (United States); Manduchi, Gabriele [Consorzio RFX, Euratom-ENEA Association, Corso Stati Uniti 4, Padova 35127 (Italy)

    2016-11-15

    Modern experiments are typically conducted by large, extended groups, where researchers rely on other team members to produce much of the data they use. The experiments record very large numbers of measurements that can be difficult for users to find, access and understand. We are developing a system for users to annotate their data products with structured metadata, providing data consumers with a discoverable, browsable data index. Machine understandable metadata captures the underlying semantics of the recorded data, which can then be consumed by both programs, and interactively by users. Collaborators can use these metadata to select and understand recorded measurements. The data catalog project is a data dictionary and index which enables users to record general descriptive metadata, use cases and rendering information as well as providing them a transparent data access mechanism (URI). Users describe their diagnostic including references, text descriptions, units, labels, example data instances, author contact information and data access URIs. The list of possible attribute labels is extensible, but limiting the vocabulary of names increases the utility of the system. The data catalog is focused on the data products and complements process-based systems like the Metadata Ontology Provenance project [Greenwald, 2012; Schissel, 2015]. This system can be coupled with MDSplus to provide a simple platform for data driven display and analysis programs. Sites which use MDSplus can describe tree branches, and if desired create ‘processed data trees’ with homogeneous node structures for measurements. Sites not currently using MDSplus can either use the database to reference local data stores, or construct an MDSplus tree whose leaves reference the local data store. A data catalog system can provide a useful roadmap of data acquired from experiments or simulations making it easier for researchers to find and access important data and understand the meaning of the

  14. Title, Description, and Subject are the Most Important Metadata Fields for Keyword Discoverability

    Directory of Open Access Journals (Sweden)

    Laura Costello

    2016-09-01

    Full Text Available A Review of: Yang, L. (2016). Metadata effectiveness in internet discovery: An analysis of digital collection metadata elements and internet search engine keywords. College & Research Libraries, 77(1), 7-19. http://doi.org/10.5860/crl.77.1.7 Objective – To determine which metadata elements best facilitate discovery of digital collections. Design – Case study. Setting – A public research university serving over 32,000 graduate and undergraduate students in the Southwestern United States of America. Subjects – A sample of 22,559 keyword searches leading to the institution’s digital repository between August 1, 2013, and July 31, 2014. Methods – The author used Google Analytics to analyze 73,341 visits to the institution’s digital repository. He determined that 22,559 of these visits were due to keyword searches. Using Random Integer Generator, the author identified a random sample of 378 keyword searches. The author then matched the keywords with the Dublin Core and VRA Core metadata elements on the landing page in the digital repository to determine which metadata field had drawn the keyword searcher to that particular page. Many of these keywords matched to more than one metadata field, so the author also analyzed the metadata elements that generated unique keyword hits and those fields that were frequently matched together. Main Results – Title was the most matched metadata field with 279 matched keywords from searches. Description and Subject were also significant fields with 208 and 79 matches respectively. Slightly more than half of the results, 195 keywords, matched the institutional repository in one field only. Both Title and Description had significant match rates both independently and in conjunction with other elements, but Subject keywords were the sole match in only three of the sampled cases. Conclusion – The Dublin Core elements of Title, Description, and Subject were the most frequently matched fields in keyword

  15. New Tools to Document and Manage Data/Metadata: Example NGEE Arctic and ARM

    Science.gov (United States)

    Crow, M. C.; Devarakonda, R.; Killeffer, T.; Hook, L.; Boden, T.; Wullschleger, S.

    2017-12-01

    Tools used for documenting, archiving, cataloging, and searching data are critical pieces of informatics. This poster describes tools being used in several projects at Oak Ridge National Laboratory (ORNL), with a focus on the U.S. Department of Energy's Next Generation Ecosystem Experiment in the Arctic (NGEE Arctic) and Atmospheric Radiation Measurements (ARM) project, and their usage at different stages of the data lifecycle. The Online Metadata Editor (OME) is used for the documentation and archival stages while a Data Search tool supports indexing, cataloging, and searching. The NGEE Arctic OME Tool [1] provides a method by which researchers can upload their data and provide original metadata with each upload while adhering to standard metadata formats. The tool is built upon a Java SPRING framework to parse user input into, and from, XML output. Many aspects of the tool require use of a relational database including encrypted user-login, auto-fill functionality for predefined sites and plots, and file reference storage and sorting. The Data Search Tool conveniently displays each data record in a thumbnail containing the title, source, and date range, and features a quick view of the metadata associated with that record, as well as a direct link to the data. The search box incorporates autocomplete capabilities for search terms and sorted keyword filters are available on the side of the page, including a map for geo-searching. These tools are supported by the Mercury [2] consortium (funded by DOE, NASA, USGS, and ARM) and developed and managed at Oak Ridge National Laboratory. Mercury is a set of tools for collecting, searching, and retrieving metadata and data. Mercury collects metadata from contributing project servers, then indexes the metadata to make it searchable using Apache Solr, and provides access to retrieve it from the web page. Metadata standards that Mercury supports include: XML, Z39.50, FGDC, Dublin-Core, Darwin-Core, EML, and ISO-19115.
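
    Once metadata records are indexed in Apache Solr, the search page is essentially a front end to Solr's HTTP query API. A hedged sketch of such a query; the Solr host, core name, and field names are placeholders, not the Mercury deployment's actual configuration:

        # Sketch of querying a Solr index of metadata records over its HTTP API.
        # Host, core name, and field names are placeholders for illustration.
        import requests

        params = {
            "q": "soil temperature",               # free-text search terms
            "fq": "project:NGEE-Arctic",           # hypothetical keyword filter
            "fl": "title,source,start_date,end_date,data_url",
            "rows": 10,
            "wt": "json",
        }
        resp = requests.get("http://localhost:8983/solr/metadata/select", params=params)
        resp.raise_for_status()

        for doc in resp.json()["response"]["docs"]:
            print(doc.get("title"), "->", doc.get("data_url"))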

  16. Level of service at airport passenger terminals

    Directory of Open Access Journals (Sweden)

    Tasić Tamara D.

    2016-01-01

    Full Text Available Airport terminals are designed to level of service standards which are regulated by the International Air Transport Association (IATA). The level of service offered to passengers with regard to the primary processes is not necessarily equivalent to the level of service perceived by them. The fact that passengers spend longer times in terminals makes the secondary processes more important to the passenger experience. Aiming to improve airport attractiveness and business success, close attention is paid to passenger perception. This paper discusses these two aspects of level of service. The concept of level of service used in the air traffic industry for the design and planning of passenger terminals is derived from the Highway Capacity Manual; the first part of the paper addresses the latest changes, which were introduced during 2014. The second part of the paper explains the need to examine and analyze passenger perception from the management point of view, and gives an overview of the methods used in such research. Similarities and differences between the measured level of service and the perceived level of service are shown, including the importance of these two aspects complementing each other.

  17. Leveraging Metadata to Create Interactive Images... Today!

    Science.gov (United States)

    Hurt, Robert L.; Squires, G. K.; Llamas, J.; Rosenthal, C.; Brinkworth, C.; Fay, J.

    2011-01-01

    The image gallery for NASA's Spitzer Space Telescope has been newly rebuilt to fully support the Astronomy Visualization Metadata (AVM) standard to create a new user experience both on the website and in other applications. We encapsulate all the key descriptive information for a public image, including color representations and astronomical and sky coordinates and make it accessible in a user-friendly form on the website, but also embed the same metadata within the image files themselves. Thus, images downloaded from the site will carry with them all their descriptive information. Real-world benefits include display of general metadata when such images are imported into image editing software (e.g. Photoshop) or image catalog software (e.g. iPhoto). More advanced support in Microsoft's WorldWide Telescope can open a tagged image after it has been downloaded and display it in its correct sky position, allowing comparison with observations from other observatories. An increasing number of software developers are implementing AVM support in applications and an online image archive for tagged images is under development at the Spitzer Science Center. Tagging images following the AVM offers ever-increasing benefits to public-friendly imagery in all its standard forms (JPEG, TIFF, PNG). The AVM standard is one part of the Virtual Astronomy Multimedia Project (VAMP); http://www.communicatingastronomy.org

  18. Persistent Data Layout and Infrastructure for Efficient Selective Retrieval of Event Data in ATLAS

    CERN Document Server

    INSPIRE-00084279; Malon, David

    2011-01-01

    The ATLAS detector at CERN has completed its first full year of recording collisions at 7 TeV, resulting in billions of events and petabytes of data. At these scales, physicists must have the capability to read only the data of interest to their analyses, with the importance of efficient selective access increasing as data taking continues. ATLAS has developed a sophisticated event-level metadata infrastructure and supporting I/O framework allowing event selections by explicit specification, by back navigation, and by selection queries to a TAG database via an integrated web interface. These systems and their performance have been reported on elsewhere. The ultimate success of such a system, however, depends significantly upon the efficiency of selective event retrieval. Supporting such retrieval can be challenging, as ATLAS stores its event data in column-wise orientation using ROOT trees for a number of reasons, including compression considerations, histogramming use cases, and more. For 2011 data, ATLAS wi...
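
    The column-wise ROOT layout is what makes selective retrieval attractive: a reader can pull only the branches and events an analysis needs. A hedged sketch of that access pattern with the uproot Python library; file, tree, and branch names are placeholders, and ATLAS's production I/O is of course the C++/Athena framework described above:

        # Sketch of selective, column-wise reads from a ROOT file with uproot.
        # File, tree, and branch names are placeholders; ATLAS production I/O uses Athena.
        import uproot

        with uproot.open("derived_data.root") as f:
            tree = f["analysis"]                     # hypothetical tree name
            # Read only two branches, and only the events passing a simple selection.
            arrays = tree.arrays(
                ["lep_pt", "met"],                   # columns of interest
                cut="(lep_pt > 25) & (met > 30)",    # event-level selection
                library="np",
            )
        print(len(arrays["met"]), "selected events")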

  19. Comparative Study of Metadata Elements Used in the Website of Central Library of Universities Subordinate to the Ministry of Science, Research and Technology with the Dublin Core Metadata Elements

    Directory of Open Access Journals (Sweden)

    Kobra Babaei

    2012-03-01

    Full Text Available This research was carried out with the aim of studying the usage of metadata elements on the websites of the central libraries of universities subordinate to the Ministry of Science, Research and Technology, and comparing it with the Dublin Core standard elements. This study was a comparative survey, in which 40 academic library websites were examined using the Internet Explorer browser. The HTML pages of these websites were then viewed through the View Source menu, and the metadata elements of each website were extracted and entered in a checklist. Then, using descriptive statistics (frequency, percentage and mean), the data were analyzed. Research findings showed that the reviewed websites did not use any Dublin Core metadata elements and that general metadata markup language was used in the design of all websites. In terms of the amount of metadata elements used per website, the central libraries of Ferdowsi University of Mashhad and Iran Science and Industries ranked first with 57%, Shahid Beheshti University ranked second with 49%, and the International University of Imam Khomeini ranked third with 40%. The priorities of the web designers were also determined, as follows: the content of the source ranked first, attention to the physical appearance of the source ranked second, and ownership of the source ranked third.
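
    The checklist step, viewing each page's HTML source and recording which metadata elements appear, can also be automated. A small sketch using requests and BeautifulSoup that lists a page's meta element names and flags those following Dublin Core naming conventions; the URL is a placeholder, and the original study inspected page sources manually in a browser:

        # Sketch: extract <meta> element names from a library website and flag Dublin Core usage.
        # The URL is a placeholder; the original study inspected page sources manually.
        import requests
        from bs4 import BeautifulSoup

        DUBLIN_CORE = {"dc.title", "dc.creator", "dc.subject", "dc.description", "dc.publisher",
                       "dc.contributor", "dc.date", "dc.type", "dc.format", "dc.identifier",
                       "dc.source", "dc.language", "dc.relation", "dc.coverage", "dc.rights"}

        resp = requests.get("https://library.example-university.ac.ir/")   # placeholder URL
        soup = BeautifulSoup(resp.text, "html.parser")

        found = [m.get("name", "").lower() for m in soup.find_all("meta") if m.get("name")]
        dc_used = sorted(set(found) & DUBLIN_CORE)

        print("meta elements found:", sorted(set(found)))
        print("Dublin Core elements used:", dc_used or "none")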

  20. Dealing with metadata quality: the legacy of digital library efforts

    OpenAIRE

    Tani, Alice; Candela, Leonardo; Castelli, Donatella

    2013-01-01

    In this work, we elaborate on the meaning of metadata quality by surveying efforts and experiences matured in the digital library domain. In particular, an overview of the frameworks developed to characterize such a multi-faceted concept is presented. Moreover, the most common quality-related problems affecting metadata both during the creation and the aggregation phase are discussed together with the approaches, technologies and tools developed to mitigate them. This survey on digital librar...

  1. Making Information Visible, Accessible, and Understandable: Meta-Data and Registries

    Science.gov (United States)

    2007-07-01

    ...the notion of a catalog card in a library. An example of metadata is the description of a music file specifying the creator, the artist that performed the song, the date created, the length of play time, album name, and the genre. Without resource metadata, portable digital music players would not be so... describe structure and formatting which are critical to interoperability and the management of databases. Going back to the portable music player example...

  2. SM4AM: A Semantic Metamodel for Analytical Metadata

    DEFF Research Database (Denmark)

    Varga, Jovan; Romero, Oscar; Pedersen, Torben Bach

    2014-01-01

    Next generation BI systems emerge as platforms where traditional BI tools meet semi-structured and unstructured data coming from the Web. In these settings, the user-centric orientation represents a key characteristic for the acceptance and wide usage by numerous and diverse end users in their data....... We present SM4AM, a Semantic Metamodel for Analytical Metadata created as an RDF formalization of the Analytical Metadata artifacts needed for user assistance exploitation purposes in next generation BI systems. We consider the Linked Data initiative and its relevance for user assistance...

  3. Structural Metadata Research in the EARS Program

    National Research Council Canada - National Science Library

    Liu, Yang; Shriberg, Elizabeth; Stolcke, Andreas; Peskin, Barbara; Ang, Jeremy; Hillard, Dustin; Ostendorf, Mari; Tomalin, Marcus; Woodland, Phil; Harper, Mary

    2005-01-01

    Both human and automatic processing of speech require recognition of more than just words. In this paper we provide a brief overview of research on structural metadata extraction in the DARPA EARS rich transcription program...

  4. Embedding Metadata and Other Semantics in Word Processing Documents

    Directory of Open Access Journals (Sweden)

    Peter Sefton

    2009-10-01

    Full Text Available This paper describes a technique for embedding document metadata, and potentially other semantic references, inline in word processing documents, which the authors have implemented with the help of a software development team. Several assumptions underlie the approach: it must be available across computing platforms and work with both Microsoft Word (because of its user base) and OpenOffice.org (because of its free availability). Further, the application needs to be acceptable to and usable by users, so the initial implementation covers only a small number of features, which will only be extended after user testing. Within these constraints the system provides a mechanism for encoding not only simple metadata, but also for inferring hierarchical relationships between metadata elements from a ‘flat’ word processing file. The paper includes links to open source code implementing the techniques as part of a broader suite of tools for academic writing. This addresses tools and software, the semantic web and data curation, and integrating curation into research workflows, and will provide a platform for integrating work on ontologies, vocabularies and folksonomies into word processing tools.
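
    The following is a minimal sketch of the general idea of inferring hierarchy from flat inline metadata, written for illustration only; it is not the authors' implementation, and the key names and values are invented placeholders.

        # Illustrative sketch (not the authors' implementation): metadata is stored as
        # flat key/value pairs inside a document, and a hierarchy is inferred from
        # dotted key names such as "dc:creator.affiliation".
        from collections import OrderedDict

        flat_metadata = [
            ("dc:title", "Embedding Metadata in Word Processing Documents"),
            ("dc:creator", "Peter Sefton"),
            ("dc:creator.affiliation", "Example University"),  # hypothetical value
            ("dc:date", "2009-10-01"),
        ]

        def infer_hierarchy(pairs):
            """Turn flat dotted keys into a nested dictionary."""
            tree = OrderedDict()
            for key, value in pairs:
                parts = key.split(".")
                node = tree
                for part in parts[:-1]:
                    node = node.setdefault(part, OrderedDict())
                leaf = node.setdefault(parts[-1], OrderedDict())
                leaf["_value"] = value
            return tree

        print(infer_hierarchy(flat_metadata))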

  5. Virtual Environments for Visualizing Structural Health Monitoring Sensor Networks, Data, and Metadata.

    Science.gov (United States)

    Napolitano, Rebecca; Blyth, Anna; Glisic, Branko

    2018-01-16

    Visualization of sensor networks, data, and metadata is becoming one of the most pivotal aspects of the structural health monitoring (SHM) process. Without the ability to communicate efficiently and effectively between disparate groups working on a project, an SHM system can be underused, misunderstood, or even abandoned. For this reason, this work seeks to evaluate visualization techniques in the field, identify flaws in current practices, and devise a new method for visualizing and accessing SHM data and metadata in 3D. More precisely, the work presented here reflects a method and digital workflow for integrating SHM sensor networks, data, and metadata into a virtual reality environment by combining spherical imaging and informational modeling. Both intuitive and interactive, this method fosters communication on a project, enabling diverse practitioners of SHM to efficiently consult and use the sensor networks, data, and metadata. The method is presented through its implementation on a case study, Streicker Bridge on the Princeton University campus. To illustrate the efficiency of the new method, the time and data file size were compared to other potential methods used for visualizing and accessing SHM sensor networks, data, and metadata in 3D. Additionally, feedback from civil engineering students familiar with SHM is used for validation. Recommendations on how different groups working together on an SHM project can create an SHM virtual environment and convey data to the proper audiences are also included.

  6. Prototype of a file-based high-level trigger in CMS

    International Nuclear Information System (INIS)

    Bauer, G; Darlea, G-L; Gomez-Ceballos, G; Bawej, T; Chaze, O; Coarasa, J A; Deldicque, C; Dobson, M; Dupont, A; Gigi, D; Glege, F; Gomez-Reino, R; Hartl, C; Hegeman, J; Masetti, L; Behrens, U; Branson, J; Cittolin, S; Holzner, A; Erhan, S

    2014-01-01

    The DAQ system of the CMS experiment at the LHC is upgraded during the accelerator shutdown in 2013/14. To reduce the interdependency of the DAQ system and the high-level trigger (HLT), we investigate the feasibility of using a file-system-based HLT. Events of ∼1 MB size are built at the level-1 trigger rate of 100 kHz. The events are assembled by ∼50 builder units (BUs). Each BU writes the raw events at ∼2 GB/s to a local file system shared with O(10) filter-unit machines (FUs) running the HLT code. The FUs read the raw data from the file system, select O(1%) of the events, and write the selected events together with monitoring metadata back to a disk. This data is then aggregated over several steps and made available for offline reconstruction and online monitoring. We present the challenges, technical choices, and performance figures from the prototyping phase. In addition, the steps to the final system implementation will be discussed.
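
    As a rough illustration of the file-based handoff described above, the sketch below shows a filter-unit-style loop that scans a shared directory for raw event files, keeps about 1% of events, and writes the selection plus simple monitoring metadata. Directory paths, file layout, and the per-megabyte event assumption are invented for illustration and are not the CMS implementation.

        # Minimal sketch of a file-based filter unit (illustrative only).
        import json, os, random, glob

        RAW_DIR = "/fff/ramdisk/raw"      # hypothetical shared file system path
        OUT_DIR = "/fff/output/selected"  # hypothetical output area

        def process_file(path, accept_fraction=0.01):
            accepted, total = [], 0
            with open(path, "rb") as f:
                # Assume one fixed-size dummy event per megabyte for illustration.
                while True:
                    event = f.read(1024 * 1024)
                    if not event:
                        break
                    total += 1
                    if random.random() < accept_fraction:
                        accepted.append(event)
            base = os.path.basename(path)
            with open(os.path.join(OUT_DIR, base), "wb") as out:
                for event in accepted:
                    out.write(event)
            # Monitoring metadata to be aggregated later by downstream services.
            meta = {"input": base, "events_read": total, "events_selected": len(accepted)}
            with open(os.path.join(OUT_DIR, base + ".meta.json"), "w") as m:
                json.dump(meta, m)

        for raw_file in glob.glob(os.path.join(RAW_DIR, "*.raw")):
            process_file(raw_file)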

  7. Rescue, Archival and Discovery of Tsunami Events on Marigrams

    Science.gov (United States)

    Eble, M. C.; Wright, L. M.; Stroker, K. J.; Sweeney, A.; Lancaster, M.

    2017-12-01

    The Big Earth Data Initiative made possible the reformatting of paper marigram records on which were recorded measurements of the 1946, 1952, 1960, and 1964 tsunamis generated in the Pacific Ocean. Data contained within each record were determined to be invaluable for tsunami researchers and operational agencies with a responsibility for issuing warnings during a tsunami event. All marigrams were carefully digitized and metadata were generated to form numerical datasets in order to provide the tsunami and other research and application-driven communities with quality data. Data were then packaged as CF-compliant netCDF data files and submitted to the NOAA Centers for Environmental Information for long-term stewardship, archival, and public discovery of both original scanned images and data in digital netCDF and CSV formats. PNG plots of each time series were generated and included with data packages to provide a visual representation of the numerical data sets. ISO-compliant metadata were compiled for the collection at the event level and individual DOIs were minted for each of the four events included in this project. The procedure followed to reformat each record in this four-event subset of the larger NCEI scanned marigram inventory is presented and discussed. The practical use of these data is presented to highlight that even infrequent measurements of tsunamis hold information that may potentially help constrain earthquake rupture area, provide estimates of earthquake co-seismic slip distribution, identify subsidence or uplift, and significantly increase the holdings of in situ data available for tsunami model validation. These same data may also prove valuable to the broader global tide community for validation and further development of tide models and for investigation into the stability of tidal harmonic constants. Data reformatted as part of this project are PARR compliant and meet the requirements for Data Management, Discoverability, Accessibility
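
    A minimal sketch of packaging a digitized tide-gauge time series as a CF-style netCDF file follows; the file name, dates, attribute values, and variable names are placeholders for illustration and do not reflect the actual NCEI packaging.

        # Illustrative sketch: writing a digitized marigram time series to netCDF
        # with CF-style metadata (placeholder values throughout).
        import numpy as np
        from netCDF4 import Dataset

        times = np.arange(0, 3600, 60, dtype="f8")             # seconds since event start
        water_level = np.random.normal(0.0, 0.1, times.size)   # dummy digitized values

        ds = Dataset("marigram_example.nc", "w")
        ds.Conventions = "CF-1.6"
        ds.title = "Digitized marigram record (example)"

        ds.createDimension("time", times.size)
        t = ds.createVariable("time", "f8", ("time",))
        t.units = "seconds since 1964-03-28 00:00:00"  # hypothetical reference time
        t.standard_name = "time"
        t[:] = times

        wl = ds.createVariable("sea_surface_height", "f4", ("time",))
        wl.units = "m"
        wl.long_name = "observed water level relative to station datum"
        wl[:] = water_level

        ds.close()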

  8. Data Service: Distributed Data Capture and Replication

    Science.gov (United States)

    Warner, P. B.; Pietrowicz, S. R.

    2007-10-01

    Data Service is a critical component of the NOAO Data Management and Science Support (DMaSS) Solutions Platform, which is based on a service-oriented architecture, and is to replace the current NOAO Data Transport System. Its responsibilities include capturing data from NOAO and partner telescopes and instruments and replicating the data across multiple (currently six) storage sites. Java 5 was chosen as the implementation language, and Java EE as the underlying enterprise framework. Application metadata persistence is performed using EJB and Hibernate on the JBoss Application Server, with PostgreSQL as the persistence back-end. Although potentially any underlying mass storage system may be used as the Data Service file persistence technology, DTS deployments and Data Service test deployments currently use the Storage Resource Broker from SDSC. This paper presents an overview and high-level design of the Data Service, including aspects of deployment, e.g., for the LSST Data Challenge at the NCSA computing facilities.

  9. Geo-Enrichment and Semantic Enhancement of Metadata Sets to Augment Discovery in Geoportals

    Directory of Open Access Journals (Sweden)

    Bernhard Vockner

    2014-03-01

    Full Text Available Geoportals are established to function as main gateways to find, evaluate, and start “using” geographic information. Still, current geoportal implementations face problems in optimizing the discovery process due to semantic heterogeneity issues, which lead to low recall and low precision in text-based searches. Therefore, we propose an enhanced semantic discovery approach that supports multilingualism and information domain context. We present a workflow that enriches existing structured metadata with synonyms, toponyms, and translated terms derived from user-defined keywords based on multilingual thesauri and ontologies. To make the results easier to understand, we also provide automated translation capabilities for the resource metadata, so that the user can grasp the thematic content of the descriptive metadata even if it has been documented in a language the user is not familiar with. In addition, to enable text-based spatial filtering, we add location name keywords to metadata sets. These are derived from the existing bounding box and are intended to adjust discovery scores when performing single-line text queries. To improve the user’s search experience, we tailor faceted search strategies, presenting an enhanced query interface for geo-metadata discovery that transparently leverages the underlying thesauri and ontologies.
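
    The sketch below illustrates the enrichment idea in its simplest form: keywords are expanded with synonyms and translations from a thesaurus, and place names falling inside the record's bounding box are added. The thesaurus and gazetteer contents, field names, and coordinates are invented for illustration and are not the authors' data.

        # Sketch of keyword enrichment (all lookup data is invented).
        thesaurus = {
            "precipitation": {"synonyms": ["rainfall"], "de": "Niederschlag"},
        }
        gazetteer = [  # (name, lon, lat)
            ("Salzburg", 13.05, 47.81),
            ("Vienna", 16.37, 48.21),
        ]

        def enrich(record):
            keywords = set(record["keywords"])
            # Expand with synonyms and a German translation, where available.
            for kw in list(keywords):
                entry = thesaurus.get(kw.lower(), {})
                keywords.update(entry.get("synonyms", []))
                if "de" in entry:
                    keywords.add(entry["de"])
            # Add place names contained in the record's bounding box.
            west, south, east, north = record["bbox"]
            for name, lon, lat in gazetteer:
                if west <= lon <= east and south <= lat <= north:
                    keywords.add(name)
            record["keywords"] = sorted(keywords)
            return record

        record = {"keywords": ["precipitation"], "bbox": (12.0, 47.0, 14.0, 48.0)}
        print(enrich(record))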

  10. File and metadata management for BESIII distributed computing

    International Nuclear Information System (INIS)

    Nicholson, C; Zheng, Y H; Lin, L; Deng, Z Y; Li, W D; Zhang, X M

    2012-01-01

    The BESIII experiment at the Institute of High Energy Physics (IHEP), Beijing, uses the high-luminosity BEPCII e+e− collider to study physics in the τ-charm energy region around 3.7 GeV; BEPCII has produced the world's largest samples of J/ψ and ψ′ events to date. An order of magnitude increase in the data sample size over the 2011-2012 data-taking period demanded a move from a very centralized to a distributed computing environment, as well as the development of an efficient file and metadata management system. While BESIII is on a smaller scale than some other HEP experiments, this poses particular challenges for its distributed computing and data management system. These constraints include limited resources and manpower, and low quality of network connections to IHEP. Drawing on the rich experience of the HEP community, a system has been developed which meets these constraints. The design and development of the BESIII distributed data management system, including its integration with other BESIII distributed computing components, such as job management, are presented here.

  11. Stop the Bleeding: the Development of a Tool to Streamline NASA Earth Science Metadata Curation Efforts

    Science.gov (United States)

    le Roux, J.; Baker, A.; Caltagirone, S.; Bugbee, K.

    2017-12-01

    The Common Metadata Repository (CMR) is a high-performance, high-quality repository for Earth science metadata records, and serves as the primary way to search NASA's growing 17.5 petabytes of Earth science data holdings. Released in 2015, CMR has the capability to support several different metadata standards already being utilized by NASA's combined network of Earth science data providers, or Distributed Active Archive Centers (DAACs). The Analysis and Review of CMR (ARC) Team located at Marshall Space Flight Center is working to improve the quality of records already in CMR with the goal of making records optimal for search and discovery. This effort entails a combination of automated and manual review, where each NASA record in CMR is checked for completeness, accuracy, and consistency. This effort is highly collaborative in nature, requiring communication and transparency of findings amongst NASA personnel, DAACs, the CMR team and other metadata curation teams. Through the evolution of this project it has become apparent that there is a need to document and report findings, as well as track metadata improvements in a more efficient manner. The ARC team has collaborated with Element 84 in order to develop a metadata curation tool to meet these needs. In this presentation, we will provide an overview of this metadata curation tool and its current capabilities. Challenges and future plans for the tool will also be discussed.

  12. The Benefits and Future of Standards: Metadata and Beyond

    Science.gov (United States)

    Stracke, Christian M.

    This article discusses the benefits and future of standards and presents the generic multi-dimensional Reference Model. First the importance and the tasks of interoperability as well as quality development and their relationship are analyzed. Especially in e-Learning their connection and interdependence are evident: interoperability is one basic requirement for quality development. In this paper, it is shown how standards and specifications support these crucial issues. The upcoming ISO metadata standard MLR (Metadata for Learning Resources) will be introduced and used as an example for identifying the requirements and needs for future standardization. In conclusion, a vision of the challenges and potentials for e-Learning standardization is outlined.

  13. Crowd-sourced BMS point matching and metadata maintenance with Babel

    DEFF Research Database (Denmark)

    Fürst, Jonathan; Chen, Kaifei; Katz, Randy H.

    2016-01-01

    Cyber-physical applications, deployed on top of Building Management Systems (BMS), promise energy saving and comfort improvement in non-residential buildings. Such applications are so far mainly deployed as research prototypes. The main roadblock to widespread adoption is the low quality of BMS... systems. Such applications access sensors and actuators through BMS metadata in the form of point labels. The naming of labels is however often inconsistent and incomplete. To tackle this problem, we introduce Babel, a crowd-sourced approach to the creation and maintenance of BMS metadata. In our system...

  14. GreenIT Service Level Agreements

    Science.gov (United States)

    von Laszewski, Gregor; Wang, Lizhe

    In this paper we introduce a framework for the inclusion of Green IT metrics as part of service level agreements for future Grids and Clouds. As part of this effort we need to revisit the Green IT metrics and proxies that we consider optimizing against, in order to develop GreenIT as a Service (GaaS) that can be reused as part of a Software as a Service (SaaS) and Infrastructure as a Service (IaaS) framework. We report on some of our ongoing efforts and demonstrate how we already achieve impact on the environment with our services.

  15. Evaluating Commercial and Private Cloud Services for Facility-Scale Geodetic Data Access, Analysis, and Services

    Science.gov (United States)

    Meertens, C. M.; Boler, F. M.; Ertz, D. J.; Mencin, D.; Phillips, D.; Baker, S.

    2017-12-01

    UNAVCO, in its role as an NSF facility for geodetic infrastructure and data, has succeeded for over two decades using on-premises infrastructure, and while the promise of cloud-based infrastructure is well established, significant questions about the suitability of such infrastructure for facility-scale services remain. Primarily through the GeoSciCloud award from NSF EarthCube, UNAVCO is investigating the costs, advantages, and disadvantages of providing its geodetic data and services in the cloud versus using UNAVCO's on-premises infrastructure. (IRIS is a collaborator on the project and is performing its own suite of investigations.) In contrast to the 2-3 year time scale of the research cycle, the time scale of operation and planning for NSF facilities is a minimum of five years and for some services extends to a decade or more. Planning for on-premises infrastructure is deliberate, and migrations typically take months to years to fully implement. Migrations to a cloud environment can only go forward with similarly deliberate planning and understanding of all costs and benefits. The EarthCube GeoSciCloud project is intended to address the uncertainties of facility-level operations in the cloud. Investigations are being performed in a commercial cloud environment (Amazon AWS) during the first year of the project and in a private cloud environment (the NSF XSEDE resource at the Texas Advanced Computing Center) during the second year. These investigations are expected to illuminate the potential as well as the limitations of running facility-scale production services in the cloud. The work includes running cloud-based services in parallel with equivalent on-premises services: data serving via FTP from a large data store, operation of a metadata database, production-scale processing of multiple months of geodetic data, web services delivery of quality-checked data and products, large-scale compute services for event post-processing, and serving real-time data

  16. Atmospheric Radiation Measurement's Data Management Facility captures metadata and uses visualization tools to assist in routine data management.

    Science.gov (United States)

    Keck, N. N.; Macduff, M.; Martin, T.

    2017-12-01

    The Atmospheric Radiation Measurement (ARM) Data Management Facility (DMF) plays a critical support role in processing and curating data generated by the Department of Energy's ARM Program. Data are collected in near real time from hundreds of observational instruments spread out all over the globe. Data are then ingested hourly to provide time series data in NetCDF (network Common Data Format) that includes standardized metadata. Based on automated processes and a variety of user reviews, the data may need to be reprocessed. Final data sets are then stored and accessed by users through the ARM Archive. Over the course of 20 years, a suite of data visualization tools has been developed to facilitate the operational processes that manage and maintain the more than 18,000 real-time events that move 1.3 TB of data each day through the various stages of the DMF's data system. This poster will present the resources and methodology used to capture metadata and the tools that assist in routine data management and discoverability.

  17. Inconsistencies between Academic E-Book Platforms: A Comparison of Metadata and Search Results

    Science.gov (United States)

    Wiersma, Gabrielle; Tovstiadi, Esta

    2017-01-01

    This article presents the results of a study of academic e-books that compared the metadata and search results from major academic e-book platforms. The authors collected data and performed a series of test searches designed to produce the same result regardless of platform. Testing, however, revealed metadata-related errors and significant…

  18. Evaluation procedure for Service Level C and D upper shelf toughness criteria for Linde 80 weld material

    International Nuclear Information System (INIS)

    Gamble, R.M.

    1993-09-01

    Procedures have been defined to evaluate the upper shelf toughness criteria recently defined by the ASME Section XI Subgroup on Evaluation Standards for ASME Service Level C and D conditions. These procedures and the ASME criteria were applied to assess the upper shelf toughness requirements for axial Linde 80 welds in a typical PWR reactor pressure vessel beltline geometry. Because there is a wide spectrum of possible Level C and D events, the evaluation was performed for a bounding thermal transient and results were obtained to determine allowable pressure versus upper shelf energy. The allowable pressure for the bounding thermal transient was found to be above normal operating pressure for upper shelf energies equal to and greater than 40 ft-lb. Because a bounding thermal transient was used, the allowable pressures determined in this work should be minimum values. The results from this work provide a screening criterion to quickly identify limiting transients that may require more detailed analysis, and an analysis procedure to assess compliance with ASME upper shelf energy criteria for Service Level C and D events. For events with significant thermal transients, such as those evaluated in this work, the transient pressure typically decreases rapidly from normal operating pressure to 1,500 psi or less before the maximum thermal K_I is reached. Comparing this pressure range with the allowable pressure versus upper shelf energy curves determined in this work indicates that the transient pressures are significantly less than the allowables computed for Level C and D events; consequently, Level A/B conditions will likely dictate the upper shelf requirements for most applications.

  19. Improving the accessibility and re-use of environmental models through provision of model metadata - a scoping study

    Science.gov (United States)

    Riddick, Andrew; Hughes, Andrew; Harpham, Quillon; Royse, Katherine; Singh, Anubha

    2014-05-01

    There has been an increasing interest both from academic and commercial organisations over recent years in developing hydrologic and other environmental models in response to some of the major challenges facing the environment, for example environmental change and its effects and ensuring water resource security. This has resulted in a significant investment in modelling by many organisations both in terms of financial resources and intellectual capital. To capitalise on the effort of producing models, it is necessary for the models to be both discoverable and appropriately described. If this is not undertaken then the effort in producing the models will be wasted. However, whilst there are some recognised metadata standards relating to datasets, these may not completely address the needs of modellers regarding, for example, input data. There also appears to be a lack of metadata schemes configured to encourage the discovery and re-use of the models themselves. The lack of an established standard for model metadata is considered to be a factor inhibiting the more widespread use of environmental models, particularly the use of linked model compositions which fuse together hydrologic models with models from other environmental disciplines. This poster presents the results of a Natural Environment Research Council (NERC) funded scoping study to understand the requirements of modellers and other end users for metadata about data and models. A user consultation exercise using an on-line questionnaire has been undertaken to capture the views of a wide spectrum of stakeholders on how they are currently managing metadata for modelling. This has provided a strong confirmation of our original supposition that there is a lack of systems and facilities to capture metadata about models. A number of specific gaps in current provision for data and model metadata were also identified, including a need for a standard means to record detailed information about the modelling

  20. Flexible Authoring of Metadata for Learning : Assembling forms from a declarative data and view model

    OpenAIRE

    Enoksson, Fredrik

    2011-01-01

    With the vast amount of information in various formats that is produced today it becomes necessary for consumers of this information to be able to judge if it is relevant for them. One way to enable that is to provide information about each piece of information, i.e. provide metadata. When metadata is to be edited by a human being, a metadata editor needs to be provided. This thesis describes the design and practical use of a configuration mechanism for metadata editors called annotation profiles...

  1. Grid Enabled Geospatial Catalogue Web Service

    Science.gov (United States)

    Chen, Ai-Jun; Di, Li-Ping; Wei, Ya-Xing; Liu, Yang; Bui, Yu-Qi; Hu, Chau-Min; Mehrotra, Piyush

    2004-01-01

    Geospatial Catalogue Web Service is a vital service for sharing and interoperating volumes of distributed heterogeneous geospatial resources, such as data, services, applications, and their replicas over the web. Based on Grid technology and the Open Geospatial Consortium (OGC) Catalogue Service - Web Information Model, this paper proposes a new information model for a Geospatial Catalogue Web Service, named GCWS, which can securely provide Grid-based publishing, managing and querying of geospatial data and services, and transparent access to replica data and related services under the Grid environment. This information model integrates the information model of the Grid Replica Location Service (RLS)/Monitoring & Discovery Service (MDS) with the information model of the OGC Catalogue Service (CSW), and refers to the geospatial data metadata standards from ISO 19115, FGDC and the NASA EOS Core System, and to the service metadata standards from ISO 19119, to extend itself for expressing geospatial resources. Using GCWS, any valid geospatial user who belongs to an authorized Virtual Organization (VO) can securely publish and manage geospatial resources, and in particular query on-demand data in the virtual community and retrieve it through data-related services which provide functions such as subsetting, reformatting, reprojection etc. This work facilitates the sharing and interoperation of geospatial resources under the Grid environment, making geospatial resources Grid-enabled and Grid technologies geospatial-enabled. It also allows researchers to focus on science, and not on issues with computing capability, data location, processing and management. GCWS is also a key component for workflow-based virtual geospatial data production.
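
    For orientation, the sketch below shows a generic OGC CSW catalogue query using the OWSLib Python package; it is ordinary CSW client usage under assumed conditions, not the GCWS implementation, and the endpoint URL and search term are placeholders.

        # Generic OGC CSW query with OWSLib (endpoint and search term are placeholders).
        from owslib.csw import CatalogueServiceWeb
        from owslib.fes import PropertyIsLike

        csw = CatalogueServiceWeb("https://example.org/csw")  # hypothetical endpoint
        query = PropertyIsLike("csw:AnyText", "%geothermal%")
        csw.getrecords2(constraints=[query], maxrecords=10)

        # Print identifiers and titles of the matching metadata records.
        for rec_id, rec in csw.records.items():
            print(rec_id, rec.title)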

  2. US Geoscience Information Network, Web Services for Geoscience Information Discovery and Access

    Science.gov (United States)

    Richard, S.; Allison, L.; Clark, R.; Coleman, C.; Chen, G.

    2012-04-01

    The US Geoscience Information Network has developed metadata profiles for interoperable catalog services based on ISO 19139 and OGC CSW 2.0.2. Currently, data services are being deployed for the US Dept. of Energy-funded National Geothermal Data System. These services utilize OGC Web Map Services, Web Feature Services, and THREDDS-served NetCDF for gridded datasets. Services and underlying datasets (along with a wide variety of other information and non-information resources) are registered in the catalog system. Metadata for registration is produced by various workflows, including harvesting from OGC capabilities documents, Drupal-based web applications, and transformation from tabular compilations. Catalog search is implemented using the ESRI Geoportal open-source server. We are pursuing various client applications to demonstrate discovery and utilization of the data services. Currently operational applications allow catalog search and data acquisition from map services in an ESRI ArcMap extension, and a catalog browse and search application built on OpenLayers and Django. We are developing use cases and requirements for other applications to utilize geothermal data services for resource exploration and evaluation.

  3. USGS 24k Digital Raster Graphic (DRG) Metadata

    Data.gov (United States)

    Minnesota Department of Natural Resources — Metadata for the scanned USGS 24k Topographic Map Series (also known as 24k Digital Raster Graphic). Each scanned map is represented by a polygon in the layer and the...

  4. Evaluation of Semi-Automatic Metadata Generation Tools: A Survey of the Current State of the Art

    Directory of Open Access Journals (Sweden)

    Jung-ran Park

    2015-09-01

    Full Text Available Assessment of the current landscape of semi-automatic metadata generation tools is particularly important considering the rapid development of digital repositories and the recent explosion of big data. Utilization of (semi-)automatic metadata generation is critical in addressing these environmental changes and may be unavoidable in the future considering the costly and complex operation of manual metadata creation. To address such needs, this study examines a range of semi-automatic metadata generation tools (n=39) while providing an analysis of their techniques, features, and functions. The study focuses on open-source tools that can be readily utilized in libraries and other memory institutions. The challenges and current barriers to implementation of these tools were identified. The greatest area of difficulty lies in the fact that the piecemeal development of most semi-automatic generation tools only addresses part of the issue of semi-automatic metadata generation, providing solutions for one or a few metadata elements but not the full range of elements. This indicates that significant local effort will be required to integrate the various tools into a coherent working whole. Suggestions toward such efforts are presented for future developments that may assist information professionals with the incorporation of semi-automatic tools within their daily workflows.

  5. mzML2ISA & nmrML2ISA: generating enriched ISA-Tab metadata files from metabolomics XML data.

    Science.gov (United States)

    Larralde, Martin; Lawson, Thomas N; Weber, Ralf J M; Moreno, Pablo; Haug, Kenneth; Rocca-Serra, Philippe; Viant, Mark R; Steinbeck, Christoph; Salek, Reza M

    2017-08-15

    Submission to the MetaboLights repository for metabolomics data currently places the burden of reporting instrument and acquisition parameters in ISA-Tab format on users, who have to do it manually, a process that is time consuming and prone to user input error. Since the large majority of these parameters are embedded in instrument raw data files, an opportunity exists to capture this metadata more accurately. Here we report a set of Python packages that can automatically generate ISA-Tab metadata file stubs from raw XML metabolomics data files. The parsing packages are separated into mzML2ISA (encompassing mzML and imzML formats) and nmrML2ISA (nmrML format only). Overall, the use of mzML2ISA & nmrML2ISA reduces the time needed to capture metadata substantially (capturing 90% of metadata on assay and sample levels), is much less prone to user input errors, improves compliance with minimum information reporting guidelines and facilitates more finely grained data exploration and querying of datasets. Availability and implementation: mzML2ISA & nmrML2ISA are available under version 3 of the GNU General Public Licence at https://github.com/ISA-tools. Documentation is available from http://2isa.readthedocs.io/en/latest/. Contact: reza.salek@ebi.ac.uk or isatools@googlegroups.com. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
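
    As a generic illustration of the underlying idea (pulling acquisition metadata out of an mzML file and seeding an ISA-Tab-style row), the sketch below uses standard XML parsing; it is not the mzML2ISA/nmrML2ISA API, and the file name and ISA column names are assumptions.

        # Generic illustration (not the mzML2ISA API): collect cvParam names/values
        # from an mzML file and seed an ISA-Tab-style assay row from them.
        import xml.etree.ElementTree as ET

        def instrument_params(mzml_path):
            tree = ET.parse(mzml_path)
            params = {}
            # cvParam elements carry controlled-vocabulary terms such as the
            # instrument model; here we simply collect their names and values.
            for cv in tree.iter("{http://psi.hupo.org/ms/mzml}cvParam"):
                name = cv.get("name")
                if name:
                    params.setdefault(name, cv.get("value", ""))
            return params

        if __name__ == "__main__":
            params = instrument_params("sample.mzML")  # hypothetical input file
            # Column names below are illustrative, not the full ISA specification.
            row = {"Sample Name": "sample",
                   "Parameter Value[Instrument]": params.get("instrument model", "unknown")}
            print(row)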

  6. EVALUATION METHOD ON LEVEL-OF-SERVICE OF SHARED USE SIDEWALKS BY PEDESTRIANS AND BICYCLES USING HINDRANCE EVENTS INDICES FOR ANALYSIS OF NECESSITY OF SEPARATION

    Science.gov (United States)

    Yamanaka, Hideo

    Problems with the shared use of sidewalks by bicycles and pedestrians have been an important issue for cycling policy promotion in Japan. Several studies on evaluation methods for shared use have been carried out, but there remains no appropriate index that can be used operationally in the planning process. The aim of this study is to develop a method for evaluating the level-of-service of shared use sidewalks using a hindrance events index that newly considers the gap width in meetings and passings between cyclists and pedestrians. This index can be estimated using only the traffic volume, velocity and width of the sidewalk, which can be obtained in usual traffic observation. In conclusion, the author shows appropriate indices that can explain the conflict behavior and sense of danger of street users, and shows the present situation regarding the necessity of separation on existing streets in Japan by analyzing the ratio of street miles by LOS.

  7. Climate Products and Services to Meet the Challenges of Extreme Events

    Science.gov (United States)

    McCalla, M. R.

    2008-12-01

    The 2002 Office of the Federal Coordinator for Meteorological Services and Supporting Research (OFCM)-sponsored report, Weather Information for Surface Transportation: National Needs Assessment Report, addressed meteorological needs for six core modes of surface transportation: roadway, railway, transit, marine transportation/operations, pipeline, and airport ground operations. The report's goal was to articulate the weather information needs and attendant surface transportation weather products and services for those entities that use, operate, and manage America's surface transportation infrastructure. The report documented weather thresholds and associated impacts which are critical for decision-making in surface transportation. More recently, the 2008 Climate Change Science Program's (CCSP) Synthesis and Assessment Product (SAP) 4.7 entitled, Impacts of Climate Change and Variability on Transportation Systems and Infrastructure: Gulf Coast Study, Phase I, included many of the impacts from the OFCM-sponsored report in Table 1.1 of this SAP. The Intergovernmental Panel on Climate Change (IPCC) reported that since 1950, there has been an increase in the number of heat waves, heavy precipitation events, and areas of drought. Moreover, the IPCC indicated that greater wind speeds could accompany more severe tropical cyclones. Taken together, the OFCM, CCSP, and IPCC reports indicate not only the significance of extreme events, but also the potential increasing significance of many of the weather thresholds and associated impacts which are critical for decision-making in surface transportation. Accordingly, there is a real and urgent need to understand what climate products and services are available now to address the weather thresholds within the surface transportation arena. It is equally urgent to understand what new climate products and services are needed to address these weather thresholds, and articulate what can be done to fill the gap between the

  8. A renaissance in library metadata? The importance of community collaboration in a digital world

    Directory of Open Access Journals (Sweden)

    Sarah Bull

    2016-07-01

    Full Text Available This article summarizes a presentation given by Sarah Bull as part of the Association of Learned and Professional Society Publishers (ALPSP seminar ‘Setting the Standard’ in November 2015. Representing the library community at the wide-ranging seminar, Sarah was tasked with making the topic of library metadata an engaging and informative one for a largely publisher audience. With help from co-author Amanda Quimby, this article is an attempt to achieve the same aim! It covers the importance of library metadata and standards in the supply chain and also reflects on the role of the community in successful standards development and maintenance. Special emphasis is given to the importance of quality in e-book metadata and the need for publisher and library collaboration to improve discovery, usage and the student experience. The article details the University of Birmingham experience of e-book metadata from a workflow perspective to highlight the complex integration issues which remain between content procurement and discovery.

  9. Features, Events, and Processes: System Level

    Energy Technology Data Exchange (ETDEWEB)

    D. McGregor

    2004-04-19

    The primary purpose of this analysis is to evaluate System Level features, events, and processes (FEPs). The System Level FEPs typically are overarching in nature, rather than being focused on a particular process or subsystem. As a result, they are best dealt with at the system level rather than addressed within supporting process-level or subsystem-level analysis and model reports. The System Level FEPs also tend to be directly addressed by regulations, guidance documents, or assumptions listed in the regulations; or are addressed in background information used in development of the regulations. This evaluation determines which of the System Level FEPs are excluded from modeling used to support the total system performance assessment for license application (TSPA-LA). The evaluation is based on the information presented in analysis reports, model reports, direct input, or corroborative documents that are cited in the individual FEP discussions in Section 6.2 of this analysis report.

  10. Building a semantic web-based metadata repository for facilitating detailed clinical modeling in cancer genome studies.

    Science.gov (United States)

    Sharma, Deepak K; Solbrig, Harold R; Tao, Cui; Weng, Chunhua; Chute, Christopher G; Jiang, Guoqian

    2017-06-05

    Detailed Clinical Models (DCMs) have been regarded as the basis for retaining computable meaning when data are exchanged between heterogeneous computer systems. To better support clinical cancer data capturing and reporting, there is an emerging need to develop informatics solutions for standards-based clinical models in cancer study domains. The objective of the study is to develop and evaluate a cancer genome study metadata management system that serves as a key infrastructure in supporting clinical information modeling in cancer genome study domains. We leveraged a Semantic Web-based metadata repository enhanced with both the ISO 11179 metadata standard and the Clinical Information Modeling Initiative (CIMI) Reference Model. We used the common data elements (CDEs) defined in The Cancer Genome Atlas (TCGA) data dictionary, and extracted the metadata of the CDEs using the NCI Cancer Data Standards Repository (caDSR) CDE dataset rendered in the Resource Description Framework (RDF). The ITEM/ITEM_GROUP pattern defined in the latest CIMI Reference Model is used to represent reusable model elements (mini-Archetypes). We produced a metadata repository with 38 clinical cancer genome study domains, comprising a rich collection of mini-Archetype pattern instances. We performed a case study of the domain "clinical pharmaceutical" in the TCGA data dictionary and demonstrated that the enriched data elements in the metadata repository are very useful in support of building detailed clinical models. Our informatics approach leveraging Semantic Web technologies provides an effective way to build a CIMI-compliant metadata repository that would facilitate detailed clinical modeling to support use cases beyond TCGA in clinical cancer study domains.
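
    A minimal sketch of querying an RDF rendering of CDE metadata with rdflib and SPARQL follows; the file name, namespace, and predicate names are hypothetical placeholders, not the actual caDSR or CIMI vocabulary.

        # Sketch with rdflib: load an RDF dump of common data elements (CDEs) and
        # query it with SPARQL (predicates are invented placeholders).
        from rdflib import Graph

        g = Graph()
        g.parse("cde_metadata.rdf")  # hypothetical RDF dump of CDE metadata

        query = """
        PREFIX ex: <http://example.org/cde#>
        SELECT ?cde ?label ?domain WHERE {
            ?cde ex:preferredLabel ?label ;
                 ex:studyDomain ?domain .
            FILTER (?domain = "clinical pharmaceutical")
        }
        """
        for cde, label, domain in g.query(query):
            print(cde, label)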

  11. ARIADNE: a tracking system for relationships in LHCb metadata

    International Nuclear Information System (INIS)

    Shapoval, I; Clemencic, M; Cattaneo, M

    2014-01-01

    The data processing model of the LHCb experiment implies handling of an evolving set of heterogeneous metadata entities and relationships between them. The entities range from software and databases states to architecture specificators and software/data deployment locations. For instance, there is an important relationship between the LHCb Conditions Database (CondDB), which provides versioned, time dependent geometry and conditions data, and the LHCb software, which is the data processing applications (used for simulation, high level triggering, reconstruction and analysis of physics data). The evolution of CondDB and of the LHCb applications is a weakly-homomorphic process. It means that relationships between a CondDB state and LHCb application state may not be preserved across different database and application generations. These issues may lead to various kinds of problems in the LHCb production, varying from unexpected application crashes to incorrect data processing results. In this paper we present Ariadne – a generic metadata relationships tracking system based on the novel NoSQL Neo4j graph database. Its aim is to track and analyze many thousands of evolving relationships for cases such as the one described above, and several others, which would otherwise remain unmanaged and potentially harmful. The highlights of the paper include the system's implementation and management details, infrastructure needed for running it, security issues, first experience of usage in the LHCb production and potential of the system to be applied to a wider set of LHCb tasks.

  12. ARIADNE: a Tracking System for Relationships in LHCb Metadata

    Science.gov (United States)

    Shapoval, I.; Clemencic, M.; Cattaneo, M.

    2014-06-01

    The data processing model of the LHCb experiment implies handling of an evolving set of heterogeneous metadata entities and relationships between them. The entities range from software and databases states to architecture specificators and software/data deployment locations. For instance, there is an important relationship between the LHCb Conditions Database (CondDB), which provides versioned, time dependent geometry and conditions data, and the LHCb software, which is the data processing applications (used for simulation, high level triggering, reconstruction and analysis of physics data). The evolution of CondDB and of the LHCb applications is a weakly-homomorphic process. It means that relationships between a CondDB state and LHCb application state may not be preserved across different database and application generations. These issues may lead to various kinds of problems in the LHCb production, varying from unexpected application crashes to incorrect data processing results. In this paper we present Ariadne - a generic metadata relationships tracking system based on the novel NoSQL Neo4j graph database. Its aim is to track and analyze many thousands of evolving relationships for cases such as the one described above, and several others, which would otherwise remain unmanaged and potentially harmful. The highlights of the paper include the system's implementation and management details, infrastructure needed for running it, security issues, first experience of usage in the LHCb production and potential of the system to be applied to a wider set of LHCb tasks.
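
    To illustrate the kind of relationship tracking described above, the sketch below records one compatibility link between a conditions-database state and an application version in Neo4j via Cypher. The node labels, property names, relationship type, credentials, and example values are assumptions for illustration, not Ariadne's actual schema.

        # Sketch: recording one metadata relationship in Neo4j with Cypher
        # (labels, properties and values are illustrative, not Ariadne's schema).
        from neo4j import GraphDatabase

        driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

        cypher = """
        MERGE (db:CondDBState {tag: $conddb_tag})
        MERGE (app:Application {name: $app_name, version: $app_version})
        MERGE (app)-[:COMPATIBLE_WITH]->(db)
        """

        with driver.session() as session:
            session.run(cypher, conddb_tag="cond-20140120",
                        app_name="Brunel", app_version="v45r0")
        driver.close()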

  13. iLOG: A Framework for Automatic Annotation of Learning Objects with Empirical Usage Metadata

    Science.gov (United States)

    Miller, L. D.; Soh, Leen-Kiat; Samal, Ashok; Nugent, Gwen

    2012-01-01

    Learning objects (LOs) are digital or non-digital entities used for learning, education or training, commonly stored in repositories searchable by their associated metadata. Unfortunately, based on the current standards, such metadata is often missing or incorrectly entered, making search difficult or impossible. In this paper, we investigate…

  14. Digital Libraries that Demonstrate High Levels of Mutual Complementarity in Collection-level Metadata Give a Richer Representation of their Content and Improve Subject Access for Users

    OpenAIRE

    Aoife Lawton

    2014-01-01

    A Review of: Zavalina, O. L. (2013). Complementarity in subject metadata in large-scale digital libraries: A comparative analysis. Cataloging & Classification Quarterly, 52(1), 77-89. http://dx.doi.org/10.1080/01639374.2013.848316 Abstract Objective – To determine how well digital library content is represented through free-text and subject headings. Specifically to examine whether a combination of free-text description data and controlled vocabulary is more comprehensive than free...

  15. QuakeML: XML for Seismological Data Exchange and Resource Metadata Description

    Science.gov (United States)

    Euchner, F.; Schorlemmer, D.; Becker, J.; Heinloo, A.; Kästli, P.; Saul, J.; Weber, B.; QuakeML Working Group

    2007-12-01

    QuakeML is an XML-based data exchange format for seismology that is under development. Current collaborators are from ETH, GFZ, USC, USGS, IRIS DMC, EMSC, ORFEUS, and ISTI. QuakeML development was motivated by the lack of a widely accepted and well-documented data format that is applicable to a broad range of fields in seismology. The development team brings together expertise from communities dealing with analysis and creation of earthquake catalogs, distribution of seismic bulletins, and real-time processing of seismic data. Efforts to merge QuakeML with existing XML dialects are under way. The first release of QuakeML will cover a basic description of seismic events including picks, arrivals, amplitudes, magnitudes, origins, focal mechanisms, and moment tensors. Further extensions are in progress or planned, e.g., for macroseismic information, location probability density functions, slip distributions, and ground motion information. The QuakeML language definition is supplemented by a concept to provide resource metadata and facilitate metadata exchange between distributed data providers. For that purpose, we introduce unique, location-independent identifiers of seismological resources. As an application of QuakeML, ETH Zurich currently develops a Python-based seismicity analysis toolkit as a contribution to CSEP (Collaboratory for the Study of Earthquake Predictability). We follow a collaborative and transparent development approach along the lines of the procedures of the World Wide Web Consortium (W3C). QuakeML currently is in working draft status. The standard description will be subjected to a public Request for Comments (RFC) process and eventually reach the status of a recommendation. QuakeML can be found at http://www.quakeml.org.
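
    The sketch below builds a simplified QuakeML-style event description with ElementTree, following the basic event/origin/magnitude structure mentioned above; it is not schema-complete or validated, and the identifiers and values are invented.

        # Simplified, non-validated sketch of a QuakeML-style event document.
        import xml.etree.ElementTree as ET

        q = ET.Element("quakeml")
        params = ET.SubElement(q, "eventParameters")
        event = ET.SubElement(params, "event",
                              publicID="smi:example.org/event/2007abcd")  # hypothetical ID

        origin = ET.SubElement(event, "origin")
        ET.SubElement(origin, "time").text = "2007-10-31T03:04:05Z"
        ET.SubElement(origin, "latitude").text = "37.24"
        ET.SubElement(origin, "longitude").text = "-121.77"
        ET.SubElement(origin, "depth").text = "9000"  # metres

        magnitude = ET.SubElement(event, "magnitude")
        ET.SubElement(magnitude, "mag").text = "5.4"
        ET.SubElement(magnitude, "type").text = "Mw"

        print(ET.tostring(q, encoding="unicode"))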

  16. A metadata catalog for organization and systemization of fusion simulation data

    International Nuclear Information System (INIS)

    Greenwald, M.; Fredian, T.; Schissel, D.; Stillerman, J.

    2012-01-01

    Highlights: ► We find that modeling and simulation data need better systemization. ► Workflow, data provenance and relations among data items need to be captured. ► We have begun a design for a simulation metadata catalog that meets these needs. ► The catalog design also supports creation of science notebooks for simulation. - Abstract: Careful management of data and associated metadata is a critical part of any scientific enterprise. Unfortunately, most current fusion simulation efforts lack systematic, project-wide organization of their data. This paper describes an approach to managing simulation data through creation of a comprehensive metadata catalog, currently under development. The catalog is intended to document all past and current simulation activities (including data provenance); to enable global data location and to facilitate data access, analysis and visualization through uniform provision of metadata. The catalog will capture workflow, holding entries for each simulation activity including, at least, data importing and staging, data pre-processing and input preparation, code execution, data storage, post-processing and exporting. The overall aim is that between the catalog and the main data archive, the system would hold a complete and accessible description of the data, all of its attributes and the processes used to generate the data. The catalog will describe data collections, including those representing simulation workflows as well as any other useful groupings. Finally it would be populated with user supplied comments to explain the motivation and results of any activity documented by the catalog.

  17. Engineering the ATLAS TAG Browser

    International Nuclear Information System (INIS)

    Zhang Qizhi

    2011-01-01

    ELSSI is a web-based event metadata (TAG) browser and event-level selection service for ATLAS. In this paper, we describe some of the challenges encountered in the process of developing ELSSI, and the software engineering strategies adopted to address those challenges. Approaches to management of access to data, browsing, data rendering, query building, query validation, execution, connection management, and communication with auxiliary services are discussed. We also describe strategies for dealing with data that may vary over time, such as run-dependent trigger decision decoding. Along with examples, we illustrate how programming techniques in multiple languages (PHP, JAVASCRIPT, XML, AJAX, and PL/SQL) have been blended to achieve the required results. Finally, we evaluate features of the ELSSI service in terms of functionality, scalability, and performance.
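
    For readers unfamiliar with TAG-style selection, the sketch below shows the general pattern a query builder produces: per-event attributes in a relational table, filtered by user-specified cuts. SQLite is used purely for illustration (the real service runs against Oracle), and the column names and values are invented, not the ATLAS TAG schema.

        # Sketch of an event-level selection over TAG-style metadata (illustrative).
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("""CREATE TABLE tags (
            run INTEGER, event INTEGER, n_muons INTEGER, missing_et REAL)""")
        conn.executemany("INSERT INTO tags VALUES (?, ?, ?, ?)", [
            (152166, 1001, 2, 54.2),
            (152166, 1002, 0, 12.7),
            (152166, 1003, 1, 88.9),
        ])

        # A query builder would translate user cuts into a WHERE clause like this one.
        cuts = "n_muons >= 1 AND missing_et > 50.0"
        for run, event in conn.execute(f"SELECT run, event FROM tags WHERE {cuts}"):
            print(run, event)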

  18. Asymmetric Programming: A Highly Reliable Metadata Allocation Strategy for MLC NAND Flash Memory-Based Sensor Systems

    Science.gov (United States)

    Huang, Min; Liu, Zhaoqing; Qiao, Liyan

    2014-01-01

    While the NAND flash memory is widely used as the storage medium in modern sensor systems, the aggressive shrinking of process geometry and an increase in the number of bits stored in each memory cell will inevitably degrade the reliability of NAND flash memory. In particular, it's critical to enhance metadata reliability, which occupies only a small portion of the storage space, but maintains the critical information of the file system and the address translations of the storage system. Metadata damage will cause the system to crash or a large amount of data to be lost. This paper presents Asymmetric Programming, a highly reliable metadata allocation strategy for MLC NAND flash memory storage systems. Our technique exploits for the first time the property of the multi-page architecture of MLC NAND flash memory to improve the reliability of metadata. The basic idea is to keep metadata in most significant bit (MSB) pages which are more reliable than least significant bit (LSB) pages. Thus, we can achieve relatively low bit error rates for metadata. Based on this idea, we propose two strategies to optimize address mapping and garbage collection. We have implemented Asymmetric Programming on a real hardware platform. The experimental results show that Asymmetric Programming can achieve a reduction in the number of page errors of up to 99.05% with the baseline error correction scheme. PMID:25310473

  19. Asymmetric Programming: A Highly Reliable Metadata Allocation Strategy for MLC NAND Flash Memory-Based Sensor Systems

    Directory of Open Access Journals (Sweden)

    Min Huang

    2014-10-01

    Full Text Available While the NAND flash memory is widely used as the storage medium in modern sensor systems, the aggressive shrinking of process geometry and an increase in the number of bits stored in each memory cell will inevitably degrade the reliability of NAND flash memory. In particular, it’s critical to enhance metadata reliability, which occupies only a small portion of the storage space, but maintains the critical information of the file system and the address translations of the storage system. Metadata damage will cause the system to crash or a large amount of data to be lost. This paper presents Asymmetric Programming, a highly reliable metadata allocation strategy for MLC NAND flash memory storage systems. Our technique exploits for the first time the property of the multi-page architecture of MLC NAND flash memory to improve the reliability of metadata. The basic idea is to keep metadata in most significant bit (MSB) pages which are more reliable than least significant bit (LSB) pages. Thus, we can achieve relatively low bit error rates for metadata. Based on this idea, we propose two strategies to optimize address mapping and garbage collection. We have implemented Asymmetric Programming on a real hardware platform. The experimental results show that Asymmetric Programming can achieve a reduction in the number of page errors of up to 99.05% with the baseline error correction scheme.
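
    A toy sketch of the MSB-page steering idea follows: metadata writes take only pages assumed to be MSB pages, while ordinary data takes whatever page comes next. The page layout, the even-index-is-MSB assumption, and the allocation interface are invented for illustration; real MLC layouts and the paper's actual strategies are device specific and more involved.

        # Toy sketch of MSB-page allocation for metadata (all details invented).
        class Block:
            PAGES_PER_BLOCK = 64

            def __init__(self):
                self.next_free = 0

            @staticmethod
            def is_msb(page_index):
                # Assume, for illustration, that even page indices are MSB pages.
                return page_index % 2 == 0

            def allocate(self, is_metadata):
                # Metadata is steered to MSB pages (lower bit error rate);
                # ordinary data takes the next free page regardless of type.
                page = self.next_free
                while page < self.PAGES_PER_BLOCK:
                    if not is_metadata or self.is_msb(page):
                        self.next_free = page + 1
                        return page
                    page += 1
                return None  # block full, or no MSB page left for metadata

        blk = Block()
        print("metadata ->", blk.allocate(is_metadata=True))   # lands on an MSB page
        print("data     ->", blk.allocate(is_metadata=False))  # takes the next free page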

  20. Scalable Metadata Management for a Large Multi-Source Seismic Data Repository

    Energy Technology Data Exchange (ETDEWEB)

    Gaylord, J. M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Dodge, D. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Magana-Zook, S. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Barno, J. G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Knapp, D. R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-04-11

    In this work, we implemented the key metadata management components of a scalable seismic data ingestion framework to address limitations in our existing system, and to position it for anticipated growth in volume and complexity. We began the effort with an assessment of open source data flow tools from the Hadoop ecosystem. We then began the construction of a layered architecture that is specifically designed to address many of the scalability and data quality issues we experience with our current pipeline. This included implementing basic functionality in each of the layers, such as establishing a data lake, designing a unified metadata schema, tracking provenance, and calculating data quality metrics. Our original intent was to test and validate the new ingestion framework with data from a large-scale field deployment in a temporary network. This delivered somewhat unsatisfying results, since the new system immediately identified fatal flaws in the data relatively early in the pipeline. Although this is a correct result it did not allow us to sufficiently exercise the whole framework. We then widened our scope to process all available metadata from over a dozen online seismic data sources to further test the implementation and validate the design. This experiment also uncovered a higher than expected frequency of certain types of metadata issues that challenged us to further tune our data management strategy to handle them. Our result from this project is a greatly improved understanding of real world data issues, a validated design, and prototype implementations of major components of an eventual production framework. This successfully forms the basis of future development for the Geophysical Monitoring Program data pipeline, which is a critical asset supporting multiple programs. It also positions us very well to deliver valuable metadata management expertise to our sponsors, and has already resulted in an NNSA Office of Defense Nuclear Nonproliferation

  1. Parallel file system with metadata distributed across partitioned key-value store c

    Science.gov (United States)

    Bent, John M.; Faibish, Sorin; Grider, Gary; Torres, Aaron

    2017-09-19

    Improved techniques are provided for storing metadata associated with a plurality of sub-files associated with a single shared file in a parallel file system. The shared file is generated by a plurality of applications executing on a plurality of compute nodes. A compute node implements a Parallel Log Structured File System (PLFS) library to store at least one portion of the shared file generated by an application executing on the compute node and metadata for the at least one portion of the shared file on one or more object storage servers. The compute node is also configured to implement a partitioned data store for storing a partition of the metadata for the shared file, wherein the partitioned data store communicates with partitioned data stores on other compute nodes using a message passing interface. The partitioned data store can be implemented, for example, using Multidimensional Data Hashing Indexing Middleware (MDHIM).
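
    The sketch below illustrates the partitioning idea in miniature: per-sub-file metadata keys are hashed to one of several node-local stores. Plain Python dictionaries stand in for MDHIM and the MPI transport, and the key format and record fields are assumptions; this is not the patented implementation.

        # Sketch of hash-partitioned metadata for sub-files of a shared file.
        import hashlib

        NUM_PARTITIONS = 4
        partitions = [dict() for _ in range(NUM_PARTITIONS)]

        def partition_for(key):
            # Hash the key to pick which node-local store owns it.
            digest = hashlib.md5(key.encode()).hexdigest()
            return int(digest, 16) % NUM_PARTITIONS

        def put_metadata(shared_file, rank, offset, length):
            key = f"{shared_file}:{rank}"
            partitions[partition_for(key)][key] = {"offset": offset, "length": length}

        def get_metadata(shared_file, rank):
            key = f"{shared_file}:{rank}"
            return partitions[partition_for(key)].get(key)

        put_metadata("checkpoint.out", rank=17, offset=0, length=1048576)
        print(get_metadata("checkpoint.out", rank=17))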

  2. Standardizing metadata and taxonomic identification in metabarcoding studies

    NARCIS (Netherlands)

    Tedersoo, Leho; Ramirez, Kelly; Nilsson, R; Kaljuvee, Aivi; Koljalg, Urmas; Abarenkov, Kessy

    2015-01-01

    High-throughput sequencing-based metabarcoding studies produce vast amounts of ecological data, but a lack of consensus on standardization of metadata and how to refer to the species recovered severely hampers reanalysis and comparisons among studies. Here we propose an automated workflow covering

  3. The relevance of music information representation metadata from the perspective of expert users

    Directory of Open Access Journals (Sweden)

    Camila Monteiro de Barros

    Full Text Available The general goal of this research was to verify which metadata elements of music information representation are relevant for its retrieval from the perspective of expert music users. Based on a bibliographical research, a comprehensive metadata set of music information representation was developed and transformed into a questionnaire for data collection, which was applied to students and professors of the Graduate Program in Music at the Federal University of Rio Grande do Sul. The results show that the most relevant information for expert music users is related to identification and authorship responsibilities. The respondents from Composition and Interpretative Practice areas agree with these results, while the respondents from Musicology/Ethnomusicology and Music Education areas also consider the metadata related to the historical context of composition relevant.

  4. Preliminary structural integrity evaluations for the elevated temperature piping of the SFR IHTS against typical level a service events

    International Nuclear Information System (INIS)

    Park, Chang-Gyu; Kim, Jong-Bum; Lee, Jae-Han

    2009-01-01

    The SFR adopts an IHTS (Intermediate Heat Transport System) to prevent interaction between the radioactive primary sodium and the SG (Steam Generator) water. The IHTS hot leg piping connecting the IHX (Intermediate Heat eXchanger) to the SG of a 1200 MWe pool-type SFR is the component studied here. The ASME Boiler and Pressure Vessel Code Subsection NB provides rules for the design and analysis of Class 1 components. For elevated temperature service, ASME Subsection NH provides rules for the design and analysis of Class 1 components, but special rules for piping components have not yet been provided. Therefore, the design and analysis of the IHTS hot leg piping shall comply with the design-by-analysis requirements of Subsection NH. The piping layout is proposed by considering the reactor component layout and the reactor building space, and the structural integrity is evaluated for two typical types of operating events in this study. Cycle type 1 (CT-1) represents the refueling cycle event, with a temperature history from the refueling temperature to the normal operating temperature via the hot standby temperature. Cycle type 2 (CT-2) is a daily load-follow operation. The structural integrity is evaluated by considering the enveloped CT-1 and CT-2 operating events per the ASME Subsection NH procedures. The SIE ASME-NH computer program, which has been developed to implement the ASME Subsection NH rules, is used for the structural integrity evaluation, utilizing the finite element analysis results. (author)

  5. Practical management of heterogeneous neuroimaging metadata by global neuroimaging data repositories.

    Science.gov (United States)

    Neu, Scott C; Crawford, Karen L; Toga, Arthur W

    2012-01-01

    Rapidly evolving neuroimaging techniques are producing unprecedented quantities of digital data at the same time that many research studies are evolving into global, multi-disciplinary collaborations between geographically distributed scientists. While networked computers have made it almost trivial to transmit data across long distances, collecting and analyzing this data requires extensive metadata if the data is to be maximally shared. Though it is typically straightforward to encode text and numerical values into files and send content between different locations, it is often difficult to attach context and implicit assumptions to the content. As the number of and geographic separation between data contributors grows to national and global scales, the heterogeneity of the collected metadata increases and conformance to a single standardization becomes implausible. Neuroimaging data repositories must then not only accumulate data but must also consolidate disparate metadata into an integrated view. In this article, using specific examples from our experiences, we demonstrate how standardization alone cannot achieve full integration of neuroimaging data from multiple heterogeneous sources and why a fundamental change in the architecture of neuroimaging data repositories is needed instead.

  6. CMEMS (Copernicus Marine Environment Monitoring Service) In Situ Thematic Assembly Centre: A service for operational Oceanography

    Science.gov (United States)

    Manzano Muñoz, Fernando; Pouliquen, Sylvie; Petit de la Villeon, Loic; Carval, Thierry; Loubrieu, Thomas; Wedhe, Henning; Sjur Ringheim, Lid; Hammarklint, Thomas; Tamm, Susanne; De Alfonso, Marta; Perivoliotis, Leonidas; Chalkiopoulos, Antonis; Marinova, Veselka; Tintore, Joaquin; Troupin, Charles

    2016-04-01

    Copernicus, previously known as GMES (Global Monitoring for Environment and Security), is the European Programme for the establishment of a European capacity for Earth Observation and Monitoring. Copernicus aims to provide a sustainable service for Ocean Monitoring and Forecasting validated and commissioned by users. Since May 2015, the Copernicus Marine Environment Monitoring Service (CMEMS) has been working in operational mode through a contract with service engagement (the result being regular data provision). Within CMEMS, the In Situ Thematic Assembly Centre (INSTAC) distributed service integrates in situ data from different sources for operational oceanography needs. CMEMS INSTAC collects data from providers outside Copernicus (national and international networks) and carries out quality control in a homogeneous manner, to fit the needs of internal and external users. CMEMS INSTAC has been organized into 7 regional Dissemination Units (DUs), relying on the EuroGOOS ROOSes. Each DU aggregates data and metadata provided by a series of Production Units (PUs) acting as an interface for providers. Homogeneity and standardization are key features to ensure a coherent and efficient service. All DUs provide data in the OceanSITES NetCDF format 1.2 (based on NetCDF 3.6), which is CF compliant, relies on SeaDataNet vocabularies and is able to handle profile and time-series measurements. All the products, both near real-time (NRT) and multi-year (REP), are available online for every CMEMS registered user through an FTP service. On top of the FTP service, INSTAC products are available through Oceanotron, an open-source data server dedicated to the dissemination of marine observations. It provides services such as aggregation on spatio-temporal coordinates and observed parameters, and subsetting on observed parameters and metadata. The accuracy of the data is checked on various levels. Quality control procedures are applied for the validity of the data and correctness tests for the
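
    As a rough sketch of how a CF-compliant OceanSITES-style NetCDF product could be inspected, the Python snippet below uses the netCDF4 library; the file name and the variable names TIME and TEMP are assumptions based on common OceanSITES conventions, not a specific CMEMS product.

```python
from netCDF4 import Dataset  # pip install netCDF4

# Hypothetical file name; OceanSITES platform files follow a similar pattern.
with Dataset("GL_TS_MO_example.nc") as ds:
    # Global attributes carry much of the CF / OceanSITES metadata.
    print("Conventions:", getattr(ds, "Conventions", "n/a"))
    print("Variables:", list(ds.variables))

    # Variable names such as TEMP are typical but not guaranteed.
    if "TEMP" in ds.variables:
        temp = ds.variables["TEMP"]
        print("units:", getattr(temp, "units", "n/a"))
        print("first values:", temp[:5])
```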

  7. The Effect of Supply Disruptions on Customer Service Levels: a Case for Delivering Fertilizer Products using Maritime Transportation

    Science.gov (United States)

    Siswanto, N.; Kurniawati, U.; Wiratno, S. E.; Rusdiansyah, A.

    2018-04-01

    Delivering a product to customers involves a series of activities: it starts with production and continues with transporting the product to the customers. However, uncontrollable and undesirable disruptions can occur during delivery, either at the production facility/supply side or in the process of transporting the product. Much research has been conducted on product delivery, but little of it considers these disruptions, although disruptions have negative impacts on a company such as reduced profit, unbalanced inventory, and damage to its reputation. This research focuses on the effect of supply disruption on customer service levels in a maritime transportation problem, in order to maintain inventory levels in both the supply and destination warehouses during a predetermined planning horizon. The system considered consists of one loading port and two discharge ports distributing one product. Using discrete event simulation, the results show that a supply disruption produces unbalanced inventory in the destination warehouses and therefore also affects the company's service level. There is a significant decrease in both the delivery service level, about 14%, and the production service level, about 15%, when the disruption occurs. A scenario that increases the production rate is simulated to improve the service level.
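
    As a highly simplified illustration of how a supply disruption erodes the fill-rate service level (not the authors' simulation model), the following toy discrete-time simulation skips shipments during a disruption window; all quantities and parameters are invented.

```python
import random

random.seed(42)

def simulate(days=360, daily_demand=100, ship_size=700, ship_interval=7,
             disruption_start=120, disruption_length=21):
    """Toy single-product, single-destination simulation.

    A shipment of `ship_size` units arrives every `ship_interval` days,
    except during the supply disruption window. Service level is the
    fraction of demand filled from on-hand inventory (fill rate).
    """
    inventory = ship_size
    filled = demanded = 0
    for day in range(days):
        disrupted = disruption_start <= day < disruption_start + disruption_length
        if day % ship_interval == 0 and not disrupted:
            inventory += ship_size
        demand = random.randint(int(0.8 * daily_demand), int(1.2 * daily_demand))
        demanded += demand
        served = min(demand, inventory)
        filled += served
        inventory -= served
    return filled / demanded

print(f"service level without disruption: {simulate(disruption_length=0):.2%}")
print(f"service level with 3-week disruption: {simulate():.2%}")
```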

  8. Observed Benefits to On-site Medical Services during an Annual 5-day Electronic Dance Music Event with Harm Reduction Services.

    Science.gov (United States)

    Munn, Matthew Brendan; Lund, Adam; Golby, Riley; Turris, Sheila A

    2016-04-01

    With increasing attendance and media attention, large-scale electronic dance music events (EDMEs) are a subset of mass gatherings that have a unique risk profile for attendees and promoters. Shambhala Music Festival (Canada) is a multi-day event in a rural setting with a recognized history of providing harm reduction (HR) services alongside medical care. Study/Objective This manuscript describes the medical response at a multi-day electronic music festival where on-site HR interventions and dedicated medical care are delivered as parallel public health measures. This study was a descriptive case report. Medical encounters and event-related data were documented prospectively using an established event registry database. In 2014, Shambhala Music Festival had 67,120 cumulative attendees over a 7-day period, with a peak daily attendance of 15,380 people. There were 1,393 patient encounters and the patient presentation rate (PPR) was 20.8 per one thousand. The majority of these (90.9%) were for non-urgent complaints. The ambulance transfer rate (ATR) was 0.194 per one thousand and 0.93% of patient encounters were transferred by ambulance. No patients required intubation and there were no fatalities. Harm reduction services included mobile outreach teams, distribution of educational materials, pill checking facilities, a dedicated women's space, and a "Sanctuary" area that provided non-medical peer support for overwhelmed guests. More than 10,000 encounters were recorded by mobile and booth-based preventive and educational services, and 2,786 pills were checked on-site with a seven percent discard rate. Dedicated medical and HR services represent two complementary public health strategies to minimize risk at a multi-day electronic music festival. The specific extent to which HR strategies reduce the need for medical care is not well understood. Incorporation of HR practices when planning on-site medical care has the potential to inform patient management, reduce

  9. ETICS meta-data software editing - from check out to commit operations

    International Nuclear Information System (INIS)

    Begin, M-E; Sancho, G D-A; Ronco, S D; Gentilini, M; Ronchieri, E; Selmi, M

    2008-01-01

    People involved in modular projects need to improve the software build process, planning the correct execution order and detecting circular dependencies. The lack of suitable tools may cause delays in the development, deployment and maintenance of the software. Experience in such projects has shown that the use of version control and build systems alone is not able to support the development of the software efficiently, due to a large number of errors, each of which breaks the build process. Common causes of errors are, for example, the adoption of new libraries, library incompatibilities, or the extension of the current project to support new software modules. In this paper, we describe a possible solution implemented in ETICS, an integrated infrastructure for the automated configuration, build and test of Grid and distributed software. ETICS has defined meta-data software abstractions, from which it is possible to download, build and test software projects, setting for instance dependencies, environment variables and properties. Furthermore, the meta-data information is managed by ETICS following the version control system philosophy, with a meta-data repository and a list of operations such as check out and commit. All the information related to a specific software component is stored in the repository only when it is considered to be correct. By means of this solution, we introduce flexibility into the ETICS system, allowing users to work according to their needs. Moreover, by introducing this functionality, ETICS will behave like a version control system for the management of the meta-data
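
    As a minimal sketch of the check-out/commit workflow described above (not the ETICS code itself), the Python snippet below hands out a private working copy of module metadata and only stores it back after a validation step; all class names, fields, and the module name are invented.

```python
import copy

class MetadataRepository:
    """Minimal illustration of check-out / commit semantics for module metadata.

    Only validated metadata ends up in the repository, mirroring the idea that
    information is stored only when it is considered to be correct.
    """

    def __init__(self):
        self._store = {}   # module name -> committed metadata

    def checkout(self, module: str) -> dict:
        # Hand back a private working copy so edits don't leak into the store.
        return copy.deepcopy(self._store.get(module, {"dependencies": [], "env": {}}))

    def commit(self, module: str, metadata: dict) -> None:
        self._validate(metadata)
        self._store[module] = copy.deepcopy(metadata)

    @staticmethod
    def _validate(metadata: dict) -> None:
        if not isinstance(metadata.get("dependencies"), list):
            raise ValueError("dependencies must be a list")

repo = MetadataRepository()
meta = repo.checkout("example.module")          # hypothetical module name
meta["dependencies"].append("boost >= 1.33")
meta["env"]["BUILD_TYPE"] = "Release"
repo.commit("example.module", meta)
```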

  10. National Weather Service (NWS) Station Information System (SIS), Version 2

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — National Weather Service (NWS) Station Information System (SIS) contains observing station metadata from November 2016 to present. These renditions are used for...

  11. Biomedical word sense disambiguation with ontologies and metadata: automation meets accuracy

    Directory of Open Access Journals (Sweden)

    Hakenberg Jörg

    2009-01-01

    Full Text Available Abstract Background Ontology term labels can be ambiguous and have multiple senses. While this is no problem for human annotators, it is a challenge to automated methods, which identify ontology terms in text. Classical approaches to word sense disambiguation use co-occurring words or terms. However, most treat ontologies as simple terminologies, without making use of the ontology structure or the semantic similarity between terms. Another useful source of information for disambiguation is metadata. Here, we systematically compare three approaches to word sense disambiguation, which use ontologies and metadata, respectively. Results The 'Closest Sense' method assumes that the ontology defines multiple senses of the term. It computes the shortest path of co-occurring terms in the document to one of these senses. The 'Term Cooc' method defines a log-odds ratio for co-occurring terms including co-occurrences inferred from the ontology structure. The 'MetaData' approach trains a classifier on metadata. It does not require any ontology, but requires training data, which the other methods do not. To evaluate these approaches we defined a manually curated training corpus of 2600 documents for seven ambiguous terms from the Gene Ontology and MeSH. All approaches over all conditions achieve 80% success rate on average. The 'MetaData' approach performed best with 96%, when trained on high-quality data. Its performance deteriorates as quality of the training data decreases. The 'Term Cooc' approach performs better on Gene Ontology (92% success) than on MeSH (73% success), as MeSH is not a strict is-a/part-of, but rather a loose is-related-to hierarchy. The 'Closest Sense' approach achieves on average 80% success rate. Conclusion Metadata is valuable for disambiguation, but requires high quality training data. Closest Sense requires no training, but a large, consistently modelled ontology, which are two opposing conditions. Term Cooc achieves greater 90
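
    The exact scoring used by the 'Term Cooc' method is not spelled out in the abstract; as a generic, hedged illustration, the Python snippet below computes a smoothed log-odds ratio indicating how much more often a term co-occurs with documents of one sense than with documents of the alternative sense; all counts are invented.

```python
import math

def log_odds_ratio(cooc_with_sense, total_sense, cooc_with_other, total_other, alpha=0.5):
    """Generic smoothed log-odds ratio for a term's association with one sense
    versus the alternative sense (illustrative, not the paper's exact formula)."""
    p = (cooc_with_sense + alpha) / (total_sense + 2 * alpha)
    q = (cooc_with_other + alpha) / (total_other + 2 * alpha)
    return math.log((p / (1 - p)) / (q / (1 - q)))

# Toy counts: the term "nucleus" appears in 40 of 120 documents annotated with
# the cellular sense, but only 3 of 150 documents with the anatomical sense.
print(round(log_odds_ratio(40, 120, 3, 150), 2))
```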

  12. Business Event Notification Service (BENS)

    Data.gov (United States)

    Department of Veterans Affairs — BENS provides a notification of pre-defined business events to applications, portals, and automated business processes. Such events are defined in the Event Catalog,...

  13. Metadata Quality Improvement : DASISH deliverable 5.2A

    NARCIS (Netherlands)

    L'Hours, Hervé; Offersgaard, Lene; Wittenberg, M.; Wloka, Bartholomäus

    2014-01-01

    The aim of this task was to analyse and compare the different metadata strategies of CLARIN, DARIAH and CESSDA, and to identify possibilities for cross-fertilization, taking advantage of each other's solutions where possible. To have a better understanding in which stages of the research lifecycle

  14. Event Displays for the Visualization of CMS Events

    CERN Document Server

    Jones, Christopher Duncan

    2010-01-01

    During the last year the CMS experiment engaged in consolidation of its existing event display programs. The core of the new system is based on the Fireworks event display program which was by-design directly integrated with the CMS Event Data Model (EDM) and the light version of the software framework (FWLite). The Event Visualization Environment (EVE) of the ROOT framework is used to manage a consistent set of 3D and 2D views, selection, user-feedback and user-interaction with the graphics windows; several EVE components were developed by CMS in collaboration with the ROOT project. In event display operation simple plugins are registered into the system to perform conversion from EDM collections into their visual representations which are then managed by the application. Full event navigation and filtering as well as collection-level filtering is supported. The same data-extraction principle can also be applied when Fireworks will eventually operate as a service within the full software framework.

  15. Event Display for the Visualization of CMS Events

    Science.gov (United States)

    Bauerdick, L. A. T.; Eulisse, G.; Jones, C. D.; Kovalskyi, D.; McCauley, T.; Mrak Tadel, A.; Muelmenstaedt, J.; Osborne, I.; Tadel, M.; Tu, Y.; Yagil, A.

    2011-12-01

    During the last year the CMS experiment engaged in consolidation of its existing event display programs. The core of the new system is based on the Fireworks event display program which was by-design directly integrated with the CMS Event Data Model (EDM) and the light version of the software framework (FWLite). The Event Visualization Environment (EVE) of the ROOT framework is used to manage a consistent set of 3D and 2D views, selection, user-feedback and user-interaction with the graphics windows; several EVE components were developed by CMS in collaboration with the ROOT project. In event display operation simple plugins are registered into the system to perform conversion from EDM collections into their visual representations which are then managed by the application. Full event navigation and filtering as well as collection-level filtering is supported. The same data-extraction principle can also be applied when Fireworks will eventually operate as a service within the full software framework.

  16. Event Display for the Visualization of CMS Events

    International Nuclear Information System (INIS)

    Bauerdick, L A T; Eulisse, G; Jones, C D; McCauley, T; Osborne, I; Kovalskyi, D; Tadel, A Mrak; Muelmenstaedt, J; Tadel, M; Tu, Y; Yagil, A

    2011-01-01

    During the last year the CMS experiment engaged in consolidation of its existing event display programs. The core of the new system is based on the Fireworks event display program which was by-design directly integrated with the CMS Event Data Model (EDM) and the light version of the software framework (FWLite). The Event Visualization Environment (EVE) of the ROOT framework is used to manage a consistent set of 3D and 2D views, selection, user-feedback and user-interaction with the graphics windows; several EVE components were developed by CMS in collaboration with the ROOT project. In event display operation simple plugins are registered into the system to perform conversion from EDM collections into their visual representations which are then managed by the application. Full event navigation and filtering as well as collection-level filtering is supported. The same data-extraction principle can also be applied when Fireworks will eventually operate as a service within the full software framework.

  17. Active Construction of Profession-Related Events: The Priming Effect among Pre-service Teachers with Different Professional Identity.

    Science.gov (United States)

    Wang, Xin-Qiang; Zhu, Jun-Cheng; Liu, Lu; Chen, Xiang-Yu; Huo, Jun-Yu

    2018-01-01

    Pre-service teachers with different professional identity may actively construct different subjective profession-related events based on the same objective profession-related events. To explore the priming effect among pre-service teachers with different professional identity, this study examined the effect of positive, negative, or neutral priming sentences in an individualized narration of profession-related events through a priming paradigm. Forty-two female volunteers were asked to complete positive, negative, and neutral priming sentences describing profession-related events. The results showed that, relative to those with weak professional identity, participants with strong professional identity generated a higher number of positive items when primed with different stimuli and displayed greater positive priming bias for positive and neutral stimuli. In addition, relative to those with strong professional identity, participants with weak professional identity generated a higher number of neutral and negative items when primed with positive and negative stimuli, respectively, and displayed greater negative priming bias toward negative stimuli. These results indicate that pre-service teachers with strong professional identity were likely to have established positive self-schemas involving profession-related events, which facilitated active, positive construction of such events.

  18. Active Construction of Profession-Related Events: The Priming Effect among Pre-service Teachers with Different Professional Identity

    Directory of Open Access Journals (Sweden)

    Xin-qiang Wang

    2018-02-01

    Full Text Available Pre-service teachers with different professional identity may actively construct different subjective profession-related events based on the same objective profession-related events. To explore the priming effect among pre-service teachers with different professional identity, this study examined the effect of positive, negative, or neutral priming sentences in an individualized narration of profession-related events through a priming paradigm. Forty-two female volunteers were asked to complete positive, negative, and neutral priming sentences describing profession-related events. The results showed that, relative to those with weak professional identity, participants with strong professional identity generated a higher number of positive items when primed with different stimuli and displayed greater positive priming bias for positive and neutral stimuli. In addition, relative to those with strong professional identity, participants with weak professional identity generated a higher number of neutral and negative items when primed with positive and negative stimuli, respectively, and displayed greater negative priming bias toward negative stimuli. These results indicate that pre-service teachers with strong professional identity were likely to have established positive self-schemas involving profession-related events, which facilitated active, positive construction of such events.

  19. Let Our Powers Combine! Harnessing NASA's Earth Observatory Natural Event Tracker (EONET) in Worldview

    Science.gov (United States)

    Wong, Min Minnie; Ward, Kevin; Boller, Ryan; Gunnoe, Taylor; Baynes, Kathleen; King, Benjamin

    2016-01-01

    Constellations of NASA Earth Observing System (EOS) satellites orbit the earth to collect images and data about the planet in near real-time. Within hours of satellite overpass, you can discover where the latest wildfires, severe storms, volcanic eruptions, and dust and haze events are occurring using NASA's Worldview web application. By harnessing a repository of curated natural event metadata from NASA Earth Observatory's Natural Event Tracker (EONET), Worldview has moved natural event discovery to the forefront: it allows users to select events of interest from a curated list, zooms to the area, and adds the most relevant imagery layers for that type of natural event. This poster will highlight NASA Worldview's new natural event feed functionality.
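
    As a hedged sketch of how the EONET event feed could be queried, the Python snippet below requests recent open events; the endpoint version and the `status`/`limit` parameters are assumptions based on the public EONET API documentation and may differ from what Worldview itself uses.

```python
import requests

# Endpoint and parameters are assumptions; see https://eonet.gsfc.nasa.gov/docs
# for the current API version.
EONET_EVENTS = "https://eonet.gsfc.nasa.gov/api/v3/events"

resp = requests.get(EONET_EVENTS, params={"status": "open", "limit": 5}, timeout=30)
resp.raise_for_status()

for event in resp.json().get("events", []):
    categories = ", ".join(c.get("title", "?") for c in event.get("categories", []))
    print(f"{event.get('title')} [{categories}]")
```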

  20. Benefits of Record Management For Scientific Writing (Study of Metadata Reception of Zotero Reference Management Software in UIN Malang)

    Directory of Open Access Journals (Sweden)

    Moch Fikriansyah Wicaksono

    2018-01-01

    Full Text Available Record creation and management by individuals and organizations is growing rapidly, particularly with the change from print to electronic records and their smallest part, metadata. There is therefore a need to manage record metadata, particularly for students who need to record references and citations. Reference management software (RMS) is software that supports reference management; one example is Zotero. The purpose of this article is to describe the benefits of record management for the writing of scientific papers by students, especially in the biology study program at UIN Malik Ibrahim Malang. The research is descriptive with a quantitative approach, supplemented by interviews to increase the depth of respondents' answers. The population is 322 students from the classes of 2012 to 2014, sampled randomly. These criteria were chosen because the introduction and use of the reference management software Zotero started three years earlier. The 80 respondents in this study were obtained from the Yamane formula. The results show that 70% agreed that using reference management software saved time and energy in managing digital file metadata, 71% agreed that digital metadata can be quickly stored in the RMS, 65% agreed on the ease of storing metadata in the reference management software, 70% agreed that it was easy to configure metadata for quotations and bibliographies, 56.6% agreed that the metadata stored in the reference management software could be edited, and 73.8% agreed that using metadata makes it easier to write quotations and bibliographies.

  1. CCR+: Metadata Based Extended Personal Health Record Data Model Interoperable with the ASTM CCR Standard.

    Science.gov (United States)

    Park, Yu Rang; Yoon, Young Jo; Jang, Tae Hun; Seo, Hwa Jeong; Kim, Ju Han

    2014-01-01

    Extension of the standard model while retaining compliance with it is a challenging issue because there is currently no method for semantically or syntactically verifying an extended data model. A metadata-based extended model, named CCR+, was designed and implemented to achieve interoperability between standard and extended models. Furthermore, a multilayered validation method was devised to validate the standard and extended models. The American Society for Testing and Materials (ASTM) Continuity of Care Record (CCR) standard was selected to evaluate the CCR+ model; two CCR and one CCR+ XML files were evaluated. In total, 188 metadata items were extracted from the ASTM CCR standard; these metadata are semantically interconnected and registered in the metadata registry. An extended-data-model-specific validation file was generated from these metadata. This file can be used in a smartphone application (Health Avatar CCR+) as part of a multilayered validation. The new CCR+ model was successfully evaluated via a patient-centric exchange scenario involving multiple hospitals, with the results supporting both syntactic and semantic interoperability between the standard CCR and the extended CCR+ model. A feasible method for delivering an extended model that complies with the standard model is presented herein. There is a great need to extend static standard models such as the ASTM CCR in various domains; the methods presented here represent an important reference for achieving interoperability between standard and extended models.

  2. The evolution of chondrichthyan research through a metadata ...

    African Journals Online (AJOL)

    We compiled metadata from Sharks Down Under (1991) and the two Sharks International conferences (2010 and 2014), spanning 23 years. Analysis of the data highlighted taxonomic biases towards charismatic species, a declining number of studies in fundamental science such as those related to taxonomy and basic life ...

  3. Scaling the walls of discovery: using semantic metadata for integrative problem solving.

    Science.gov (United States)

    Manning, Maurice; Aggarwal, Amit; Gao, Kevin; Tucker-Kellogg, Greg

    2009-03-01

    Current data integration approaches by bioinformaticians frequently involve extracting data from a wide variety of public and private data repositories, each with a unique vocabulary and schema, via scripts. These separate data sets must then be normalized through the tedious and lengthy process of resolving naming differences and collecting information into a single view. Attempts to consolidate such diverse data using data warehouses or federated queries add significant complexity and have shown limitations in flexibility. The alternative of complete semantic integration of data requires a massive, sustained effort in mapping data types and maintaining ontologies. We focused instead on creating a data architecture that leverages semantic mapping of experimental metadata, to support the rapid prototyping of scientific discovery applications with the twin goals of reducing architectural complexity while still leveraging semantic technologies to provide flexibility, efficiency and more fully characterized data relationships. A metadata ontology was developed to describe our discovery process. A metadata repository was then created by mapping metadata from existing data sources into this ontology, generating RDF triples to describe the entities. Finally an interface to the repository was designed which provided not only search and browse capabilities but complex query templates that aggregate data from both RDF and RDBMS sources. We describe how this approach (i) allows scientists to discover and link relevant data across diverse data sources and (ii) provides a platform for development of integrative informatics applications.
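
    As a minimal sketch of mapping experimental metadata into RDF triples, in the spirit of the repository described above (not the authors' actual ontology), the Python snippet below uses rdflib with a made-up namespace, experiment identifier, and properties.

```python
from rdflib import Graph, Literal, Namespace, RDF, URIRef

# Hypothetical namespace standing in for a discovery-process ontology.
EX = Namespace("http://example.org/discovery#")

g = Graph()
g.bind("ex", EX)

experiment = URIRef(EX["experiment/EXP-0042"])
g.add((experiment, RDF.type, EX.Experiment))
g.add((experiment, EX.assayType, Literal("RNA-seq")))
g.add((experiment, EX.cellLine, Literal("HepG2")))
g.add((experiment, EX.sourceRepository, URIRef("http://example.org/repos/lims")))

print(g.serialize(format="turtle"))
```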

  4. Combined use of semantics and metadata to manage Research Data Life Cycle in Environmental Sciences

    Science.gov (United States)

    Aguilar Gómez, Fernando; de Lucas, Jesús Marco; Pertinez, Esther; Palacio, Aida

    2017-04-01

    The use of metadata to contextualize datasets is quite widespread in Earth System Sciences. There are some initiatives and available tools to help data managers choose the metadata standard that best fits their use cases, like the DCC Metadata Directory (http://www.dcc.ac.uk/resources/metadata-standards). In our use case, we have been gathering physical, chemical and biological data from a water reservoir since 2010. A well-defined metadata description is crucial not only to contextualize our own data but also to integrate datasets from other sources like satellites or meteorological agencies. That is why we have chosen EML (Ecological Metadata Language), which integrates many different elements to define a dataset, including the project context, instrumentation and parameter definitions, and the software used to process the data, provide quality controls and include the publication details. Those metadata elements can help both humans and machines to understand and process the dataset. However, the use of metadata is not enough to fully support the data life cycle, from the Data Management Plan definition to Publication and Re-use. To do so, we need to define not only metadata and attributes but also the relationships between them, so semantics are needed. Ontologies, being a knowledge representation, can help define the elements of a research data life cycle, including the DMP, datasets, software, etc. They can also define how the different elements are related to each other and how they interact. The first advantage of developing an ontology of a knowledge domain is that it provides a common vocabulary hierarchy (i.e. a conceptual schema) that can be used and standardized by all the agents interested in the domain (either humans or machines). This way of using ontologies is one of the bases of the Semantic Web, where ontologies are set to play a key role in establishing a common terminology between agents. To develop an ontology we are using a graphical tool

  5. PhysiomeSpace: digital library service for biomedical data.

    Science.gov (United States)

    Testi, Debora; Quadrani, Paolo; Viceconti, Marco

    2010-06-28

    Every research laboratory has a wealth of biomedical data locked up, which, if shared with other experts, could dramatically improve biomedical and healthcare research. With the PhysiomeSpace service, it is now possible with a few clicks to share with selected users biomedical data in an easy, controlled and safe way. The digital library service is managed using a client-server approach. The client application is used to import, fuse and enrich the data information according to the PhysiomeSpace resource ontology and upload/download the data to the library. The server services are hosted on the Biomed Town community portal, where through a web interface, the user can complete the metadata curation and share and/or publish the data resources. A search service capitalizes on the domain ontology and on the enrichment of metadata for each resource, providing a powerful discovery environment. Once the users have found the data resources they are interested in, they can add them to their basket, following a metaphor popular in e-commerce web sites. When all the necessary resources have been selected, the user can download the basket contents into the client application. The digital library service is now in beta and open to the biomedical research community.

  6. Single-Event Transgene Product Levels Predict Levels in Genetically Modified Breeding Stacks.

    Science.gov (United States)

    Gampala, Satyalinga Srinivas; Fast, Brandon J; Richey, Kimberly A; Gao, Zhifang; Hill, Ryan; Wulfkuhle, Bryant; Shan, Guomin; Bradfisch, Greg A; Herman, Rod A

    2017-09-13

    The concentration of transgene products (proteins and double-stranded RNA) in genetically modified (GM) crop tissues is measured to support food, feed, and environmental risk assessments. Measurement of transgene product concentrations in breeding stacks of previously assessed and approved GM events is required by many regulatory authorities to evaluate unexpected transgene interactions that might affect expression. Research was conducted to determine how well concentrations of transgene products in single GM events predict levels in breeding stacks composed of these events. The concentrations of transgene products were compared between GM maize, soybean, and cotton breeding stacks (MON-87427 × MON-89034 × DAS-Ø15Ø7-1 × MON-87411 × DAS-59122-7 × DAS-40278-9 corn, DAS-81419-2 × DAS-44406-6 soybean, and DAS-21023-5 × DAS-24236-5 × SYN-IR102-7 × MON-88913-8 × DAS-81910-7 cotton) and their component single events (MON-87427, MON-89034, DAS-Ø15Ø7-1, MON-87411, DAS-59122-7, and DAS-40278-9 corn, DAS-81419-2, and DAS-44406-6 soybean, and DAS-21023-5, DAS-24236-5, SYN-IR102-7, MON-88913-8, and DAS-81910-7 cotton). Comparisons were made within a crop and transgene product across plant tissue types and were also made across transgene products in each breeding stack for grain/seed. Scatter plots were generated comparing expression in the stacks to their component events, and the percent of variability accounted for by the line of identity (y = x) was calculated (coefficient of identity, I²). Results support transgene concentrations in single events predicting similar concentrations in breeding stacks containing the single events. Therefore, food, feed, and environmental risk assessments based on concentrations of transgene products in single GM events are generally applicable to breeding stacks composed of these events.
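
    The abstract does not spell out the formula for the coefficient of identity; one plausible formulation, analogous to R² but with residuals taken about the line y = x, is sketched below in Python. The function name, formulation, and toy concentration values are assumptions for illustration only.

```python
import numpy as np

def coefficient_of_identity(single_event, stack):
    """I²: share of variability in the stack measurements explained by y = x,
    i.e. by assuming stack concentrations equal single-event concentrations.
    (One plausible formulation; the paper's exact definition may differ.)"""
    x = np.asarray(single_event, dtype=float)
    y = np.asarray(stack, dtype=float)
    ss_identity = np.sum((y - x) ** 2)       # residuals about the line y = x
    ss_total = np.sum((y - y.mean()) ** 2)   # total variability of y
    return 1.0 - ss_identity / ss_total

# Toy concentrations for one transgene product across several tissue types.
single = [12.0, 30.5, 48.0, 75.2, 101.0]
stacked = [11.2, 33.0, 45.5, 78.0, 98.5]
print(f"I² = {coefficient_of_identity(single, stacked):.3f}")
```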

  7. A DDI3.2 Style for Data and Metadata Extracted from SAS

    OpenAIRE

    Hoyle, Larry

    2014-01-01

    Earlier work by Wackerow and Hoyle has shown that DDI can be a useful medium for interchange of data and metadata among statistical packages. DDI 3.2 has new features which enhance this capability, such as the ability to use UserAttributePairs to represent custom attributes. The metadata from a statistical package can also be represented in DDI3.2 using several different styles – embedded in a StudyUnit, in a Resource Package, or in a set of Fragments. The DDI Documentation for a Fragment sta...

  8. New Tools to Document and Manage Data/Metadata: Example NGEE Arctic and UrbIS

    Science.gov (United States)

    Crow, M. C.; Devarakonda, R.; Hook, L.; Killeffer, T.; Krassovski, M.; Boden, T.; King, A. W.; Wullschleger, S. D.

    2016-12-01

    Tools used for documenting, archiving, cataloging, and searching data are critical pieces of informatics. This discussion describes tools being used in two different projects at Oak Ridge National Laboratory (ORNL), but at different stages of the data lifecycle. The Metadata Entry and Data Search Tool is being used for the documentation, archival, and data discovery stages for the Next Generation Ecosystem Experiment - Arctic (NGEE Arctic) project while the Urban Information Systems (UrbIS) Data Catalog is being used to support indexing, cataloging, and searching. The NGEE Arctic Online Metadata Entry Tool [1] provides a method by which researchers can upload their data and provide original metadata with each upload. The tool is built upon a Java SPRING framework to parse user input into, and from, XML output. Many aspects of the tool require use of a relational database including encrypted user-login, auto-fill functionality for predefined sites and plots, and file reference storage and sorting. The UrbIS Data Catalog is a data discovery tool supported by the Mercury cataloging framework [2] which aims to compile urban environmental data from around the world into one location, and be searchable via a user-friendly interface. Each data record conveniently displays its title, source, and date range, and features: (1) a button for a quick view of the metadata, (2) a direct link to the data and, for some data sets, (3) a button for visualizing the data. The search box incorporates autocomplete capabilities for search terms and sorted keyword filters are available on the side of the page, including a map for searching by area. References: [1] Devarakonda, Ranjeet, et al. "Use of a metadata documentation and search tool for large data volumes: The NGEE arctic example." Big Data (Big Data), 2015 IEEE International Conference on. IEEE, 2015. [2] Devarakonda, R., Palanisamy, G., Wilson, B. E., & Green, J. M. (2010). Mercury: reusable metadata management, data discovery

  9. Developing Federated Services within Seismology: IRIS' involvement in the CoopEUS Project

    Science.gov (United States)

    Ahern, T. K.; Trabant, C. M.; Stults, M.

    2014-12-01

    As a founding member of the CoopEUS initiative, IRIS Data Services has partnered with five data centers in Europe and UC Berkeley (NCEDC) in the US to implement internationally standardized web services to access seismological data using identical methodologies. The International Federation of Digital Seismograph Networks (FDSN) holds commission status within IASPEI/IUGG and as such is the international body that governs data exchange formats and access protocols within seismology. The CoopEUS project involves IRIS and UNAVCO as part of the EarthScope project, and the European collaborators are all members of the European Plate Observing System (EPOS). CoopEUS includes one work package that attempts to coordinate data access between EarthScope and EPOS facilities. IRIS has worked with its partners in the FDSN to develop and adopt three key international service standards within seismology. These include 1) fdsn-dataselect, a service that returns time series data in a variety of standard formats, 2) fdsn-station, a service that returns related metadata about a seismic station in StationXML format, and 3) fdsn-event, a service that returns information about earthquakes and other seismic events in QuakeML format. Currently the 5 European data centers supporting these services include the ORFEUS Data Centre in the Netherlands, the GFZ German Research Centre for Geosciences in Potsdam, Germany, ETH Zurich in Switzerland, INGV in Rome, Italy, and the RESIF Data Centre in Grenoble, France. Presently these seven centres can all be accessed using standardized web services with identical service calls, and return results in standardized ways. IRIS is developing an IRIS federator that will allow a client to seamlessly access information across the federated centers. Details and current status of the IRIS Federator will be presented.
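
    As a hedged example of calling the fdsn-event service described above, the Python snippet below queries the IRIS data center; the host name, the `format=text` summary output, and the parameter values are assumptions based on the FDSN web service specification and the IRIS documentation, and other federated centers expose the same interface under their own host names.

```python
import requests

# FDSN event web service at the IRIS data center (hostname assumed).
URL = "http://service.iris.edu/fdsnws/event/1/query"

params = {
    "starttime": "2014-01-01",
    "endtime": "2014-01-08",
    "minmagnitude": 6.0,
    "format": "text",   # pipe-delimited summary; omit to receive QuakeML
}

resp = requests.get(URL, params=params, timeout=60)
resp.raise_for_status()
for line in resp.text.splitlines():
    if not line.startswith("#"):
        print(line)
```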

  10. ATLAS High Level Calorimeter Trigger Software Performance for Cosmic Ray Events

    CERN Document Server

    Oliveira Damazio, Denis; The ATLAS collaboration

    2009-01-01

    The ATLAS detector is undergoing intense commissioning effort with cosmic rays preparing for the first LHC collisions next spring. Combined runs with all of the ATLAS subsystems are being taken in order to evaluate the detector performance. This is also a unique opportunity for the trigger system to be studied with different detector operation modes, such as different event rates and detector configurations. The ATLAS trigger starts with a hardware based system which tries to identify detector regions where interesting physics objects may be found (e.g. large energy depositions in the calorimeter system). An approved event will be further processed by more complex software algorithms at the second level where detailed features are extracted (full detector granularity data for small portions of the detector is available). Events accepted at this level will be further processed at the so-called event filter level. Full detector data at full granularity is available for offline-like processing with complete calib...

  11. Statistical Metadata Analysis of the Variability of Latency, Device Transfer Time, and Coordinate Position from Smartphone-Recorded Infrasound Data

    Science.gov (United States)

    Garces, E. L.; Garces, M. A.; Christe, A.

    2017-12-01

    The RedVox infrasound recorder app uses microphones and barometers in smartphones to record infrasound, low-frequency sound below the threshold of human hearing. We study a device's metadata, which includes position, latency time, the differences between the device's internal times and the server times, and the machine time, searching for patterns and possible errors or discontinuities in these scaled parameters. We highlight metadata variability through scaled multivariate displays (histograms, distribution curves, scatter plots), all created and organized through software development in Python. This project is helpful in ascertaining variability and honing the accuracy of smartphones, aiding the emergence of portable devices as viable geophysical data collection instruments. It can also improve the app and cloud service by increasing efficiency and accuracy, allowing better documentation and anticipation of drastic natural movements like tsunamis, earthquakes, volcanic eruptions, storms, rocket launches, and meteor impacts; recorded data can later be used for studies and analysis by a variety of professions. We expect our final results to produce insight on how to counteract problematic issues in data mining and improve accuracy in smartphone data collection. By eliminating lurking variables and minimizing the effect of confounding variables, we hope to discover efficient processes to reduce superfluous precision, unnecessary errors, and data artifacts. These methods should conceivably be transferable to other areas of software development, data analytics, and statistics-based experiments, contributing a precedent of smartphone metadata studies from geophysical rather than societal data. The results should facilitate the rise of civilian-accessible, hand-held, data-gathering mobile sensor networks and yield more straightforward data mining techniques.

  12. Determining Level of Service for Multilane Median Opening Zone

    Science.gov (United States)

    Ali, Paydar; Johnnie, Ben-Edigbe

    2017-08-01

    The road system is a capital-intensive investment, requiring a thorough schematic framework and funding. Roads are built to provide an intrinsic quality of service which satisfies the road users. Roads that provide good services are expected to deliver operational performance that is consistent with their design specifications. Level of service and cumulative percentile speed distribution methods have been used in previous studies to estimate the quality of multilane highway service. Whilst the level of service approach relies on the speed/flow curve, the cumulative percentile speed distribution is based solely on speed. These estimation methods were used in studies carried out in Johor, Malaysia. The aim of the studies is to ascertain the extent of speed reduction caused by midblock U-turn facilities as well as verify which estimation method is more reliable. At selected sites, road segments for both directional flows were divided into free-flow and midblock zones. Traffic volume, speed and vehicle type data for each zone were collected continuously for six weeks. Both estimation methods confirmed that speed reduction would be caused by midblock U-turn facilities. However, the level of service method suggested that the quality of service would improve from level F to E or D in the midblock zone in spite of the speed reduction. Level of service was responding to the traffic volume reduction at the midblock U-turn facility, not the travel speed reduction. The studies concluded that since level of service was more responsive to traffic volume reduction than travel speed, it cannot be solely relied upon when assessing the quality of multilane highway service.

  13. Semantic web technologies for video surveillance metadata

    OpenAIRE

    Poppe, Chris; Martens, Gaëtan; De Potter, Pieterjan; Van de Walle, Rik

    2012-01-01

    Video surveillance systems are growing in size and complexity. Such systems typically consist of integrated modules of different vendors to cope with the increasing demands on network and storage capacity, intelligent video analytics, picture quality, and enhanced visual interfaces. Within a surveillance system, relevant information (like technical details on the video sequences, or analysis results of the monitored environment) is described using metadata standards. However, different module...

  14. Metadata: A user's view

    Energy Technology Data Exchange (ETDEWEB)

    Bretherton, F.P. [Univ. of Wisconsin, Madison, WI (United States)]; Singley, P.T. [Oak Ridge National Lab., TN (United States)]

    1994-12-31

    An analysis is presented of the uses of metadata from four aspects of database operations: (1) search, query, retrieval; (2) ingest, quality control, processing; (3) application-to-application transfer; (4) storage, archive. Typical degrees of database functionality, ranging from simple file retrieval to interdisciplinary global query with metadatabase-user dialog and involving many distributed autonomous databases, are ranked in approximate order of increasing sophistication of the required knowledge representation. An architecture is outlined for implementing such functionality in many different disciplinary domains, utilizing a variety of off-the-shelf database management subsystems and processor software, each specialized to a different abstract data model.

  15. A PROPOSED ORDERING SYSTEM TO IMPROVE THE SERVICE LEVEL IN A HEIJUNKA SYSTEM

    Directory of Open Access Journals (Sweden)

    Sri Hartini

    2012-02-01

    Full Text Available PT X is a distributor serving the demand for spare parts in the Central Java and Yogyakarta (DIY) regions. As a distributor, PT X has a responsibility to its customers, as their supplier, to continually improve customer satisfaction; one measure of customer satisfaction is the service level. Based on company data from January to April 2007, the global service level across all items was still below the service level target set by the company. This study proposes improvements to the forecasting system and the ordering model for the critical items 90915-TE001 and 85214-0A010. To cope with demand fluctuation, the researchers propose a safety stock determined by the magnitude of the demand fluctuation and the target service level to be achieved. The results show that the appropriate forecasting models are Holt Exponential Smoothing for item 90915-TE001 and Single Exponential Smoothing for item 85214-0A010. In addition, calculating the safety stock and MIP based on the proposed models raises the actual average service level to 99.38% under the heijunka system. Keywords: service level, forecasting, heijunka ordering
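
    The abstract does not give the exact forecasting and safety stock procedure; as a rough, textbook-style sketch (not the paper's method), the Python snippet below computes a single exponential smoothing forecast and a safety stock of the form z * sigma * sqrt(lead time) from a target service level; the demand series, smoothing constant, and lead time are invented.

```python
from math import sqrt
from statistics import NormalDist, stdev

def single_exponential_smoothing(series, alpha=0.3):
    """Classic SES: next-period forecast after consuming the whole series."""
    forecast = series[0]
    for x in series[1:]:
        forecast = alpha * x + (1 - alpha) * forecast
    return forecast

def safety_stock(demand_history, service_level=0.95, lead_time_periods=1):
    """Textbook safety stock: z * sigma_demand * sqrt(lead time)."""
    z = NormalDist().inv_cdf(service_level)
    return z * stdev(demand_history) * sqrt(lead_time_periods)

demand = [120, 135, 110, 150, 142, 128, 160, 138]   # toy monthly demand for one item
print("next-period forecast:", round(single_exponential_smoothing(demand), 1))
print("safety stock @95% service level:", round(safety_stock(demand), 1))
```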

  16. Indexing of ATLAS data management and analysis system metadata

    CERN Document Server

    Grigoryeva, Maria; The ATLAS collaboration

    2017-01-01

    This manuscript is devoted to the development of a system to manage the metainformation of modern HENP experiments. The main purpose of the system is to provide scientists with transparent access to the actual and historical metadata related to data analysis, processing and modeling. The system design addresses the following goals: providing a flexible and fast search for metadata on various combinations of keywords, and generating aggregated reports categorized according to selected parameters, such as the studied physical process, scientific topic, physics group, etc. The article presents the architecture of the developed indexing and search system, as well as the results of performance tests. A comparison of query execution speed within the developed system against querying the original relational databases showed that the developed system returns results faster. The new system also allows much more complex search requests than the original storages.

  17. A First-level Event Selector for the CBM Experiment at FAIR

    International Nuclear Information System (INIS)

    Cuveland, J de; Lindenstruth, V

    2011-01-01

    The CBM experiment at the upcoming FAIR accelerator aims to create highest baryon densities in nucleus-nucleus collisions and to explore the properties of super-dense nuclear matter. Event rates of 10 MHz are needed for high-statistics measurements of rare probes, while event selection requires complex global triggers like secondary vertex search. To meet these demands, the CBM experiment uses self-triggered detector front-ends and a data push readout architecture. The First-level Event Selector (FLES) is the central physics selection system in CBM. It receives all hits and performs online event selection on the 1 TByte/s input data stream. The event selection process requires high-throughput event building and full event reconstruction using fast, vectorized track reconstruction algorithms. The current FLES architecture foresees a scalable high-performance computer. To achieve the high throughput and computation efficiency, all available computing devices will have to be used, in particular FPGAs at the first stages of the system and heterogeneous many-core architectures such as CPUs for efficient track reconstruction. A high-throughput network infrastructure and flow control in the system are other key aspects. In this paper, we present the foreseen architecture of the First-level Event Selector.

  18. An Architecture for Automated Fire Detection Early Warning System Based on Geoprocessing Service Composition

    Science.gov (United States)

    Samadzadegan, F.; Saber, M.; Zahmatkesh, H.; Joze Ghazi Khanlou, H.

    2013-09-01

    Rapidly discovering, sharing, integrating and applying geospatial information are key issues in the domain of emergency response and disaster management. Due to the distributed nature of data and processing resources in disaster management, utilizing a Service Oriented Architecture (SOA) to take advantage of a workflow of services provides an efficient, flexible and reliable implementation to encounter different hazardous situations. The implementation specification of the Web Processing Service (WPS) has guided geospatial data processing in a Service Oriented Architecture (SOA) platform to become a widely accepted solution for processing remotely sensed data on the web. This paper presents an architecture design based on OGC web services for an automated workflow for acquiring and processing remotely sensed data, detecting fire and sending notifications to the authorities. A basic architecture and its building blocks for an automated fire detection early warning system are presented using web-based processing of remote sensing imagery utilizing MODIS data. A composition of WPS processes is proposed as a WPS service to extract fire events from MODIS data. Subsequently, the paper highlights the role of WPS as a middleware interface in the domain of geospatial web service technology that can be used to invoke a large variety of geoprocessing operations and chain other web services as an engine of composition. The applicability of the proposed architecture is evaluated with a real-world fire event detection and notification use case. A GeoPortal client was developed with open-source software to manage data, metadata, processes, and authorities. Investigating the feasibility and benefits of the proposed framework shows that it can be used for a wide range of geospatial applications, especially disaster management and environmental monitoring.

  19. AN ARCHITECTURE FOR AUTOMATED FIRE DETECTION EARLY WARNING SYSTEM BASED ON GEOPROCESSING SERVICE COMPOSITION

    Directory of Open Access Journals (Sweden)

    F. Samadzadegan

    2013-09-01

    Full Text Available Rapidly discovering, sharing, integrating and applying geospatial information are key issues in the domain of emergency response and disaster management. Due to the distributed nature of data and processing resources in disaster management, utilizing a Service Oriented Architecture (SOA) to take advantage of a workflow of services provides an efficient, flexible and reliable implementation to encounter different hazardous situations. The implementation specification of the Web Processing Service (WPS) has guided geospatial data processing in a Service Oriented Architecture (SOA) platform to become a widely accepted solution for processing remotely sensed data on the web. This paper presents an architecture design based on OGC web services for an automated workflow for acquiring and processing remotely sensed data, detecting fire and sending notifications to the authorities. A basic architecture and its building blocks for an automated fire detection early warning system are presented using web-based processing of remote sensing imagery utilizing MODIS data. A composition of WPS processes is proposed as a WPS service to extract fire events from MODIS data. Subsequently, the paper highlights the role of WPS as a middleware interface in the domain of geospatial web service technology that can be used to invoke a large variety of geoprocessing operations and chain other web services as an engine of composition. The applicability of the proposed architecture is evaluated with a real-world fire event detection and notification use case. A GeoPortal client was developed with open-source software to manage data, metadata, processes, and authorities. Investigating the feasibility and benefits of the proposed framework shows that it can be used for a wide range of geospatial applications, especially disaster management and environmental monitoring.

  20. Big Earth Data Initiative: Metadata Improvement: Case Studies

    Science.gov (United States)

    Kozimor, John; Habermann, Ted; Farley, John

    2016-01-01

    The Big Earth Data Initiative (BEDI) invests in standardizing and optimizing the collection, management, and delivery of the U.S. Government's civil Earth observation data to improve discovery, access, use, and understanding of Earth observations by the broader user community. Complete and consistent standard metadata helps address all of these goals.

  1. The Geospatial Metadata Manager's Toolbox: Three Techniques for Maintaining Records

    Directory of Open Access Journals (Sweden)

    Bruce Godfrey

    2015-07-01

    Full Text Available Managing geospatial metadata records requires a range of techniques. At the University of Idaho Library, we have tens of thousands of records that need to be maintained, as well as new records that need to be normalized and added to the collections. We show a graphical user interface (GUI) tool that was developed to make simple modifications, a simple XSLT that operates on complex metadata, and a Python script which enables parallel processing to make maintenance tasks more efficient. Throughout, we compare these techniques and discuss when they may be useful.
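
    As a hedged sketch of combining the XSLT and parallel-processing techniques mentioned above (not the library's actual script), the Python snippet below applies a stylesheet to a directory of metadata records using lxml and a process pool; the stylesheet name, record directory, and output naming are placeholders.

```python
from multiprocessing import Pool
from pathlib import Path
from lxml import etree

STYLESHEET = Path("normalize_records.xsl")   # placeholder XSLT that normalizes records
RECORD_DIR = Path("metadata_records")        # placeholder directory of XML records

def transform_record(xml_path: Path) -> str:
    # Build the transform inside the worker: compiled XSLT objects don't pickle.
    transform = etree.XSLT(etree.parse(str(STYLESHEET)))
    result = transform(etree.parse(str(xml_path)))
    out_path = xml_path.with_suffix(".normalized.xml")
    out_path.write_text(str(result))
    return out_path.name

if __name__ == "__main__":
    records = sorted(RECORD_DIR.glob("*.xml"))
    with Pool(processes=4) as pool:
        for name in pool.imap_unordered(transform_record, records):
            print("wrote", name)
```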

  2. Leveraging Python to improve ebook metadata selection, ingest, and management

    Directory of Open Access Journals (Sweden)

    Kelly Thompson

    2017-10-01

    Full Text Available Libraries face many challenges in managing descriptive metadata for ebooks, including quality control, completeness of coverage, and ongoing management. The recent emergence of library management systems that automatically provide descriptive metadata for e-resources activated in system knowledge bases means that ebook management models are moving toward both greater efficiency and more complex implementation and maintenance choices. Automated and data-driven processes for ebook management have always been desirable, but in the current environment, they become necessary. In addition to initial selection of a record source, automation can be applied to quality control processes and ongoing maintenance in order to keep manual, eyes-on work to a minimum while providing the best possible discovery and access. In this article, we describe how we are using Python scripts to address these challenges.
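
    A minimal sketch of the kind of automated quality-control pass described above, assuming the ebook records have been exported to CSV; the field names are hypothetical and would differ for real record sources.

    import csv

    REQUIRED = ("title", "isbn", "url")  # assumed minimum fields for discovery and access

    def audit(path):
        # Flag records that are missing required descriptive metadata.
        problems = []
        with open(path, newline="", encoding="utf-8") as handle:
            for row in csv.DictReader(handle):
                missing = [f for f in REQUIRED if not (row.get(f) or "").strip()]
                if missing:
                    problems.append((row.get("record_id", "?"), missing))
        return problems

    for record_id, missing in audit("ebook_records.csv"):
        print(f"{record_id}: missing {', '.join(missing)}")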

  3. Metafier - a Tool for Annotating and Structuring Building Metadata

    DEFF Research Database (Denmark)

    Holmegaard, Emil; Johansen, Aslak; Kjærgaard, Mikkel Baun

    2017-01-01

    in achieving this goal, but often they work as silos. Improving at scale the energy performance of buildings depends on applications breaking these silos and being portable among buildings. To enable portable building applications, the building instrumentation should be supported by a metadata layer...

  4. SnoVault and encodeD: A novel object-based storage system and applications to ENCODE metadata.

    Directory of Open Access Journals (Sweden)

    Benjamin C Hitz

    Full Text Available The Encyclopedia of DNA elements (ENCODE) project is an ongoing collaborative effort to create a comprehensive catalog of functional elements initiated shortly after the completion of the Human Genome Project. The current database exceeds 6500 experiments across more than 450 cell lines and tissues using a wide array of experimental techniques to study the chromatin structure, regulatory and transcriptional landscape of the H. sapiens and M. musculus genomes. All ENCODE experimental data, metadata, and associated computational analyses are submitted to the ENCODE Data Coordination Center (DCC) for validation, tracking, storage, unified processing, and distribution to community resources and the scientific community. As the volume of data increases, the identification and organization of experimental details becomes increasingly intricate and demands careful curation. The ENCODE DCC has created a general purpose software system, known as SnoVault, that supports metadata and file submission, a database used for metadata storage, web pages for displaying the metadata and a robust API for querying the metadata. The software is fully open-source; code and installation instructions can be found at: http://github.com/ENCODE-DCC/snovault/ (for the generic database) and http://github.com/ENCODE-DCC/encoded/ (to store genomic data in the manner of ENCODE). The core database engine, SnoVault (which is completely independent of ENCODE, genomic data, or bioinformatic data) has been released as a separate Python package.
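
    The query API mentioned above can be exercised with any HTTP client; the sketch below asks the public ENCODE portal for a few experiment records in JSON. The parameter and field names are common portal conventions and should be treated as illustrative rather than a definitive API reference.

    import requests

    reply = requests.get(
        "https://www.encodeproject.org/search/",
        params={"type": "Experiment", "format": "json", "limit": "5"},
        headers={"Accept": "application/json"},
        timeout=30,
    )
    reply.raise_for_status()
    for hit in reply.json().get("@graph", []):  # search results are returned under "@graph"
        print(hit.get("accession"), hit.get("assay_title"))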

  5. Towards a best practice of modeling unit of measure and related statistical metadata

    CERN Document Server

    Grossmann, Wilfried

    2011-01-01

    Data and metadata exchange between organizations requires a common language for describing structure and content of statistical data and metadata. The SDMX consortium develops content oriented guidelines (COG) recommending harmonized cross-domain concepts and terminology to increase the efficiency of (meta-) data exchange. A recent challenge is a recommended code list for the unit of measure. Based on examples from SDMX sponsor organizations this paper analyses the diversity of "unit of measure" as used in practice, including potential breakdowns and interdependencies of the respective meta-

  6. Online information services in the social sciences

    CERN Document Server

    Jacobs, Neil

    2004-01-01

    Information professionals are increasingly responsible not only for running traditional information and library services but also for providing an online presence for their organisation. This book shows how best practice in delivering online information services should be based on actual user needs and behaviour. A series of case studies provide real life examples of how social science information is being used in the community. The book then draws on these case studies to outline the main issues facing service providers: such as usability, metadata and management. The book concludes with a lo

  7. Multi-Unit Initiating Event Analysis for a Single-Unit Internal Events Level 1 PSA

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Dong San; Park, Jin Hee; Lim, Ho Gon [KAERI, Daejeon (Korea, Republic of)

    2016-05-15

    The Fukushima nuclear accident in 2011 highlighted the importance of considering the risks from multi-unit accidents at a site. The ASME/ANS probabilistic risk assessment (PRA) standard also includes some requirements related to multi-unit aspects, one of which (IE-B5) is as follows: 'For multi-unit sites with shared systems, DO NOT SUBSUME multi-unit initiating events if they impact mitigation capability [1].' However, the existing single-unit PSA models do not explicitly consider multi-unit initiating events, and hence systems shared by multiple units (e.g., the alternate AC diesel generator) are fully credited to the single unit, ignoring the need for the shared systems by other units at the same site [2]. This paper describes the results of the multi-unit initiating event (IE) analysis performed as a part of the at-power internal events Level 1 probabilistic safety assessment (PSA) for an OPR1000 single unit ('reference unit'). In this study, a multi-unit initiating event analysis for a single-unit PSA was performed, and using the results, a dual-unit LOOP initiating event was added to the existing PSA model for the reference unit (OPR1000 type). Event trees were developed for dual-unit LOOP and for dual-unit SBO, which can be transferred from dual-unit LOOP. Moreover, CCF basic events for 5 diesel generators were modelled. In the case of simultaneous SBO occurrences in both units, this study compared two different assumptions on the availability of the AAC D/G. As a result, when the dual-unit LOOP initiating event was added to the existing single-unit PSA model, the total CDF increased by 1-2%, depending on the probability that the AAC D/G is available to a specific unit in the case of simultaneous SBO in both units.

  8. A Web Service for File-Level Access to Disk Images

    Directory of Open Access Journals (Sweden)

    Sunitha Misra

    2014-07-01

    Full Text Available Digital forensics tools have many potential applications in the curation of digital materials in libraries, archives and museums (LAMs). Open source digital forensics tools can help LAM professionals to extract digital contents from born-digital media and make more informed preservation decisions. Many of these tools have ways to display the metadata of the digital media, but few provide file-level access without having to mount the device or use complex command-line utilities. This paper describes a project to develop software that supports access to the contents of digital media without having to mount or download the entire image. The work examines two approaches to creating this tool: first, a graphical user interface running on a local machine; second, a web-based application running in a web browser. The project incorporates existing open source forensics tools and libraries including The Sleuth Kit and libewf along with the Flask web application framework and custom Python scripts to generate web pages supporting disk image browsing.
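
    A minimal sketch in the spirit of the web-based approach described above, assuming The Sleuth Kit's command-line fls utility is installed; the route name and image path are hypothetical, and the real project uses the toolkit's libraries rather than a shell call.

    import subprocess
    from flask import Flask, jsonify

    app = Flask(__name__)
    IMAGE_PATH = "/data/images/sample.E01"  # hypothetical disk image

    @app.route("/listing")
    def listing():
        # "fls -r" lists file entries recursively without mounting the image.
        out = subprocess.run(["fls", "-r", IMAGE_PATH],
                             capture_output=True, text=True, check=True)
        return jsonify(entries=out.stdout.splitlines())

    if __name__ == "__main__":
        app.run(debug=True)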

  9. A Geo-Event-Based Geospatial Information Service: A Case Study of Typhoon Hazard

    Directory of Open Access Journals (Sweden)

    Yu Zhang

    2017-03-01

    Full Text Available Social media is valuable for propagating information during disasters because of its timeliness and availability, and it assists in making decisions when tagged with locations. Considering the ambiguity and inaccuracy in some social data, additional authoritative data are needed for important verification. However, current works often fail to leverage both social and authoritative data and, on most occasions, the data are used in disaster analysis after the fact. Moreover, current works organize the data from the perspective of the spatial location, but not from the perspective of the disaster, making it difficult to dynamically analyze the disaster. All of the disaster-related data around the affected locations need to be retrieved. To solve these limitations, this study develops a geo-event-based geospatial information service (GEGIS) framework and proceeds as follows: (1) a geo-event-related ontology was constructed to provide a uniform semantic basis for the system; (2) geo-events and attributes were extracted from the web using natural language processing (NLP) and used in the semantic similarity match of the geospatial resources; and (3) a geospatial information service prototype system was designed and implemented for automatically retrieving and organizing geo-event-related geospatial resources. A case study of a typhoon hazard is analyzed here within the GEGIS and shows that the system would be effective when typhoons occur.
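
    As a toy stand-in for the semantic similarity match described above (the paper uses an ontology-based match, not this set overlap), the sketch below ranks geospatial resources by keyword overlap with terms extracted for a geo-event; all names and terms are made up.

    def jaccard(a, b):
        # Set-overlap similarity between two keyword sets.
        a, b = set(a), set(b)
        return len(a & b) / len(a | b) if a | b else 0.0

    event_terms = {"typhoon", "rainfall", "flood", "coastal"}  # assumed NLP output
    resources = {
        "precip_grid_2016": {"typhoon", "rainfall", "grid"},
        "road_network": {"road", "transport"},
        "flood_extent_map": {"flood", "typhoon", "extent"},
    }

    ranked = sorted(resources.items(),
                    key=lambda item: jaccard(event_terms, item[1]),
                    reverse=True)
    for name, terms in ranked:
        print(f"{name}: {jaccard(event_terms, terms):.2f}")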

  10. Making Information Visible, Accessible, and Understandable: Meta-Data and Registries

    National Research Council Canada - National Science Library

    Robinson, Clay

    2007-01-01

    ... the interoperability, discovery, and utility of data assets throughout the Department of Defense (DoD). Proper use and understanding of metadata can substantially enhance the utility of data by making it more visible, accessible, and understandable...

  11. Automated Creation of Datamarts from a Clinical Data Warehouse, Driven by an Active Metadata Repository

    Science.gov (United States)

    Rogerson, Charles L.; Kohlmiller, Paul H.; Stutman, Harris

    1998-01-01

    A methodology and toolkit are described which enable the automated metadata-driven creation of datamarts from clinical data warehouses. The software uses schema-to-schema transformation driven by an active metadata repository. Tools for assessing datamart data quality are described, as well as methods for assessing the feasibility of implementing specific datamarts. A methodology for data remediation and the re-engineering of operational data capture is described.

  12. panMetaDocs, eSciDoc, and DOIDB - an infrastructure for the curation and publication of file-based datasets for 'GFZ Data Services'

    Science.gov (United States)

    Ulbricht, Damian; Elger, Kirsten; Bertelmann, Roland; Klump, Jens

    2016-04-01

    With the foundation of DataCite in 2009 and the technical infrastructure installed in the last six years, it has become very easy to create citable dataset DOIs. Nowadays, dataset DOIs are increasingly accepted and required by journals in reference lists of manuscripts. In addition, DataCite provides usage statistics [1] of assigned DOIs and offers a public search API to make research data count. By linking related information to the data, the data become more useful for future generations of scientists. For this purpose, several identifier systems, such as ISBN for books, ISSN for journals, DOI for articles or related data, ORCID for authors, and IGSN for physical samples, can be attached to DOIs using the DataCite metadata schema [2]. While these are good preconditions to publish data, free and open solutions that help with the curation of data, the publication of research data, and the assignment of DOIs in one software package seem to be rare. At GFZ Potsdam we built a modular software stack that is made of several free and open software solutions and we established 'GFZ Data Services'. 'GFZ Data Services' provides storage, a metadata editor for publication and a facility to moderate minted DOIs. All software solutions are connected through web APIs, which makes it possible to reuse and integrate established software. The core component of 'GFZ Data Services' is an eSciDoc [3] middleware that is used as central storage and has been designed along the OAIS reference model for digital preservation. Thus, data are stored in self-contained packages that are made of binary file-based data and XML-based metadata. The eSciDoc infrastructure provides access control to data and it is able to handle half-open datasets, which is useful in embargo situations when a subset of the research data are released after an adequate period. The data exchange platform panMetaDocs [4] makes use of eSciDoc's REST API to upload file-based data into eSciDoc and uses a metadata editor [5] to annotate the files
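
    The DOI-related step can be pictured as a plain HTTP POST of DataCite-style XML metadata to a registration endpoint; the URL, credentials and file name below are placeholders, not the actual GFZ or DataCite configuration.

    import requests

    ENDPOINT = "https://doidb.example.org/metadata"  # hypothetical registration endpoint
    with open("dataset_metadata.xml", "rb") as handle:
        metadata_xml = handle.read()

    reply = requests.post(ENDPOINT, data=metadata_xml,
                          auth=("DATACENTRE.EXAMPLE", "secret"),
                          headers={"Content-Type": "application/xml;charset=UTF-8"},
                          timeout=30)
    print(reply.status_code, reply.reason)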

  13. Metadata Laws, Journalism and Resistance in Australia

    Directory of Open Access Journals (Sweden)

    Benedetta Brevini

    2017-03-01

    Full Text Available The intelligence leaks from Edward Snowden in 2013 unveiled the sophistication and extent of data collection by the United States’ National Security Agency and major global digital firms prompting domestic and international debates about the balance between security and privacy, openness and enclosure, accountability and secrecy. It is difficult not to see a clear connection with the Snowden leaks in the sharp acceleration of new national security legislations in Australia, a long term member of the Five Eyes Alliance. In October 2015, the Australian federal government passed controversial laws that require telecommunications companies to retain the metadata of their customers for a period of two years. The new acts pose serious threats for the profession of journalism as they enable government agencies to easily identify and pursue journalists’ sources. Bulk data collections of this type of information deter future whistleblowers from approaching journalists, making the performance of the latter’s democratic role a challenge. After situating this debate within the scholarly literature at the intersection between surveillance studies and communication studies, this article discusses the political context in which journalists are operating and working in Australia; assesses how metadata laws have affected journalism practices and addresses the possibility for resistance.

  14. Projections of extreme water level events for atolls in the western Tropical Pacific

    Science.gov (United States)

    Merrifield, M. A.; Becker, J. M.; Ford, M.; Yao, Y.

    2014-12-01

    Conditions that lead to extreme water levels and coastal flooding are examined for atolls in the Republic of the Marshall Islands based on a recent field study of wave transformations over fringing reefs, tide gauge observations, and wave model hindcasts. Wave-driven water level extremes pose the largest threat to atoll shorelines, with coastal levels scaling as approximately one-third of the incident breaking wave height. The wave-driven coastal water level is partitioned into a mean setup, low frequency oscillations associated with cross-reef quasi-standing modes, and wind waves that reach the shore after undergoing high dissipation due to breaking and bottom friction. All three components depend on the water level over the reef; however, the sum of the components is independent of water level due to cancelling effects. Wave hindcasts suggest that wave-driven water level extremes capable of coastal flooding are infrequent events that require a peak wave event to coincide with mid- to high-tide conditions. Interannual and decadal variations in sea level do not change the frequency of these events appreciably. Future sea-level rise scenarios significantly increase the flooding threat associated with wave events, with a nearly exponential increase in flooding days per year as sea level exceeds 0.3 to 1.0 m above current levels.

  15. Enhancing Media Personalization by Extracting Similarity Knowledge from Metadata

    DEFF Research Database (Denmark)

    Butkus, Andrius

    ... only “more of the same” type of content, which does not necessarily lead to meaningful personalization. Another way to approach similarity is to find a similar underlying meaning in the content. Aspects of meaning in media can be represented using Gardenfors Conceptual Spaces theory, which can be seen as a cognitive foundation for modeling concepts. Conceptual Spaces is applied in this thesis to analyze media in terms of its dimensions and knowledge domains, which in return define properties and concepts. One of the most important domains in terms of describing media is the emotional one ... using Latent Semantic Analysis (one of the unsupervised machine learning techniques). It presents three separate cases to illustrate the similarity knowledge extraction from the metadata, where the emotional components in each case represent different abstraction levels – genres, synopsis and lyrics ...

  16. Habitat-Lite: A GSC case study based on free text terms for environmental metadata

    Energy Technology Data Exchange (ETDEWEB)

    Kyrpides, Nikos; Hirschman, Lynette; Clark, Cheryl; Cohen, K. Bretonnel; Mardis, Scott; Luciano, Joanne; Kottmann, Renzo; Cole, James; Markowitz, Victor; Kyrpides, Nikos; Field, Dawn

    2008-04-01

    There is an urgent need to capture metadata on the rapidly growing number of genomic, metagenomic and related sequences, such as 16S ribosomal genes. This need is a major focus within the Genomic Standards Consortium (GSC), and Habitat is a key metadata descriptor in the proposed 'Minimum Information about a Genome Sequence' (MIGS) specification. The goal of the work described here is to provide a light-weight, easy-to-use (small) set of terms ('Habitat-Lite') that captures high-level information about habitat while preserving a mapping to the recently launched Environment Ontology (EnvO). Our motivation for building Habitat-Lite is to meet the needs of multiple users, such as annotators curating these data, database providers hosting the data, and biologists and bioinformaticians alike who need to search and employ such data in comparative analyses. Here, we report a case study based on semi-automated identification of terms from GenBank and GOLD. We estimate that the terms in the initial version of Habitat-Lite would provide useful labels for over 60% of the kinds of information found in the GenBank isolation-source field, and around 85% of the terms in the GOLD habitat field. We present a revised version of Habitat-Lite and invite the community's feedback on its further development in order to provide a minimum list of terms to capture high-level habitat information and to provide classification bins needed for future studies.

  17. ITIL Based Service Level Management if SLAs Cover Security

    Directory of Open Access Journals (Sweden)

    Tomas Feglar

    2005-08-01

    Full Text Available The current level of information technology creates new perspectives for a more IT-service-oriented market. The quality of these services requires a slightly different approach than the one applied to products, including software. No IT services are delivered and supported in a risk-free environment. Risks should be considered consistently with IT service quality gaps from a Service Level Management (SLM) perspective. SLM is one of the ITIL modules that are widely used within the IT service industry. We identified some weaknesses in how SLM is developed in an ITIL environment if the service level agreement (SLA) covers security. We argue that in such cases an architecture modeling and risk assessment approach lets us effectively control the analytical effort that relates to risk identification and understanding. Risk-driven countermeasures designed in a next step (risk treatment) have a significant impact on the SLM, especially from a responsibility perspective. To demonstrate SLM's importance in real practice we analyze the SLA synthesis process in a CCI (Cyber Critical Infrastructure) environment.

  18. Event-building and PC farm based level-3 trigger at the CDF experiment

    CERN Document Server

    Anikeev, K; Furic, I K; Holmgren, D; Korn, A J; Kravchenko, I V; Mulhearn, M; Ngan, P; Paus, C; Rakitine, A; Rechenmacher, R; Shah, T; Sphicas, Paris; Sumorok, K; Tether, S; Tseng, J

    2000-01-01

    In the technical design report the event building process at Fermilab's CDF experiment is required to function at an event rate of 300 events/sec. The events are expected to have an average size of 150 kBytes (kB) and are assembled from fragments of 16 readout locations. The fragment size from the different locations varies between 12 kB and 16 kB. Once the events are assembled they are fed into the Level-3 trigger which is based on processors running programs to filter events using the full event information. Computing power on the order of a second on a Pentium II processor is required per event. The architecture design is driven by the cost and is therefore based on commodity components: VME processor modules running VxWorks for the readout, an ATM switch for the event building, and Pentium PCs running Linux as an operation system for the Level-3 event processing. Pentium PCs are also used to receive events from the ATM switch and further distribute them to the processing nodes over multiple 100 Mbps Ether...
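
    The quoted figures imply the following back-of-the-envelope bandwidth; the even split across readout sources is a simplifying assumption (fragment sizes actually vary between 12 kB and 16 kB), and 1 MB is taken as 1000 kB.

    event_rate_hz = 300      # required event rate (events per second)
    event_size_kb = 150      # average assembled event size (kB)
    readout_sources = 16     # fragments per event

    aggregate_mb_per_s = event_rate_hz * event_size_kb / 1000.0
    per_source_mb_per_s = aggregate_mb_per_s / readout_sources
    print(f"aggregate event-building bandwidth: about {aggregate_mb_per_s:.0f} MB/s")
    print(f"per readout source: about {per_source_mb_per_s:.1f} MB/s")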

  19. Patient satisfaction in outpatient healthcare services at secondary level vs. tertiary level

    Directory of Open Access Journals (Sweden)

    Velikj-Stefanovska Vesna

    2014-01-01

    Full Text Available Introduction. Patient satisfaction is a very important part of any clinical practice both for evaluation and improvement of healthcare services. Objective. The aim of this study was to determine patient satisfaction with public outpatient healthcare services at secondary and tertiary level and to assess possible differences between the two levels. Methods. In a quantitative cross-sectional study, a convenient sample of 646 patients who experienced public outpatient healthcare services at the secondary and tertiary level during the last two months were interviewed. Patient satisfaction questionnaires, with statements regarding various aspects of satisfaction, were completed during face-to-face interviews (response rate 84.6%). The research instrument was tested for internal consistency using the Cronbach’s coefficient alpha estimate. Results. The patients were significantly more satisfied in tertiary than in secondary outpatient healthcare facilities in almost all aspects of assessment related to general settings, nurse/administrative staff performance and physician performance (p<0.001). The patients in the secondary healthcare services (SHCS) were more satisfied than in the tertiary healthcare services (THCS) but only regarding the information on location (83.9% vs. 78.3%) and possibilities to enter and move inside the department (88.8% vs. 83.3%). Analysis of data for SHCS and THCS showed that there was no significant difference between the mean overall satisfaction scores with regard to patients’ gender, age, marital status, educational level, employment and number of visits. Conclusion. There is a need to improve the current level of patient-provider relationship and communication, as well as that of hospital environment, while special efforts should be made to address the problem of patient waiting time and hospital bureaucracy.

  20. New GES DISC Services Shortening the Path in Science Data Discovery

    Science.gov (United States)

    Li, Angela; Shie, Chung-Lin; Petrenko, Maksym; Hegde, Mahabaleshwa; Teng, William; Liu, Zhong; Bryant, Keith; Shen, Suhung; Hearty, Thomas; Wei, Jennifer

    2017-01-01

    The currently available GES DISC services only allow users to select variables from a single dataset at a time, and because too many variables from a dataset are displayed, choosing among them is hard. At the American Geophysical Union (AGU) 2016 Fall Meeting, the Goddard Earth Sciences Data and Information Services Center (GES DISC) unveiled a new service: Datalist. A Datalist is a collection of predefined or user-defined data variables from one or more archived datasets. Our science support team curated predefined Datalists and provided value to the user community. Imagine a novice user who wants to study hurricanes and types 'hurricane' in the search box. The first item in the search result is the GES DISC-provided Hurricane Datalist. It contains scientist-recommended variables from multiple datasets such as TRMM, GPM, MERRA, etc. Datalist uses the same architecture as that of our new website, which also provides one-stop shopping for data, metadata, citation, documentation, visualization and other available services. We implemented Datalist with the new GES DISC web architecture, a single web page that unifies all user interfaces. From that webpage, users can find data by either typing in a keyword or browsing by category. It also provides users with a sophisticated integrated data and services package, including metadata, citation, documentation, visualization, and data-specific services, all available from one-stop shopping.

  1. Semantic Web Data Discovery of Earth Science Data at NASA Goddard Earth Sciences Data and Information Services Center (GES DISC)

    Science.gov (United States)

    Hegde, Mahabaleshwara; Strub, Richard F.; Lynnes, Christopher S.; Fang, Hongliang; Teng, William

    2008-01-01

    Mirador is a web interface for searching Earth Science data archived at the NASA Goddard Earth Sciences Data and Information Services Center (GES DISC). Mirador provides keyword-based search and guided navigation to enable efficient search of and access to Earth Science data. Mirador employs the power of Google's universal search technology for fast metadata keyword searches, augmented by additional capabilities such as event searches (e.g., hurricanes), searches based on a location gazetteer, and data services like format converters and data sub-setters. The objective of guided data navigation is to present users with multiple navigation paths; the basis of guided navigation in Mirador is an ontology based on the Global Change Master Directory (GCMD) Directory Interchange Format (DIF). The current implementation includes the project ontology covering various instruments and model data. Additional capabilities in the pipeline include Earth Science parameter and applications ontologies.

  2. Training and Best Practice Guidelines: Implications for Metadata Creation

    Science.gov (United States)

    Chuttur, Mohammad Y.

    2012-01-01

    In response to the rapid development of digital libraries over the past decade, researchers have focused on the use of metadata as an effective means to support resource discovery within online repositories. With the increasing involvement of libraries in digitization projects and the growing number of institutional repositories, it is anticipated…

  3. Methane Yield Database: Online infrastructure and bioresource for methane yield data and related metadata.

    Science.gov (United States)

    Murovec, Boštjan; Kolbl, Sabina; Stres, Blaž

    2015-01-01

    The aim of this study was to develop and validate a community supported online infrastructure and bioresource for methane yield data and accompanying metadata collected from published literature. In total, 1164 entries described by 15,749 data points were assembled. Analysis of data collection showed little congruence in reporting of methodological approaches. The largest identifiable source of variation in reported methane yields was represented by authorship (i.e. substrate batches within particular substrate class) within which experimental scales (volumes (0.02-5l), incubation temperature (34-40 °C) and % VS of substrate played an important role (p 63%). This calls for reconsideration of accepted approaches to reporting data in currently published literature to increase capacity to service industrial decision making to a greater extent. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. Using machine-coded event data for the micro-level study of political violence

    Directory of Open Access Journals (Sweden)

    Jesse Hammond

    2014-07-01

    Full Text Available Machine-coded datasets likely represent the future of event data analysis. We assess the use of one of these datasets—Global Database of Events, Language and Tone (GDELT)—for the micro-level study of political violence by comparing it to two hand-coded conflict event datasets. Our findings indicate that GDELT should be used with caution for geo-spatial analyses at the subnational level: its overall correlation with hand-coded data is mediocre, and at the local level major issues of geographic bias exist in how events are reported. Overall, our findings suggest that due to these issues, researchers studying local conflict processes may want to wait for a more reliable geocoding method before relying too heavily on this set of machine-coded data.

  5. Leveraging the BPEL Event Model to Support QoS-aware Process Execution

    Science.gov (United States)

    Zaid, Farid; Berbner, Rainer; Steinmetz, Ralf

    Business processes executed using compositions of distributed Web Services are susceptible to different fault types. The Web Services Business Process Execution Language (BPEL) is widely used to execute such processes. While BPEL provides fault handling mechanisms to handle functional faults like invalid message types, it still lacks a flexible native mechanism to handle non-functional exceptions associated with violations of QoS levels that are typically specified in a governing Service Level Agreement (SLA). In this paper, we present an approach to complement BPEL's fault handling, where expected QoS levels and necessary recovery actions are specified declaratively in the form of Event-Condition-Action (ECA) rules. Our main contribution is leveraging BPEL's standard event model, which we use as an event space for the created ECA rules. We validate our approach by an extension to an open source BPEL engine.
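
    The Event-Condition-Action idea can be sketched independently of any BPEL engine; the event type, payload fields, SLA threshold and recovery action below are assumptions for illustration.

    RULES = [
        {
            "event": "invoke_completed",
            "condition": lambda e: e.get("response_ms", 0) > 2000,  # assumed SLA threshold
            "action": lambda e: print(f"re-invoking {e['activity']} on a backup service"),
        },
    ]

    def on_event(event):
        # Fire the action of every rule whose event type and condition match.
        for rule in RULES:
            if event["type"] == rule["event"] and rule["condition"](event):
                rule["action"](event)

    on_event({"type": "invoke_completed", "activity": "CreditCheck", "response_ms": 3500})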

  6. Entry Level Skills for the Event Management Profession: Implications for Curriculum Development

    Science.gov (United States)

    Fletcher, Donna; Dunn, Julie; Prince, Rosemary

    2009-01-01

    The rapid growth of the event industry has resulted in a world-wide demand for education and training programs in event management. While the professional associations in event management have been providing quality training and credentialing for their members, the 140 colleges and universities preparing students for entry level positions in…

  7. Dynamic Service Selection in Workflows Using Performance Data

    Directory of Open Access Journals (Sweden)

    David W. Walker

    2007-01-01

    Full Text Available An approach to dynamic workflow management and optimisation using near-real-time performance data is presented. Strategies are discussed for choosing an optimal service (based on user-specified criteria) from several semantically equivalent Web services. Such an approach may involve finding "similar" services, by first pruning the set of discovered services based on service metadata, and subsequently selecting an optimal service based on performance data. The current implementation of the prototype workflow framework is described, and demonstrated with a simple workflow. Performance results are presented that show the performance benefits of dynamic service selection. A statistical analysis based on the first order statistic is used to investigate the likely improvement in service response time arising from dynamic service selection.
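
    A minimal sketch of selecting among semantically equivalent services using recent performance observations; the service URLs and response times are invented, and mean response time stands in for whatever user-specified criterion is configured.

    from statistics import mean

    observations = {  # recent response times in milliseconds per candidate service
        "http://provider-a.example/align": [420, 390, 455],
        "http://provider-b.example/align": [310, 305, 330],
        "http://provider-c.example/align": [600, 580, 610],
    }

    def select_service(perf):
        # Pick the candidate with the lowest mean observed response time.
        return min(perf, key=lambda url: mean(perf[url]))

    print("selected:", select_service(observations))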

  8. Visualizing attitudes towards service levels

    NARCIS (Netherlands)

    K.Y. Lam (Kar Yin); M. van de Velden (Michel); Ph.H.B.F. Franses (Philip Hans)

    2011-01-01

    To assess the attitudes with respect to the quality of banks’ service levels, we use survey data amongst more than 250 Chief Financial Officers (CFOs) of a range of Netherlands-based companies. These companies range from small to very large (including multinationals such as Philips and Shell)

  9. User interface development and metadata considerations for the Atmospheric Radiation Measurement (ARM) archive

    Science.gov (United States)

    Singley, P. T.; Bell, J. D.; Daugherty, P. F.; Hubbs, C. A.; Tuggle, J. G.

    1993-01-01

    This paper will discuss user interface development and the structure and use of metadata for the Atmospheric Radiation Measurement (ARM) Archive. The ARM Archive, located at Oak Ridge National Laboratory (ORNL) in Oak Ridge, Tennessee, is the data repository for the U.S. Department of Energy's (DOE's) ARM Project. After a short description of the ARM Project and the ARM Archive's role, we will consider the philosophy and goals, constraints, and prototype implementation of the user interface for the archive. We will also describe the metadata that are stored at the archive and support the user interface.

  10. AMCO Scribe Sampling Data Map Service, Oakland CA, 2017, US EPA Region 9

    Data.gov (United States)

    U.S. Environmental Protection Agency — This map service contains a single layer: Groundwater Samples. The layer draws at all scales. Full FGDC metadata for the layer may be found by clicking the layer...

  11. Competence Based Educational Metadata for Supporting Lifelong Competence Development Programmes

    NARCIS (Netherlands)

    Sampson, Demetrios; Fytros, Demetrios

    2008-01-01

    Sampson, D., & Fytros, D. (2008). Competence Based Educational Metadata for Supporting Lifelong Competence Development Programmes. In P. Diaz, Kinshuk, I. Aedo & E. Mora (Eds.), Proceedings of the 8th IEEE International Conference on Advanced Learning Technologies (ICALT 2008), pp. 288-292. July,

  12. Logic programming and metadata specifications

    Science.gov (United States)

    Lopez, Antonio M., Jr.; Saacks, Marguerite E.

    1992-01-01

    Artificial intelligence (AI) ideas and techniques are critical to the development of intelligent information systems that will be used to collect, manipulate, and retrieve the vast amounts of space data produced by 'Missions to Planet Earth.' Natural language processing, inference, and expert systems are at the core of this space application of AI. This paper presents logic programming as an AI tool that can support inference (the ability to draw conclusions from a set of complicated and interrelated facts). It reports on the use of logic programming in the study of metadata specifications for a small problem domain of airborne sensors, and the dataset characteristics and pointers that are needed for data access.

  13. Features, Events, and Processes: system Level

    Energy Technology Data Exchange (ETDEWEB)

    D. McGregor

    2004-10-15

    The purpose of this analysis report is to evaluate and document the inclusion or exclusion of the system-level features, events, and processes (FEPs) with respect to modeling used to support the total system performance assessment for the license application (TSPA-LA). A screening decision, either Included or Excluded, is given for each FEP along with the technical basis for screening decisions. This information is required by the U.S. Nuclear Regulatory Commission (NRC) at 10 CFR 63.113 (d, e, and f) (DIRS 156605). The system-level FEPs addressed in this report typically are overarching in nature, rather than being focused on a particular process or subsystem. As a result, they are best dealt with at the system level rather than addressed within supporting process-level or subsystem-level analyses and models reports. The system-level FEPs also tend to be directly addressed by regulations, guidance documents, or assumptions listed in the regulations; or are addressed in background information used in development of the regulations. For included FEPs, this analysis summarizes the implementation of the FEP in the TSPA-LA (i.e., how the FEP is included). For excluded FEPs, this analysis provides the technical basis for exclusion from the TSPA-LA (i.e., why the FEP is excluded). The initial version of this report (Revision 00) was developed to support the total system performance assessment for site recommendation (TSPA-SR). This revision addresses the license application (LA) FEP List (DIRS 170760).

  14. Features, Events, and Processes: system Level

    International Nuclear Information System (INIS)

    D. McGregor

    2004-01-01

    The purpose of this analysis report is to evaluate and document the inclusion or exclusion of the system-level features, events, and processes (FEPs) with respect to modeling used to support the total system performance assessment for the license application (TSPA-LA). A screening decision, either Included or Excluded, is given for each FEP along with the technical basis for screening decisions. This information is required by the U.S. Nuclear Regulatory Commission (NRC) at 10 CFR 63.113 (d, e, and f) (DIRS 156605). The system-level FEPs addressed in this report typically are overarching in nature, rather than being focused on a particular process or subsystem. As a result, they are best dealt with at the system level rather than addressed within supporting process-level or subsystem-level analyses and models reports. The system-level FEPs also tend to be directly addressed by regulations, guidance documents, or assumptions listed in the regulations; or are addressed in background information used in development of the regulations. For included FEPs, this analysis summarizes the implementation of the FEP in the TSPA-LA (i.e., how the FEP is included). For excluded FEPs, this analysis provides the technical basis for exclusion from the TSPA-LA (i.e., why the FEP is excluded). The initial version of this report (Revision 00) was developed to support the total system performance assessment for site recommendation (TSPA-SR). This revision addresses the license application (LA) FEP List (DIRS 170760)

  15. SPASE, Metadata, and the Heliophysics Virtual Observatories

    Science.gov (United States)

    Thieman, James; King, Todd; Roberts, Aaron

    2010-01-01

    To provide data search and access capability in the field of Heliophysics (the study of the Sun and its effects on the Solar System, especially the Earth) a number of Virtual Observatories (VO) have been established both via direct funding from the U.S. National Aeronautics and Space Administration (NASA) and through other funding agencies in the U.S. and worldwide. At least 15 systems can be labeled as Virtual Observatories in the Heliophysics community, 9 of them funded by NASA. The problem is that different metadata and data search approaches are used by these VO's and a search for data relevant to a particular research question can involve consulting with multiple VO's - needing to learn a different approach for finding and acquiring data for each. The Space Physics Archive Search and Extract (SPASE) project is intended to provide a common data model for Heliophysics data and therefore a common set of metadata for searches of the VO's. The SPASE Data Model has been developed through the common efforts of the Heliophysics Data and Model Consortium (HDMC) representatives over a number of years. We currently have released Version 2.1 of the Data Model. The advantages and disadvantages of the Data Model will be discussed along with the plans for the future. Recent changes requested by new members of the SPASE community indicate some of the directions for further development.

  16. Assuring the Quality of Agricultural Learning Repositories: Issues for the Learning Object Metadata Creation Process of the CGIAR

    Science.gov (United States)

    Zschocke, Thomas; Beniest, Jan

    The Consultative Group on International Agricultural Research (CGIAR) has established a digital repository to share its teaching and learning resources along with descriptive educational information based on the IEEE Learning Object Metadata (LOM) standard. As a critical component of any digital repository, quality metadata are essential not only to enable users to find more easily the resources they require, but also for the operation and interoperability of the repository itself. Studies show that repositories have difficulties in obtaining good quality metadata from their contributors, especially when this process involves many different stakeholders, as is the case with the CGIAR as an international organization. To address this issue the CGIAR began investigating the Open ECBCheck as well as the ISO/IEC 19796-1 standard to establish quality protocols for its training. The paper highlights the implications and challenges posed by strengthening the metadata creation workflow for disseminating learning objects of the CGIAR.

  17. ARIADNE: a Tracking System for Relationships in LHCb Metadata

    CERN Document Server

    Shapoval, I; Cattaneo, M

    2014-01-01

    The data processing model of the LHCb experiment implies handling an evolving set of heterogeneous metadata entities and the relationships between them. The entities range from software and database states to architecture specifications and software/data deployment locations. For instance, there is an important relationship between the LHCb Conditions Database (CondDB), which provides versioned, time-dependent geometry and conditions data, and the LHCb software, which comprises the data processing applications (used for simulation, high-level triggering, reconstruction and analysis of physics data). The evolution of CondDB and of the LHCb applications is a weakly-homomorphic process. It means that relationships between a CondDB state and an LHCb application state may not be preserved across different database and application generations. These issues may lead to various kinds of problems in LHCb production, varying from unexpected application crashes to incorrect data processing results. In this paper we present Ari...

  18. Scalable Metadata Management for a Large Multi-Source Seismic Data Repository

    Energy Technology Data Exchange (ETDEWEB)

    Gaylord, J. M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Dodge, D. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Magana-Zook, S. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Barno, J. G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Knapp, D. R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Thomas, J. M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Sullivan, D. S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Ruppert, S. D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Mellors, R. J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-05-26

    In this work, we implemented the key metadata management components of a scalable seismic data ingestion framework to address limitations in our existing system, and to position it for anticipated growth in volume and complexity.

  19. Semantic Web: Metadata, Linked Data, Open Data

    Directory of Open Access Journals (Sweden)

    Vanessa Russo

    2015-12-01

    Full Text Available What's the Semantic Web? What's the use? The inventor of the Web, Tim Berners-Lee, describes it as a research methodology able to take advantage of the network to its maximum capacity. This metadata system represents the innovative element in the transition from Web 2.0 to Web 3.0. In this context, the article tries to understand the theoretical and informatic requirements of the Semantic Web. Finally, it explains Linked Data applications for developing new tools for active citizenship.

  20. Service Oriented Architecture for High Level Applications

    International Nuclear Information System (INIS)

    Chu, P.

    2012-01-01

    Standalone high level applications often suffer from poor performance and reliability due to lengthy initialization, heavy computation and rapid graphical update. Service-oriented architecture (SOA) aims to separate the initialization and computation from applications and to distribute such work to various service providers. Heavy computation such as beam tracking will be done periodically on a dedicated server and data will be available to client applications at all times. Industry-standard service architecture can help to improve the performance, reliability and maintainability of the service. Robustness will also be improved by reducing the complexity of individual client applications.

  1. TAPRegExt: a VOResource Schema Extension for Describing TAP Services Version 1.0

    Science.gov (United States)

    Demleitner, Markus; Dowler, Patrick; Plante, Ray; Rixon, Guy; Taylor, Mark; Demleitner, Markus

    2012-08-01

    This document describes an XML encoding standard for metadata about services implementing the table access protocol TAP [TAP], referred to as TAPRegExt. Instance documents are part of the service's registry record or can be obtained from the service itself. They deliver information to both humans and software on the languages, output formats, and upload methods supported by the service, as well as data models implemented by the exposed tables, optional language features, and certain limits enforced by the service.

  2. Service level status-a new real-time status display for IT services

    Energy Technology Data Exchange (ETDEWEB)

    Lopienski, S [IT Department, FIO group CERN, European Organization for Nuclear Research, 1211 Geneve 23 (Switzerland)], E-mail: Sebastian.Lopienski@cern.ch

    2008-07-15

    Nowadays, IT departments provide, and people use, computing services of an increasingly heterogeneous nature. There is thus a growing need for a status display that groups these different services and reports their status and availability in a uniform way. The Service Level Status (SLS) system addresses these needs by providing a web-based display that dynamically shows availability, basic information and statistics about various IT services, as well as the dependencies between them. This paper first introduces the requirements SLS had to meet, and the main concepts behind it, like service availability and status, Key Performance Indicators (KPIs), sub/meta-services, and service dependencies. It then describes the SLS system architecture, and some interesting implementation details, such as the usage of XML Schemas. Since clear visualization of service availability and status is one of the main goals of SLS, emphasis is put on describing the intuitive web-based user interface.

  3. Service level status-a new real-time status display for IT services

    International Nuclear Information System (INIS)

    Lopienski, S

    2008-01-01

    Nowadays, IT departments provide, and people use, computing services of an increasingly heterogeneous nature. There is thus a growing need for a status display that groups these different services and reports their status and availability in a uniform way. The Service Level Status (SLS) system addresses these needs by providing a web-based display that dynamically shows availability, basic information and statistics about various IT services, as well as the dependencies between them. This paper first introduces the requirements SLS had to meet, and the main concepts behind it, like service availability and status, Key Performance Indicators (KPIs), sub/meta-services, and service dependencies. It then describes the SLS system architecture, and some interesting implementation details, such as the usage of XML Schemas. Since clear visualization of service availability and status is one of the main goals of SLS, emphasis is put on describing the intuitive web-based user interface

  4. A Proposal for a Thesaurus for Web Services in Solar Radiation

    Science.gov (United States)

    Gschwind, Benoit; Menard, Lionel; Ranchin, Thierry; Wald, Lucien; Stackhouse, Paul W., Jr.

    2007-01-01

    Metadata are necessary to discover, describe and exchange any type of information, resource and service at a large scale. A significant amount of effort has been made in the field of geography and environment to establish standards. Efforts still remain to address more specific domains such as renewable energies. This communication focuses on solar energy and more specifically on aspects in solar radiation that relate to geography and meteorology. A thesaurus in solar radiation is proposed for the key elements in solar radiation, namely time, space and radiation types. The importance of time-series in solar radiation is outlined and attributes of the key elements are discussed. An XML schema for encoding metadata is proposed. The exploitation of such a schema in web services is discussed. This proposal is a first attempt at establishing a thesaurus for describing data and applications in solar radiation.

  5. A Discrete-Events Simulation Approach for Evaluation of Service-Based Applications

    OpenAIRE

    Driss , Maha; Jamoussi , Yassine; Jézéquel , Jean-Marc; Ben Ghézala , Henda Hajjami

    2008-01-01

    International audience; One of the promises of the service-oriented architecture (SOA) is to build complex added-value services in order to enhance and extend existing ones. Service-based applications (SBAs) are asked not only to perform required functionalities, but also to deliver an expected level of Quality of Service (QoS). Dealing with QoS management of such distributed applications, which are executed in dynamic environments, raises the need to consider context characteristics. This paper prop...

  6. Discrete event simulation of the ATLAS second level trigger

    International Nuclear Information System (INIS)

    Vermeulen, J.C.; Dankers, R.J.; Hunt, S.; Harris, F.; Hortnagl, C.; Erasov, A.; Bogaerts, A.

    1998-01-01

    Discrete event simulation is applied for determining the computing and networking resources needed for the ATLAS second level trigger. This paper discusses the techniques used and some of the results obtained so far for well defined laboratory configurations and for the full system

  7. On the communication of scientific data: The Full-Metadata Format

    DEFF Research Database (Denmark)

    Riede, Moritz; Schueppel, Rico; Sylvester-Hvid, Kristian O.

    2010-01-01

    In this paper, we introduce a scientific format for text-based data files, which facilitates storing and communicating tabular data sets. The so-called Full-Metadata Format builds on the widely used INI-standard and is based on four principles: readable self-documentation, flexible structure, fail...
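
    Because the format builds on the INI standard, a stock INI parser can already read such a file; the section and key names below are made up for illustration and are not the normative Full-Metadata Format vocabulary.

    import configparser
    import io

    sample = """
    [*reference]
    creator = A. Researcher
    created = 2010-03-01

    [measurement]
    instrument = spectrometer-1
    temperature = 295 K
    """

    fmf = configparser.ConfigParser()
    fmf.read_file(io.StringIO(sample))
    for section in fmf.sections():
        for key, value in fmf[section].items():
            print(f"{section}.{key} = {value}")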

  8. Linked Metadata - lightweight semantics for data integration (Invited)

    Science.gov (United States)

    Hendler, J. A.

    2013-12-01

    fly integration may prefer to do more traditional data queries and then convert and link the 'views' returned at retrieval time, providing another means of using the linked data infrastructure without having to convert whole datasets to triples to provide linking. Web companies have been taking advantage of 'lightweight' semantic metadata for search quality and optimization (cf. schema.org), linking networks within and without web sites (cf. Facebook's Open Graph Protocol), and in doing various kinds of advertisement and user modeling across datasets. Scientific metadata, on the other hand, has traditionally been geared at being largescale and highly descriptive, and scientific ontologies have been aimed at high expressivity, essentially providing complex reasoning services rather than the less expressive vocabularies needed for data discovery and simple mappings that can allow humans (or more complex systems) when full scale integration is needed. Although this work is just the beginning for providing integration, as the community creates more and more datasets, discovery of these data resources on the Web becomes a crucial starting place. Simple descriptors, that can be combined with textual fields and/or common community vocabularies, can be a great starting place on bringing scientific data into the Web of Data that is growing in other communities. References: [1] Pouchard, Line C., et al. "A Linked Science investigation: enhancing climate change data discovery with semantic technologies." Earth science informatics 6.3 (2013): 175-185.
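
    As a small, concrete illustration of lightweight linked metadata, the sketch below records a handful of descriptive statements about one dataset as RDF triples using the rdflib package; the dataset name and the example.org namespace are hypothetical, and the Dublin Core terms are used loosely.

    from rdflib import Graph, Literal, Namespace

    DCT = Namespace("http://purl.org/dc/terms/")
    EX = Namespace("http://example.org/data/")

    g = Graph()
    dataset = EX["sea_surface_temp_2013"]  # hypothetical dataset identifier
    g.add((dataset, DCT.title, Literal("Sea surface temperature, 2013")))
    g.add((dataset, DCT.spatial, Literal("Pacific Ocean")))
    g.add((dataset, DCT.relation, EX["buoy_network_metadata"]))

    print(g.serialize(format="turtle"))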

  9. Evaluation of logistic service level of an automotive parts distribution firm

    Directory of Open Access Journals (Sweden)

    Fábio Germano Cardoso

    2014-11-01

    Full Text Available Stock management theories have a mathematical and statistical bent, usually failing to expose the business side of decisions regarding inventory costs and the related service levels. The article tests the theory of defining the service level by building a mathematical profit model, using some available historical data for this purpose. With this theory it is possible to find the optimal service level by differentiating the profit curve and setting the marginal profit equal to zero. The research is characterized as a case study of an auto parts distribution company. The calculated optimal service level was compared to the data available from the company. The results of this analysis revealed that the theory of calculating the optimal service level through profit curve derivation is not, by itself, sufficient to explain how the company defines which service level is going to be used.
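
    In symbols, the optimality condition the article applies can be written as follows (a minimal sketch assuming profit is revenue minus inventory-related cost, both functions of the service level s):

    % Profit, and the first-order condition for the optimal service level s*
    \Pi(s) = R(s) - C(s), \qquad
    \frac{d\Pi}{ds}\Big|_{s=s^{*}} = 0
    \;\Longleftrightarrow\; \frac{dR}{ds}\Big|_{s=s^{*}} = \frac{dC}{ds}\Big|_{s=s^{*}}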

  10. Interoperability Across the Stewardship Spectrum in the DataONE Repository Federation

    Science.gov (United States)

    Jones, M. B.; Vieglais, D.; Wilson, B. E.

    2016-12-01

    Thousands of earth and environmental science repositories serve many researchers and communities, each with their own community and legal mandates, sustainability models, and historical infrastructure. These repositories span the stewardship spectrum from highly curated collections that employ large numbers of staff members to review and improve data, to small, minimal budget repositories that accept data caveat emptor and where all responsibility for quality lies with the submitter. Each repository fills a niche, providing services that meet the stewardship tradeoffs of one or more communities. We have reviewed these stewardship tradeoffs for several DataONE member repositories ranging from minimally (KNB) to highly curated (Arctic Data Center), as well as general purpose (Dryad) to highly discipline or project specific (NEON). The rationale behind different levels of stewardship reflect resolution of these tradeoffs. Some repositories aim to encourage extensive uptake by keeping processes simple and minimizing the amount of information collected, but this limits the long-term utility of the data and the search, discovery, and integration systems that are possible. Other repositories require extensive metadata input, review, and assessment, allowing for excellent preservation, discovery, and integration but at the cost of significant time for submitters and expense for curatorial staff. DataONE recognizes these different levels of curation, and attempts to embrace them to create a federation that is useful across the stewardship spectrum. DataONE provides a tiered model for repositories with growing utility of DataONE services at higher tiers of curation. The lowest tier supports read-only access to data and requires little more than title and contact metadata. Repositories can gradually phase in support for higher levels of metadata and services as needed. These tiered capabilities are possible through flexible support for multiple metadata standards and services

  11. The importance of metadata to assess information content in digital reconstructions of neuronal morphology.

    Science.gov (United States)

    Parekh, Ruchi; Armañanzas, Rubén; Ascoli, Giorgio A

    2015-04-01

    Digital reconstructions of axonal and dendritic arbors provide a powerful representation of neuronal morphology in formats amenable to quantitative analysis, computational modeling, and data mining. Reconstructed files, however, require adequate metadata to identify the appropriate animal species, developmental stage, brain region, and neuron type. Moreover, experimental details about tissue processing, neurite visualization and microscopic imaging are essential to assess the information content of digital morphologies. Typical morphological reconstructions only partially capture the underlying biological reality. Tracings are often limited to certain domains (e.g., dendrites and not axons), may be incomplete due to tissue sectioning, imperfect staining, and limited imaging resolution, or can disregard aspects irrelevant to their specific scientific focus (such as branch thickness or depth). Gauging these factors is critical in subsequent data reuse and comparison. NeuroMorpho.Org is a central repository of reconstructions from many laboratories and experimental conditions. Here, we introduce substantial additions to the existing metadata annotation aimed to describe the completeness of the reconstructed neurons in NeuroMorpho.Org. These expanded metadata form a suitable basis for effective description of neuromorphological data.

  12. Introducing the PRIDE Archive RESTful web services.

    Science.gov (United States)

    Reisinger, Florian; del-Toro, Noemi; Ternent, Tobias; Hermjakob, Henning; Vizcaíno, Juan Antonio

    2015-07-01

    The PRIDE (PRoteomics IDEntifications) database is one of the world-leading public repositories of mass spectrometry (MS)-based proteomics data and it is a founding member of the ProteomeXchange Consortium of proteomics resources. In the original PRIDE database system, users could access data programmatically by accessing the web services provided by the PRIDE BioMart interface. New REST (REpresentational State Transfer) web services have been developed to serve the most popular functionality provided by BioMart (now discontinued due to data scalability issues) and address the data access requirements of the newly developed PRIDE Archive. Using the API (Application Programming Interface) it is now possible to programmatically query for and retrieve peptide and protein identifications, project and assay metadata and the originally submitted files. Searching and filtering is also possible by metadata information, such as sample details (e.g. species and tissues), instrumentation (mass spectrometer), keywords and other provided annotations. The PRIDE Archive web services were first made available in April 2014. The API has already been adopted by a few applications and standalone tools such as PeptideShaker, PRIDE Inspector, the Unipept web application and the Python-based BioServices package. This application is free and open to all users with no login requirement and can be accessed at http://www.ebi.ac.uk/pride/ws/archive/. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
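
    As an illustration only, the following Python sketch shows how such a RESTful query might look from a client; the base URL comes from the abstract, but the project/list endpoint path, its parameters and the shape of the returned JSON are assumptions made for this example.

```python
# Minimal sketch of querying the PRIDE Archive RESTful API with Python.
# Only the base URL comes from the abstract; the "project/list" endpoint,
# its query parameters and the JSON structure are assumed for illustration.
import requests

BASE = "http://www.ebi.ac.uk/pride/ws/archive"

def search_projects(keyword, page_size=10):
    """Search public PRIDE projects by keyword and return parsed JSON."""
    resp = requests.get(
        f"{BASE}/project/list",              # assumed endpoint path
        params={"query": keyword, "show": page_size},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    for project in search_projects("human liver").get("list", []):
        print(project.get("accession"), "-", project.get("title"))
```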

  13. Negotiation and Monitoring of Service Level Agreements

    Science.gov (United States)

    Quillinan, Thomas B.; Clark, Kassidy P.; Warnier, Martijn; Brazier, Frances M. T.; Rana, Omer

    Service level agreements (SLAs) provide a means to define specific Quality of Service (QoS) guarantees between providers and consumers of services. Negotiation and definition of these QoS characteristics are an area of significant research. However, defining the actions that take place when an agreement is violated is a topic of more recent focus. This paper discusses recent advances in this field and proposes some additional features that can help both consumers and producers during the enactment of services. These features include the ability to (re)negotiate penalties in an agreement; the paper specifically focuses on the renegotiation of penalties during enactment to reflect ongoing violations.

  14. CBM first-level event selector input interface

    Energy Technology Data Exchange (ETDEWEB)

    Hutter, Dirk [Frankfurt Institute for Advanced Studies, Goethe University, Frankfurt (Germany)]; Collaboration: CBM-Collaboration

    2016-07-01

    The CBM First-level Event Selector (FLES) is the central event selection system of the upcoming CBM experiment at FAIR. Designed as a high-performance computing cluster, its task is an online analysis of the physics data at a total data rate exceeding 1 TByte/s. To allow efficient event selection, the FLES performs timeslice building, which combines the data from all given input links into self-contained, overlapping processing intervals and distributes them to compute nodes. Partitioning the input data streams into specialized containers allows this task to be performed very efficiently. The FLES Input Interface defines the linkage between the FEE and the FLES data transport framework. Utilizing a custom FPGA board, it receives data via optical links, prepares them for subsequent timeslice building, and transfers the data via DMA to the PC's memory. An accompanying HDL module implements the front-end logic interface and FLES link protocol in the front-end FPGAs. Prototypes of all Input Interface components have been implemented and integrated into the FLES framework. In contrast to earlier prototypes, which included components to work without an FPGA layer between FLES and FEE, the structure matches the foreseen final setup. This allows the implementation and evaluation of the final CBM read-out chain. An overview of the FLES Input Interface as well as studies on system integration and system start-up are presented.

  15. Information resource description creating and managing metadata

    CERN Document Server

    Hider, Philip

    2012-01-01

    An overview of the field of information organization that examines resource description as both a product and process of the contemporary digital environment. This timely book employs the unifying mechanism of the semantic web and the resource description framework to integrate the various traditions and practices of information and knowledge organization. Uniquely, it covers both the domain-specific traditions and practices and the practices of the 'metadata movement' through a single lens: that of resource description in the broadest, semantic web sense. This approach more readily accommodate

  16. Usulan Sistem Pemesanan Untuk Meningkatkan Service Level Pada Sistem Heijunka

    OpenAIRE

    Hartini, Sri; Fanani, Zainal; Rachimi A., Luki

    2008-01-01

    PT X is a distributor serving spare-part demand for the Central Java and Yogyakarta regions. One measure of customer satisfaction is the service level. Based on company data for January through April 2007, the overall service level across all items was still below the service level target set by the company. This study proposes improvements to the forecasting system and the ordering model for critical items, namely items 90915-TE001 and 85214-0A...

  17. Ready to put metadata on the post-2015 development agenda? Linking data publications to responsible innovation and science diplomacy.

    Science.gov (United States)

    Özdemir, Vural; Kolker, Eugene; Hotez, Peter J; Mohin, Sophie; Prainsack, Barbara; Wynne, Brian; Vayena, Effy; Coşkun, Yavuz; Dereli, Türkay; Huzair, Farah; Borda-Rodriguez, Alexander; Bragazzi, Nicola Luigi; Faris, Jack; Ramesar, Raj; Wonkam, Ambroise; Dandara, Collet; Nair, Bipin; Llerena, Adrián; Kılıç, Koray; Jain, Rekha; Reddy, Panga Jaipal; Gollapalli, Kishore; Srivastava, Sanjeeva; Kickbusch, Ilona

    2014-01-01

    Metadata refer to descriptions about data or as some put it, "data about data." Metadata capture what happens on the backstage of science, on the trajectory from study conception, design, funding, implementation, and analysis to reporting. Definitions of metadata vary, but they can include the context information surrounding the practice of science, or data generated as one uses a technology, including transactional information about the user. As the pursuit of knowledge broadens in the 21st century from traditional "science of whats" (data) to include "science of hows" (metadata), we analyze the ways in which metadata serve as a catalyst for responsible and open innovation, and by extension, science diplomacy. In 2015, the United Nations Millennium Development Goals (MDGs) will formally come to an end. Therefore, we propose that metadata, as an ingredient of responsible innovation, can help achieve the Sustainable Development Goals (SDGs) on the post-2015 agenda. Such responsible innovation, as a collective learning process, has become a key component, for example, of the European Union's 80 billion Euro Horizon 2020 R&D Program from 2014-2020. Looking ahead, OMICS: A Journal of Integrative Biology is launching an initiative for a multi-omics metadata checklist that is flexible yet comprehensive, and will enable more complete utilization of single and multi-omics data sets through data harmonization and greater visibility and accessibility. The generation of metadata that shed light on how omics research is carried out, by whom and under what circumstances, will create an "intervention space" for integration of science with its socio-technical context. This will go a long way to addressing responsible innovation for a fairer and more transparent society. If we believe in science, then such reflexive qualities and commitments attained by availability of omics metadata are preconditions for a robust and socially attuned science, which can then remain broadly

  18. CASE- METHODOLOGY FOR TRAINING THE SCIENTIFIC PEDAGOGICAL STAFF FOR SERVICES OF A DIGITAL LIBRARY

    Directory of Open Access Journals (Sweden)

    Tatyana Novytska

    2015-10-01

    This article describes a technique that uses the case method in preparing teaching staff to use digital library (DL) services, illustrated by a training exercise. A training program is proposed for DL users, editors and administrators. A scale was developed for assessing the knowledge and skills of teaching staff based on the results of the case problems, and test questions were created for the training sessions. The case method is examined as a type of learning method, and the basic characteristics of the situational teaching method are identified: analytical and cognitive. Analytical activities may include problem analysis, systems analysis, causal analysis, praxeological analysis, prognostic analysis and/or target-oriented analysis. The principle of forming bibliographic descriptions of DL resources on the basis of metadata is investigated, drawing on the concept of 'DL metadata quality'. It is shown that mistakes made when creating metadata can block access to DL resources.

  19. The quasi-biennial oscillation of 1.7 years in ground level enhancement events

    Science.gov (United States)

    Velasco Herrera, V. M.; Pérez-Peraza, J.; Soon, W.; Márquez-Adame, J. C.

    2018-04-01

    The so-called Ground Level Enhancement events are sporadic relativistic solar particles measured at ground level by a network of cosmic ray detectors worldwide. These sporadic events are typically assumed to occur by random chance. However, by studying the last 56 ground level enhancement events reported from 1966 through 2014, we find that these events occur preferentially in the positive phase of the quasi-biennial oscillation of 1.7-year periodicity. These discrete ground level enhancement events show that there is another type of solar emission (i.e., wavelike packets) that occurs only in a specific phase of a very particular oscillation. We interpret this empirical result as support that ground level enhancement events are not the result of purely stochastic processes. We used the Morlet wavelet to analyze the phase of each of the periodicities found by the wavelet analyses and local variations of power spectral density in these sporadic events. We found quasi-regular periodicities of 10.4, 6.55, 4.12, 2.9, 1.73, 0.86, 0.61, 0.4 and 0.24 years in ground level enhancements. Although some of these quasi-biennial oscillation periodicities (i.e., oscillations operating between 0.6 and 4 years) may be interpreted simply as harmonics and overtones of the fundamental solar cycle arising from the underlying sunspot magnetism, the sources of these periodicities are still unclear, and there is no clear mechanism for the variability of the quasi-biennial oscillation periodicities themselves. The quasi-biennial oscillation periodicities are broadly considered to be a variation of solar activity, associated with the solar dynamo process. The intensity of these periodicities is greater around the years of maximum solar activity because the quasi-biennial oscillation periodicities are modulated by the solar cycle, the Sun being more energetically enhanced during activity maxima. To identify the relationships among ground level enhancement, solar, and cosmic
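
    As an illustration of the kind of analysis named above (not the authors' code), the sketch below computes a continuous Morlet wavelet transform of a synthetic monthly series using the PyWavelets library; the data, sampling choices and scale range are all assumptions made for this example.

```python
# Illustrative sketch of a continuous Morlet wavelet transform of a monthly
# occurrence series, assuming the PyWavelets library; the data below are
# synthetic and only stand in for the 1966-2014 GLE record.
import numpy as np
import pywt

dt = 1.0 / 12.0                               # sampling step: one month, in years
t = np.arange(0, 49, dt)                      # roughly the 1966-2014 span
# synthetic series with an embedded 1.7-year oscillation plus noise
signal = np.sin(2 * np.pi * t / 1.7) + 0.5 * np.random.randn(t.size)

scales = np.arange(1, 256)
coeffs, freqs = pywt.cwt(signal, scales, "morl", sampling_period=dt)
periods = 1.0 / freqs                         # periods in years

power = np.abs(coeffs) ** 2
dominant = periods[power.mean(axis=1).argmax()]
print(f"Dominant period in the synthetic series: {dominant:.2f} years")
```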

  20. AMCO Off-Site Air Monitoring Map Service, Oakland CA, 2017, US EPA Region 9

    Data.gov (United States)

    U.S. Environmental Protection Agency — This map service contains a single layer: Off-Site Air Monitors. The layer draws at all scales. Full FGDC metadata for the layer may be found by clicking the layer...

  1. Standardizing metadata and taxonomic identification in metabarcoding studies.

    Science.gov (United States)

    Tedersoo, Leho; Ramirez, Kelly S; Nilsson, R Henrik; Kaljuvee, Aivi; Kõljalg, Urmas; Abarenkov, Kessy

    2015-01-01

    High-throughput sequencing-based metabarcoding studies produce vast amounts of ecological data, but a lack of consensus on standardization of metadata and how to refer to the species recovered severely hampers reanalysis and comparisons among studies. Here we propose an automated workflow covering data submission, compression, storage and public access to allow easy data retrieval and inter-study communication. Such standardized and readily accessible datasets facilitate data management, taxonomic comparisons and compilation of global metastudies.

  2. Earth Science Data Fusion with Event Building Approach

    Science.gov (United States)

    Lukashin, C.; Bartle, Ar.; Callaway, E.; Gyurjyan, V.; Mancilla, S.; Oyarzun, R.; Vakhnin, A.

    2015-01-01

    The objectives of the NASA Information And Data System (NAIADS) project are to develop a prototype of a conceptually new middleware framework to modernize and significantly improve the efficiency of Earth Science data fusion, big data processing and analytics. The key components of the NAIADS include: a Service Oriented Architecture (SOA) multi-lingual framework, a multi-sensor coincident data Predictor, fast in-memory data Staging, a multi-sensor data-Event Builder, complete data-Event streaming (a workflow with minimized IO), and on-line data processing control and analytics services. The NAIADS project leverages the CLARA framework, developed at Jefferson Lab, integrated with the ZeroMQ messaging library. The science services are prototyped and incorporated into the system. Merging of SCIAMACHY Level-1 observations, MODIS/Terra Level-2 (Clouds and Aerosols) data products, and ECMWF re-analysis will be used for NAIADS demonstration and performance tests in compute Cloud and Cluster environments.

  3. Metadata Access Tool for Climate and Health

    Science.gov (United States)

    Trtanji, J.

    2012-12-01

    The need for health information resources to support climate change adaptation and mitigation decisions is growing, both in the United States and around the world, as the manifestations of climate change become more evident and widespread. In many instances, these information resources are not specific to a changing climate, but have either been developed or are highly relevant for addressing health issues related to existing climate variability and weather extremes. To help address the need for more integrated data, the Interagency Cross-Cutting Group on Climate Change and Human Health, a working group of the U.S. Global Change Research Program, has developed the Metadata Access Tool for Climate and Health (MATCH). MATCH is a gateway to relevant information that can be used to solve problems at the nexus of climate science and public health by facilitating research, enabling scientific collaborations in a One Health approach, and promoting data stewardship that will enhance the quality and application of climate and health research. MATCH is a searchable clearinghouse of publicly available Federal metadata including monitoring and surveillance data sets, early warning systems, and tools for characterizing the health impacts of global climate change. Examples of relevant databases include the Centers for Disease Control and Prevention's Environmental Public Health Tracking System and NOAA's National Climate Data Center's national and state temperature and precipitation data. This presentation will introduce the audience to this new web-based geoportal and demonstrate its features and potential applications.

  4. Definition of an ISO 19115 metadata profile for SeaDataNet II Cruise Summary Reports and its XML encoding

    Science.gov (United States)

    Boldrini, Enrico; Schaap, Dick M. A.; Nativi, Stefano

    2013-04-01

    SeaDataNet implements a distributed pan-European infrastructure for Ocean and Marine Data Management whose nodes are maintained by 40 national oceanographic and marine data centers from 35 countries riparian to all European seas. A unique portal makes possible distributed discovery, visualization and access of the available sea data across all the member nodes. Geographic metadata play an important role in such an infrastructure, enabling an efficient documentation and discovery of the resources of interest. In particular: - Common Data Index (CDI) metadata describe the sea datasets, including identification information (e.g. product title, interested area), evaluation information (e.g. data resolution, constraints) and distribution information (e.g. download endpoint, download protocol); - Cruise Summary Reports (CSR) metadata describe cruises and field experiments at sea, including identification information (e.g. cruise title, name of the ship), acquisition information (e.g. utilized instruments, number of samples taken) In the context of the second phase of SeaDataNet (SeaDataNet 2 EU FP7 project, grant agreement 283607, started on October 1st, 2011 for a duration of 4 years) a major target is the setting, adoption and promotion of common international standards, to the benefit of outreach and interoperability with the international initiatives and communities (e.g. OGC, INSPIRE, GEOSS, …). A standardization effort conducted by CNR with the support of MARIS, IFREMER, STFC, BODC and ENEA has led to the creation of a ISO 19115 metadata profile of CDI and its XML encoding based on ISO 19139. The CDI profile is now in its stable version and it's being implemented and adopted by the SeaDataNet community tools and software. The effort has then continued to produce an ISO based metadata model and its XML encoding also for CSR. The metadata elements included in the CSR profile belong to different models: - ISO 19115: E.g. cruise identification information, including

  5. AMCO On-Site Air Monitoring Map Service, Oakland CA, Live 2017, US EPA Region 9

    Data.gov (United States)

    U.S. Environmental Protection Agency — This map service contains the following layers: All On-Site Air Monitors, TCE, PCE, and Vinyl Chloride. The layers draw at all scales. Full FGDC metadata for the...

  6. CNE (Embalse nuclear power plant): probabilistic safety study. Loss of service water. Probabilistic evaluation and analysis through events sequence

    International Nuclear Information System (INIS)

    Couto, A.J.; Perez, S.S.

    1987-01-01

    This work is part of a study on the service water systems of the Embalse nuclear power plant from a safety point of view. The faults of the high- and low-pressure service water systems that can lead to situations threatening plant safety were analyzed in a previous report. The event 'total loss of low pressure service water' causes the largest number of such conditions. Such an event is an operational incident that can lead to an accident situation due to faults in the required process systems or by omission of a procedure. The annual frequency of the event 'total loss of low pressure service water' is calculated. The main contribution comes from pump failure. The evaluation of the accident sequences shows that the most direct path to the release of fission products is the loss of the steam generators as a heat sink. The contributions of small and large LOCA and loss of electric supply are analyzed. The sequence that leads to tritium release through boiling of the moderator is also evaluated. (Author)

  7. HDL cholesterol, very low levels of LDL cholesterol, and cardiovascular events

    NARCIS (Netherlands)

    Barter, Philip; Gotto, Antonio M.; LaRosa, John C.; Maroni, Jaman; Szarek, Michael; Grundy, Scott M.; Kastelein, John J. P.; Bittner, Vera; Fruchart, Jean-Charles

    2007-01-01

    BACKGROUND: High-density lipoprotein (HDL) cholesterol levels are a strong inverse predictor of cardiovascular events. However, it is not clear whether this association is maintained at very low levels of low-density lipoprotein (LDL) cholesterol. METHODS: A post hoc analysis of the recently

  8. gCube Grid services

    CERN Document Server

    Andrade, Pedro

    2008-01-01

    gCube is a service-based framework for eScience applications requiring collaboratory, on-demand, and intensive information processing. It provides these communities with Virtual Research Environments (VREs) to support their activities. gCube is built on top of standard technologies for computational Grids, namely the gLite middleware. The software was produced by the DILIGENT project and will continue to be supported and further developed by the D4Science project. gCube reflects within its name a three-sided interpretation of the Grid vision of resource sharing: sharing of computational resources, sharing of structured data, and sharing of application services. As such, gCube embodies the defining characteristics of computational Grids, data Grids, and virtual data Grids. Precisely, it builds on gLite middleware for managing distributed computations and unstructured data, includes dedicated services for managing data and metadata, provides services for distributed information retrieval, allows the orchestration...

  9. Context-Adaptive Learning Designs by Using Semantic Web Services

    Science.gov (United States)

    Dietze, Stefan; Gugliotta, Alessio; Domingue, John

    2007-01-01

    IMS Learning Design (IMS-LD) is a promising technology aimed at supporting learning processes. IMS-LD packages contain the learning process metadata as well as the learning resources. However, the allocation of resources--whether data or services--within the learning design is done manually at design-time on the basis of the subjective appraisals…

  10. The CBM first-level event selector input interface

    Energy Technology Data Exchange (ETDEWEB)

    Hutter, Dirk; Lindenstruth, Volker [Frankfurt Institute for Advanced Studies, Goethe University, Frankfurt (Germany)]; Collaboration: CBM-Collaboration

    2015-07-01

    The CBM First-level Event Selector (FLES) is the central event selection system of the upcoming CBM experiment at FAIR. Designed as a high-performance computing cluster, its task is an online analysis of the physics data at a total data rate exceeding 1 TByte/s. To allow efficient event selection, the FLES has to combine the data from all given input links into self-contained, overlapping processing intervals and distribute them to compute nodes. This task can be performed efficiently by partitioning the detector data streams into specialized containers. The FLES Interface Board (FLIB), implemented as a custom FPGA board, receives these containers via optical links, prepares them for subsequent interval building, and transfers the data via DMA to the PC's memory. A prototype of the FLIB has been implemented. The inclusion of features foreseen for other parts of the CBM read-out chain allows the evaluation of the interval building concept. Performance studies demonstrated high read-out bandwidth with low overhead. In addition, the FLIB has been used successfully as a readout device in test-beams and lab setups. An overview of the FLES Interface Board is presented, together with results from the latest studies.

  11. An institutional repository initiative and issues concerning metadata

    OpenAIRE

    BAYRAM, Özlem; ATILGAN, Doğan; ARSLANTEKİN, Sacit

    2006-01-01

    Ankara University has become home to one of the first open access initiatives in Turkey. The Ankara University Open Access Program (AUO) was formed as part of the Open Access project (http://acikarsiv.ankara.edu.tr) and is supported by the University as an example of an open access institutional repository. As a further step, the system will require metadata tools to enable international recognition. The Budapest Open Access Initiative suggested two strategies for open access t...

  12. Individual-Level, Partnership-Level, and Sexual Event-Level Predictors of Condom Use During Receptive Anal Intercourse Among HIV-Negative Men Who Have Sex with Men in Los Angeles.

    Science.gov (United States)

    Pines, Heather A; Gorbach, Pamina M; Weiss, Robert E; Reback, Cathy J; Landovitz, Raphael J; Mutchler, Matt G; Mitsuyasu, Ronald T

    2016-06-01

    We examined individual-level, partnership-level, and sexual event-level factors associated with condom use during receptive anal intercourse (RAI) among 163 low-income, racially/ethnically diverse, HIV-negative men who have sex with men (MSM) in Los Angeles (2007-2010). At baseline, 3-month, and 12-month visits, computer-assisted self-interviews collected information on ≤3 recent male partners and the last sexual event with those partners. Factors associated with condom use during RAI at the last sexual event were identified using logistic generalized linear mixed models. Condom use during RAI was negatively associated with reporting ≥ high school education (adjusted odds ratio [AOR] = 0.32, 95 % confidence interval [CI] 0.11-0.96) and methamphetamine use, specifically during RAI events with non-main partners (AOR = 0.20, 95 % CI 0.07-0.53) and those that included lubricant use (AOR = 0.20, 95 % CI 0.08-0.53). Condom use during RAI varies according to individual-level, partnership-level, and sexual event-level factors that should be considered in the development of risk reduction strategies for this population.

  13. An Examination of the Adoption of Preservation Metadata in Cultural Heritage Institutions: An Exploratory Study Using Diffusion of Innovations Theory

    Science.gov (United States)

    Alemneh, Daniel Gelaw

    2009-01-01

    Digital preservation is a significant challenge for cultural heritage institutions and other repositories of digital information resources. Recognizing the critical role of metadata in any successful digital preservation strategy, the Preservation Metadata Implementation Strategies (PREMIS) has been extremely influential on providing a "core" set…

  14. Metadata Harvesting in Regional Digital Libraries in the PIONIER Network

    Science.gov (United States)

    Mazurek, Cezary; Stroinski, Maciej; Werla, Marcin; Weglarz, Jan

    2006-01-01

    Purpose: The paper aims to present the concept of the functionality of metadata harvesting for regional digital libraries, based on the OAI-PMH protocol. This functionality is a part of regional digital libraries platform created in Poland. The platform was required to reach one of main objectives of the Polish PIONIER Programme--to enrich the…

  15. Metadata In, Library Out. A Simple, Robust Digital Library System

    Directory of Open Access Journals (Sweden)

    Tonio Loewald

    2010-06-01

    Tired of being held hostage to expensive systems that did not meet our needs, the University of Alabama Libraries developed an XML schema-agnostic, light-weight digital library delivery system based on the principles of "Keep It Simple, Stupid!" Metadata and derivatives reside in openly accessible web directories, which support the development of web agents and new usability software, as well as modification and complete retrieval at any time. The file name structure is echoed in the file system structure, enabling the delivery software to make inferences about relationships, sequencing, and complex object structure without having to encapsulate files in complex metadata schemas. The web delivery system, Acumen, is built of PHP, JSON, JavaScript and HTML5, using MySQL to support fielded searching. Recognizing that spreadsheets are more user-friendly than XML, an accompanying widget, Archivists Utility, transforms spreadsheets into MODS based on rules selected by the user. Acumen, Archivists Utility, and all supporting software scripts will be made available as open source.
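
    As a much-simplified, hypothetical analogue of the spreadsheet-to-MODS transformation described above (not the Archivists Utility itself), the sketch below maps a few assumed CSV columns onto a minimal MODS record.

```python
# A much-simplified analogue of turning spreadsheet rows into MODS records,
# assuming a CSV with "identifier", "title", "creator" and "date" columns;
# illustrative only, not the Archivists Utility described above.
import csv
import xml.etree.ElementTree as ET

MODS_NS = "http://www.loc.gov/mods/v3"
ET.register_namespace("mods", MODS_NS)

def row_to_mods(row):
    """Build a minimal MODS element from one spreadsheet row."""
    mods = ET.Element(f"{{{MODS_NS}}}mods")
    title_info = ET.SubElement(mods, f"{{{MODS_NS}}}titleInfo")
    ET.SubElement(title_info, f"{{{MODS_NS}}}title").text = row["title"]
    name = ET.SubElement(mods, f"{{{MODS_NS}}}name")
    ET.SubElement(name, f"{{{MODS_NS}}}namePart").text = row["creator"]
    origin = ET.SubElement(mods, f"{{{MODS_NS}}}originInfo")
    ET.SubElement(origin, f"{{{MODS_NS}}}dateIssued").text = row["date"]
    return mods

with open("photographs.csv", newline="", encoding="utf-8") as handle:  # assumed file
    for row in csv.DictReader(handle):
        tree = ET.ElementTree(row_to_mods(row))
        tree.write(f"{row['identifier']}.mods.xml",
                   encoding="utf-8", xml_declaration=True)
```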

  16. Nitrated nucleosome levels and neuropsychiatric events in systemic lupus erythematosus

    DEFF Research Database (Denmark)

    Ferreira, Isabel; Croca, Sara; Raimondo, Maria Gabriella

    2017-01-01

    BACKGROUND: In patients with systemic lupus erythematosus (SLE) there is no serological test that will reliably distinguish neuropsychiatric (NP) events due to active SLE from those due to other causes. Previously we showed that serum levels of nitrated nucleosomes (NN) were elevated in a small...... number of patients with NPSLE. Here we measured serum NN in samples from a larger population of patients with SLE and NP events to see whether elevated serum NN could be a marker for NPSLE. METHODS: We obtained serum samples from patients in the Systemic Lupus International Collaborative Clinics (SLICC...

  17. Detection of Vandalism in Wikipedia using Metadata Features – Implementation in Simple English and Albanian sections

    Directory of Open Access Journals (Sweden)

    Arsim Susuri

    2017-03-01

    In this paper, we evaluate a set of classifiers for detecting vandalism based on metadata features. Our work focuses on two low-resource data sets (Simple English and Albanian) from Wikipedia. The aim of this research is to prove that this form of vandalism detection, applied to one data set (language), can be extended to another data set (language). Article view data sets in Wikipedia have rarely been used for the purpose of detecting vandalism. We show the benefits of combining the article views data set with features from the article revisions data set with the aim of improving the detection of vandalism. The key advantage of metadata features is that they are language independent and simple to extract, because they require minimal processing. This paper shows that the application of vandalism models across low-resource languages is possible, and that vandalism can be detected through the view patterns of articles.
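
    The sketch below illustrates the general idea of training a classifier on language-independent metadata features with scikit-learn; the feature columns and toy data are invented stand-ins, not the paper's actual feature set or corpus.

```python
# Sketch of training a vandalism classifier on language-independent metadata
# features with scikit-learn; the feature columns (bytes added, anonymity,
# time since previous revision, daily views) and labels are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# columns: bytes added, is_anonymous, seconds since previous revision, daily views
X = np.array([
    [ 1500, 1,    40,   120],
    [   25, 0, 86400,    15],
    [-3000, 1,    10,  5000],
    [  200, 0,  3600,   300],
    [ -800, 1,    55,  2500],
    [   60, 0,  7200,    80],
])
y = np.array([1, 0, 1, 0, 1, 0])   # 1 = vandalism, 0 = legitimate edit

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=3).mean())

clf.fit(X, y)
print("predicted label for a new revision:", clf.predict([[-2500, 1, 20, 4000]])[0])
```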

  18. Implementation of a metadata architecture and knowledge collection to support semantic interoperability in an enterprise data warehouse.

    Science.gov (United States)

    Dhaval, Rakesh; Borlawsky, Tara; Ostrander, Michael; Santangelo, Jennifer; Kamal, Jyoti; Payne, Philip R O

    2008-11-06

    In order to enhance interoperability between enterprise systems, and improve data validity and reliability throughout The Ohio State University Medical Center (OSUMC), we have initiated the development of an ontology-anchored metadata architecture and knowledge collection for our enterprise data warehouse. The metadata and corresponding semantic relationships stored in the OSUMC knowledge collection are intended to promote consistency and interoperability across the heterogeneous clinical, research, business and education information managed within the data warehouse.

  19. Mining dark information resources to develop new informatics capabilities to support science

    Science.gov (United States)

    Ramachandran, Rahul; Maskey, Manil; Bugbee, Kaylin

    2016-04-01

    Dark information resources are digital resources that organizations collect, process, and store for regular business or operational activities but fail to realize their potential for other purposes. The challenge for any organization is to recognize, identify and effectively exploit these dark information stores. Metadata catalogs at different data centers store dark information resources consisting of structured information, free form descriptions of data and browse images. These information resources are never fully exploited beyond a few fields used for search and discovery. For example, the NASA Earth science catalog holds greater than 6000 data collections, 127 million records for individual files and 67 million browse images. We believe that the information contained in the metadata catalogs and the browse images can be utilized beyond their original design intent to provide new data discovery and exploration pathways to support science and education communities. In this paper we present two research applications using information stored in the metadata catalog in a completely novel way. The first application is designing a data curation service. The objective of the data curation service is to augment the existing data search capabilities. Given a specific atmospheric phenomenon, the data curation service returns the user a ranked list of relevant data sets. Different fields in the metadata records including textual descriptions are mined. A specialized relevancy ranking algorithm has been developed that uses a "bag of words" to define phenomena along with an ensemble of known approaches such as the Jaccard Coefficient, Cosine Similarity and Zone ranking to rank the data sets. This approach is also extended to map from the data set level to data file variable level. The second application is focused on providing a service where a user can search and discover browse images containing specific phenomena from the vast catalog. This service will aid researchers
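
    The following toy sketch illustrates a "bag of words" relevancy ranking that combines the Jaccard coefficient and cosine similarity; the phenomenon vocabulary, dataset descriptions and equal weighting are invented for illustration and do not reproduce the specialized ranking algorithm described above.

```python
# Illustrative "bag of words" relevancy ranking over free-text metadata,
# combining Jaccard and cosine similarity; vocabulary, descriptions and the
# equal weighting are assumptions, not the paper's specialized algorithm.
from collections import Counter
from math import sqrt

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def cosine(a, b):
    ca, cb = Counter(a), Counter(b)
    dot = sum(ca[t] * cb[t] for t in ca)
    norm = sqrt(sum(v * v for v in ca.values())) * sqrt(sum(v * v for v in cb.values()))
    return dot / norm if norm else 0.0

phenomenon = "hurricane tropical cyclone wind precipitation".split()
datasets = {
    "GPM precipitation L3": "global precipitation measurement rainfall tropical",
    "MODIS land cover":     "land cover classification vegetation surface",
    "ASCAT ocean winds":    "ocean surface wind speed cyclone scatterometer",
}

ranked = sorted(
    ((0.5 * jaccard(phenomenon, d.split()) + 0.5 * cosine(phenomenon, d.split()), name)
     for name, d in datasets.items()),
    reverse=True,
)
for score, name in ranked:
    print(f"{score:.3f}  {name}")
```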

  20. Transforming and enhancing metadata for enduser discovery: a case study

    Directory of Open Access Journals (Sweden)

    Edward M. Corrado

    2014-05-01

    The Libraries’ workflow and portions of code will be shared; issues and challenges involved will be discussed. While this case study is specific to Binghamton University Libraries, examples of strategies used at other institutions will also be introduced. This paper should be useful to anyone interested in describing large quantities of photographs or other materials with preexisting embedded metadata.

  1. Drivers of flood damage on event level

    DEFF Research Database (Denmark)

    Kreibich, H.; Aerts, J. C. J. H.; Apel, H.

    2016-01-01

    example are the 2002 and 2013 floods in the Elbe and Danube catchments in Germany. The 2002 flood caused the highest economic damage (EUR 11600 million) due to a natural hazard event in Germany. Damage was so high due to extreme flood hazard triggered by extreme precipitation and a high number......-level mitigation measures, 3) more effective early warning and improved coordination of disaster response and 4) a more targeted maintenance of flood defence systems and their deliberate relocation. Thus, despite higher hydrological severity damage due to the 2013 flood was significantly lower than in 2002. In our...

  2. Technology Integration Support Levels for In-Service Teachers

    Science.gov (United States)

    Williams, Mable Evans

    2017-01-01

    In-service teachers across the globe are expected to integrate technology in their respective instructional content area. The purpose of this qualitative study was to explore the perceptions of in-service teachers concerning building-level support for technology integration. Participants in the study were asked to participate in semi-structured…

  3. Trending of low level events and near misses to enhance safety performance in nuclear power plants

    International Nuclear Information System (INIS)

    2005-11-01

    The IAEA Safety Fundamentals publication, Safety of Nuclear Installations, Safety Series No. 110, states the need for operating organizations to establish a programme for the collection and analysis of operating experience in nuclear power plants. Such a programme ensures that operating experience is analysed, events important to safety are reviewed in depth, and lessons learned are disseminated to the staff of the organization and to relevant national and international organizations. As a result of the effort to enhance safety in operating organizations, incidents are progressively decreasing in number and significance. This means that in accordance with international reporting requirements the amount of collected data becomes less sufficient to draw meaningful statistical conclusions. This is where the collection and trend analysis of low level events and near misses can prove to be very useful. These trends can show which of the safety barriers are weak or failing more frequently. Evaluation and trending of low level events and near misses will help to prevent major incidents because latent weaknesses have been identified and corrective actions taken to prevent recurrence. This leads to improved safety and production. Low level events and near misses, which may reach several thousand per reactor operating year, need to be treated by the organizations as learning opportunities. A system for capturing these low level events and near misses truly needs to be an organization-wide system in which all levels of the organization, including contractors, participate. It is desirable that the overall operational experience feedback (OEF) process should integrate the lessons learned and the associated data from significant events with those of lower level events and near misses. To be able to effectively implement a process dealing with low level events and near misses, it is necessary that the organization have a well established OEF process for significant events

  4. Time-to-event methodology improved statistical evaluation in register-based health services research.

    Science.gov (United States)

    Bluhmki, Tobias; Bramlage, Peter; Volk, Michael; Kaltheuner, Matthias; Danne, Thomas; Rathmann, Wolfgang; Beyersmann, Jan

    2017-02-01

    Complex longitudinal sampling and the observational structure of patient registers in health services research are associated with methodological challenges regarding data management and statistical evaluation. We exemplify common pitfalls and want to stimulate discussions on the design, development, and deployment of future longitudinal patient registers and register-based studies. For illustrative purposes, we use data from the prospective, observational, German DIabetes Versorgungs-Evaluation register. One aim was to explore predictors for the initiation of a basal insulin supported therapy in patients with type 2 diabetes initially prescribed to glucose-lowering drugs alone. Major challenges are missing mortality information, time-dependent outcomes, delayed study entries, different follow-up times, and competing events. We show that time-to-event methodology is a valuable tool for improved statistical evaluation of register data and should be preferred to simple case-control approaches. Patient registers provide rich data sources for health services research. Analyses are accompanied with the trade-off between data availability, clinical plausibility, and statistical feasibility. Cox' proportional hazards model allows for the evaluation of the outcome-specific hazards, but prediction of outcome probabilities is compromised by missing mortality information. Copyright © 2016 Elsevier Inc. All rights reserved.
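
    As a minimal illustration of the time-to-event approach advocated above (not the actual register analysis), the sketch below fits a Cox proportional hazards model with delayed entry using the lifelines library on invented data.

```python
# Minimal time-to-event sketch with the lifelines library on a synthetic
# register extract; column names, covariates and values are invented and only
# illustrate Cox regression with delayed study entry (left truncation).
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "entry_years":    [0.0, 0.5, 1.0, 0.0, 2.0, 0.3],   # delayed study entry
    "followup_years": [3.0, 2.5, 4.0, 1.2, 5.0, 2.8],
    "basal_insulin":  [1,   0,   1,   0,   0,   1],      # 1 = outcome observed
    "age":            [62,  55,  70,  48,  66,  59],
    "hba1c":          [8.1, 7.2, 9.0, 6.8, 7.5, 8.4],
})

cph = CoxPHFitter()
cph.fit(df,
        duration_col="followup_years",
        event_col="basal_insulin",
        entry_col="entry_years")      # accounts for left truncation
cph.print_summary()
```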

  5. Security in Service Level Agreements for Cloud Computing

    OpenAIRE

    Bernsmed, Karin; JAATUN, Martin Gilje; Undheim, Astrid

    2011-01-01

    The Cloud computing paradigm promises reliable services, accessible from anywhere in the world, in an on-demand manner. Insufficient security has been identified as a major obstacle to adopting Cloud services. To deal with the risks associated with outsourcing data and applications to the Cloud, new methods for security assurance are urgently needed. This paper presents a framework for security in Service Level Agreements for Cloud computing. The purpose is twofold; to help potential Cloud cu...

  6. Interoperable web applications for sharing data and products of the International DORIS Service

    Science.gov (United States)

    Soudarin, L.; Ferrage, P.

    2017-12-01

    The International DORIS Service (IDS) was created in 2003 under the umbrella of the International Association of Geodesy (IAG) to foster scientific research related to the French satellite tracking system DORIS and to deliver scientific products, mostly related to the International Earth rotation and Reference systems Service (IERS). Since its start, the organization has continuously evolved, leading to additional and improved operational products from an expanded set of DORIS Analysis Centers. In addition, IDS has developed services for sharing data and products with the users. Metadata and interoperable web applications are proposed to explore, visualize and download the key products such as the position time series of the geodetic points materialized at the ground tracking stations. The Global Geodetic Observing System (GGOS) encourages the IAG Services to develop such interoperable facilities on their website. The objective for GGOS is to set up an interoperable portal through which the data and products produced by the IAG Services can be served to the user community. We present the web applications proposed by IDS to visualize time series of geodetic observables or to get information about the tracking ground stations and the tracked satellites. We discuss the future plans for IDS to meet the recommendations of GGOS. The presentation also addresses the needs for the IAG Services to adopt common metadata thesaurus to describe data and products, and interoperability standards to share them.

  7. A Unified Peer-to-Peer Database Framework for XQueries over Dynamic Distributed Content and its Application for Scalable Service Discovery

    CERN Document Server

    Hoschek, Wolfgang

    In a large distributed system spanning administrative domains such as a Grid, it is desirable to maintain and query dynamic and timely information about active participants such as services, resources and user communities. The web services vision promises that programs are made more flexible and powerful by querying Internet databases (registries) at runtime in order to discover information and network attached third-party building blocks. Services can advertise themselves and related metadata via such databases, enabling the assembly of distributed higher-level components. In support of this vision, this thesis shows how to support expressive general-purpose queries over a view that integrates autonomous dynamic database nodes from a wide range of distributed system topologies. We motivate and justify the assertion that realistic ubiquitous service and resource discovery requires a rich general-purpose query language such as XQuery or SQL. Next, we introduce the Web Service Discovery Architecture (WSDA), wh...

  8. Web Monitoring of EOS Front-End Ground Operations, Science Downlinks and Level 0 Processing

    Science.gov (United States)

    Cordier, Guy R.; Wilkinson, Chris; McLemore, Bruce

    2008-01-01

    This paper addresses the efforts undertaken and the technology deployed to aggregate and distribute the metadata characterizing the real-time operations associated with NASA Earth Observing Systems (EOS) high-rate front-end systems and the science data collected at multiple ground stations and forwarded to the Goddard Space Flight Center for level 0 processing. Station operators, mission project management personnel, spacecraft flight operations personnel and data end-users for various EOS missions can retrieve the information at any time from any location having access to the internet. The users are distributed and the EOS systems are distributed but the centralized metadata accessed via an external web server provide an effective global and detailed view of the enterprise-wide events as they are happening. The data-driven architecture and the implementation of applied middleware technology, open source database, open source monitoring tools, and external web server converge nicely to fulfill the various needs of the enterprise. The timeliness and content of the information provided are key to making timely and correct decisions which reduce project risk and enhance overall customer satisfaction. The authors discuss security measures employed to limit access of data to authorized users only.

  9. A Geospatial Semantic Enrichment and Query Service for Geotagged Photographs

    Science.gov (United States)

    Ennis, Andrew; Nugent, Chris; Morrow, Philip; Chen, Liming; Ioannidis, George; Stan, Alexandru; Rachev, Preslav

    2015-01-01

    With the increasing abundance of technologies and smart devices, equipped with a multitude of sensors for sensing the environment around them, information creation and consumption has now become effortless. This, in particular, is the case for photographs with vast amounts being created and shared every day. For example, at the time of this writing, Instagram users upload 70 million photographs a day. Nevertheless, it still remains a challenge to discover the “right” information for the appropriate purpose. This paper describes an approach to create semantic geospatial metadata for photographs, which can facilitate photograph search and discovery. To achieve this we have developed and implemented a semantic geospatial data model by which a photograph can be enriched with geospatial metadata extracted from several geospatial data sources based on the raw low-level geo-metadata from a smartphone photograph. We present the details of our method and implementation for searching and querying the semantic geospatial metadata repository to enable a user or third party system to find the information they are looking for. PMID:26205265
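
    As an illustration of the raw low-level geo-metadata such an approach starts from, the sketch below reads EXIF GPS tags from a JPEG with Pillow and converts them to decimal degrees; the file name is a placeholder and the snippet is not part of the described system.

```python
# Sketch of extracting raw GPS coordinates from a smartphone JPEG with Pillow,
# the usual starting point before enriching a photograph with higher-level
# geospatial metadata; the file name is a placeholder.
from PIL import Image
from PIL.ExifTags import GPSTAGS

def to_decimal(dms, ref):
    """Convert EXIF (degrees, minutes, seconds) rationals to decimal degrees."""
    degrees, minutes, seconds = (float(v) for v in dms)
    value = degrees + minutes / 60.0 + seconds / 3600.0
    return -value if ref in ("S", "W") else value

def photo_coordinates(path):
    exif = Image.open(path).getexif()
    gps_ifd = exif.get_ifd(0x8825)                    # GPS IFD tag
    gps = {GPSTAGS.get(tag, tag): value for tag, value in gps_ifd.items()}
    lat = to_decimal(gps["GPSLatitude"], gps["GPSLatitudeRef"])
    lon = to_decimal(gps["GPSLongitude"], gps["GPSLongitudeRef"])
    return lat, lon

if __name__ == "__main__":
    print(photo_coordinates("holiday_photo.jpg"))
```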

  10. Summary Record of the First Meeting of the Radioactive Waste Repository Metadata Management (RepMet) Initiative

    International Nuclear Information System (INIS)

    2014-01-01

    National radioactive waste repository programmes are collecting large amounts of data to support the long-term management of their nations' radioactive wastes. The data and related records increase in number, type and quality as programmes proceed through the successive stages of repository development: pre-siting, siting, characterisation, construction, operation and finally closure. Regulatory and societal approvals are included in this sequence. Some programmes are also documenting past repository projects and facing a challenge in allowing both current and future generations to understand actions carried out in the past. Metadata allows context to be stored with data and information so that it can be located, used, updated and maintained. Metadata helps waste management organisations better utilise their data in carrying out their statutory tasks and can also help verify and demonstrate that their programmes are appropriately driven. The NEA Radioactive Waste Repository Metadata Management (RepMet) initiative aims to bring about a better understanding of the identification and administration of metadata - a key aspect of data management - to support national programmes in managing their radioactive waste repository data, information and records in a way that is both harmonised internationally and suitable for long-term management and use. This is a summary record of the first meeting of the RepMet initiative. The actions and decisions from this meeting were sent separately to the group after the meeting, but are also included in this document (Annex A). The list of participants is attached as well (Annex B).

  11. Metadata and Tools for Integration and Preservation of Cultural Heritage 3D Information

    Directory of Open Access Journals (Sweden)

    Achille Felicetti

    2011-12-01

    In this paper we investigate many of the storage, portability and interoperability issues that arise among archaeologists and cultural heritage practitioners when dealing with 3D technologies. On the one hand, the available digital repositories often appear unable to guarantee adequate features for the management of 3D models and their metadata; on the other hand, most of the available data formats for 3D encoding seem unsatisfactory for the portability that 3D information now requires across different systems. We propose a set of possible solutions to show how integration can be achieved through the use of well-known and widely accepted standards for data encoding and data storage. Using a set of 3D models acquired during various archaeological campaigns and a number of open source tools, we have implemented a straightforward encoding process to generate meaningful semantic data and metadata. We also present the interoperability process carried out to integrate the encoded 3D models and the geographic features produced by the archaeologists. Finally, we report the preliminary (rather encouraging) development of a semantics-enabled and persistent digital repository, where 3D models (but also any kind of digital data and metadata) can easily be stored, retrieved and shared with the content of other digital archives.

  12. Measuring the service level in the 2.4 GHz ISM band

    NARCIS (Netherlands)

    van Bloem, J.W.H.; Schiphorst, Roelof

    2011-01-01

    In this report we provide the findings of the 2.4 GHz service level research. Here, service level means the following: can all devices in the 2.4 GHz band fulfill their communication needs? In other words, it corresponds to the overall Quality of Service (QoS). The project is a short research

  13. A Spatio-Temporal Enhanced Metadata Model for Interdisciplinary Instant Point Observations in Smart Cities

    Directory of Open Access Journals (Sweden)

    Nengcheng Chen

    2017-02-01

    Due to the incomplete and inconsistent description of spatial and temporal information for city data observed by sensors in various fields, it is a great challenge to share the massive, multi-source and heterogeneous interdisciplinary instant point observation data resources. In this paper, a spatio-temporal enhanced metadata model for point observation data sharing is proposed. The proposed Data Meta-Model (DMM) focuses on the spatio-temporal characteristics and formulates a ten-tuple information description structure to provide a unified, spatio-temporally enhanced description of point observation data. To verify the feasibility of point observation data sharing based on DMM, a prototype system was established, and the performance of the Sensor Observation Service (SOS) for instant access and insertion of point observation data was improved through the proposed MongoSOS, a Not Only SQL (NoSQL) SOS based on the MongoDB database with distributed storage capability. For example, access and insertion response times for navigation and positioning data reach the millisecond level. Case studies were conducted, including gas concentration monitoring for gas leak emergency response and smart city public vehicle monitoring based on the BeiDou Navigation Satellite System (BDS) used for recording dynamic observation information. The results demonstrate the versatility and extensibility of the DMM, and the spatio-temporally enhanced sharing of interdisciplinary instant point observations in smart cities.
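
    The sketch below illustrates how a spatio-temporally indexed point observation could be stored and queried with pymongo; the document fields are a simplified stand-in for the ten-tuple DMM structure, and the connection string is a placeholder rather than the actual MongoSOS implementation.

```python
# Sketch of storing and querying an instant point observation with pymongo;
# the document fields are a simplified stand-in for the ten-tuple structure of
# the proposed DMM, and the connection string is a placeholder.
import datetime
import pymongo

client = pymongo.MongoClient("mongodb://localhost:27017")
observations = client["smart_city"]["point_observations"]

# geospatial and temporal indexes for instant access by location and time
observations.create_index([("location", pymongo.GEOSPHERE)])
observations.create_index([("phenomenon_time", pymongo.ASCENDING)])

observations.insert_one({
    "procedure": "gas-sensor-042",
    "observed_property": "CH4 concentration",
    "unit": "ppm",
    "value": 3.7,
    "phenomenon_time": datetime.datetime(2017, 2, 10, 8, 30),
    "location": {"type": "Point", "coordinates": [114.36, 30.54]},  # lon, lat
})

# observations within 500 m of a query point
nearby = observations.find({
    "location": {
        "$near": {
            "$geometry": {"type": "Point", "coordinates": [114.36, 30.54]},
            "$maxDistance": 500,
        }
    }
})
for doc in nearby:
    print(doc["procedure"], doc["value"], doc["unit"])
```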

  14. The NCAR Digital Asset Services Hub (DASH): Implementing Unified Data Discovery and Access

    Science.gov (United States)

    Stott, D.; Worley, S. J.; Hou, C. Y.; Nienhouse, E.

    2017-12-01

    The National Center for Atmospheric Research (NCAR) Directorate created the Data Stewardship Engineering Team (DSET) to plan and implement an integrated single entry point for uniform digital asset discovery and access across the organization in order to improve the efficiency of access, reduce the costs, and establish the foundation for interoperability with other federated systems. This effort supports new policies included in federal funding mandates, NSF data management requirements, and journal citation recommendations. An inventory during the early planning stage identified diverse asset types across the organization that included publications, datasets, metadata, models, images, and software tools and code. The NCAR Digital Asset Services Hub (DASH) is being developed and phased in this year to improve the quality of users' experiences in finding and using these assets. DASH serves to provide engagement, training, search, and support through the following four nodes. DASH Metadata: DASH provides resources for creating and cataloging metadata to the NCAR Dialect, a subset of ISO 19115. NMDEdit, an editor based on a European open source application, has been configured for manual entry of NCAR metadata. CKAN, an open source data portal platform, harvests these XML records (along with records output directly from databases) from a Web Accessible Folder (WAF) on GitHub for validation. DASH Search: The NCAR Dialect metadata drives cross-organization search and discovery through CKAN, which provides the display interface of search results. DASH search will establish interoperability by facilitating metadata sharing with other federated systems. DASH Consulting: The DASH Data Curation & Stewardship Coordinator assists with Data Management (DM) Plan preparation and advises on Digital Object Identifiers. The coordinator arranges training sessions on the DASH metadata tools and DM planning, and provides one-on-one assistance as requested. DASH Repository
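
    For illustration, the sketch below queries a CKAN-backed catalog through CKAN's standard package_search action; the portal URL is a placeholder and not the actual DASH endpoint.

```python
# Sketch of querying a CKAN-backed catalog with the standard package_search
# action; the portal URL is a placeholder, not the actual DASH endpoint.
import requests

CKAN_URL = "https://data.example.org"   # placeholder portal

def search_datasets(query, rows=5):
    resp = requests.get(
        f"{CKAN_URL}/api/3/action/package_search",
        params={"q": query, "rows": rows},
        timeout=30,
    )
    resp.raise_for_status()
    result = resp.json()["result"]
    return [(pkg["name"], pkg.get("title", "")) for pkg in result["results"]]

if __name__ == "__main__":
    for name, title in search_datasets("sea surface temperature"):
        print(name, "-", title)
```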

  15. Pembuatan Service Level Requirement, Service Level Agreement dan Operational Level Agreement pada layanan help desk SAP berdasarkan kerangka kerja ITIL versi 2011 (Studi Kasus : Pupuk Indonesia Holding Company

    Directory of Open Access Journals (Sweden)

    Nur Shabrina Prameswari

    2017-01-01

    PT. Pupuk Indonesia Holding Company implemented SAP in 2014. As part of this implementation, the company saw the need to establish an SAP help desk as the central point for handling problems for the company and its 7 subsidiaries, which can then also serve as a knowledge base that is useful when problems recur later. To design a good help desk service, service targets need to be defined in a contractual agreement between the service users and the service provider. In addition, such a service agreement is needed as a guarantee of help desk quality that can be agreed upon by the service provider and the service users, namely the SAP users at PT. Pupuk Indonesia and its subsidiaries. The aim is to align the business with the quality of the service and to capture customer needs and expectations in an agreement between the service provider and the service users. To address these issues, Service Level Requirement, Service Level Agreement and Operational Level Agreement documents were produced for the SAP help desk; after document review and interviews with the service users and the service provider, these Service Level Management documents were drawn up on the basis of ITIL Version 2011.

  16. Witnesses' memory for events and faces under elevated levels of intoxication.

    Science.gov (United States)

    Altman, Christopher M; Schreiber Compo, Nadja; McQuiston, Dawn; Hagsand, Angelica V; Cervera, Jiselle

    2018-08-01

    Research on alcohol and witness memory has burgeoned over the last decade. However, most studies have tested participants at relatively low breath alcohol concentration (BAC) levels, unrepresentative of those encountered by officers in the field. To examine how higher intoxication levels might impair witness memory for events and faces, the current research tested participants' ability to recall a mock crime at elevated BAC levels (>.08%). The BAC levels of bar patrons (N = 138) were recorded before witnessing a video-taped mock crime. Participants were then interviewed using free recall and cued questions and shown a six-person target-present or target-absent lineup. Results show that alcohol negatively affected both the quantity and quality of recall. Regardless of question format, alcohol also reduced the percentage of accurate information elicited from witnesses; however, only cued questions increased the percentage of inaccurate information reported. Intoxication had no effect on identification accuracy. These findings suggest that the encoding and storage systems for faces and events may be impacted differently by alcohol. Our results also highlight the importance of including higher BAC levels when examining the effects of alcohol on witness memory.

  17. Open Access Metadata, Catalogers, and Vendors: The Future of Cataloging Records

    Science.gov (United States)

    Flynn, Emily Alinder

    2013-01-01

    The open access (OA) movement is working to transform scholarly communication around the world, but this philosophy can also apply to metadata and cataloging records. While some notable, large academic libraries, such as Harvard University, the University of Michigan, and the University of Cambridge, released their cataloging records under OA…

  18. Multi-level Reconfigurable Self-organization in Overlay Services

    NARCIS (Netherlands)

    Pournaras, E.

    2013-01-01

    Large-scale decentralized systems organized in overlay networks are complex to manage. Such systems embed organizational complexity at the application level, resulting in low abstraction and modularity in their services. This thesis introduces a multi-level conceptual architecture for overlay

  19. Procedure for conducting probabilistic safety assessment: level 1 full power internal event analysis

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Won Dae; Lee, Y. H.; Hwang, M. J. [and others]

    2003-07-01

    This report provides guidance on conducting a Level I PSA for internal events in NPPs, based on the method and procedure used in the PSA for the design of the Korea Standard Nuclear Plants (KSNPs). The purpose of a Level I PSA is to delineate the accident sequences leading to core damage and to estimate their frequencies. It has been used directly for assessing and modifying system safety and reliability, and forms a key, foundational part of PSA. A Level I PSA also provides insight into design weaknesses and into ways of preventing core damage, which in most cases is the precursor to major accidents; it has therefore served as the essential technical basis for risk-informed applications in NPPs. The report covers six major procedural steps of a Level I PSA: plant familiarization, initiating event analysis, event tree analysis, system fault tree analysis, reliability data analysis, and accident sequence quantification. It is intended to assist technical staff performing Level I PSAs for NPPs; a particular aim is to promote a standardized framework, terminology and form of documentation for PSAs. The report should also be useful to managers and regulators involved in risk-informed regulation, and for conducting PSAs in other industries.
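
    As a minimal, purely illustrative sketch of the final step listed above (accident sequence quantification), the fragment below approximates the frequency of a single core-damage sequence as the initiating-event frequency multiplied by the failure probabilities of the mitigating systems that fail along that event-tree branch; all numerical values and system names are assumptions, not taken from the report.

    # Hypothetical sketch of accident sequence quantification: one event-tree
    # branch in which the initiating event occurs and both mitigating systems
    # fail. All frequencies and probabilities are illustrative assumptions.
    from math import prod

    # Illustrative initiating-event frequency (events per reactor-year).
    initiating_event_frequency = 2.0e-2   # e.g. loss of offsite power

    # Illustrative system failure probabilities (would come from fault trees).
    system_failure_probability = {
        "emergency_power": 1.0e-3,
        "auxiliary_feedwater": 5.0e-4,
    }

    # Frequency of this core-damage sequence: initiating event occurs AND all
    # mitigating systems on the branch fail (assuming independence).
    sequence_frequency = initiating_event_frequency * prod(
        system_failure_probability.values()
    )
    print("sequence frequency: %.2e per reactor-year" % sequence_frequency)

    # The core damage frequency is the sum over all core-damage sequences;
    # with only one sequence modelled here it equals the value above.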

  20. [Perceptions of nursing service managers in the South African Military Health Service on their level of motivation].

    Science.gov (United States)

    Fischer, A; Muller, M

    2000-12-01

    The process of transformation in the South African Military Health Services has influenced the nursing service managers' level of motivation, and the following research question applies: what are the perceptions of the nursing service managers within the South African Military Health Services of their level of motivation? The purpose of this study was to explore and describe the perceptions of nursing service managers of their level of motivation within these health services. A qualitative research design was used, and four focus group interviews were conducted with 33 nursing service managers countrywide. The transcribed interviews were subjected to content analysis. The results confirm that the level of motivation among these nursing service managers is low. The demotivators relate mainly to the following: inadequate acknowledgement, job insecurity regarding the future, problems with the process of integration, transformation and rationalization, problems with management, many labour-related issues, poor or inadequate communication, inadequate support, increased workload, a poor physical environment, negative publicity and poor self-motivation. Although a few motivators were identified, they were of lesser importance. These results were interpreted within Herzberg's motivation theory to identify the hygiene/maintenance factors and to assess whether the important motivators were in place. During any process of change, and/or when the level of motivation among employees is low, it is important to manage the environment adequately (the hygiene/maintenance factors in Herzberg's theory), but it is even more important to ensure that the motivators are in place, or to intensify them. It is therefore recommended that a motivation strategy, based on Herzberg's theory as well as the Hackman-Oldham job enrichment model, be developed, implemented and evaluated.