WorldWideScience

Sample records for adaptive metadata generation

  1. Metadata based mediator generation

    Energy Technology Data Exchange (ETDEWEB)

    Critchlow, T

    1998-03-01

    Mediators are a critical component of any data warehouse, particularly one utilizing partially materialized views; they transform data from its source format to the warehouse representation while resolving semantic and syntactic conflicts. The close relationship between mediators and databases requires a mediator to be updated whenever an associated schema is modified. This maintenance may be a significant undertaking if a warehouse integrates several dynamic data sources. However, failure to quickly perform these updates significantly reduces the reliability of the warehouse because queries do not have access to the most current data. This may result in incorrect or misleading responses, and reduce user confidence in the warehouse. This paper describes a metadata framework, and associated software, designed to automate a significant portion of the mediator generation task and thereby reduce the effort involved in adapting to schema changes. By allowing the DBA to concentrate on identifying the modifications at a high level, instead of reprogramming the mediator, turnaround time is reduced and warehouse reliability is improved.

  2. Automatic Metadata Generation using Associative Networks

    CERN Document Server

    Rodriguez, Marko A; Van de Sompel, Herbert

    2008-01-01

    In spite of its tremendous value, metadata is generally sparse and incomplete, thereby hampering the effectiveness of digital information services. Many of the existing mechanisms for the automated creation of metadata rely primarily on content analysis which can be costly and inefficient. The automatic metadata generation system proposed in this article leverages resource relationships generated from existing metadata as a medium for propagation from metadata-rich to metadata-poor resources. Because of its independence from content analysis, it can be applied to a wide variety of resource media types and is shown to be computationally inexpensive. The proposed method operates through two distinct phases. Occurrence and co-occurrence algorithms first generate an associative network of repository resources leveraging existing repository metadata. Second, using the associative network as a substrate, metadata associated with metadata-rich resources is propagated to metadata-poor resources by means of a discrete...
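
    A minimal sketch of the two-phase idea described above, assuming a toy repository with made-up resource identifiers and terms: first an associative network of resources is built from metadata co-occurrence, then terms are pushed from metadata-rich to metadata-poor resources. The scoring scheme here is a simplification for illustration, not the authors' algorithm.

        from collections import defaultdict
        from itertools import combinations

        # Toy repository: resource id -> set of metadata terms (r3 is metadata-poor).
        resources = {
            "r1": {"oceanography", "salinity", "argo"},
            "r2": {"oceanography", "temperature"},
            "r3": {"argo"},
        }

        # Phase 1: associative network of resources, weighted by metadata co-occurrence.
        weight = defaultdict(float)
        for a, b in combinations(resources, 2):
            shared = resources[a] & resources[b]
            if shared:
                weight[(a, b)] = weight[(b, a)] = float(len(shared))

        # Phase 2: propagate terms from neighbours to a metadata-poor resource,
        # scoring each candidate term by the strength of the connecting edge.
        def propagate(poor_id, top_k=3):
            scores = defaultdict(float)
            for other, terms in resources.items():
                w = weight.get((poor_id, other), 0.0)
                for term in terms - resources[poor_id]:
                    scores[term] += w
            return sorted(scores, key=scores.get, reverse=True)[:top_k]

        print(propagate("r3"))  # suggested metadata terms for the metadata-poor resource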

  3. THE NEW ONLINE METADATA EDITOR FOR GENERATING STRUCTURED METADATA

    Energy Technology Data Exchange (ETDEWEB)

    Devarakonda, Ranjeet [ORNL]; Shrestha, Biva [ORNL]; Palanisamy, Giri [ORNL]; Hook, Leslie A [ORNL]; Killeffer, Terri S [ORNL]; Boden, Thomas A [ORNL]; Cook, Robert B [ORNL]; Zolly, Lisa [United States Geological Survey (USGS)]; Hutchison, Viv [United States Geological Survey (USGS)]; Frame, Mike [United States Geological Survey (USGS)]; Cialella, Alice [Brookhaven National Laboratory (BNL)]; Lazer, Kathy [Brookhaven National Laboratory (BNL)]

    2014-01-01

    Nobody is better suited to describe data than the scientist who created it. This description of the data is called metadata. In general terms, metadata represents the who, what, when, where, why and how of the dataset [1]. eXtensible Markup Language (XML) is the preferred output format for metadata, as it makes the metadata portable and, more importantly, suitable for system discoverability. The newly developed ORNL Metadata Editor (OME) is a Web-based tool that allows users to create and maintain XML files containing key information, or metadata, about the research. Metadata include information about the specific projects, parameters, time periods, and locations associated with the data. Such information helps put the research findings in context. In addition, the metadata produced using OME will allow other researchers to find these data via metadata clearinghouses like Mercury [2][4]. OME is part of ORNL's Mercury software fleet [2][3]. It was jointly developed to support projects funded by the United States Geological Survey (USGS), U.S. Department of Energy (DOE), National Aeronautics and Space Administration (NASA) and National Oceanic and Atmospheric Administration (NOAA). OME's architecture provides a customizable interface to support project-specific requirements. Using this new architecture, the ORNL team developed OME instances for USGS's Core Science Analytics, Synthesis, and Libraries (CSAS&L), DOE's Next Generation Ecosystem Experiments (NGEE) and Atmospheric Radiation Measurement (ARM) Program, and the international Surface Ocean Carbon Dioxide ATlas (SOCAT). Researchers simply use the ORNL Metadata Editor to enter relevant metadata into a Web-based form. From the information on the form, the Metadata Editor can create an XML file on the server where the editor is installed or on the user's personal computer. Researchers can also use the ORNL Metadata Editor to modify existing XML metadata files. As an example, an NGEE Arctic scientist uses OME to register

  4. Meta-data based mediator generation

    Energy Technology Data Exchange (ETDEWEB)

    Critchlaw, T

    1998-06-28

    Mediators are a critical component of any data warehouse; they transform data from source formats to the warehouse representation while resolving semantic and syntactic conflicts. The close relationship between mediators and databases requires a mediator to be updated whenever an associated schema is modified. Failure to quickly perform these updates significantly reduces the reliability of the warehouse because queries do not have access to the most current data. This may result in incorrect or misleading responses, and reduce user confidence in the warehouse. Unfortunately, this maintenance may be a significant undertaking if a warehouse integrates several dynamic data sources. This paper describes a meta-data framework, and associated software, designed to automate a significant portion of the mediator generation task and thereby reduce the effort involved in adapting to schema changes. By allowing the DBA to concentrate on identifying the modifications at a high level, instead of reprogramming the mediator, turnaround time is reduced and warehouse reliability is improved.

  5. The Common Metadata Repository: A High Performance, High Quality Metadata Engine for Next Generation EOSDIS Applications

    Science.gov (United States)

    Pilone, D.; Baynes, K.; Farley, J. D.; Murphy, K. J.; Ritz, S.; Northcutt, R.; Cherry, T. A.; Gokey, C.; Wanchoo, L.

    2013-12-01

    As data archives grow and more data becomes accessible online, cataloging, searching, and extracting relevant data from these archives becomes a critical part of Earth Science research. Current metadata systems such as ECHO, EMS, and GCMD require metadata providers to maintain multiple, disparate systems utilizing different formats and different mechanisms for submitting and updating their entries. For end users and application developers, this inconsistency reduces the value of the metadata and complicates finding and using Earth science data. Building on the results of the ESDIS Metadata Harmony Study of 2012, we completed a Metadata Harmony Study 2 in 2013 to identify specific areas where metadata quality, consistency, and availability could be improved while reducing the burden on metadata providers. In this talk we discuss the results of the Metadata Harmony 2 study and the impacts on the EOSDIS community. Specifically, we'll discuss: - The Unified Metadata Model (UMM) that unifies the ECHO, GCMD, and EMS metadata models - The Common Metadata Repository (CMR) which will provide a high performance common repository for both EOSDIS and non-EOSDIS metadata unifying the ECHO, GCMD, and EMS metadata stores - The CMR's approach to automated metadata assessment and review combined with a dedicated science support team to significantly improve quality and consistency across Earth Science metadata - Future expandability of the CMR beyond basic science metadata to incorporate multiple metadata concepts including visualization, data attributes, services, documentation, and tool metadata - The CMR's relationship with evolving metadata standards such as work from the MENDS group and ISO19115 NASA Best Practices. This talk is targeted at metadata providers, consumers, and Earth Science Data end users to introduce components that will support next generation EOSDIS applications.

  6. Metadata

    CERN Document Server

    Pomerantz, Jeffrey

    2015-01-01

    When "metadata" became breaking news, appearing in stories about surveillance by the National Security Agency, many members of the public encountered this once-obscure term from information science for the first time. Should people be reassured that the NSA was "only" collecting metadata about phone calls -- information about the caller, the recipient, the time, the duration, the location -- and not recordings of the conversations themselves? Or does phone call metadata reveal more than it seems? In this book, Jeffrey Pomerantz offers an accessible and concise introduction to metadata. In the era of ubiquitous computing, metadata has become infrastructural, like the electrical grid or the highway system. We interact with it or generate it every day. It is not, Pomerantz tells us, just "data about data." It is a means by which the complexity of an object is represented in a simpler form. For example, the title, the author, and the cover art are metadata about a book. When metadata does its job well, it fades i...

  7. A quick scan on possibilities for automatic metadata generation

    NARCIS (Netherlands)

    Benneker, Frank

    2006-01-01

    The Quick Scan is a report on research into useable solutions for automatic generation of metadata or parts of metadata. The aim of this study is to explore possibilities for facilitating the process of attaching metadata to learning objects. This document is aimed at developers of digital learning

  8. A quick scan on possibilities for automatic metadata generation

    NARCIS (Netherlands)

    Benneker, Frank

    2006-01-01

    The Quick Scan is a report on research into useable solutions for automatic generation of metadata or parts of metadata. The aim of this study is to explore possibilities for facilitating the process of attaching metadata to learning objects. This document is aimed at developers of digital learning

  9. Metadata

    CERN Document Server

    Zeng, Marcia Lei

    2016-01-01

    Metadata remains the solution for describing the explosively growing, complex world of digital information, and continues to be of paramount importance for information professionals. Providing a solid grounding in the variety and interrelationships among different metadata types, Zeng and Qin's thorough revision of their benchmark text offers a comprehensive look at the metadata schemas that exist in the world of library and information science and beyond, as well as the contexts in which they operate. Cementing its value as both an LIS text and a handy reference for professionals already in the field, this book: * Lays out the fundamentals of metadata, including principles of metadata, structures of metadata vocabularies, and metadata descriptions * Surveys metadata standards and their applications in distinct domains and for various communities of metadata practice * Examines metadata building blocks, from modelling to defining properties, and from designing application profiles to implementing value vocabu...

  10. METADATA DRIVEN EFFICIENT KEY GENERATION AND DISTRIBUTION IN CLOUD SECURITY

    Directory of Open Access Journals (Sweden)

    R. Anitha

    2014-01-01

    Full Text Available With the rapid development of cloud computing, IT industries increasingly outsource their sensitive data to cloud data storage locations. To keep the stored data confidential against untrusted cloud service providers, a natural way is to store only encrypted data on the cloud servers and to provide an efficient access control mechanism using a competent cipher key Cmxn, which is becoming a promising cryptographic solution. In this proposed model the cipher key is generated based on attributes of metadata. The key problems of this approach include the generation of the cipher key Cmxn and the establishment of an access control mechanism for the encrypted data using the cipher key, where keys cannot be revoked without the involvement of the data owner and the Metadata Data Server (MDS), which makes the data owner feel comfortable about the data stored. From this study, we propose novel metadata-driven efficient key generation and distribution policies for a cloud data security system by exploiting the characteristics of the stored metadata. Our design enforces security by providing two novel features. 1. Generation of the cipher key Cmxn using a modified Feistel network, which exhibits a good avalanche effect because each round of the Feistel function depends on the previous round. 2. A novel key distribution policy in which the encryption and decryption keys cannot be compromised without the involvement of the data owner and the Metadata Data Server (MDS), which makes the data owner comfortable about the data stored. We have implemented a security model that incorporates our ideas and evaluated the performance and scalability of the secured model.
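
    A minimal sketch of deriving a key from metadata attributes through a plain textbook Feistel construction. This is illustration only: the round function, attribute encoding and key size are assumptions, not the article's modified Feistel network or its Cmxn key structure.

        import hashlib

        def round_function(half: bytes, round_key: bytes) -> bytes:
            # Textbook choice: hash the half-block together with the round key.
            return hashlib.sha256(half + round_key).digest()[:len(half)]

        def xor(a: bytes, b: bytes) -> bytes:
            return bytes(x ^ y for x, y in zip(a, b))

        def feistel_key(metadata_attrs: dict, rounds: int = 4) -> bytes:
            """Derive a 256-bit key by running metadata-derived material through a Feistel network."""
            material = hashlib.sha256(
                "|".join(f"{k}={v}" for k, v in sorted(metadata_attrs.items())).encode()
            ).digest()
            left, right = material[:16], material[16:]
            for i in range(rounds):
                round_key = hashlib.sha256(bytes([i]) + material).digest()
                # The output of each round feeds the next, which is what produces the avalanche effect.
                left, right = right, xor(left, round_function(right, round_key))
            return left + right

        key = feistel_key({"owner": "alice", "file_id": "42", "created": "2014-01-01"})
        print(key.hex())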

  11. A case for user-generated sensor metadata

    Science.gov (United States)

    Nüst, Daniel

    2015-04-01

    Cheap and easy-to-use sensing technology and new developments in ICT towards a global network of sensors and actuators promise previously unthought-of changes for our understanding of the environment. Large professional as well as amateur sensor networks exist, and they are used for specific yet diverse applications across domains such as hydrology, meteorology or early warning systems. However, the impact this "abundance of sensors" has had so far is somewhat disappointing. There is a gap between (community-driven) sensor networks that could provide very useful data and the users of the data. In our presentation, we argue this is due to a lack of metadata which would allow determining the fitness for use of a dataset. Syntactic and semantic interoperability for sensor webs have made great progress and continue to be an active field of research, yet the resulting approaches are often quite complex, which is of course due to the complexity of the problem at hand. Still, we see the most generic information for determining fitness for use to be a dataset's provenance, because it allows users to make up their own minds independently from existing classification schemes for data quality. In this work we make the case that curated, user-contributed metadata has the potential to improve this situation. This especially applies to scenarios in which an observed property is applicable in different domains, and to set-ups where the understanding of metadata concepts and (meta-)data quality differs between data provider and user. On the one hand, a citizen does not understand the ISO provenance metadata. On the other hand, a researcher might find issues in publicly accessible time series published by citizens, which the latter might not be aware of or care about. Because users will have to determine fitness for use for each application on their own anyway, we suggest an online collaboration platform for user-generated metadata based on an extremely simplified data model. In the most basic fashion

  12. Generation of Multiple Metadata Formats from a Geospatial Data Repository

    Science.gov (United States)

    Hudspeth, W. B.; Benedict, K. K.; Scott, S.

    2012-12-01

    The Earth Data Analysis Center (EDAC) at the University of New Mexico is partnering with the CYBERShARE and Environmental Health Group from the Center for Environmental Resource Management (CERM), located at the University of Texas, El Paso (UTEP), the Biodiversity Institute at the University of Kansas (KU), and the New Mexico Geo-Epidemiology Research Network (GERN) to provide a technical infrastructure that enables investigation of a variety of climate-driven human/environmental systems. Two significant goals of this NASA-funded project are: a) to increase the use of NASA Earth observational data at EDAC by various modeling communities through enabling better discovery, access, and use of relevant information, and b) to expose these communities to the benefits of provenance for improving understanding and usability of heterogeneous data sources and derived model products. To realize these goals, EDAC has leveraged the core capabilities of its Geographic Storage, Transformation, and Retrieval Engine (Gstore) platform, developed with support of the NSF EPSCoR Program. The Gstore geospatial services platform provides general purpose web services based upon the REST service model, and is capable of data discovery, access, and publication functions, metadata delivery functions, data transformation, and auto-generated OGC services for those data products that can support them. Central to the NASA ACCESS project is the delivery of geospatial metadata in a variety of formats, including ISO 19115-2/19139, FGDC CSDGM, and the Proof Markup Language (PML). This presentation details the extraction and persistence of relevant metadata in the Gstore data store, and their transformation into multiple metadata formats that are increasingly utilized by the geospatial community to document not only core library catalog elements (e.g. title, abstract, publication data, geographic extent, projection information, and database elements), but also the processing steps used to

  13. An Approach to Metadata Generation for Learning Objects

    Science.gov (United States)

    Menendez D., Victor; Zapata G., Alfredo; Vidal C., Christian; Segura N., Alejandra; Prieto M., Manuel

    Metadata describe instructional resources and define their nature and use. Metadata are required to guarantee the reusability and interchange of instructional resources in e-Learning systems. However, filling in the large set of metadata attributes is a hard and complex task for almost all LO developers, and as a consequence many mistakes are made. This can impoverish data quality in the indexing, searching and retrieval processes. We propose a methodology to build Learning Objects from digital resources. The first phase includes automatic preprocessing of resources using techniques from information retrieval. The initial metadata obtained in this first phase are then used to search for similar LOs in order to propose missing metadata. The second phase considers assisted activities that merge computer advice with human decisions. Suggestions are based on the metadata of similar Learning Objects using fuzzy logic theory.

  14. Precision Pointing Reconstruction and Geometric Metadata Generation for Cassini Images

    Science.gov (United States)

    French, R. S.; Showalter, M. R.; Gordon, M. K.

    2017-06-01

    We are reconstructing accurate pointing for 400,000 images taken by Cassini at Saturn. The results will be provided to the public along with per-pixel metadata describing precise image contents such as geographical location and viewing geometry.

  15. Evaluation of Semi-Automatic Metadata Generation Tools: A Survey of the Current State of the Art

    Directory of Open Access Journals (Sweden)

    Jung-ran Park

    2015-09-01

    Full Text Available Assessment of the current landscape of semi-automatic metadata generation tools is particularly important considering the rapid development of digital repositories and the recent explosion of big data. Utilization of (semi)automatic metadata generation is critical in addressing these environmental changes and may be unavoidable in the future considering the costly and complex operation of manual metadata creation. To address such needs, this study examines the range of semi-automatic metadata generation tools (n=39) while providing an analysis of their techniques, features, and functions. The study focuses on open-source tools that can be readily utilized in libraries and other memory institutions. The challenges and current barriers to implementation of these tools were identified. The greatest area of difficulty lies in the fact that the piecemeal development of most semi-automatic generation tools only addresses part of the issue of semi-automatic metadata generation, providing solutions to one or a few metadata elements but not the full range of elements. This indicates that significant local efforts will be required to integrate the various tools into a coherent working whole. Suggestions toward such efforts are presented for future developments that may assist information professionals with the incorporation of semi-automatic tools within their daily workflows.

  16. Metadata Guidelines

    Data.gov (United States)

    US Fish and Wildlife Service, Department of the Interior — This document provides guidelines on metadata and metadata requirements for ServCat documents. Information on metadata is followed by an instructional flowchart and...

  17. Social Web Content Enhancement in a Distance Learning Environment: Intelligent Metadata Generation for Resources

    Science.gov (United States)

    García-Floriano, Andrés; Ferreira-Santiago, Angel; Yáñez-Márquez, Cornelio; Camacho-Nieto, Oscar; Aldape-Pérez, Mario; Villuendas-Rey, Yenny

    2017-01-01

    Social networking potentially offers improved distance learning environments by enabling the exchange of resources between learners. The existence of properly classified content results in an enhanced distance learning experience in which appropriate materials can be retrieved efficiently; however, for this to happen, metadata needs to be present.…

  18. Simplified Metadata Curation via the Metadata Management Tool

    Science.gov (United States)

    Shum, D.; Pilone, D.

    2015-12-01

    The Metadata Management Tool (MMT) is the newest capability developed as part of NASA Earth Observing System Data and Information System's (EOSDIS) efforts to simplify metadata creation and improve metadata quality. The MMT was developed via an agile methodology, taking into account inputs from GCMD's science coordinators and other end-users. In its initial release, the MMT uses the Unified Metadata Model for Collections (UMM-C) to allow metadata providers to easily create and update collection records in the ISO-19115 format. Through a simplified UI experience, metadata curators can create and edit collections without full knowledge of the NASA Best Practices implementation of ISO-19115 format, while still generating compliant metadata. More experienced users are also able to access raw metadata to build more complex records as needed. In future releases, the MMT will build upon recent work done in the community to assess metadata quality and compliance with a variety of standards through application of metadata rubrics. The tool will provide users with clear guidance as to how to easily change their metadata in order to improve their quality and compliance. Through these features, the MMT allows data providers to create and maintain compliant and high quality metadata in a short amount of time.

  19. Generation of a Solar Cycle of Sunspot Metadata Using the AIA Event Detection Framework - A Test of the System

    Science.gov (United States)

    Slater, G. L.; Zharkov, S.

    2008-12-01

    The soon-to-be-launched Solar Dynamics Observatory (SDO) will generate roughly 2 TB of image data per day, far more than previous solar missions. Because of the difficulty of widely distributing this enormous volume of data and in order to maximize discovery and scientific return, a sophisticated automated metadata extraction system is being developed at Stanford University and Lockheed Martin Solar and Astrophysics Laboratory in Palo Alto, CA. A key component in this system is the Event Detection System, which will supervise the execution of a set of feature and event extraction algorithms running in parallel, in real time, on all images recorded by the four telescopes of the key imaging instrument, the Atmospheric Imaging Assembly (AIA). The system will run on a Beowulf cluster of 160 processors. As a test of the new system, we will run feature extraction software developed under the European Grid of Solar Observatories (EGSO) program to extract sunspot metadata from the 12-year SOHO MDI mission archive of full-disk continuum and magnetogram images and also from the TRACE high-resolution image archive. Although the main goal will be to test the performance of the production line framework, the resulting database will have applications for both research and space weather prediction. We examine some of these applications and compare the databases generated with others currently available.

  20. Using Semantic Web technologies for the generation of domain-specific templates to support clinical study metadata standards.

    Science.gov (United States)

    Jiang, Guoqian; Evans, Julie; Endle, Cory M; Solbrig, Harold R; Chute, Christopher G

    2016-01-01

    The Biomedical Research Integrated Domain Group (BRIDG) model is a formal domain analysis model for protocol-driven biomedical research, and serves as a semantic foundation for application and message development in the standards developing organizations (SDOs). The increasing sophistication and complexity of the BRIDG model requires new approaches to the management and utilization of the underlying semantics to harmonize domain-specific standards. The objective of this study is to develop and evaluate a Semantic Web-based approach that integrates the BRIDG model with ISO 21090 data types to generate domain-specific templates to support clinical study metadata standards development. We developed a template generation and visualization system based on an open source Resource Description Framework (RDF) store backend, a SmartGWT-based web user interface, and a "mind map" based tool for the visualization of generated domain-specific templates. We also developed a RESTful Web Service informed by the Clinical Information Modeling Initiative (CIMI) reference model for access to the generated domain-specific templates. A preliminary usability study was performed, and all reviewers (n = 3) gave very positive responses to the evaluation questions in terms of usability and the capability of meeting the system requirements (with an average score of 4.6). Semantic Web technologies provide a scalable infrastructure and have great potential to enable computable semantic interoperability of models in the intersection of health care and clinical research.
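
    A small sketch of the kind of query a template generator might run against an RDF rendering of a domain model, using rdflib. The file name, namespace and property names are illustrative assumptions; the actual BRIDG/ISO 21090 RDF vocabulary and the SmartGWT and REST layers described above are not reproduced here.

        from rdflib import Graph

        g = Graph()
        g.parse("bridg_domain_model.ttl", format="turtle")   # assumed local RDF export of the model

        # Find the attributes (and their ISO 21090-style data types) attached to one class,
        # which is roughly the information a domain-specific template needs to enumerate.
        query = """
        PREFIX ex: <http://example.org/bridg#>
        SELECT ?attribute ?datatype WHERE {
            ex:StudySubject ex:hasAttribute ?attribute .
            ?attribute ex:hasDatatype ?datatype .
        }
        """
        for attribute, datatype in g.query(query):
            print(attribute, datatype)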

  1. Fast Adaptation in Generative Models with Generative Matching Networks

    OpenAIRE

    Bartunov, Sergey; Vetrov, Dmitry P.

    2016-01-01

    Despite recent advances, the remaining bottlenecks in deep generative models are the necessity of extensive training and difficulties with generalization from a small number of training examples. Both problems may be addressed by conditional generative models that are trained to adapt the generative distribution to additional input data. So far this idea has been explored only under certain limitations such as restricting the input data to be a single object or multiple objects representing the same con...

  2. A Neural Network for Generating Adaptive Lessons

    Directory of Open Access Journals (Sweden)

    Hassina Seridi-Bouchelaghem

    2005-01-01

    Full Text Available Traditional sequencing technologies developed in the field of intelligent tutoring systems have not found an immediate place in large-scale Web-based education. This study investigates the use of computational intelligence for adaptive lesson generation in a distance learning environment over the Web. An approach for adaptive pedagogical hypermedia document generation is proposed and implemented in a prototype called KnowledgeClass. This approach is based on a specialized artificial neural network model. The system allows the automatic generation of individualised courses according to the learner’s goal and previous knowledge, and can dynamically adapt the course according to the learner’s success in acquiring knowledge. Several experiments showed the effectiveness of the proposed method.

  3. Adaptive Control Algorithm of the Synchronous Generator

    Directory of Open Access Journals (Sweden)

    Shevchenko Victor

    2017-01-01

    Full Text Available The article discusses the problem of controlling a synchronous generator, namely, maintaining the stability of the controlled object under noise and disturbances in the regulation process. The model of the synchronous generator is represented by the Park-Gorev system of differential equations, where the state variables are computed relative to the synchronously rotating d, q axes. Control of the synchronous generator is organized on the basis of position-path control, using adaptation algorithms with a reference model. The basic control law is aimed at stabilizing the generated frequency, the current and the required power level, which is achieved by controlling the mechanical torque on the turbine shaft and the excitation voltage of the synchronous generator. The classic adaptation algorithm with a reference model is modified by introducing additional controller adaptation variables into the model, which keeps the error between the reference and the investigated model within the prescribed limits. Mathematical modeling of the control was carried out with continuous, nonlinear and unmeasured disturbances acting on the investigated model. The simulation results confirm the high tracking and adaptation accuracy of the investigated model with respect to the reference, with the steady-state loop error depending on the controller parameters.
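
    A toy sketch of reference-model adaptation for a first-order plant using the classic MIT rule, to illustrate the general idea of driving the error between a reference model and the controlled plant to zero. It is not the Park-Gorev generator model or the position-path controller of the article; the plant, model and gains are arbitrary assumptions.

        import numpy as np

        dt, T = 0.001, 5.0
        a, b = 2.0, 0.5            # plant: dy/dt = -a*y + b*u, with the gain b unknown to the controller
        am, bm = 2.0, 2.0          # reference model: dym/dt = -am*ym + bm*r
        gamma = 5.0                # adaptation gain (MIT rule)

        y = ym = theta = e = 0.0   # plant state, model state, adaptive feedforward gain, error
        for k in range(int(T / dt)):
            r = 1.0 if (k * dt) % 2.0 < 1.0 else -1.0   # square-wave reference
            u = theta * r                               # adaptive control law
            e = y - ym                                  # tracking error
            # MIT rule: move theta along the negative gradient of e**2/2,
            # using the reference-model output as the sensitivity signal.
            theta -= gamma * e * ym * dt
            y += (-a * y + b * u) * dt                  # plant update (explicit Euler)
            ym += (-am * ym + bm * r) * dt              # reference model update

        print(f"adapted gain: {theta:.2f} (ideal bm/b = {bm / b:.2f}), final error: {e:.4f}")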

  4. mzML2ISA & nmrML2ISA: generating enriched ISA-Tab metadata files from metabolomics XML data.

    Science.gov (United States)

    Larralde, Martin; Lawson, Thomas N; Weber, Ralf J M; Moreno, Pablo; Haug, Kenneth; Rocca-Serra, Philippe; Viant, Mark R; Steinbeck, Christoph; Salek, Reza M

    2017-08-15

    Submission to the MetaboLights repository for metabolomics data currently places the burden of reporting instrument and acquisition parameters in ISA-Tab format on users, who have to do it manually, a process that is time consuming and prone to user input error. Since the large majority of these parameters are embedded in instrument raw data files, an opportunity exists to capture this metadata more accurately. Here we report a set of Python packages that can automatically generate ISA-Tab metadata file stubs from raw XML metabolomics data files. The parsing packages are separated into mzML2ISA (encompassing mzML and imzML formats) and nmrML2ISA (nmrML format only). Overall, the use of mzML2ISA & nmrML2ISA reduces the time needed to capture metadata substantially (capturing 90% of metadata on assay and sample levels), is much less prone to user input errors, improves compliance with minimum information reporting guidelines and facilitates more finely grained data exploration and querying of datasets. mzML2ISA & nmrML2ISA are available under version 3 of the GNU General Public Licence at https://github.com/ISA-tools. Documentation is available from http://2isa.readthedocs.io/en/latest/. reza.salek@ebi.ac.uk or isatools@googlegroups.com. Supplementary data are available at Bioinformatics online.

  5. Creating preservation metadata from XML-metadata profiles

    Science.gov (United States)

    Ulbricht, Damian; Bertelmann, Roland; Gebauer, Petra; Hasler, Tim; Klump, Jens; Kirchner, Ingo; Peters-Kottig, Wolfgang; Mettig, Nora; Rusch, Beate

    2014-05-01

    Registration of dataset DOIs at DataCite makes research data citable and comes with the obligation to keep the data accessible in the future. In addition, many universities and research institutions measure data that is unique and not repeatable, like the data produced by an observational network, and they want to keep these data for future generations. In consequence, such data should be ingested into preservation systems that automatically take care of file format changes. Open source preservation software developed along the definitions of the ISO OAIS reference model is available, but during ingest of data and metadata there are still problems to be solved. File format validation is difficult because format validators are not only remarkably slow; due to the variety in file formats, different validators also return conflicting identification profiles for identical data. These conflicts are hard to resolve. Preservation systems have a deficit in the support of custom metadata. Furthermore, data producers are sometimes not aware that quality metadata is a key issue for the re-use of data. In the project EWIG, a university institute and a research institute work together with Zuse-Institute Berlin, acting as an infrastructure facility, to generate exemplary workflows for moving research data into OAIS-compliant archives, with emphasis on the geosciences. The Institute for Meteorology provides time-series data from an urban monitoring network, whereas GFZ Potsdam delivers file-based data from research projects. To identify problems in existing preservation workflows, the technical work is complemented by interviews with data practitioners. Policies for handling data and metadata are developed. Furthermore, university teaching material is created to raise the future scientists' awareness of research data management. As a testbed for ingest workflows the digital preservation system Archivematica [1] is used. During the ingest process metadata is generated that is compliant to the

  6. The essential guide to metadata for books

    CERN Document Server

    Register, Renee

    2013-01-01

    In The Essential Guide to Metadata for Books, you will learn exactly what you need to know to effectively generate, handle and disseminate metadata for books and ebooks. This comprehensive but digestible document will explain the life-cycle of book metadata, industry standards, XML, ONIX and the essential elements of metadata. It will also show you how effective, well-organized metadata can improve your efforts to sell a book, especially when it comes to marketing, discoverability and converting at the point of sale. This information-packed document also includes a glossary of terms

  7. Active Data Archive Product Tracking and Automated SPASE Metadata Generation in Support of the Heliophysics Data Environment

    Science.gov (United States)

    Bargatze, L. F.

    2013-12-01

    The understanding of solar interaction with the Earth and other bodies in the solar system is a primary goal of Heliophysics as outlined in the NASA Science Mission Directorate Science Plan. Heliophysics researchers need access to a vast collection of satellite and ground-based observations coupled with numerical simulation data to study complex processes some of which, as in the case of space weather, pose danger to physical elements of modern society. The infrastructure of the Heliophysics data environment plays a vital role in furthering the understanding of space physics processes by providing researchers with means for data discovery and access. The Heliophysics data environment is highly dynamic with thousands of data products involved. Access to data is facilitated via the Heliophysics Virtual Observatories (VxO) but routine access is possible only if the VxO SPASE metadata repositories contain accurate and up to date information. The Heliophysics Data Consortium has the stated goal of providing routine access to all relevant data products inclusively. Currently, only a small fraction of the data products relevant to Heliophysics studies have been described and registered in a VxO repository. And, for those products that have been described in SPASE, there is a significant time lag from when new data becomes available to when VxO metadata are updated to provide access. It is possible to utilize automated tools to shorten the response time of VxO data product registration via active data archive product tracking. Such a systematic approach is designed to address data access reliability by embracing the highly dynamic nature of the Heliophysics data environment. For example, the CDAWEB data repository located at the NASA Space Physics Data Facility maintains logs of the data products served to the community. These files include two that pertain to full directory list information, updated daily, and a set of SHA1SUM hash value files, one for each of more
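
    A small sketch of the kind of active tracking described above: comparing yesterday's and today's hash manifests from a data archive to find new or changed product files that would need their SPASE metadata generated or refreshed. The manifest format (one "hash  path" pair per line) and the local file names are assumptions.

        def load_manifest(path):
            """Read a SHA1SUM-style manifest: one '<hash>  <relative/path>' line per file."""
            entries = {}
            with open(path) as fh:
                for line in fh:
                    digest, _, name = line.strip().partition("  ")
                    if name:
                        entries[name] = digest
            return entries

        old = load_manifest("SHA1SUM.yesterday")   # assumed local copies of the archive manifests
        new = load_manifest("SHA1SUM.today")

        added   = sorted(set(new) - set(old))
        removed = sorted(set(old) - set(new))
        changed = sorted(name for name in set(new) & set(old) if new[name] != old[name])

        # Files in 'added' or 'changed' are candidates for (re)generating SPASE metadata records.
        print(f"{len(added)} new, {len(changed)} changed, {len(removed)} removed data products")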

  8. Moving towards shareable metadata

    OpenAIRE

    Shreeves, Sarah L.; Riley, Jenn; Milewicz, Liz

    2006-01-01

    A focus of digital libraries, particularly since the advent of the Open Archives Initiative Protocol for Metadata Harvesting, is aggregating metadata describing digital content from multiple collections. However, the quality and interoperability of the metadata often prevent such aggregations from offering much more than very simple search and discovery services. Shareable metadata is metadata which can be understood and used outside of its local environment by aggregators to provide more ad...

  9. On the Origin of Metadata

    Directory of Open Access Journals (Sweden)

    Sam Coppens

    2012-12-01

    Full Text Available Metadata has been around and has evolved for centuries, albeit not recognized as such. Medieval manuscripts typically had illuminations at the start of each chapter, being both a kind of signature for the author writing the script and a pictorial chapter anchor for the illiterates of the time. Nowadays, there is so much fragmented information on the Internet that users sometimes fail to distinguish the real facts from some bended truth, let alone being able to interconnect different facts. Here, metadata can act both as a noise reducer enabling detailed recommendations to the end users, and as a catalyst for interconnecting related information. Over time, metadata thus not only has had different modes of information; furthermore, metadata’s relation of information to meaning, i.e., “semantics”, has evolved. Darwin’s evolutionary propositions, from “species have an unlimited reproductive capacity”, through “natural selection”, to “the cooperation of mutations leads to adaptation to the environment”, show remarkable parallels to both metadata’s different modes of information and to its relation of information to meaning over time. In this paper, we show that the evolution of the use of (meta)data can be mapped to Darwin’s nine evolutionary propositions. As mankind and its behavior are products of an evolutionary process, the evolutionary process of metadata, with its different modes of information, is on the verge of a new, semantic, era.

  10. Mapping Methods Metadata for Research Data

    Directory of Open Access Journals (Sweden)

    Tiffany Chao

    2015-02-01

    Full Text Available Understanding the methods and processes implemented by data producers to generate research data is essential for fostering data reuse. Yet, producing the metadata that describes these methods remains a time-intensive activity that data producers do not readily undertake. In particular, researchers in the long tail of science often lack the financial support or tools for metadata generation, thereby limiting future access and reuse of data produced. The present study investigates research journal publications as a potential source for identifying descriptive metadata about methods for research data. Initial results indicate that journal articles provide rich descriptive content that can be sufficiently mapped to existing metadata standards with methods-related elements, resulting in a mapping of the data production process for a study. This research has implications for enhancing the generation of robust metadata to support the curation of research data for new inquiry and innovation.

  11. Active Marine Station Metadata

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Active Marine Station Metadata is a daily metadata report for active marine buoy and C-MAN (Coastal Marine Automated Network) platforms from the National Data...

  12. Metadata in CHAOS

    DEFF Research Database (Denmark)

    Lykke, Marianne; Skov, Mette; Lund, Haakon

    CHAOS (Cultural Heritage Archive Open System) provides streaming access to more than 500,000 broadcasts by the Danish Broadcasting Corporation from 1931 onwards. The archive is part of the LARM project with the purpose of enabling researchers to search, annotate, and interact with recordings. ... To optimally support the researchers, a user-centred approach was taken to develop the platform and the related metadata scheme. Based on the requirements, a three-level metadata scheme was developed: (1) core archival metadata, (2) LARM metadata, and (3) project-specific metadata. The paper analyses how ... LARM.fm's strength in providing streaming access to a large, shared corpus of broadcasts.

  13. Structured adaptive grid generation using algebraic methods

    Science.gov (United States)

    Yang, Jiann-Cherng; Soni, Bharat K.; Roger, R. P.; Chan, Stephen C.

    1993-01-01

    The accuracy of the numerical algorithm depends not only on the formal order of approximation but also on the distribution of grid points in the computational domain. Grid adaptation is a procedure which allows optimal grid redistribution as the solution progresses. It offers the prospect of accurate flow field simulations without the use of an excessively fine, computationally expensive grid. Grid adaptive schemes are divided into two basic categories: differential and algebraic. The differential method is based on a variational approach where a function which contains a measure of grid smoothness, orthogonality and volume variation is minimized by using a variational principle. This approach provides a solid mathematical basis for the adaptive method, but the Euler-Lagrange equations must be solved in addition to the original governing equations. On the other hand, the algebraic method requires much less computational effort, but the grid may not be smooth. The algebraic techniques are based on devising an algorithm where the grid movement is governed by estimates of the local error in the numerical solution. This is achieved by requiring the points in the large error regions to attract other points and points in the low error region to repel other points. The development of a fast, efficient, and robust algebraic adaptive algorithm for structured flow simulation applications is presented. This development is accomplished in a three step process. The first step is to define an adaptive weighting mesh (distribution mesh) on the basis of the equidistribution law applied to the flow field solution. The second, and probably the most crucial step, is to redistribute grid points in the computational domain according to the aforementioned weighting mesh. The third and the last step is to reevaluate the flow property by an appropriate search/interpolate scheme at the new grid locations. The adaptive weighting mesh provides the information on the desired concentration
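
    A one-dimensional sketch of the equidistribution step described above: grid points are redistributed so that each cell carries an equal share of a weight (error-estimate) function. The weight function here is an arbitrary analytic stand-in for a solution-derived error measure, and the multi-dimensional and search/interpolation steps of the paper are omitted.

        import numpy as np

        def equidistribute(x, w):
            """Redistribute grid points x so each cell holds the same integral of the weight w."""
            # Cumulative 'error mass' along the current grid (trapezoidal rule).
            cell = 0.5 * (w[1:] + w[:-1]) * np.diff(x)
            cum = np.concatenate(([0.0], np.cumsum(cell)))
            # Target: equal mass per cell -> invert the cumulative distribution.
            targets = np.linspace(0.0, cum[-1], len(x))
            return np.interp(targets, cum, x)

        x = np.linspace(0.0, 1.0, 41)
        w = 1.0 + 50.0 * np.exp(-200.0 * (x - 0.5) ** 2)   # stand-in for a local error estimate
        x_new = equidistribute(x, w)
        print(f"min spacing {np.diff(x_new).min():.4f}, max spacing {np.diff(x_new).max():.4f}")
        # The finest cells concentrate near x = 0.5, where the weight (error estimate) peaks.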

  14. Next generation intelligent environments ambient adaptive systems

    CERN Document Server

    Nothdurft, Florian; Heinroth, Tobias; Minker, Wolfgang

    2016-01-01

    This book covers key topics in the field of intelligent ambient adaptive systems. It focuses on the results worked out within the framework of the ATRACO (Adaptive and TRusted Ambient eCOlogies) project. The theoretical background, the developed prototypes, and the evaluated results form a fertile ground useful for the broad intelligent environments scientific community as well as for industrial interest groups. The new edition provides: Chapter authors comment on their work on ATRACO with final remarks as viewed in retrospective Each chapter has been updated with follow-up work emerging from ATRACO An extensive introduction to state-of-the-art statistical dialog management for intelligent environments Approaches are introduced on how Trust is reflected during the dialog with the system.

  15. New challenges in grid generation and adaptivity for scientific computing

    CERN Document Server

    Formaggia, Luca

    2015-01-01

    This volume collects selected contributions from the “Fourth Tetrahedron Workshop on Grid Generation for Numerical Computations”, which was held in Verbania, Italy in July 2013. The previous editions of this Workshop were hosted by the Weierstrass Institute in Berlin (2005), by INRIA Rocquencourt in Paris (2007), and by Swansea University (2010). This book covers different, though related, aspects of the field: the generation of quality grids for complex three-dimensional geometries; parallel mesh generation algorithms; mesh adaptation, including both theoretical and implementation aspects; grid generation and adaptation on surfaces – all with an interesting mix of numerical analysis, computer science and strongly application-oriented problems.

  16. Predicting Privacy Attitudes Using Phone Metadata

    OpenAIRE

    2016-01-01

    With the increasing usage of smartphones, there is a corresponding increase in the phone metadata generated by individuals using these devices. Managing the privacy of personal information on these devices can be a complex task. Recent research has suggested the use of social and behavioral data for automatically recommending privacy settings. This paper is the first effort to connect users' phone use metadata with their privacy attitudes. Based on a 10-week long field study involving phone m...

  17. USGIN ISO metadata profile

    Science.gov (United States)

    Richard, S. M.

    2011-12-01

    The USGIN project has drafted and is using a specification for use of ISO 19115/19/39 metadata, recommendations for simple metadata content, and a proposal for a URI scheme to identify resources using resolvable http URIs (see http://lab.usgin.org/usgin-profiles). The principal target use case is a catalog in which resources can be registered and described by data providers for discovery by users. We are currently using the ESRI Geoportal (Open Source), with configuration files for the USGIN profile. The metadata offered by the catalog must provide sufficient content to guide search engines to locate requested resources, to describe the resource content, provenance, and quality so users can determine if the resource will serve for the intended usage, and finally to enable human users and software clients to obtain or access the resource. In order to achieve an operational federated catalog system, provisions in the ISO specification must be restricted and usage clarified to reduce the heterogeneity of 'standard' metadata and service implementations such that a single client can search against different catalogs, and the metadata returned by catalogs can be parsed reliably to locate required information. Usage of the complex ISO 19139 XML schema allows for a great deal of structured metadata content, but the heterogeneity in approaches to content encoding has hampered development of sophisticated client software that can take advantage of the rich metadata; the lack of such clients in turn reduces the motivation for metadata producers to produce content-rich metadata. If the only significant use of the detailed, structured metadata is to format it into text for people to read, then the detailed information could be put in free text elements and be just as useful. In order for complex metadata encoding and content to be useful, there must be clear and unambiguous conventions on the encoding that are utilized by the community that wishes to take advantage of advanced metadata

  18. An Adaptive Multivariable Control System for Hydroelectric Generating Units

    Directory of Open Access Journals (Sweden)

    Gunne J. Hegglid

    1983-04-01

    Full Text Available This paper describes an adaptive multivariable control system for hydroelectric generating units. The system is based on a detailed mathematical model of the synchronous generator, the water turbine, the exciter system and the turbine control servo. The models of the water penstock and the connected power system are static. These assumptions are not considered crucial. The system uses a Kalman filter for optimal estimation of the state variables and the parameters of the electric grid equivalent. The multivariable control law is computed from a Riccati equation and is made adaptive to the generator's running condition by means of a least squares technique.
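
    A compact sketch of deriving a multivariable state-feedback gain from a Riccati equation for a small linear system, the kind of computation the control law above relies on. The two-state discrete model here is an arbitrary placeholder rather than the generator/turbine model of the paper, and the Kalman filter and adaptation stages are omitted.

        import numpy as np

        def dlqr(A, B, Q, R, iters=500):
            """Discrete-time LQR gain via fixed-point iteration of the Riccati equation."""
            P = Q.copy()
            for _ in range(iters):
                K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
                P = Q + A.T @ P @ (A - B @ K)
            return K

        # Placeholder two-state discrete model (not the Park-Gorev generator equations).
        A = np.array([[1.0, 0.1],
                      [0.0, 0.95]])
        B = np.array([[0.0],
                      [0.1]])
        Q = np.diag([1.0, 0.1])    # weight on state deviations
        R = np.array([[0.01]])     # weight on control effort

        K = dlqr(A, B, Q, R)
        print("state-feedback gain:", K)   # control law: u = -K x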

  19. GENERATION AND APPLICATION OF UNSTRUCTURED ADAPTIVE MESHES WITH MOVING BOUNDARIES

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    This paper presents a method to generate unstructured adaptive meshes with moving boundaries and its application to CFD. The Delaunay triangulation criterion, in conjunction with automatic point creation, is used to generate 2-D and 3-D unstructured grids. A local grid regeneration method is proposed to cope with moving boundaries. Numerical examples include the interactions of shock waves with movable bodies and the movement of a projectile within a ram accelerator, illustrating the efficiency and robustness of the mesh generation method developed.

  20. Learning resource metadata

    Directory of Open Access Journals (Sweden)

    Silvana Temesio

    2015-10-01

    Full Text Available Metadata of educational resources are the subject of analysis, including LOM, OBAA and, in particular, the LOM-ES profile and its Annex VII on accessibility. The conclusions highlight the importance of obtaining quality descriptions of resources in order to support discovery, localization and reuse operations. Information professionals play a central role in metadata registration.

  1. Visualization of JPEG Metadata

    Science.gov (United States)

    Malik Mohamad, Kamaruddin; Deris, Mustafa Mat

    There is much more information embedded in a JPEG image than just the graphics. Visualization of its metadata would benefit digital forensic investigators by letting them view embedded data, including in corrupted images where no graphics can be displayed, in order to assist in evidence collection for cases such as child pornography or steganography. Tools such as metadata readers, editors and extraction tools are already available, but they mostly focus on visualizing the attribute information of JPEG Exif. However, none has visualized metadata by consolidating the marker summary, header structure, Huffman tables and quantization tables in a single program. In this paper, metadata visualization is done by developing a program that is able to summarize all existing markers, the header structure, the Huffman tables and the quantization tables in a JPEG. The result shows that visualization of metadata helps in viewing the hidden information within a JPEG more easily.
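
    A small sketch of the marker-level view such a tool works from: scanning a JPEG byte stream and listing its segment markers (APPn, DQT, DHT, SOF, SOS, ...) with their lengths. Decoding the Huffman and quantization table contents, as the paper's program does, is left out, and the file name is a placeholder.

        import struct

        MARKER_NAMES = {0xC0: "SOF0", 0xC4: "DHT", 0xD8: "SOI", 0xD9: "EOI",
                        0xDA: "SOS", 0xDB: "DQT", 0xE0: "APP0", 0xE1: "APP1 (Exif)"}

        def list_markers(path):
            with open(path, "rb") as fh:
                data = fh.read()
            i = 0
            while i < len(data) - 3:
                if data[i] != 0xFF or data[i + 1] in (0x00, 0xFF):
                    i += 1                     # not a marker (stuffed byte or fill byte)
                    continue
                marker = data[i + 1]
                name = MARKER_NAMES.get(marker, f"0x{marker:02X}")
                if marker in (0xD8, 0xD9) or 0xD0 <= marker <= 0xD7:
                    print(f"offset {i:6d}: {name} (no payload)")
                    i += 2
                else:
                    (length,) = struct.unpack(">H", data[i + 2:i + 4])
                    print(f"offset {i:6d}: {name}, segment length {length}")
                    if marker == 0xDA:         # entropy-coded scan data follows; stop here
                        break
                    i += 2 + length

        list_markers("sample.jpg")             # placeholder file name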

  2. Mdmap: A Tool for Metadata Collection and Matching

    Directory of Open Access Journals (Sweden)

    Rico Simke

    2014-10-01

    Full Text Available This paper describes a front-end for the semi-automatic collection, matching, and generation of bibliographic metadata obtained from different sources for use within a digitization architecture. The Library of a Billion Words project is building an infrastructure for digitizing text that requires high-quality bibliographic metadata, but currently only sparse metadata from digitized editions is available. The project’s approach is to collect metadata for each digitized item from as many sources as possible. An expert user can then use an intuitive front-end tool to choose matching metadata. The collected metadata are centrally displayed in an interactive grid view. The user can choose which metadata they want to assign to a certain edition, and export these data as MARCXML. This paper presents a new approach to bibliographic work and metadata correction. We try to achieve a high quality of the metadata by generating a large amount of metadata to choose from, as well as by giving librarians an intuitive tool to manage their data.

  3. Generating a Style-Adaptive Trajectory from Multiple Demonstrations

    Directory of Open Access Journals (Sweden)

    Yue Zhao

    2014-07-01

    Full Text Available Trajectory learning and generation from demonstration have been widely discussed in recent years, with promising progress made. Existing approaches, including the Gaussian Mixture Model (GMM), affine functions and Dynamic Movement Primitives (DMPs), have proven their applicability to learning the features and styles of existing trajectories and generating similar trajectories that can adapt to different dynamic situations. However, in many applications, such as grasping an object, shooting a ball, etc., different goals require trajectories of different styles. An issue that must be resolved is how to reproduce a trajectory with a suitable style. In this paper, we propose a style-adaptive trajectory generation approach based on DMPs, by which the style of the reproduced trajectories can change smoothly as the new goal changes. The proposed approach first adopts a Point Distribution Model (PDM) to obtain the principal trajectories for different styles, then learns the model of each principal trajectory independently using DMPs, and finally adapts the parameters of the trajectory model smoothly according to the new goal using an adaptive goal-to-style mechanism. This paper further discusses the application of the approach on small-sized robots for an adaptive shooting task and on a humanoid robot arm to generate motions for table tennis-playing with different styles.
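
    A condensed sketch of a single-dimension Dynamic Movement Primitive, the building block the approach above adapts per style: a learned forcing term reproduces a demonstrated trajectory, and the goal parameter g lets the movement be re-targeted. The PDM-based style interpolation and the goal-to-style mechanism of the paper are not included, and the constants and basis-function placement are conventional textbook choices.

        import numpy as np

        K, D, alpha_s, n_basis = 150.0, 2.0 * np.sqrt(150.0), 4.0, 20

        # Demonstration: a smooth minimum-jerk-like reach from 0 to 1.
        dt, T = 0.01, 1.0
        t = np.arange(0.0, T, dt)
        y = 10 * (t / T) ** 3 - 15 * (t / T) ** 4 + 6 * (t / T) ** 5
        yd, ydd = np.gradient(y, dt), np.gradient(np.gradient(y, dt), dt)
        y0, g_demo = y[0], y[-1]

        # Learn the forcing term f(s) with Gaussian basis functions on the canonical variable s.
        s = np.exp(-alpha_s * t / T)
        centers = np.exp(-alpha_s * np.linspace(0.0, 1.0, n_basis))
        widths = 1.0 / (np.gradient(centers) ** 2)
        psi = np.exp(-widths * (s[:, None] - centers) ** 2)
        f_target = (ydd - K * (g_demo - y) + D * yd) / (g_demo - y0)
        w = (psi * (s * f_target)[:, None]).sum(0) / ((psi * (s ** 2)[:, None]).sum(0) + 1e-9)

        # Reproduce the movement toward a (possibly new) goal.
        def rollout(g, y_start=0.0):
            y, yd = y_start, 0.0
            traj = []
            for si in np.exp(-alpha_s * t / T):
                p = np.exp(-widths * (si - centers) ** 2)
                f = (p @ w) / (p.sum() + 1e-9) * si
                ydd = K * (g - y) - D * yd + (g - y_start) * f
                yd += ydd * dt
                y += yd * dt
                traj.append(y)
            return np.array(traj)

        print(rollout(g=1.0)[-1], rollout(g=1.5)[-1])   # ends near the demonstrated and the new goal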

  4. Generating a Style-adaptive Trajectory from Multiple Demonstrations

    Directory of Open Access Journals (Sweden)

    Yue Zhao

    2014-07-01

    Full Text Available Trajectory learning and generation from demonstration have been widely discussed in recent years, with promising progress made. Existing approaches, including the Gaussian Mixture Model (GMM), affine functions and Dynamic Movement Primitives (DMPs), have proven their applicability to learning the features and styles of existing trajectories and generating similar trajectories that can adapt to different dynamic situations. However, in many applications, such as grasping an object, shooting a ball, etc., different goals require trajectories of different styles. An issue that must be resolved is how to reproduce a trajectory with a suitable style. In this paper, we propose a style-adaptive trajectory generation approach based on DMPs, by which the style of the reproduced trajectories can change smoothly as the new goal changes. The proposed approach first adopts a Point Distribution Model (PDM) to obtain the principal trajectories for different styles, then learns the model of each principal trajectory independently using DMPs, and finally adapts the parameters of the trajectory model smoothly according to the new goal using an adaptive goal-to-style mechanism. This paper further discusses the application of the approach on small-sized robots for an adaptive shooting task and on a humanoid robot arm to generate motions for table tennis-playing with different styles.

  5. GEOSS Clearinghouse Quality Metadata Analysis

    Science.gov (United States)

    Masó, J.; Díaz, P.; Ninyerola, M.; Sevillano, E.; Pons, X.

    2012-04-01

    The proliferation of similar Earth observation digital data products increases the relevance of data quality information about those datasets. GEOSS is investing important efforts in promoting the acknowledgment of data quality in Earth observation. Activities such as the regular meetings of QA4EO and projects such as GeoViQua aim to make data quality available and visible in the GEOSS Common Infrastructure (GCI). The clearinghouse is one of the main components of the GCI; it catalogues all the known Earth observation resources and provides them via the GEO Portal. After several initiatives to stimulate this (such as AIP4), most of the relevant international data providers have referenced their data in the GEOSS Component and Service Registry; therefore, the GEOSS clearinghouse can be considered a global catalogue of the main Earth observation products. However, some important catalogues are still in the process of being integrated. We carried out an exhaustive study of the data quality elements available in the metadata catalogued in the GEOSS clearinghouse, to elaborate a state-of-the-art report on data quality. The clearinghouse is harvested using its OGC CSW interface. Metadata following the ISO 19115 standard are saved as ISO 19139 XML files. The semi-automatic methodology, previously applied in regional SDI studies, generates a large metadata database that can be further analyzed. The number of metadata records harvested was 97203 (October 2011). The two main metadata nodes studied are directly related to the data quality information package (DQ_DataQuality) in ISO. These are the quality indicators (DQ_Element) and the lineage information (LI_Lineage). Moreover, we also considered the usage information (MD_Usage). The results reveal 19107 (19.66%) metadata records containing quality indicators, which include a total of 52187 quality indicators. The results also show that positional accuracy is the most represented, with 37.19% of the total
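
    A small sketch of the kind of counting reported above: parsing harvested ISO 19139 records and tallying how many contain quality indicators (gmd:report with a DQ_Element subtype) and lineage (gmd:LI_Lineage). The directory of XML files is a placeholder; the namespace follows the ISO 19139 schema, while the harvesting step itself is not shown.

        import glob
        import xml.etree.ElementTree as ET

        NS = {"gmd": "http://www.isotc211.org/2005/gmd"}

        with_quality = with_lineage = total = 0
        for path in glob.glob("harvested_records/*.xml"):        # placeholder directory
            total += 1
            root = ET.parse(path).getroot()
            # DQ_DataQuality wraps both the quality reports and the lineage section.
            reports = root.findall(".//gmd:DQ_DataQuality/gmd:report", NS)
            lineage = root.findall(".//gmd:DQ_DataQuality/gmd:lineage/gmd:LI_Lineage", NS)
            with_quality += bool(reports)
            with_lineage += bool(lineage)

        print(f"{with_quality}/{total} records carry quality indicators, "
              f"{with_lineage}/{total} carry lineage information")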

  6. Metadata in CHAOS

    DEFF Research Database (Denmark)

    Lykke, Marianne; Skov, Mette; Lund, Haakon

    is to provide access to broadcasts and provide tools to segment and manage concrete segments of radio broadcasts. Although the assigned metadata are project-specific, they serve as invaluable access points for fellow researchers due to their factual and neutral nature. The researchers particularly stress LARM.fm...... researchers apply the metadata scheme in their research work. The study consists of two parts: a) a qualitative study of the subjects and vocabulary of the applied metadata and annotations, and b) 5 semi-structured interviews about goals for tagging. The findings clearly show that the primary role of LARM.fm...

  7. Adaptive mesh generation for viscous flows using Delaunay triangulation

    Science.gov (United States)

    Mavriplis, Dimitri J.

    1990-01-01

    A method for generating an unstructured triangular mesh in two dimensions, suitable for computing high Reynolds number flows over arbitrary configurations, is presented. The method is based on a Delaunay triangulation, which is performed in a locally stretched space in order to obtain very high aspect ratio triangles in the boundary layer and the wake regions. It is shown how the method can be coupled with an unstructured Navier-Stokes solver to produce a solution-adaptive mesh generation procedure for viscous flows.
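
    The key idea, triangulating in a stretched space and reusing the connectivity on the original points, can be sketched with an off-the-shelf Delaunay routine. The sketch below applies a single global stretching factor, whereas the paper uses a locally varying stretching; the point clustering and the factor of 20 are illustrative assumptions.

        import numpy as np
        from scipy.spatial import Delaunay

        # Points clustered towards a wall at y = 0, as in a boundary layer.
        rng = np.random.default_rng(0)
        x = rng.uniform(0.0, 1.0, 400)
        y = rng.uniform(0.0, 1.0, 400) ** 3       # crude clustering near y = 0
        points = np.column_stack([x, y])

        # Triangulate in a stretched space, then reuse the connectivity on the
        # unstretched points to obtain high-aspect-ratio triangles near the wall.
        stretch = np.array([1.0, 20.0])
        tri = Delaunay(points * stretch)
        triangles = tri.simplices                 # (n_tri, 3) indices into `points`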

  8. GSN Photo Metadata

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — GSN Photo Metadata contains photographs of Global Climate Observing System (GCOS) Surface Network (GSN) stations that have been submitted to the National Climatic...

  9. Data, Metadata - Who Cares?

    Science.gov (United States)

    Baumann, Peter

    2013-04-01

    There is a traditional saying that metadata are understandable, semantic-rich, and searchable, whereas data are big, with no accessible semantics, and just downloadable. Not only has this led to an imbalance of search support from a user perspective, but also, underneath, to a deep technology divide, with relational databases often used for metadata and bespoke archive solutions for data. Our vision is that this barrier will be overcome, and that data and metadata will become searchable alike, leveraging the potential of semantic technologies in combination with scalability technologies. Ultimately, in this vision, data access and ad-hoc processing and filtering will no longer be distinguished, forming a uniformly accessible data universe. In the European EarthServer initiative, we work towards this vision by federating database-style raster query languages with metadata search and geo broker technology. We present the approach taken, how it can leverage OGC standards, the benefits envisaged, and first results.

  10. NAIP National Metadata

    Data.gov (United States)

    Farm Service Agency, Department of Agriculture — The NAIP National Metadata Map contains USGS Quarter Quad and NAIP Seamline boundaries for every year NAIP imagery has been collected. Clicking on the map also makes...

  11. The RBV metadata catalog

    Science.gov (United States)

    Andre, Francois; Fleury, Laurence; Gaillardet, Jerome; Nord, Guillaume

    2015-04-01

    RBV (Réseau des Bassins Versants) is a French initiative to consolidate the national efforts made by more than 15 elementary observatories funded by various research institutions (CNRS, INRA, IRD, IRSTEA, Universities) that study river and drainage basins. The RBV Metadata Catalogue aims at giving a unified view of the work produced by every observatory to both the members of the RBV network and any external person interested in this domain of research. Another goal is to share this information with other existing metadata portals. Metadata management is heterogeneous among observatories, ranging from absence to mature harvestable catalogues. Here, we explain the strategy used to design a state-of-the-art catalogue in the face of this situation. Its main features are as follows:
    - Multiple input methods: metadata records in the catalogue can either be entered with the graphical user interface, harvested from an existing catalogue or imported from an information system through simplified web services.
    - Hierarchical levels: metadata records may describe either an observatory, one of its experimental sites or a single dataset produced by one instrument.
    - Multilingualism: metadata can be easily entered in several configurable languages.
    - Compliance with standards: the back-office part of the catalogue is based on a CSW metadata server (Geosource), which ensures ISO 19115 compatibility and the ability to be harvested (globally or partially). Ongoing tasks focus on the use of SKOS thesauri and SensorML descriptions of the sensors.
    - Ergonomics: the user interface is built with the GWT Framework to offer a rich client application with fully Ajax-based navigation.
    - Source code sharing: the work has led to the development of reusable components which can be used to quickly create new metadata forms in other GWT applications.
    You can visit the catalogue (http://portailrbv.sedoo.fr/) or contact us by email at rbv@sedoo.fr.

  12. Xeml Lab: a tool that supports the design of experiments at a graphical interface and generates computer-readable metadata files, which capture information about genotypes, growth conditions, environmental perturbations and sampling strategy.

    Science.gov (United States)

    Hannemann, Jan; Poorter, Hendrik; Usadel, Björn; Bläsing, Oliver E; Finck, Alex; Tardieu, Francois; Atkin, Owen K; Pons, Thijs; Stitt, Mark; Gibon, Yves

    2009-09-01

    Data mining depends on the ability to access machine-readable metadata that describe genotypes, environmental conditions, and sampling times and strategy. This article presents Xeml Lab. The Xeml Interactive Designer provides an interactive graphical interface with which complex experiments can be designed, and concomitantly generates machine-readable metadata files. It uses a new eXtensible Markup Language (XML)-derived dialect termed XEML. Xeml Lab includes a new ontology for environmental conditions, called Xeml Environment Ontology. However, to provide versatility, it is designed to be generic and also accepts other commonly used ontology formats, including OBO and OWL. A review summarizing important environmental conditions that need to be controlled, monitored and captured as metadata is posted in a Wiki (http://www.codeplex.com/XeO) to promote community discussion. The usefulness of Xeml Lab is illustrated by two meta-analyses of a large set of experiments that were performed with Arabidopsis thaliana during 5 years. The first reveals sources of noise that affect measurements of metabolite levels and enzyme activities. The second shows that Arabidopsis maintains remarkably stable levels of sugars and amino acids across a wide range of photoperiod treatments, and that adjustment of starch turnover and the leaf protein content contribute to this metabolic homeostasis.

  13. SM4AM: A Semantic Metamodel for Analytical Metadata

    DEFF Research Database (Denmark)

    Varga, Jovan; Romero, Oscar; Pedersen, Torben Bach

    2014-01-01

    We present SM4AM, a Semantic Metamodel for Analytical Metadata, created as an RDF formalization of the Analytical Metadata artifacts needed for user assistance exploitation purposes in next generation BI systems. We consider the Linked Data initiative and its relevance for user assistance...

  14. Dr. Hadoop: an infinite scalable metadata management for Hadoop How the baby elephant becomes immortal

    Institute of Scientific and Technical Information of China (English)

    Dipayan DEV; Ripon PATGIRI

    2016-01-01

    In this exabyte-scale era, data increases at an exponential rate. This in turn generates a massive amount of metadata in the file system. Hadoop is the most widely used framework to deal with big data. Due to this growth of a huge amount of metadata, however, the efficiency of Hadoop has been questioned numerous times by many researchers. Therefore, it is essential to create an efficient and scalable metadata management for Hadoop. Hash-based mapping and subtree partitioning are suitable in distributed metadata management schemes. Subtree partitioning does not uniformly distribute workload among the metadata servers, and metadata needs to be migrated to keep the load roughly balanced. Hash-based mapping suffers from a constraint on the locality of metadata, though it uniformly distributes the load among NameNodes, which are the metadata servers of Hadoop. In this paper, we present a circular metadata management mechanism named dynamic circular metadata splitting (DCMS). DCMS preserves metadata locality using consistent hashing and locality-preserving hashing, keeps replicated metadata for excellent reliability, and dynamically distributes metadata among the NameNodes to keep the load balanced. The NameNode is the centralized heart of Hadoop: it keeps the directory tree of all files, and its failure causes a single point of failure (SPOF). DCMS removes Hadoop's SPOF and provides an efficient and scalable metadata management. The new framework is named 'Dr. Hadoop' after the name of the authors.
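
    The placement half of DCMS rests on consistent hashing of metadata keys onto NameNodes. The minimal ring below (the class name MetadataRing and the use of MD5 with 64 virtual nodes are illustrative assumptions) shows how a file path maps to a server; the locality-preserving hashing and replication of DCMS are not modelled.

        import bisect
        import hashlib

        class MetadataRing:
            """Minimal consistent-hashing ring for placing metadata keys on NameNodes."""

            def __init__(self, nodes, vnodes=64):
                self.vnodes = vnodes
                self._ring = []                    # sorted list of (hash, node)
                for node in nodes:
                    self.add_node(node)

            @staticmethod
            def _hash(value):
                return int(hashlib.md5(value.encode()).hexdigest(), 16)

            def add_node(self, node):
                for i in range(self.vnodes):
                    bisect.insort(self._ring, (self._hash(f"{node}#{i}"), node))

            def lookup(self, path):
                """Return the NameNode responsible for a metadata key (e.g. a file path)."""
                h = self._hash(path)
                idx = bisect.bisect(self._ring, (h, "")) % len(self._ring)
                return self._ring[idx][1]

        ring = MetadataRing(["namenode-1", "namenode-2", "namenode-3"])
        print(ring.lookup("/user/alice/experiment/run-42.dat"))

    Consistent hashing keeps the load roughly uniform across NameNodes while limiting the metadata that has to migrate when servers join or leave, which is the property the abstract contrasts with subtree partitioning.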

  15. NCPP's Use of Standard Metadata to Promote Open and Transparent Climate Modeling

    Science.gov (United States)

    Treshansky, A.; Barsugli, J. J.; Guentchev, G.; Rood, R. B.; DeLuca, C.

    2012-12-01

    The National Climate Predictions and Projections (NCPP) Platform is developing comprehensive regional and local information about the evolving climate to inform decision making and adaptation planning. This includes both creating and providing tools to create metadata about the models and processes used to create its derived data products. NCPP is using the Common Information Model (CIM), an ontology developed by a broad set of international partners in climate research, as its metadata language. This use of a standard ensures interoperability within the climate community as well as permitting access to the ecosystem of tools and services emerging alongside the CIM. The CIM itself is divided into a general-purpose (UML & XML) schema, which structures metadata documents, and a project- or community-specific (XML) Controlled Vocabulary (CV), which constrains the content of metadata documents. NCPP has already modified the CIM Schema to accommodate downscaling models, simulations, and experiments. NCPP is currently developing a CV for use by the downscaling community. Incorporating downscaling into the CIM will lead to several benefits: easy access to the existing CIM documents describing CMIP5 models and simulations that are being downscaled; access to software tools that have been developed to search, manipulate, and visualize CIM metadata; and coordination with national and international efforts such as ES-DOC that are working to make climate model descriptions and datasets interoperable. Providing detailed metadata descriptions which include the full provenance of derived data products will contribute to making that data (and the models and processes which generated that data) more open and transparent to the user community.

  16. An Adaptation of the ADA Language for Machine Generated Compilers.

    Science.gov (United States)

    1980-12-01

    Only OCR fragments of the scanned report remain. The recoverable details identify a Naval Postgraduate School (Monterey, California) thesis by M. A. Rogers and L. P. Myers, dated December 1980, titled "An Adaptation of the Ada Language for Machine Generated Compilers", together with footnotes noting that Ada is named for Ada Augusta, Lady Lovelace, the daughter of the poet Lord Byron and Charles Babbage's programmer, and that UNIX is a trademark/service mark of the Bell...

  17. A standard for measuring metadata quality in spectral libraries

    Science.gov (United States)

    Rasaiah, B.; Jones, S. D.; Bellman, C.

    2013-12-01

    There is an urgent need within the international remote sensing community to establish a metadata standard for field spectroscopy that ensures high quality, interoperable metadata sets that can be archived and shared efficiently within Earth observation data sharing systems. Metadata are an important component in the cataloguing and analysis of in situ spectroscopy datasets because of their central role in identifying and quantifying the quality and reliability of spectral data and the products derived from them. This paper presents approaches to measuring metadata completeness and quality in spectral libraries to determine the reliability, interoperability, and re-usability of a dataset. Quality parameters that meet the unique requirements of in situ spectroscopy datasets, across many campaigns, are explored. The challenges of ensuring that data creators, owners, and users maintain a high level of data integrity throughout the lifecycle of a dataset are examined. Issues such as field measurement methods, instrument calibration, and data representativeness are investigated. The proposed metadata standard incorporates expert recommendations that include metadata protocols critical to all campaigns, and those that are restricted to campaigns for specific target measurements. The implications of semantics and syntax for a robust and flexible metadata standard are also considered. Approaches towards an operational and logistically viable implementation of a quality standard are discussed. This paper also proposes a way forward for adapting and enhancing current geospatial metadata standards to the unique requirements of field spectroscopy metadata quality.

  18. Metadata and the Web

    Directory of Open Access Journals (Sweden)

    Mehdi Safari

    2004-12-01

    Full Text Available The rapid increase in the number and variety of resources on the World Wide Web has made the problem of resource description and discovery central to discussions about the efficiency and evolution of this medium. The inappropriateness of traditional schemas of resource description for web resources has recently encouraged significant activity in defining web-compatible schemas, named "metadata". While conceptually old for library and information professionals, metadata has taken on a more significant and paramount role than ever before and is considered the golden key for the next evolution of the web in the form of the semantic web. This article is intended to be a brief introduction to metadata and presents an overview of metadata on the web.

  19. The Metadata Anonymization Toolkit

    OpenAIRE

    Voisin, Julien; Guyeux, Christophe; Bahi, Jacques M.

    2012-01-01

    This document summarizes the experience of Julien Voisin during the 2011 edition of the well-known Google Summer of Code. This project is a first step in the domain of metadata anonymization in Free Software. This article is organized in three parts. First, a state of the art and a categorization of usual metadata; then the privacy policy is exposed and discussed in order to find the right balance between information loss and privacy enhancement. Finally, the specification of the Metadat...

  20. The Metadata Anonymization Toolkit

    OpenAIRE

    2012-01-01

    This document summarizes the experience of Julien Voisin during the 2011 edition of the well-known Google Summer of Code. This project is a first step in the domain of metadata anonymization in Free Software. This article is organized in three parts. First, a state of the art and a categorization of usual metadata; then the privacy policy is exposed and discussed in order to find the right balance between information loss and privacy enhancement. Finally, the specification of the Metadat...

  1. Omics Metadata Management Software v. 1 (OMMS)

    Energy Technology Data Exchange (ETDEWEB)

    2013-09-09

    Our application, the Omics Metadata Management Software (OMMS), answers both needs, empowering experimentalists to generate intuitive, consistent metadata, and to perform bioinformatics analyses and information management tasks via a simple and intuitive web-based interface. Several use cases with short-read sequence datasets are provided to showcase the full functionality of the OMMS, from metadata curation tasks, to bioinformatics analyses and results management and downloading. The OMMS can be implemented as a stand-alone package for individual laboratories, or can be configured for web-based deployment supporting geographically dispersed research teams. Our software was developed with open-source bundles, is flexible and extensible, and is easily installed and run by operators with general system administration and scripting language literacy.

  2. Metadata for Electronic Information Resources

    Science.gov (United States)

    2004-12-01

    Only fragments of this record's text are preserved. The recoverable portion notes that METS provides an XML DTD that can point to metadata in other schemes by declaring the scheme that is being used; the remainder consists of reference fragments, including www.niso.org/news/Metadata_simpler.pdf and International Federation of Library Associations and Institutions (IFLA) (2002), Digital Libraries: Metadata.

  3. A programmatic view of metadata, metadata services, and metadata flow in ATLAS

    CERN Document Server

    Malon, D; The ATLAS collaboration; Gallas, E; Stewart, G

    2012-01-01

    The volume and diversity of metadata in an experiment of the size and scope of ATLAS are considerable. Even the definition of metadata may seem context-dependent: data that are primary for one purpose may be metadata for another. Trigger information and data from the Large Hadron Collider itself provide cases in point, but examples abound. Metadata about logical or physics constructs, such as data-taking periods and runs and luminosity blocks and events and algorithms, often need to be mapped to deployment and production constructs, such as datasets and jobs and files and software versions, and vice versa. Metadata at one level of granularity may have implications at another. ATLAS metadata services must integrate and federate information from inhomogeneous sources and repositories, map metadata about logical or physics constructs to deployment and production constructs, provide a means to associate metadata at one level of granularity with processing or decision-making at another, offer a coherent and integr...

  4. A Programmatic View of Metadata, Metadata Services, and Metadata Flow in ATLAS

    CERN Document Server

    CERN. Geneva

    2012-01-01

    The volume and diversity of metadata in an experiment of the size and scope of ATLAS is considerable. Even the definition of metadata may seem context-dependent: data that are primary for one purpose may be metadata for another. Trigger information and data from the Large Hadron Collider itself provide cases in point, but examples abound. Metadata about logical or physics constructs, such as data-taking periods and runs and luminosity blocks and events and algorithms, often need to be mapped to deployment and production constructs, such as datasets and jobs and files and software versions, and vice versa. Metadata at one level of granularity may have implications at another. ATLAS metadata services must integrate and federate information from inhomogeneous sources and repositories, map metadata about logical or physics constructs to deployment and production constructs, provide a means to associate metadata at one level of granularity with processing or decision-making at another, offer a coherent and ...

  5. A Programmatic View of Metadata, Metadata Services, and Metadata Flow in ATLAS

    CERN Document Server

    Malon, D; The ATLAS collaboration; Gallas, E; Stewart, G

    2012-01-01

    The volume and diversity of metadata in an experiment of the size and scope of ATLAS are considerable. Even the definition of metadata may seem context-dependent: data that are primary for one purpose may be metadata for another. Trigger information and data from the Large Hadron Collider itself provide cases in point, but examples abound. Metadata about logical or physics constructs, such as data-taking periods and runs and luminosity blocks and events and algorithms, often need to be mapped to deployment and production constructs, such as datasets and jobs and files and software versions, and vice versa. Metadata at one level of granularity may have implications at another. ATLAS metadata services must integrate and federate information from inhomogeneous sources and repositories, map metadata about logical or physics constructs to deployment and production constructs, provide a means to associate metadata at one level of granularity with processing or decision-making at another, offer a coherent and integr...

  6. Adaptive Dynamic Surface Control for Generator Excitation Control System

    Directory of Open Access Journals (Sweden)

    Zhang Xiu-yu

    2014-01-01

    Full Text Available For the generator excitation control system equipped with a static var compensator (SVC) and unknown parameters, a novel adaptive dynamic surface control scheme is proposed based on a neural network and a tracking-error-transformed function, with the following features: (1) the transformation of the excitation generator model to a linear system is omitted; (2) the prespecified performance of the tracking error can be guaranteed by combining with the tracking-error-transformed function; (3) the computational burden is greatly reduced by estimating the norm of the weighted vector of the neural network instead of the weighted vector itself, making the scheme more suitable for real-time control; and (4) the explosion-of-complexity problem inherent in backstepping control can be eliminated. It is proved that the new scheme can make the system semiglobally uniformly ultimately bounded. Simulation results show the effectiveness of this control scheme.

  7. Adaptive microfluidic gradient generator for quantitative chemotaxis experiments

    Science.gov (United States)

    Anielski, Alexander; Pfannes, Eva K. B.; Beta, Carsten

    2017-03-01

    Chemotactic motion in a chemical gradient is an essential cellular function that controls many processes in the living world. For a better understanding and more detailed modelling of the underlying mechanisms of chemotaxis, quantitative investigations in controlled environments are needed. We developed a setup that allows us to separately address the dependencies of the chemotactic motion on the average background concentration and on the gradient steepness of the chemoattractant. In particular, both the background concentration and the gradient steepness can be kept constant at the position of the cell while it moves along in the gradient direction. This is achieved by generating a well-defined chemoattractant gradient using flow photolysis. In this approach, the chemoattractant is released by a light-induced reaction from a caged precursor in a microfluidic flow chamber upstream of the cell. The flow photolysis approach is combined with an automated real-time cell tracker that determines changes in the cell position and triggers movement of the microscope stage such that the cell motion is compensated and the cell remains at the same position in the gradient profile. The gradient profile can be either determined experimentally using a caged fluorescent dye or may be alternatively determined by numerical solutions of the corresponding physical model. To demonstrate the function of this adaptive microfluidic gradient generator, we compare the chemotactic motion of Dictyostelium discoideum cells in a static gradient and in a gradient that adapts to the position of the moving cell.

  8. The XML Metadata Editor of GFZ Data Services

    Science.gov (United States)

    Ulbricht, Damian; Elger, Kirsten; Tesei, Telemaco; Trippanera, Daniele

    2017-04-01

    Following the FAIR data principles, research data should be Findable, Accessible, Interoperable and Reusable. Publishing data under these principles requires assigning persistent identifiers to the data and generating rich machine-actionable metadata. To increase interoperability, metadata should include shared vocabularies and crosslink the newly published (meta)data and related material. However, structured metadata formats tend to be complex and are not intended to be generated by individual scientists. Software solutions are needed that support scientists in providing metadata describing their data. To facilitate the data publication activities of 'GFZ Data Services', we programmed an XML metadata editor that assists scientists in creating metadata in different schemata popular in the earth sciences (ISO 19115, DIF, DataCite), while at the same time being usable by and understandable for scientists. Emphasis is placed on removing barriers: in particular, the editor is publicly available on the internet without registration [1], and scientists are not requested to provide information that may be generated automatically (e.g. the URL of a specific licence or the contact information of the metadata distributor). Metadata are stored in browser cookies and a copy can be saved to the local hard disk. To improve usability, form fields are translated into the scientific language, e.g. 'creators' of the DataCite schema are called 'authors'. To assist in filling in the form, we make use of drop-down menus for small vocabulary lists and offer a search facility for large thesauri. Explanations of form fields and definitions of vocabulary terms are provided in pop-up windows, and full documentation is available for download via the help menu. In addition, multiple geospatial references can be entered via an interactive mapping tool, which helps to minimize problems with different conventions for providing latitudes and longitudes. Currently, we are extending the metadata editor
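
    To make the kind of output concrete, the sketch below emits a record using a simplified subset of the DataCite kernel with Python's standard library. The DOI, field values, and the restriction to identifier, creators, title, publisher, and publication year are illustrative assumptions; the GFZ editor itself supports several schemas and many more fields.

        import xml.etree.ElementTree as ET

        NS = "http://datacite.org/schema/kernel-4"      # assumed schema version

        def datacite_record(doi, authors, title, publisher, year):
            """Build a simplified DataCite-style XML record (sketch, not the GFZ editor)."""
            ET.register_namespace("", NS)
            resource = ET.Element(f"{{{NS}}}resource")
            ident = ET.SubElement(resource, f"{{{NS}}}identifier", identifierType="DOI")
            ident.text = doi
            creators = ET.SubElement(resource, f"{{{NS}}}creators")
            for name in authors:
                creator = ET.SubElement(creators, f"{{{NS}}}creator")
                ET.SubElement(creator, f"{{{NS}}}creatorName").text = name
            titles = ET.SubElement(resource, f"{{{NS}}}titles")
            ET.SubElement(titles, f"{{{NS}}}title").text = title
            ET.SubElement(resource, f"{{{NS}}}publisher").text = publisher
            ET.SubElement(resource, f"{{{NS}}}publicationYear").text = str(year)
            return ET.tostring(resource, encoding="unicode")

        print(datacite_record("10.5880/EXAMPLE.2017.001", ["Doe, Jane"],
                              "Example borehole temperature data",
                              "GFZ Data Services", 2017))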

  9. Adaptive Grid Generation Using Elliptic Generating Equations with Precise Coordinate Controls

    Science.gov (United States)

    1986-07-08

    Only citation fragments of the scanned report remain; the abstract itself is marked as not available. The recoverable references point to a conference paper presented at a Sciences Meeting, 6-9 January 1986, Reno, NV, and to papers in Communications in Applied Numerical Methods, including Vol. 4, 471-481 (1988) and 'Hybrid Adaptive Poisson Grid Generation and Grid Smoothness' by Patrick J. Roache and co-authors, Vol. 7, 345-354 (1991), published by Elsevier Science Publishers B.V. (North-Holland).

  10. DEAM:Decoupled, Expressive, Area-Efficient Metadata Cache

    Institute of Scientific and Technical Information of China (English)

    刘鹏; 方磊; 黄巍

    2014-01-01

    Chip multiprocessors present brand-new opportunities for holistic on-chip data and coherence management solutions. An intelligent protocol should be adaptive to fine-grain accessing behavior, and in terms of metadata storage, the size of a conventional directory grows as the square of the number of processors, making it very expensive in large-scale systems. In this paper, we propose a metadata cache framework to achieve three goals: 1) reducing the latency of data access and coherence activities, 2) saving the storage of metadata, and 3) providing support for other optimization techniques. The metadata is implemented with compact structures and tracks the dynamically changing access pattern. The pattern information is used to guide the delegation and replication of decoupled data and metadata to allow fast access. We also use our metadata cache as a building block to enhance stream prefetching. Using detailed execution-driven simulation, we demonstrate that our protocol achieves an average speedup of 1.12X compared with a shared-cache protocol, while using 1/5 of the metadata storage.

  11. Generating adaptive behaviour within a memory-prediction framework.

    Directory of Open Access Journals (Sweden)

    David Rawlinson

    Full Text Available The Memory-Prediction Framework (MPF) and its Hierarchical-Temporal Memory implementation (HTM) have been widely applied to unsupervised learning problems, for both classification and prediction. To date, there has been no attempt to incorporate MPF/HTM in reinforcement learning or other adaptive systems; that is, to use knowledge embodied within the hierarchy to control a system, or to generate behaviour for an agent. This problem is interesting because the human neocortex is believed to play a vital role in the generation of behaviour, and the MPF is a model of the human neocortex. We propose some simple and biologically-plausible enhancements to the Memory-Prediction Framework. These cause it to explore and interact with an external world, while trying to maximize a continuous, time-varying reward function. All behaviour is generated and controlled within the MPF hierarchy. The hierarchy develops from a random initial configuration by interaction with the world and reinforcement learning only. Among other demonstrations, we show that a 2-node hierarchy can learn to successfully play "rocks, paper, scissors" against a predictable opponent.

  12. Generating adaptive behaviour within a memory-prediction framework.

    Science.gov (United States)

    Rawlinson, David; Kowadlo, Gideon

    2012-01-01

    The Memory-Prediction Framework (MPF) and its Hierarchical-Temporal Memory implementation (HTM) have been widely applied to unsupervised learning problems, for both classification and prediction. To date, there has been no attempt to incorporate MPF/HTM in reinforcement learning or other adaptive systems; that is, to use knowledge embodied within the hierarchy to control a system, or to generate behaviour for an agent. This problem is interesting because the human neocortex is believed to play a vital role in the generation of behaviour, and the MPF is a model of the human neocortex. We propose some simple and biologically-plausible enhancements to the Memory-Prediction Framework. These cause it to explore and interact with an external world, while trying to maximize a continuous, time-varying reward function. All behaviour is generated and controlled within the MPF hierarchy. The hierarchy develops from a random initial configuration by interaction with the world and reinforcement learning only. Among other demonstrations, we show that a 2-node hierarchy can learn to successfully play "rocks, paper, scissors" against a predictable opponent.
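
    For comparison with the two-node hierarchy described above, even a plain transition-count predictor can exploit a predictable "rocks, paper, scissors" opponent. The sketch below is not an MPF/HTM implementation; the cycling opponent and the first-order transition table are illustrative assumptions that only demonstrate the task.

        import random
        from collections import defaultdict, Counter

        MOVES = ["rock", "paper", "scissors"]
        BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}   # value beats key

        def predictable_opponent(t):
            """A fully predictable opponent that simply cycles through its moves."""
            return MOVES[t % 3]

        transitions = defaultdict(Counter)   # last opponent move -> counts of its next move
        last, wins, rounds = None, 0, 300
        for t in range(rounds):
            if last is not None and transitions[last]:
                predicted = transitions[last].most_common(1)[0][0]
            else:
                predicted = random.choice(MOVES)
            our_move = BEATS[predicted]          # counter the predicted move
            their_move = predictable_opponent(t)
            wins += our_move == BEATS[their_move]
            if last is not None:
                transitions[last][their_move] += 1
            last = their_move

        print(f"win rate of the transition predictor: {wins / rounds:.2f}")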

  13. Robust Adaptive Reactive Power Control for Doubly Fed Induction Generator

    Directory of Open Access Journals (Sweden)

    Huabin Wen

    2014-01-01

    Full Text Available The problem of reactive power control for the mains-side inverter (MSI) in a doubly fed induction generator (DFIG) is studied in this paper. To accommodate the modelling nonlinearities and inherent uncertainties, a novel robust adaptive control algorithm for the MSI is proposed by utilizing Lyapunov theory, ensuring asymptotic stability of the system under unpredictable external disturbances and significant parametric uncertainties. The distinguishing benefit of the aforementioned scheme lies in its capability to maintain satisfactory performance under varying operating conditions without the need for manually redesigning or reprogramming the control gains, in contrast to the commonly used PI/PID control. Simulations are also carried out to confirm the correctness and benefits of the control scheme.

  14. A novel adaptive Cuckoo search for optimal query plan generation.

    Science.gov (United States)

    Gomathi, Ramalingam; Sharmila, Dhandapani

    2014-01-01

    The emergence of multiple web pages day by day leads to the development of semantic web technology. A World Wide Web Consortium (W3C) standard for storing semantic web data is the resource description framework (RDF). To enhance the efficiency of execution time for querying large RDF graphs, evolving metaheuristic algorithms have become an alternative to traditional query optimization methods. This paper focuses on the problem of query optimization of semantic web data. An efficient algorithm called adaptive Cuckoo search (ACS) for querying and generating optimal query plans for large RDF graphs is designed in this research. Experiments were conducted on different datasets with varying numbers of predicates. The experimental results show that the proposed approach provides significant results in terms of query execution time. The extent to which the algorithm is efficient is tested and the results are documented.

  15. A Novel Adaptive Cuckoo Search for Optimal Query Plan Generation

    Directory of Open Access Journals (Sweden)

    Ramalingam Gomathi

    2014-01-01

    Full Text Available The emergence of multiple web pages day by day leads to the development of semantic web technology. A World Wide Web Consortium (W3C) standard for storing semantic web data is the resource description framework (RDF). To enhance the efficiency of execution time for querying large RDF graphs, evolving metaheuristic algorithms have become an alternative to traditional query optimization methods. This paper focuses on the problem of query optimization of semantic web data. An efficient algorithm called adaptive Cuckoo search (ACS) for querying and generating optimal query plans for large RDF graphs is designed in this research. Experiments were conducted on different datasets with varying numbers of predicates. The experimental results show that the proposed approach provides significant results in terms of query execution time. The extent to which the algorithm is efficient is tested and the results are documented.
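
    A bare-bones cuckoo search loop of the following shape underlies both records above. The toy cost function standing in for a query-plan cost, the heavy-tailed Cauchy steps used in place of full Mantegna Levy flights, and the fixed step size are illustrative assumptions, and the paper's adaptive mechanism is not reproduced.

        import numpy as np

        rng = np.random.default_rng(1)

        def plan_cost(x):
            """Toy stand-in for the cost of a query plan encoded as a real vector."""
            return np.sum((x - 3.0) ** 2)

        def cuckoo_search(cost, dim=5, n_nests=15, pa=0.25, iters=200, step=0.1):
            nests = rng.uniform(-10, 10, size=(n_nests, dim))
            fitness = np.apply_along_axis(cost, 1, nests)
            best = nests[fitness.argmin()].copy()
            for _ in range(iters):
                # New candidate solutions via heavy-tailed random steps around the best nest.
                steps = rng.standard_cauchy(size=nests.shape)
                candidates = nests + step * steps * (nests - best)
                cand_fit = np.apply_along_axis(cost, 1, candidates)
                improved = cand_fit < fitness
                nests[improved], fitness[improved] = candidates[improved], cand_fit[improved]
                # Abandon a fraction pa of the worst nests and rebuild them at random.
                worst = fitness.argsort()[-int(pa * n_nests):]
                nests[worst] = rng.uniform(-10, 10, size=(len(worst), dim))
                fitness[worst] = np.apply_along_axis(cost, 1, nests[worst])
                best = nests[fitness.argmin()].copy()
            return best, fitness.min()

        best_plan, best_cost = cuckoo_search(plan_cost)
        print(best_plan, best_cost)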

  16. Dr. Hadoop:an infinite scalable metadata management for Hadoop-How the baby elephant becomes immortal

    Institute of Scientific and Technical Information of China (English)

    Dipayan DEV‡; Ripon PATGIRI

    2016-01-01

    In this exabyte-scale era, data increases at an exponential rate. This in turn generates a massive amount of metadata in the file system. Hadoop is the most widely used framework to deal with big data. Due to this growth of a huge amount of metadata, however, the efficiency of Hadoop has been questioned numerous times by many researchers. Therefore, it is essential to create an efficient and scalable metadata management for Hadoop. Hash-based mapping and subtree partitioning are suitable in distributed metadata management schemes. Subtree partitioning does not uniformly distribute workload among the metadata servers, and metadata needs to be migrated to keep the load roughly balanced. Hash-based mapping suffers from a constraint on the locality of metadata, though it uniformly distributes the load among NameNodes, which are the metadata servers of Hadoop. In this paper, we present a circular metadata management mechanism named dynamic circular metadata splitting (DCMS). DCMS preserves metadata locality using consistent hashing and locality-preserving hashing, keeps replicated metadata for excellent reliability, and dynamically distributes metadata among the NameNodes to keep the load balanced. The NameNode is the centralized heart of Hadoop: it keeps the directory tree of all files, and its failure causes a single point of failure (SPOF). DCMS removes Hadoop's SPOF and provides an efficient and scalable metadata management. The new framework is named 'Dr. Hadoop' after the name of the authors.

  17. Cytometry metadata in XML

    Science.gov (United States)

    Leif, Robert C.; Leif, Stephanie H.

    2016-04-01

    Introduction: The International Society for Advancement of Cytometry (ISAC) has created a standard for the Minimum Information about a Flow Cytometry Experiment (MIFlowCyt 1.0). CytometryML will serve as a common metadata standard for flow and image cytometry (digital microscopy). Methods: The MIFlowCyt data-types were created, as is the rest of CytometryML, in the XML Schema Definition Language (XSD 1.1). The datatypes are primarily based on the Flow Cytometry and the Digital Imaging and Communication (DICOM) standards. A small section of the code was formatted with standard HTML formatting elements (p, h1, h2, etc.). Results: 1) The part of MIFlowCyt that describes the Experimental Overview, including the specimen and substantial parts of several other major elements, has been implemented as CytometryML XML schemas (www.cytometryml.org). 2) The feasibility of using MIFlowCyt to provide the combination of an overview, table of contents, and/or an index of a scientific paper or a report has been demonstrated. Previously, a sample electronic publication, EPUB, was created that could contain both MIFlowCyt metadata as well as the binary data. Conclusions: The use of CytometryML technology together with XHTML5 and CSS permits the metadata to be directly formatted and, together with the binary data, to be stored in an EPUB container. This will facilitate formatting, data mining, presentation, data verification, and inclusion in structured research, clinical, and regulatory documents, as well as demonstrate a publication's adherence to the MIFlowCyt standard and promote interoperability, and should also result in the textual and numeric data being published using web technology without any change in composition.

  18. Federating Metadata Catalogs

    Science.gov (United States)

    Baru, C.; Lin, K.

    2009-04-01

    The Geosciences Network project (www.geongrid.org) has been developing cyberinfrastructure for data sharing in the Earth Science community based on a service-oriented architecture. The project defines a standard "software stack", which includes a standardized set of software modules and corresponding service interfaces. The system employs Grid certificates for distributed user authentication. The GEON Portal provides online access to these services via a set of portlets. This service-oriented approach has enabled the GEON network to easily expand to new sites and deploy the same infrastructure in new projects. To facilitate interoperation with other distributed geoinformatics environments, service standards are being defined and implemented for catalog services and federated search across distributed catalogs. The need arises because there may be multiple metadata catalogs in a distributed system, for example, for each institution, agency, geographic region, and/or country. Ideally, a geoinformatics user should be able to search across all such catalogs by making a single search request. In this paper, we describe our implementation for such a search capability across federated metadata catalogs in the GEON service-oriented architecture. The GEON catalog can be searched using spatial, temporal, and other metadata-based search criteria. The search can be invoked as a Web service and, thus, can be imbedded in any software application. The need for federated catalogs in GEON arises because, (i) GEON collaborators at the University of Hyderabad, India have deployed their own catalog, as part of the iGEON-India effort, to register information about local resources for broader access across the network, (ii) GEON collaborators in the GEO Grid (Global Earth Observations Grid) project at AIST, Japan have implemented a catalog for their ASTER data products, and (iii) we have recently deployed a search service to access all data products from the EarthScope project in the US
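
    The federated search capability described above can be pictured as a fan-out/merge loop over the member catalogs. In the sketch below the endpoint URLs, the query parameters, and the JSON response shape (a top-level "records" list) are hypothetical placeholders, not the actual GEON, iGEON, or GEO Grid service interfaces.

        import concurrent.futures
        import requests

        CATALOGS = {
            "geon":     "https://example.org/geon/search",
            "igeon-in": "https://example.org/igeon/search",
            "geogrid":  "https://example.org/geogrid/search",
        }

        def search_catalog(name, url, keyword, bbox):
            """Send one keyword + bounding-box query to a single catalog."""
            params = {"q": keyword, "bbox": ",".join(map(str, bbox))}
            resp = requests.get(url, params=params, timeout=10)
            resp.raise_for_status()
            return [dict(record, catalog=name) for record in resp.json()["records"]]

        def federated_search(keyword, bbox):
            """Fan a single request out to all catalogs and merge the results."""
            results = []
            with concurrent.futures.ThreadPoolExecutor() as pool:
                futures = [pool.submit(search_catalog, name, url, keyword, bbox)
                           for name, url in CATALOGS.items()]
                for fut in concurrent.futures.as_completed(futures):
                    try:
                        results.extend(fut.result())
                    except requests.RequestException:
                        pass   # one failing catalog should not break the federated query
            return results

        hits = federated_search("ASTER", bbox=(-125.0, 32.0, -114.0, 42.0))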

  19. Applied Parallel Metadata Indexing

    Energy Technology Data Exchange (ETDEWEB)

    Jacobi, Michael R [Los Alamos National Laboratory

    2012-08-01

    The GPFS Archive is a parallel archive used by hundreds of users in the Turquoise collaboration network. It houses 4+ petabytes of data in more than 170 million files. Currently, users must navigate the file system to retrieve their data, requiring them to remember file paths and names. A better solution might allow users to tag data with meaningful labels and search the archive using standard and user-defined metadata, while maintaining security. Last summer, I developed the backend of a tool that adheres to these design goals. The backend works by importing GPFS metadata into a MongoDB cluster, which is then indexed on each attribute. This summer, the author implemented security and developed the user interface for the search tool. To meet security requirements, each database table is associated with a single user, stores only records that the user may read, and requires a set of credentials to access. The interface to the search tool is implemented using FUSE (Filesystem in USErspace). FUSE is an intermediate layer that intercepts file system calls and allows the developer to redefine how those calls behave. In the case of this tool, FUSE interfaces with MongoDB to issue queries and populate output. A FUSE implementation is desirable because it allows users to interact with the search tool using commands they are already familiar with. These security and interface additions are essential for a usable product.
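
    The import-and-index step described above can be sketched with pymongo: each user's metadata records go into their own collection, and every searchable attribute gets an index. The connection string, the per-user collection naming, and the attribute list are assumptions for illustration; the production tool's credential handling and FUSE layer are omitted.

        from pymongo import MongoClient

        client = MongoClient("mongodb://localhost:27017")
        db = client["archive_metadata"]

        def import_user_metadata(user, records):
            """Load one user's file metadata into a per-user collection and index it."""
            coll = db[f"user_{user}"]
            coll.insert_many(records)
            for attr in ("path", "size", "mtime", "tags"):
                coll.create_index(attr)            # index every searchable attribute
            return coll

        coll = import_user_metadata("alice", [
            {"path": "/archive/alice/run01.h5", "size": 4096, "mtime": 1344000000,
             "tags": ["simulation", "2012"]},
        ])
        print(coll.count_documents({"tags": "simulation"}))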

  20. Mercury Toolset for Spatiotemporal Metadata

    Science.gov (United States)

    Wilson, Bruce E.; Palanisamy, Giri; Devarakonda, Ranjeet; Rhyne, B. Timothy; Lindsley, Chris; Green, James

    2010-01-01

    Mercury (http://mercury.ornl.gov) is a set of tools for federated harvesting, searching, and retrieving metadata, particularly spatiotemporal metadata. Version 3.0 of the Mercury toolset provides orders of magnitude improvements in search speed, support for additional metadata formats, integration with Google Maps for spatial queries, facetted type search, support for RSS (Really Simple Syndication) delivery of search results, and enhanced customization to meet the needs of the multiple projects that use Mercury. It provides a single portal to very quickly search for data and information contained in disparate data management systems, each of which may use different metadata formats. Mercury harvests metadata and key data from contributing project servers distributed around the world and builds a centralized index. The search interfaces then allow the users to perform a variety of fielded, spatial, and temporal searches across these metadata sources. This centralized repository of metadata with distributed data sources provides extremely fast search results to the user, while allowing data providers to advertise the availability of their data and maintain complete control and ownership of that data. Mercury periodically (typically daily) harvests metadata sources through a collection of interfaces and re-indexes these metadata to provide extremely rapid search capabilities, even over collections with tens of millions of metadata records. A number of both graphical and application interfaces have been constructed within Mercury, to enable both human users and other computer programs to perform queries. Mercury was also designed to support multiple different projects, so that the particular fields that can be queried and used with search filters are easy to configure for each different project.

  1. Mercury Toolset for Spatiotemporal Metadata

    Science.gov (United States)

    Devarakonda, Ranjeet; Palanisamy, Giri; Green, James; Wilson, Bruce; Rhyne, B. Timothy; Lindsley, Chris

    2010-06-01

    Mercury (http://mercury.ornl.gov) is a set of tools for federated harvesting, searching, and retrieving metadata, particularly spatiotemporal metadata. Version 3.0 of the Mercury toolset provides orders of magnitude improvements in search speed, support for additional metadata formats, integration with Google Maps for spatial queries, facetted type search, support for RSS (Really Simple Syndication) delivery of search results, and enhanced customization to meet the needs of the multiple projects that use Mercury. It provides a single portal to very quickly search for data and information contained in disparate data management systems, each of which may use different metadata formats. Mercury harvests metadata and key data from contributing project servers distributed around the world and builds a centralized index. The search interfaces then allow the users to perform a variety of fielded, spatial, and temporal searches across these metadata sources. This centralized repository of metadata with distributed data sources provides extremely fast search results to the user, while allowing data providers to advertise the availability of their data and maintain complete control and ownership of that data. Mercury periodically (typically daily) harvests metadata sources through a collection of interfaces and re-indexes these metadata to provide extremely rapid search capabilities, even over collections with tens of millions of metadata records. A number of both graphical and application interfaces have been constructed within Mercury, to enable both human users and other computer programs to perform queries. Mercury was also designed to support multiple different projects, so that the particular fields that can be queried and used with search filters are easy to configure for each different project.

  2. New-generation Migrant Workers’ Urban Adaptation: A Case Study of Jianggan District in Hangzhou City

    Institute of Scientific and Technical Information of China (English)

    Fei; SU; Bo; LI; Jianyi; HUANG; Zhimei; LI; Jiamiao; ZENG

    2015-01-01

    New-generation migrant workers are the "elite" among migrant workers, and whether they can really adapt to the city is one of the real problems that must be urgently solved during China's new urbanization, bearing on the success of new urbanization construction. From the perspective of livelihood capital, this paper uses measuring indicators in line with the new-generation migrant workers' livelihood characteristics to analyze the typical characteristics and causes of new-generation migrant workers' urban adaptation in Jianggan District of Hangzhou City, based on field survey data. The study finds that the new-generation migrant workers' urban adaptation centres on life adaptation, work adaptation and cultural adaptation, but that adaptation in these three areas is not good and there is much room for improvement.

  3. Linking ESMF Applications With Data Portals Using Standard Metadata

    Science.gov (United States)

    Dunlap, R.; Chastang, J.; Cinquini, L.; Deluca, C.; Middleton, D.; Murphy, S.; O'Kuinghttons, R.

    2008-12-01

    This talk describes the development of a prototype data portal to support an NCAR Advanced Study Program colloquium entitled Numerical Techniques for Global Atmospheric Models, held in Boulder in July 2008. The colloquium focused on the comparison of thirteen atmospheric dynamical cores, a key element of next-generation models. Dynamical cores solve the governing equations that describe the properties of the atmosphere over time, including its motion. An efficient, accurate dynamical core is needed to achieve the high spatial resolutions that can improve model fidelity and enable the model to span predictive scales. In support of this event, ESMF, the Earth System Curator project, and the Earth System Grid (ESG) collaborated on the creation of a prototype portal that relies on standardized metadata to directly link datasets generated at the colloquium with information about the model components that generated them. The system offers tools such as dynamically generated comparison tables, faceted search, and trackback pages that link datasets to model configurations. During the colloquium, the metadata describing the dynamical cores was provided by the participants and manually added to the portal. Since then, two developments have occurred to facilitate two important steps in the metadata lifecycle: creation of the metadata and ingestion into data archives. First, ESMF has been modified to enable users to output metadata in XML format. Because ESMF data structures already contain information about grids, fields, timestepping, and components, it is natural for ESMF to write out internal information in a standardized way for use by external systems. Second, modifications to the prototype portal were completed this summer to enable XML files output by ESMF to be ingested automatically into the portal. Taken together with the prototype web portal, the new metadata-writing capabilities of ESMF form part of an emerging infrastructure in support of the full modeling

  4. ATLAS Metadata Interface (AMI), a generic metadata framework

    CERN Document Server

    Fulachier, Jerome; The ATLAS collaboration

    2017-01-01

    The ATLAS Metadata Interface (AMI) is a mature application with more than 15 years of existence. Mainly used by the ATLAS experiment at CERN, it consists of a very generic tool ecosystem for metadata aggregation and cataloguing. We briefly describe the architecture, the main services and the benefits of using AMI in big collaborations, especially for high energy physics. We focus on the recent improvements, for instance: the lightweight clients (Python, JavaScript, C++), the new smart task server system and the Web 2.0 AMI framework for simplifying the development of metadata-oriented web interfaces.

  5. ATLAS Metadata Interface (AMI), a generic metadata framework

    CERN Document Server

    Fulachier, Jerome; The ATLAS collaboration

    2016-01-01

    The ATLAS Metadata Interface (AMI) is a mature application with more than 15 years of existence. Mainly used by the ATLAS experiment at CERN, it consists of a very generic tool ecosystem for metadata aggregation and cataloguing. We briefly describe the architecture, the main services and the benefits of using AMI in big collaborations, especially for high energy physics. We focus on the recent improvements, for instance: the lightweight clients (Python, JavaScript, C++), the new smart task server system and the Web 2.0 AMI framework for simplifying the development of metadata-oriented web interfaces.

  6. Specification and Generation of Adapters for System Integration

    NARCIS (Netherlands)

    Mooij, A.J.; Voorhoeve, M.

    2013-01-01

    Large systems-of-systems are developed by integrating several smaller systems that have been developed independently. System integration often requires adaptation mechanisms for bridging any technical incompatibilities between the systems. In order to develop adapters in a faster way, we study ways

  7. Assessing Field Spectroscopy Metadata Quality

    Directory of Open Access Journals (Sweden)

    Barbara A. Rasaiah

    2015-04-01

    Full Text Available This paper presents the proposed criteria for measuring the quality and completeness of field spectroscopy metadata in a spectral archive. Definitions for metadata quality and completeness for field spectroscopy datasets are introduced. Unique methods for measuring the quality and completeness of metadata to meet the requirements of field spectroscopy datasets are presented. Field spectroscopy metadata quality can be defined in terms of (but is not limited to) logical consistency, lineage, semantic and syntactic error rates, compliance with a quality standard, quality assurance by a recognized authority, and the reputational authority of the data owners/data creators. Two spectral libraries are examined as case studies of operationalized metadata policies, and the degree to which they are aligned with the needs of field spectroscopy scientists. The case studies reveal that the metadata in publicly available spectral datasets are underperforming on the quality and completeness measures. This paper is part two in a series examining the issues central to a metadata standard for field spectroscopy datasets.
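
    A completeness measure of the kind discussed above can be as simple as the fraction of required fields that are present and non-empty. The required-field list below is an illustrative assumption, not the criteria proposed in the paper, which also cover logical consistency, lineage, and error rates.

        REQUIRED_FIELDS = ["instrument", "calibration_date", "target", "illumination",
                           "location", "acquisition_time", "operator"]

        def completeness_score(record):
            """Fraction of required metadata fields present and non-empty, plus what is missing."""
            filled = [f for f in REQUIRED_FIELDS if record.get(f) not in (None, "", [])]
            missing = sorted(set(REQUIRED_FIELDS) - set(filled))
            return len(filled) / len(REQUIRED_FIELDS), missing

        record = {
            "instrument": "ASD FieldSpec",
            "calibration_date": "2013-05-02",
            "target": "leaf",
            "location": (-37.8, 145.0),
        }
        score, missing = completeness_score(record)
        print(f"completeness: {score:.0%}, missing: {missing}")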

  8. Developing the CUAHSI Metadata Profile

    Science.gov (United States)

    Piasecki, M.; Bermudez, L.; Islam, S.; Beran, B.

    2004-12-01

    The Hydrologic Information System (HIS) of the Consortium of Universities for the Advancement of Hydrologic Science, Inc. (CUAHSI) has as one of its goals to improve access to large-volume, high-quality, and heterogeneous hydrologic data sets. This will be attained in part by adopting a community metadata profile to achieve consistent descriptions that will facilitate data discovery. However, common standards are quite general in nature and typically lack domain-specific vocabularies, complicating the adoption of standards for specific communities. We will demonstrate the problems encountered in the process of adopting ISO standards to create a CUAHSI metadata profile. The final schema is expressed in a simple metadata format, the Metadata Template File (MTF), to leverage metadata annotation/viewer tools already developed by the San Diego Supercomputer Center. The steps performed to create an MTF starting from ISO 19115:2003 are the following: 1) creation of ontologies using the Web Ontology Language (OWL) for ISO 19115:2003 and related ISO/TC 211 documents; 2) conceptualization in OWL of related hydrologic vocabularies such as NASA's Global Change Master Directory and units from the Hydrologic Handbook; 3) definition of the CUAHSI profile by importing and extending the previous ontologies; 4) explicit creation of the CUAHSI core set; 5) export of the core set to MTF; 6) definition of metadata blocks for arbitrary digital objects (e.g. time series vs. static spatial data) using ISO's methodology for feature cataloguing; and 7) export of metadata blocks to MTF.

  9. Quality Metadata Management for Geospatial Scientific Workflows: from Retrieving to Assessing with Online Tools

    Science.gov (United States)

    Leibovici, D. G.; Pourabdollah, A.; Jackson, M.

    2011-12-01

    Experts and decision-makers use or develop models to monitor global and local changes of the environment. Their activities require the combination of data and processing services in a flow of operations and spatial data computations: a geospatial scientific workflow. The seamless ability to generate, re-use and modify a geospatial scientific workflow is an important requirement, but the quality of the outcomes is equally important [1]. Metadata attached to the data and processes, and particularly their quality, are essential to assess the reliability of the scientific model that a workflow represents [2]. Management tools that deal with qualitative and quantitative metadata measures of the quality associated with a workflow are therefore required by modellers. To ensure interoperability, ISO and OGC standards [3] are to be adopted, allowing, for example, metadata profiles to be defined and retrieved via web service interfaces. However, these standards need a few extensions when looking at workflows, particularly in the context of geoprocess metadata. We propose to fill this gap (i) through the provision of a metadata profile for the quality of processes, and (ii) through a framework, based on XPDL [4], to manage the quality information. Web Processing Services are used to implement a range of metadata analyses on the workflow in order to evaluate and present quality information at different levels of the workflow. This generates the quality metadata, which are stored in the XPDL file. The focus is (a) on visual representations of the quality, summarizing the retrieved quality information either from the standardized metadata profiles of the components or from non-standard quality information, e.g. Web 2.0 information, and (b) on the estimated qualities of the outputs derived from meta-propagation of uncertainties (a principle that we have introduced [5]). An a priori validation of the future decision-making supported by the

  10. Metadata Dictionary Database: A Proposed Tool for Academic Library Metadata Management

    Science.gov (United States)

    Southwick, Silvia B.; Lampert, Cory

    2011-01-01

    This article proposes a metadata dictionary (MDD) be used as a tool for metadata management. The MDD is a repository of critical data necessary for managing metadata to create "shareable" digital collections. An operational definition of metadata management is provided. The authors explore activities involved in metadata management in…

  11. The metadata manual a practical workbook

    CERN Document Server

    Lubas, Rebecca; Schneider, Ingrid

    2013-01-01

    Cultural heritage professionals have high levels of training in metadata. However, the institutions in which they practice often depend on support staff, volunteers, and students in order to function. With limited time and funding for training in metadata creation for digital collections, there are often many questions about metadata without a reliable, direct source for answers. The Metadata Manual provides such a resource, answering basic metadata questions that may appear, and exploring metadata from a beginner's perspective. This title covers metadata basics, XML basics, Dublin Core, VRA C

  12. Using Metadata Description for Agriculture and Aquaculture Papers

    Directory of Open Access Journals (Sweden)

    P. Šimek, J. Vaněk, V. Očenášek, M. Stočes, T. Vogeltanzova

    2012-12-01

    Full Text Available The paper deals with the most used metadata formats and thesauri suitable for describing scientific and research papers in the domains of agriculture, the food industry, aquaculture, the environment, and rural areas. These include the Dublin Core (DC), the Metadata Object Description Schema (MODS), the Virtual Open Access Agriculture and Aquaculture Repository Metadata Application Profile (VOA3R AP), and the AGROVOC thesaurus. Having analyzed the metadata formats and the research paper lifecycle, the authors recommend that each paper be accompanied by a metadata description as soon as it is published. The metadata are to describe the content and properties of the paper. One of the most suitable metadata formats is the VOA3R AP, which is partially patterned on the DC and combined with the AGROVOC thesaurus. As a result, effective description, availability, and automatic data exchange between and among local and central repositories should be attained. The knowledge and data presented in the present paper were obtained as a result of the following research programs and grant schemes: Grant No. 20121044 of the Internal Grant Agency titled „Using Automatic Metadata Generation for Research Papers“, Grant agreement No. 250525 funded by the European Commission corresponding to the VOA3R Project (Virtual Open Access Agriculture & Aquaculture Repository: Sharing Scientific and Scholarly Research related to Agriculture, Food, and Environment, http://voa3r.eu), and the Research Program titled „Economy of the Czech Agriculture Resources and their Efficient Use within the Framework of the Multifunctional Agrifood Systems“ of the Czech Ministry of Education, Youth and Sport, number VZ MSM 6046070906.
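
    As a small, hedged illustration of the kind of description recommended above, the Python sketch below builds a Dublin Core record for a paper using the standard dc namespace; the bibliographic values and the AGROVOC label are invented examples, and the VOA3R AP adds further elements not shown here.

        # Minimal Dublin Core description of a paper (illustrative values only).
        import xml.etree.ElementTree as ET

        DC = "http://purl.org/dc/elements/1.1/"
        ET.register_namespace("dc", DC)

        record = ET.Element("record")
        for element, value in [
            ("title", "Example paper on aquaculture repositories"),
            ("creator", "Doe, Jane"),
            ("date", "2012-12-01"),
            ("subject", "aquaculture"),            # e.g. an AGROVOC preferred label
            ("identifier", "http://example.org/papers/123"),
        ]:
            ET.SubElement(record, f"{{{DC}}}{element}").text = value

        print(ET.tostring(record, encoding="unicode"))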

  13. Evaluating the privacy properties of telephone metadata

    OpenAIRE

    2016-01-01

    Privacy protections against government surveillance are often scoped to communications content and exclude communications metadata. In the United States, the National Security Agency operated a particularly controversial program, collecting bulk telephone metadata nationwide. We investigate the privacy properties of telephone metadata to assess the impact of policies that distinguish between content and metadata. We find that telephone metadata is densely interconnected, can trivially be reid...

  14. FSA 2002 Digital Orthophoto Metadata

    Data.gov (United States)

    Minnesota Department of Natural Resources — Metadata for the 2002 FSA Color Orthophotos Layer. Each orthophoto is represented by a Quarter 24k Quad tile polygon. The polygon attributes contain the quarter-quad...

  15. phosphorus retention data and metadata

    Data.gov (United States)

    U.S. Environmental Protection Agency — phosphorus retention in wetlands data and metadata. This dataset is associated with the following publication: Lane , C., and B. Autrey. Phosphorus retention of...

  16. Towards Exascale Scientific Metadata Management

    OpenAIRE

    Blanas, Spyros; Byna, Surendra

    2015-01-01

    Advances in technology and computing hardware are enabling scientists from all areas of science to produce massive amounts of data using large-scale simulations or observational facilities. In this era of data deluge, effective coordination between the data production and the analysis phases hinges on the availability of metadata that describe the scientific datasets. Existing workflow engines have been capturing a limited form of metadata to provide provenance information about the identity ...

  17. Metadata and Service at the GFZ ISDC Portal

    Science.gov (United States)

    Ritschel, B.

    2008-05-01

    an explicit identification of single data files and the set-up of a comprehensive Earth science data catalog. The huge ISDC data catalog is realized by product-type-dependent tables filled with data-file-related metadata, which have relations to corresponding metadata tables. The parent DIF XML metadata documents describing the product types are stored and managed in ORACLE's XML storage structures. In order to improve the interoperability of the ISDC service portal, the existing proprietary catalog system will be extended by an ISO 19115 based web catalog service. In addition to this development, there is an ISDC-related effort concerning a semantic network of different kinds of metadata resources, such as standardized and non-standardized metadata documents and literature, as well as Web 2.0 user-generated information derived from tagging activities and social navigation data.

  18. Pragmatic Metadata Management for Integration into Multiple Spatial Data Infrastructure Systems and Platforms

    Science.gov (United States)

    Benedict, K. K.; Scott, S.

    2013-12-01

    While there has been a convergence towards a limited number of standards for representing knowledge (metadata) about geospatial (and other) data objects and collections, there exists a variety of community conventions around the specific use of those standards and within specific data discovery and access systems. This combination of limited (but multiple) standards and conventions creates a challenge for system developers who aspire to participate in multiple data infrastructures, each of which may use a different combination of standards and conventions. While Extensible Markup Language (XML) is a shared standard for encoding most metadata, traditional direct XML transformations (XSLT) from one standard to another often result in an imperfect transfer of information due to incomplete mapping from one standard's content model to another. This paper presents the work at the University of New Mexico's Earth Data Analysis Center (EDAC), in which a unified data and metadata management system has been developed in support of the storage, discovery and access of heterogeneous data products. This system, the Geographic Storage, Transformation and Retrieval Engine (GSTORE) platform, has adopted a polyglot database model in which a combination of relational and document-based databases is used to store both data and metadata, with some metadata stored in a custom XML schema designed as a superset of the requirements for multiple target metadata standards: ISO 19115-2/19139/19110/19119, FGDC CSDGM (both with and without remote sensing extensions) and Dublin Core. Metadata stored within this schema is complemented by additional service, format and publisher information that is dynamically "injected" into produced metadata documents when they are requested from the system. While mapping from the underlying common metadata schema is relatively straightforward, the generation of valid metadata within each target standard is necessary but not sufficient for integration into
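
    The following toy sketch illustrates only the idea of request-time "injection" described above: stored core metadata are merged with service and publisher information when a document is requested and then serialized to one target standard. The field names, values, and the simple Dublin Core serializer are invented for illustration and do not reflect GSTORE's actual schema.

        # Toy request-time injection: merge stored core metadata with service and
        # publisher info, then serialize to one target standard (here Dublin Core).
        import xml.etree.ElementTree as ET

        DC = "http://purl.org/dc/elements/1.1/"
        ET.register_namespace("dc", DC)

        stored_core = {"title": "Example dataset", "abstract": "Example abstract"}
        injected = {"publisher": "Example Data Center",
                    "distribution_url": "http://example.org/wms"}

        def to_dublin_core(record):
            root = ET.Element("record")
            ET.SubElement(root, f"{{{DC}}}title").text = record["title"]
            ET.SubElement(root, f"{{{DC}}}description").text = record["abstract"]
            ET.SubElement(root, f"{{{DC}}}publisher").text = record["publisher"]
            ET.SubElement(root, f"{{{DC}}}identifier").text = record["distribution_url"]
            return ET.tostring(root, encoding="unicode")

        record = {**stored_core, **injected}   # injection happens at request time
        print(to_dublin_core(record))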

  19. Five modified boundary scan adaptive test generation algorithms

    Institute of Scientific and Technical Information of China (English)

    Niu Chunping; Ren Zheping; Yao Zongzhong

    2006-01-01

    To study the diagnostic problem of Wire-OR (W-O) interconnect faults on PCBs (Printed Circuit Boards), five modified boundary scan adaptive algorithms for interconnect test are put forward. These algorithms apply a global-diagnosis sequence algorithm in place of the equal-weight algorithm for the primary test, so that the test time is shortened without changing the fault diagnostic capability. The five modified adaptive test algorithms are described, and their capabilities are compared with the original algorithm to demonstrate their validity.

  20. Metadata Management in Scientific Computing

    CERN Document Server

    Seidel, Eric L

    2012-01-01

    Complex scientific codes and the datasets they generate are in need of a sophisticated categorization environment that allows the community to store, search, and enhance metadata in an open, dynamic system. Currently, data is often presented in a read-only format, distilled and curated by a select group of researchers. We envision a more open and dynamic system, where authors can publish their data in a writeable format, allowing users to annotate the datasets with their own comments and data. This would enable the scientific community to collaborate on a higher level than before, where researchers could for example annotate a published dataset with their citations. Such a system would require a complete set of permissions to ensure that any individual's data cannot be altered by others unless they specifically allow it. For this reason datasets and codes are generally presented read-only, to protect the author's data; however, this also prevents the type of social revolutions that the private sector has seen...

  1. Meta-Data Objects as the Basis for System Evolution

    CERN Document Server

    Estrella, Florida; Tóth, N; Kovács, Z; Le Goff, J M; Clatchey, Richard Mc; Toth, Norbert; Kovacs, Zsolt; Goff, Jean-Marie Le

    2001-01-01

    One of the main factors driving object-oriented software development in the Web-age is the need for systems to evolve as user requirements change. A crucial factor in the creation of adaptable systems dealing with changing requirements is the suitability of the underlying technology in allowing the evolution of the system. A reflective system utilizes an open architecture where implicit system aspects are reified to become explicit first-class (meta-data) objects. These implicit system aspects are often fundamental structures which are inaccessible and immutable, and their reification as meta-data objects can serve as the basis for changes and extensions to the system, making it self-describing. To address the evolvability issue, this paper proposes a reflective architecture based on two orthogonal abstractions - model abstraction and information abstraction. In this architecture the modeling abstractions allow for the separation of the description meta-data from the system aspects they represent so that th...

  2. CPG-inspired workspace trajectory generation and adaptive locomotion control for quadruped robots.

    Science.gov (United States)

    Liu, Chengju; Chen, Qijun; Wang, Danwei

    2011-06-01

    This paper deals with the locomotion control of quadruped robots, inspired by the biological concept of the central pattern generator (CPG). A control architecture is proposed with a 3-D workspace trajectory generator and a motion engine. The workspace trajectory generator produces adaptive workspace trajectories based on CPGs, and the motion engine realizes the joint motion inputs. The proposed architecture is able to generate adaptive workspace trajectories online by tuning the parameters of the CPG network to adapt to various terrains. With feedback information, a quadruped robot can walk through various terrains with adaptive joint control signals. A quadruped platform, AIBO, is used to validate the proposed locomotion control system. The experimental results confirm the effectiveness of the proposed control architecture. A comparison by experiments shows the superiority of the proposed method against the traditional CPG-joint-space control method.
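
    The paper's CPG network is not specified in this abstract; the Python sketch below shows only a single Hopf oscillator, a common CPG building block, whose amplitude and frequency parameters could be tuned online in the way the abstract describes. All parameter values are arbitrary.

        # One Hopf oscillator integrated with Euler steps; its output could serve
        # as a rhythmic workspace trajectory component. Illustrative only.
        import math

        def hopf_step(x, y, mu=1.0, omega=2.0 * math.pi, dt=0.001):
            r2 = x * x + y * y
            dx = (mu - r2) * x - omega * y
            dy = (mu - r2) * y + omega * x
            return x + dx * dt, y + dy * dt

        x, y = 0.1, 0.0
        trajectory = []
        for _ in range(2000):
            x, y = hopf_step(x, y)
            trajectory.append(x)        # e.g. mapped to a foot-height profile

        print(min(trajectory), max(trajectory))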

  3. On the Computer Generation of Adaptive Numerical Libraries

    Science.gov (United States)

    2010-05-01

    Only reference fragments of the original text are available; they cite examples of adaptive numerical libraries in several domains: dense linear algebra (ATLAS [Whaley and Dongarra, 1998]), sparse linear algebra (OSKI [Vuduc et al., 2005]), sorting (Adaptive Sorting Library [Li et al., 2004]), and linear transforms (FFTW [Frigo and Johnson]).

  4. Evolving Metadata in NASA Earth Science Data Systems

    Science.gov (United States)

    Mitchell, A.; Cechini, M. F.; Walter, J.

    2011-12-01

    NASA's Earth Observing System (EOS) is a coordinated series of satellites for long-term global observations. NASA's Earth Observing System Data and Information System (EOSDIS) is a petabyte-scale archive of environmental data that supports global climate change research by providing end-to-end services from EOS instrument data collection to science data processing to full access to EOS and other earth science data. On a daily basis, the EOSDIS ingests, processes, archives and distributes over 3 terabytes of data from NASA's Earth Science missions, representing over 3500 data products spanning various science disciplines. EOSDIS currently comprises 12 discipline-specific data centers that are collocated with centers of science discipline expertise. Metadata is used in all aspects of NASA's Earth Science data lifecycle, from the initial measurement gathering to the accessing of data products. Missions use metadata in their science data products when describing information such as the instrument/sensor, operational plan, and geographic region. Acting as the curators of the data products, data centers employ metadata for preservation, access and manipulation of data. EOSDIS provides a centralized metadata repository called the Earth Observing System (EOS) ClearingHouse (ECHO) for data discovery and access via a service-oriented architecture (SOA) between data centers and science data users. ECHO receives inventory metadata from data centers, which generate metadata files that comply with the ECHO Metadata Model. NASA's Earth Science Data and Information System (ESDIS) Project established a Tiger Team to study and make recommendations regarding the adoption of the international metadata standard ISO 19115 in EOSDIS. The result was a technical report recommending an evolution of NASA data systems towards a consistent application of ISO 19115 and related standards, including the creation of a NASA-specific convention for core ISO 19115 elements. Part of

  5. Generating Shifting Workloads to Benchmark Adaptability in Relational Database Systems

    Science.gov (United States)

    Rabl, Tilmann; Lang, Andreas; Hackl, Thomas; Sick, Bernhard; Kosch, Harald

    A large body of research concerns the adaptability of database systems. Many commercial systems already contain autonomic processes that adapt configurations as well as data structures and data organization. Yet there is virtually no way to measure the quality of such optimizations fairly. While standard benchmarks have been developed that simulate real-world database applications very precisely, none of them considers variations in workloads produced by human factors. Today's benchmarks test the performance of database systems by measuring peak performance on homogeneous request streams. Nevertheless, in systems with user interaction, access patterns are constantly shifting. We present a benchmark that simulates a web information system with interaction of large user groups. It is based on the analysis of a real online eLearning management system with 15,000 users. The benchmark considers the temporal dependency of user interaction. Its main focus is to measure the adaptability of a database management system under shifting workloads. We give details on our design approach, which uses sophisticated pattern analysis and data mining techniques.
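
    As a hedged sketch of the idea of a shifting workload (not the benchmark itself), the Python snippet below draws a request stream whose mix of query types changes with the simulated hour of day; the query types and weights are invented.

        # Generate a request stream whose type distribution shifts over simulated time.
        import random

        QUERY_TYPES = ["browse_course", "submit_quiz", "download_material", "admin_report"]

        def weights_at(hour):
            # Daytime favours interactive browsing; evenings favour submissions.
            if 8 <= hour < 18:
                return [0.50, 0.20, 0.25, 0.05]
            return [0.25, 0.45, 0.25, 0.05]

        def generate(n_requests, seed=42):
            rng = random.Random(seed)
            stream = []
            for i in range(n_requests):
                hour = (i * 24) // n_requests      # map request index to hour of day
                stream.append(rng.choices(QUERY_TYPES, weights_at(hour))[0])
            return stream

        print(generate(10))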

  6. Metadata-Centric Discovery Service

    Science.gov (United States)

    Huang, T.; Chung, N. T.; Gangl, M. E.; Armstrong, E. M.

    2011-12-01

    It is data about data: the information describing a picture without looking at the picture. Over the years, the Earth Science community has sought better methods of describing science artifacts to improve the quality and efficiency of information exchange. One purpose is to provide information that guides users in identifying the science artifacts of interest to them. The NASA Distributed Active Archive Centers (DAACs) are the building blocks of a data-centric federation, designed for processing and archiving data from NASA's Earth Observation missions and for their distribution as well as the provision of specialized services to users. The Physical Oceanography Distributed Active Archive Center (PO.DAAC), at the Jet Propulsion Laboratory, archives and distributes science artifacts pertaining to the physical state of the ocean. Part of its high-performance operational Data Management and Archive System (DMAS) is a fast data-discovery RESTful web service called the Oceanographic Common Search Interface (OCSI). The web service searches and delivers metadata on all data holdings within PO.DAAC. OCSI currently supports metadata standards such as ISO-19115, OpenSearch, GCMD, and FGDC, with new metadata standards still being added. While we continue to seek a silver-bullet metadata standard, the Earth Science community in fact relies on various standards due to the specific needs of its users and systems. This presentation focuses on the architecture behind OCSI as a reference implementation for building a metadata-centric discovery service.
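
    A client of such a discovery service might query it as in the hedged Python sketch below; the endpoint URL and parameter names are placeholders following common OpenSearch conventions, not the documented OCSI interface.

        # Query an OpenSearch-style discovery endpoint (placeholder URL and parameters).
        import requests

        response = requests.get(
            "https://example.org/ocsi/granules.atom",
            params={"q": "sea surface temperature", "startIndex": 1, "count": 10},
            timeout=30,
        )
        response.raise_for_status()
        print(response.text[:500])     # beginning of the Atom/XML feed of metadata records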

  7. Adaptive protection coordination scheme for distribution network with distributed generation using ABC

    Directory of Open Access Journals (Sweden)

    A.M. Ibrahim

    2016-09-01

    Full Text Available This paper presents an adaptive protection coordination scheme for optimal coordination of directional overcurrent relays (DOCRs) in interconnected power networks under the impact of DG; the coordination technique used is the Artificial Bee Colony (ABC) algorithm. The scheme adapts to system changes: new relay settings are obtained as the generation level or system topology changes. The developed adaptive scheme is applied to the IEEE 30-bus test system for both single- and multi-DG cases, and the results are shown and discussed.
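
    For illustration, the Python sketch below runs a greatly simplified Artificial Bee Colony search over relay time dial settings using the IEC standard-inverse operating-time characteristic; the relay data and bounds are toy values, and a real coordination study would add coordination time interval constraints between primary and backup relay pairs (without them, the minimum is trivially the lowest setting).

        # Greatly simplified ABC search for relay time dial settings (TDS).
        import random

        rng = random.Random(0)
        N_RELAYS, LOW, HIGH = 3, 0.05, 1.1           # TDS bounds (typical IEC range)
        FAULT_MULTIPLES = [4.0, 5.0, 6.0]            # assumed fault-current multiples

        def op_time(tds, m):
            return tds * 0.14 / (m ** 0.02 - 1.0)    # IEC standard-inverse curve

        def cost(sol):
            return sum(op_time(t, m) for t, m in zip(sol, FAULT_MULTIPLES))

        def random_solution():
            return [rng.uniform(LOW, HIGH) for _ in range(N_RELAYS)]

        def neighbour(sol, other):
            k = rng.randrange(N_RELAYS)
            new = list(sol)
            new[k] = min(HIGH, max(LOW, sol[k] + rng.uniform(-1, 1) * (sol[k] - other[k])))
            return new

        food = [random_solution() for _ in range(10)]
        trials = [0] * len(food)
        for _ in range(200):
            for i in range(len(food)):               # employed bees
                cand = neighbour(food[i], rng.choice(food))
                if cost(cand) < cost(food[i]):
                    food[i], trials[i] = cand, 0
                else:
                    trials[i] += 1
            fitness = [1.0 / (1.0 + cost(s)) for s in food]
            for _ in range(len(food)):               # onlooker bees (fitness-proportional)
                i = rng.choices(range(len(food)), fitness)[0]
                cand = neighbour(food[i], rng.choice(food))
                if cost(cand) < cost(food[i]):
                    food[i], trials[i] = cand, 0
            for i in range(len(food)):               # scouts abandon stagnant sources
                if trials[i] > 20:
                    food[i], trials[i] = random_solution(), 0

        best = min(food, key=cost)
        print([round(t, 3) for t in best], round(cost(best), 3))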

  8. Logs Analysis of Adapted Pedagogical Scenarios Generated by a Simulation Serious Game Architecture

    Science.gov (United States)

    Callies, Sophie; Gravel, Mathieu; Beaudry, Eric; Basque, Josianne

    2017-01-01

    This paper presents an architecture designed for simulation serious games, which automatically generates game-based scenarios adapted to learner's learning progression. We present three central modules of the architecture: (1) the learner model, (2) the adaptation module and (3) the logs module. The learner model estimates the progression of the…

  9. A Qualitative Study of Adaptation Experiences of 1.5-Generation Asian Americans.

    Science.gov (United States)

    Kim, Bryan S. K.; Brenner, Bradley R.; Liang, Christopher T. H.; Asay, Penelope A.

    2003-01-01

    Adaptation experiences of 1.5-generation Asian American college students (N=10) were examined using the consensual qualitative research method. Results indicated 4 domains of adaptation experiences: preimmigration experiences, acculturation and enculturation experiences, intercultural relationships, and support systems. Participants reported that…

  10. Research on Community Competition and Adaptive Genetic Algorithm for Automatic Generation of Tang Poetry

    OpenAIRE

    Wujian Yang; Yining Cheng; Jie He; Wenqiong Hu; Xiaojia Lin

    2016-01-01

    Among the many studies of traditional Tang poetry, the automatic generation of Tang poetry has aroused great interest in recent years. This study presents a community-based competition and adaptive genetic algorithm for automatically generating Tang poetry. The community-based competition added to the algorithm aims to maintain the diversity of genes during evolution; meanwhile, the adaptation means that the probabilities of crossover and mutation are varie...
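
    The exact adaptation rule is cut off above; one widely used way to make crossover and mutation probabilities adaptive (in the style of Srinivas and Patnaik) is sketched below in Python, purely as an illustration and not necessarily the scheme used in the paper.

        # Adaptive crossover/mutation probabilities: fitter-than-average individuals
        # get lower probabilities, so good candidate poems are disrupted less.
        def adaptive_pc(f_better, f_max, f_avg, k1=1.0):
            if f_better < f_avg:
                return k1
            return k1 * (f_max - f_better) / (f_max - f_avg + 1e-12)

        def adaptive_pm(f, f_max, f_avg, k2=0.5):
            if f < f_avg:
                return k2
            return k2 * (f_max - f) / (f_max - f_avg + 1e-12)

        # Example: a fairly fit parent pair receives a reduced crossover probability.
        print(adaptive_pc(f_better=0.9, f_max=1.0, f_avg=0.6))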

  11. Adaptive removal and revival of underheated thermoelectric generation modules

    DEFF Research Database (Denmark)

    Chen, Min

    2014-01-01

    The output power of thermoelectric generation systems (TEGSs) is significantly susceptible to the dynamically varying temperature profile of the heat sources, owing to the electrical mismatch of the interconnected thermoelectric modules (TEMs) constituting a TEGS. This paper proposes a new control...

  12. Adaptive mesh generation for image registration and segmentation

    DEFF Research Database (Denmark)

    Fogtmann, Mads; Larsen, Rasmus

    2013-01-01

    This paper deals with the problem of generating quality tetrahedral meshes for image registration. From an initial coarse mesh the approach matches the mesh to the image volume by combining red-green subdivision and mesh evolution through mesh-to-image matching regularized with a mesh quality...

  13. Using of Automatic Metadata Providing

    Directory of Open Access Journals (Sweden)

    P. Šimek

    2013-12-01

    Full Text Available The paper deals with the need for a systematic solution for metadata provision from local archives to central repositories, and with its subsequent implementation by the Department of Information Technologies, Faculty of Economics and Management, Czech University of Life Sciences in Prague, for the needs of the agrarian WWW AGRIS portal. The system supports the OAI-PMH (Open Archives Initiative – Protocol for Metadata Harvesting) protocol, several metadata formats and thesauri, and meets the quality requirements of functionality, high reliability, applicability, sustainability, and transferability. The software application servicing OAI-PMH requests runs on an Apache web server, using the PHP framework Nette and the dibi database layer.
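
    A harvester talking to such a system issues standard OAI-PMH requests; the Python sketch below shows a minimal ListRecords call, where the verb and parameter names come from the OAI-PMH specification but the endpoint URL is a placeholder.

        # Minimal OAI-PMH harvest of Dublin Core records (placeholder endpoint).
        import requests
        import xml.etree.ElementTree as ET

        OAI = "{http://www.openarchives.org/OAI/2.0/}"

        resp = requests.get(
            "https://example.org/oai",
            params={"verb": "ListRecords", "metadataPrefix": "oai_dc"},
            timeout=30,
        )
        resp.raise_for_status()
        root = ET.fromstring(resp.content)
        for record in root.iter(f"{OAI}record"):
            identifier = record.find(f"{OAI}header/{OAI}identifier")
            print(identifier.text if identifier is not None else "(no identifier)")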

  14. A multilevel adaptive mesh generation scheme using Kd-trees

    Directory of Open Access Journals (Sweden)

    Alfonso Limon

    2009-04-01

    Full Text Available We introduce a mesh refinement strategy for PDE-based simulations that benefits from a multilevel decomposition. Using Harten's MRA in terms of Schroder-Pander linear multiresolution analysis [20], we are able to bound discontinuities in $\mathbb{R}$. This MRA is extended to $\mathbb{R}^n$ in terms of n-orthogonal linear transforms and utilized to identify cells that contain a codimension-one discontinuity. These refinement cells become leaf nodes in a balanced Kd-tree such that a local dyadic MRA is produced in $\mathbb{R}^n$, while maintaining a minimal computational footprint. The nodes in the tree form an adaptive mesh whose density increases in the vicinity of a discontinuity.
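
    The Python sketch below illustrates the underlying idea in one dimension only: a linear prediction from coarse to fine samples whose large prediction errors (detail coefficients) flag cells near a discontinuity. It is a generic Harten-style example with invented data, not the paper's n-dimensional Kd-tree construction.

        # 1-D multiresolution detail coefficients used as a refinement indicator.
        import numpy as np

        x = np.linspace(0.0, 1.0, 129)
        f = np.where(x < 0.4, np.sin(2 * np.pi * x), 1.5 + np.sin(2 * np.pi * x))  # jump at x = 0.4

        coarse = f[::2]                                   # decimate to the coarse level
        predicted_odd = 0.5 * (coarse[:-1] + coarse[1:])  # linear prediction of odd samples
        details = f[1:-1:2] - predicted_odd               # detail coefficients
        flagged = np.where(np.abs(details) > 0.5)[0]      # cells to refine
        print("refine near x =", x[1:-1:2][flagged])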

  15. An Adaptive Mesh Algorithm: Mesh Structure and Generation

    Energy Technology Data Exchange (ETDEWEB)

    Scannapieco, Anthony J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-21

    The purpose of Adaptive Mesh Refinement is to minimize spatial errors over the computational space, not to minimize the number of computational elements. The additional result of the technique is that it may reduce the number of computational elements needed to retain a given level of spatial accuracy. Adaptive mesh refinement is a computational technique used to dynamically select, over a region of space, a set of computational elements designed to minimize spatial error in the computational model of a physical process. The fundamental idea is to increase the mesh resolution in regions where the physical variables are represented by a broad spectrum of modes in k-space, hence increasing the effective global spectral coverage of those physical variables. In addition, the selection of the spatially distributed elements is done dynamically by cyclically adjusting the mesh to follow the spectral evolution of the system. Over the years three types of AMR schemes have evolved: block, patch, and locally refined AMR. In block and patch AMR, logical blocks of various grid sizes are overlaid to span the physical space of interest, whereas in locally refined AMR no logical blocks are employed but locally nested mesh levels are used to span the physical space. The distinction between block and patch AMR is that in block AMR the original blocks refine and coarsen entirely in time, whereas in patch AMR the patches change location and zone size with time. The type of AMR described herein is a locally refined AMR. In the algorithm described, at any point in physical space only one zone exists at whatever level of mesh is appropriate for that physical location. The dynamic creation of a locally refined computational mesh is made practical by a judicious selection of mesh rules. With these rules the mesh is evolved via a mesh potential designed to concentrate the finest mesh in regions where the physics is modally dense, and coarsen zones in regions where the physics is modally
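
    As a toy sketch of the bookkeeping behind locally refined AMR (not the algorithm in the report), the Python snippet below builds a quadtree in which every point of the domain is covered by exactly one leaf cell, refining wherever an invented error indicator exceeds a threshold.

        # Quadtree-style local refinement driven by an error indicator.
        class Cell:
            def __init__(self, x0, y0, size, level=0):
                self.x0, self.y0, self.size, self.level = x0, y0, size, level
                self.children = []

            def refine(self, indicator, max_level=4, threshold=0.5):
                if self.level < max_level and indicator(self) > threshold:
                    half = self.size / 2
                    self.children = [
                        Cell(self.x0 + dx * half, self.y0 + dy * half, half, self.level + 1)
                        for dx in (0, 1) for dy in (0, 1)
                    ]
                    for child in self.children:
                        child.refine(indicator, max_level, threshold)

            def leaves(self):
                if not self.children:
                    return [self]
                return [leaf for c in self.children for leaf in c.leaves()]

        # Example indicator: refine near a circular front of radius 0.5.
        def indicator(cell):
            cx, cy = cell.x0 + cell.size / 2, cell.y0 + cell.size / 2
            return 1.0 if abs((cx ** 2 + cy ** 2) ** 0.5 - 0.5) < cell.size else 0.0

        root = Cell(0.0, 0.0, 1.0)
        root.refine(indicator)
        print(len(root.leaves()), "leaf cells")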

  16. U.S. EPA Metadata Editor (EME)

    Data.gov (United States)

    U.S. Environmental Protection Agency — The EPA Metadata Editor (EME) allows users to create geospatial metadata that meets EPA's requirements. The tool has been developed as a desktop application that...

  17. FIR: An Effective Scheme for Extracting Useful Metadata from Social Media.

    Science.gov (United States)

    Chen, Long-Sheng; Lin, Zue-Cheng; Chang, Jing-Rong

    2015-11-01

    Recently, the use of social media for health information exchange has been expanding among patients, physicians, and other health care professionals. In medical areas, social media allows non-experts to access, interpret, and generate medical information for their own care and the care of others. Researchers have paid much attention to social media in medical education, patient-pharmacist communication, adverse drug reaction detection, the impact of social media on medicine and healthcare, and so on. However, relatively few papers discuss how to effectively extract useful knowledge from the huge amount of textual comments in social media. Therefore, this study proposes a Fuzzy adaptive resonance theory network based Information Retrieval (FIR) scheme that combines a Fuzzy adaptive resonance theory (ART) network, Latent Semantic Indexing (LSI), and association rule (AR) discovery to extract knowledge from social media. In the FIR scheme, the Fuzzy ART network is first employed to segment comments. Next, for each customer segment, the LSI technique is used to retrieve important keywords. Then, in order to make the extracted keywords understandable, association rule mining is applied to organize the extracted keywords into metadata. These extracted voices of customers are then transformed into design needs using Quality Function Deployment (QFD) for further decision making. Unlike conventional information retrieval techniques, which acquire too many keywords to convey the key points, the FIR scheme can extract understandable metadata from social media.
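
    Of the three FIR stages, only the keyword-retrieval step lends itself to a compact sketch; the Python snippet below applies LSI (truncated SVD on TF-IDF vectors) to a few invented comments using scikit-learn, while the Fuzzy ART segmentation and association-rule stages are omitted.

        # LSI keyword retrieval on toy comments (the other FIR stages are not shown).
        import numpy as np
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.decomposition import TruncatedSVD

        comments = [
            "the new tablets caused mild nausea for me",
            "nausea and headache after switching tablets",
            "pharmacist explained the dosage very clearly",
            "great communication from the pharmacist about dosage",
        ]

        vectorizer = TfidfVectorizer(stop_words="english")
        X = vectorizer.fit_transform(comments)
        svd = TruncatedSVD(n_components=2, random_state=0).fit(X)

        terms = np.array(vectorizer.get_feature_names_out())
        for i, component in enumerate(svd.components_):
            top = terms[np.argsort(component)[::-1][:3]]
            print(f"latent topic {i}: {', '.join(top)}")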

  18. Enriching The Metadata On CDS

    CERN Document Server

    Chhibber, Nalin

    2014-01-01

    The project report revolves around the open source software package called Invenio, which provides the tools for managing digital assets in a repository and drives the CERN Document Server. The primary objective is to enhance the existing metadata in CDS with data from other libraries. An implicit part of this task is managing disambiguation within the incoming data, removing duplicate entries, and handling replication between new and existing records. All such elements and their corresponding changes are integrated within Invenio to make the upgraded metadata available on CDS. The latter part of the report discusses some changes related to the Invenio code base itself.

  19. Adaptive planning : Generating conditions for urban adaptability.Lessons from Dutch organic development strategies

    NARCIS (Netherlands)

    Rauws, Ward; de Roo, Gert

    2016-01-01

    The development of cities includes a wide variety of uncertainties which challenge spatial planners and decision makers. In response, planning approaches which move away from the ambition to achieve predefined outcomes are being explored in the literature. One of them is an adaptive approach to

  20. Metadata Access Tool for Climate and Health

    Science.gov (United States)

    Trtanji, J.

    2012-12-01

    The need for health information resources to support climate change adaptation and mitigation decisions is growing, both in the United States and around the world, as the manifestations of climate change become more evident and widespread. In many instances, these information resources are not specific to a changing climate, but have either been developed or are highly relevant for addressing health issues related to existing climate variability and weather extremes. To help address the need for more integrated data, the Interagency Cross-Cutting Group on Climate Change and Human Health, a working group of the U.S. Global Change Research Program, has developed the Metadata Access Tool for Climate and Health (MATCH). MATCH is a gateway to relevant information that can be used to solve problems at the nexus of climate science and public health by facilitating research, enabling scientific collaborations in a One Health approach, and promoting data stewardship that will enhance the quality and application of climate and health research. MATCH is a searchable clearinghouse of publicly available Federal metadata including monitoring and surveillance data sets, early warning systems, and tools for characterizing the health impacts of global climate change. Examples of relevant databases include the Centers for Disease Control and Prevention's Environmental Public Health Tracking System and NOAA's National Climate Data Center's national and state temperature and precipitation data. This presentation will introduce the audience to this new web-based geoportal and demonstrate its features and potential applications.

  1. Metadata for semantic and social applications

    OpenAIRE

    2008-01-01

    Metadata is a key aspect of our evolving infrastructure for information management, social computing, and scientific collaboration. DC-2008 will focus on metadata challenges, solutions, and innovation in initiatives and activities underlying semantic and social applications. Metadata is part of the fabric of social computing, which includes the use of wikis, blogs, and tagging for collaboration and participation. Metadata also underlies the development of semantic applications, and the Semant...

  2. A Corrective Training Algorithm for Adaptive Learning in Bag Generation

    CERN Document Server

    Chen, H H; Chen, Hsin-Hsi; Lee, Yue-Shi

    1994-01-01

    The sampling problem in training corpus is one of the major sources of errors in corpus-based applications. This paper proposes a corrective training algorithm to best-fit the run-time context domain in the application of bag generation. It shows which objects to be adjusted and how to adjust their probabilities. The resulting techniques are greatly simplified and the experimental results demonstrate the promising effects of the training algorithm from generic domain to specific domain. In general, these techniques can be easily extended to various language models and corpus-based applications.

  3. Adapted Gaussian basis sets for atoms from Li through Xe generated with the generator coordinate Hartree-Fock method

    Directory of Open Access Journals (Sweden)

    CASTRO EUSTÁQUIO V. R. DE

    2001-01-01

    Full Text Available The generator coordinate Hartree-Fock method is used to generate adapted Gaussian basis sets for the atoms from Li (Z=3) through Xe (Z=54). In this method the Griffin-Hill-Wheeler-Hartree-Fock equations are integrated through the integral discretization technique. The wave functions generated in this work are compared with the widely used Roothaan-Hartree-Fock wave functions of Clementi and Roetti (1974), and with other basis sets reported in the literature. For all atoms studied, the errors in our total energy values relative to the numerical Hartree-Fock limits are always less than 7.426 mhartree.

  4. Evolution of the ATLAS Metadata Interface (AMI)

    CERN Document Server

    Odier, Jerome; The ATLAS collaboration; Fulachier, Jerome; Lambert, Fabian

    2015-01-01

    The ATLAS Metadata Interface (AMI) can be considered a mature application, having existed for at least 10 years. Over the years, the number of users and the number of functions provided for these users have increased. It has been necessary to adapt the hardware infrastructure in a seamless way so that the Quality of Service remains high. We describe the evolution of the application from its initial form, using a single server with a MySQL backend database, to the current state, where we use a cluster of Virtual Machines on the French Tier 1 Cloud at Lyon, an ORACLE database backend also at Lyon, with replication to CERN using ORACLE streams behind a back-up server.

  5. DataStaR: Bridging XML and OWL in Science Metadata Management

    Science.gov (United States)

    Lowe, Brian

    DataStaR is a science data “staging repository” developed by Albert R. Mann Library at Cornell University that produces semantic metadata while enabling the publication of data sets and accompanying metadata to discipline-specific data centers or to Cornell’s institutional repository. DataStaR, which employs OWL and RDF in its metadata store, serves as a Web-based platform for production and management of metadata and aims to reduce redundant manual input by reusing named ontology individuals. A key requirement of DataStaR is the ability to produce metadata records conforming to existing XML schemas that have been adopted by scientific communities. To facilitate this, DataStaR integrates ontologies that directly reflect XML schemas, generates HTML editing forms, and “lowers” ontology axioms into XML documents compliant with existing schemas. This paper describes our approach and implementation, and discusses the challenges involved.

  6. METHOD FOR ADAPTIVE MESH GENERATION BASED ON GEOMETRICAL FEATURES OF 3D SOLID

    Institute of Scientific and Technical Information of China (English)

    HUANG Xiaodong; DU Qungui; YE Bangyan

    2006-01-01

    In order to provide guidance for dynamically specifying the element size during adaptive finite element mesh generation, adaptive criteria are first defined according to the relationships between the geometrical features and the elements of a 3D solid. Various modes based on different datum geometrical elements, such as vertices, curves, and surfaces, are then designed for generating locally refined mesh. Guided by the defined criteria, different modes are automatically selected and applied to the appropriate datum objects to control the element size in particular local areas. As a result, the element-size control information is successfully specified covering the entire domain, based on the geometrical features of the 3D solid. A new algorithm based on Delaunay triangulation is then developed for generating a 3D adaptive finite element mesh, in which the element size is dynamically specified to capture the geometrical features and suitable tetrahedron facets are selected to locate interior nodes continuously. As a result, an adaptive mesh with good-quality elements is generated. Examples show that the proposed method can be successfully applied to automatic adaptive finite element mesh generation based on the geometrical features of a 3D solid.

  7. ADAPTIVE LAYERED CARTESIAN CUT CELL METHOD FOR THE UNSTRUCTURED HEXAHEDRAL GRIDS GENERATION

    Institute of Scientific and Technical Information of China (English)

    WU Peining; TAN Jianrong; LIU Zhenyu

    2007-01-01

    An adaptive layered Cartesian cut cell method is presented to address the difficulty of generating unstructured hexahedral anisotropic Cartesian grids from complex CAD models. A vertex merging algorithm based on a relaxed AVL tree is investigated to construct the topological structure for stereolithography (STL) files, and a topology-based self-adaptive layered slicing algorithm with a special-features control strategy is brought forward. With the help of the convex hull, a new points-in-polygon method is employed to improve the Cartesian cut cell method. By integrating the self-adaptive layered slicing algorithm and the improved Cartesian cut cell method, the adaptive layered Cartesian cut cell method obtains the volume data of the complex CAD model from the STL file and generates the unstructured hexahedral anisotropic Cartesian grids.

  8. Discovering Physical Samples Through Identifiers, Metadata, and Brokering

    Science.gov (United States)

    Arctur, D. K.; Hills, D. J.; Jenkyns, R.

    2015-12-01

    Physical samples, particularly in the geosciences, are key to understanding the Earth system, its history, and its evolution. Our record of the Earth as captured by physical samples is difficult to explain and mine for understanding, due to incomplete, disconnected, and evolving metadata content. This is further complicated by differing ways of classifying, cataloguing, publishing, and searching the metadata, especially when specimens do not fit neatly into a single domain—for example, fossils cross disciplinary boundaries (mineral and biological). Sometimes even the fundamental classification systems evolve, such as the geological time scale, triggering daunting processes to update existing specimen databases. Increasingly, we need to consider ways of leveraging permanent, unique identifiers, as well as advancements in metadata publishing that link digital records with physical samples in a robust, adaptive way. An NSF EarthCube Research Coordination Network (RCN) called the Internet of Samples (iSamples) is now working to bridge the metadata schemas for biological and geological domains. We are leveraging the International Geo Sample Number (IGSN) that provides a versatile system of registering physical samples, and working to harmonize this with the DataCite schema for Digital Object Identifiers (DOI). A brokering approach for linking disparate catalogues and classification systems could help scale discovery and access to the many large collections now being managed (sometimes millions of specimens per collection). This presentation is about our community building efforts, research directions, and insights to date.

  9. Using URIs to effectively transmit sensor data and metadata

    Science.gov (United States)

    Kokkinaki, Alexandra; Buck, Justin; Darroch, Louise; Gardner, Thomas

    2017-04-01

    Autonomous ocean observation is massively increasing the number of sensors in the ocean. Accordingly, the continuing increase in the datasets produced makes selecting sensors that are fit for purpose a growing challenge. Decision making on selecting quality sensor data is based on the sensor's metadata, i.e. manufacturer specifications, calibration history, etc. The Open Geospatial Consortium (OGC) has developed the Sensor Web Enablement (SWE) standards to facilitate integration and interoperability of sensor data and metadata. The World Wide Web Consortium (W3C) Semantic Web technologies enable machine comprehensibility, promoting sophisticated linking and processing of data published on the web. Linking a sensor's data and metadata according to the above-mentioned standards can present practical difficulties because of internal hardware bandwidth restrictions and the requirement to constrain data transmission costs. Our approach addresses these practical difficulties by uniquely identifying sensor and platform models and instances through URIs, which resolve via content negotiation to either OGC's sensor metadata language, SensorML, or W3C's Linked Data. Data transmitted by a sensor incorporate the sensor's unique URI to refer to its metadata. Sensor and platform model URIs and descriptions are created and hosted by the British Oceanographic Data Centre (BODC) linked systems service. The sensor owner creates the sensor and platform instance URIs prior to and during sensor deployment, through an updatable web form, the Sensor Instance Form (SIF). SIF enables model and instance URI association as well as platform and sensor linking. The use of URIs, which are dynamically generated through the SIF, offers both practical and economic benefits to the implementation of SWE and Linked Data standards in near real time systems. Data can be linked to metadata dynamically in situ while saving on the costs associated with the transmission of long metadata descriptions. The transmission
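
    The content-negotiation pattern described above can be exercised as in the hedged Python sketch below: the same URI is dereferenced twice with different Accept headers, once for a machine-readable description and once for a human-readable page. The URI is a placeholder, not a real BODC linked-systems identifier.

        # Resolve a sensor instance URI via HTTP content negotiation (placeholder URI).
        import requests

        sensor_uri = "https://example.org/sensors/instance/12345"

        machine = requests.get(sensor_uri, headers={"Accept": "application/rdf+xml"}, timeout=30)
        human = requests.get(sensor_uri, headers={"Accept": "text/html"}, timeout=30)

        print(machine.headers.get("Content-Type"))
        print(human.headers.get("Content-Type"))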

  10. Log-less metadata management on metadata server for parallel file systems.

    Science.gov (United States)

    Liao, Jianwei; Xiao, Guoqiang; Peng, Xiaoning

    2014-01-01

    This paper presents a novel metadata management mechanism on the metadata server (MDS) for parallel and distributed file systems. In this technique, the client file system backs up the sent metadata requests, which have been handled by the metadata server, so that the MDS does not need to log metadata changes to nonvolatile storage for achieving highly available metadata service, as well as better performance improvement in metadata processing. As the client file system backs up certain sent metadata requests in its memory, the overhead for handling these backup requests is much smaller than that brought by the metadata server, while it adopts logging or journaling to yield highly available metadata service. The experimental results show that this newly proposed mechanism can significantly improve the speed of metadata processing and render a better I/O data throughput, in contrast to conventional metadata management schemes, that is, logging or journaling on MDS. Besides, a complete metadata recovery can be achieved by replaying the backup logs cached by all involved clients, when the metadata server has crashed or gone into nonoperational state exceptionally.

  11. Log-Less Metadata Management on Metadata Server for Parallel File Systems

    Directory of Open Access Journals (Sweden)

    Jianwei Liao

    2014-01-01

    Full Text Available This paper presents a novel metadata management mechanism on the metadata server (MDS) for parallel and distributed file systems. In this technique, the client file system backs up the sent metadata requests, which have been handled by the metadata server, so that the MDS does not need to log metadata changes to nonvolatile storage for achieving highly available metadata service, as well as better performance improvement in metadata processing. As the client file system backs up certain sent metadata requests in its memory, the overhead for handling these backup requests is much smaller than that brought by the metadata server, while it adopts logging or journaling to yield highly available metadata service. The experimental results show that this newly proposed mechanism can significantly improve the speed of metadata processing and render a better I/O data throughput, in contrast to conventional metadata management schemes, that is, logging or journaling on MDS. Besides, a complete metadata recovery can be achieved by replaying the backup logs cached by all involved clients, when the metadata server has crashed or gone into nonoperational state exceptionally.
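
    The mechanism can be caricatured as in the toy Python sketch below: the client keeps an in-memory backup of the metadata requests it has sent, so the server can skip journaling and, after a crash, rebuild its state by asking clients to replay their backups. The request format and replay protocol are invented for illustration.

        # Toy client-side backup and replay of metadata requests.
        class MetadataServer:
            def __init__(self):
                self.namespace = {}

            def apply(self, request):
                op, path, value = request
                if op == "create":
                    self.namespace[path] = value   # applied without journaling

        class MetadataClient:
            def __init__(self):
                self.backup = []                   # sent requests kept in client memory

            def send(self, server, request):
                server.apply(request)
                self.backup.append(request)

            def replay(self, server):
                for request in self.backup:        # idempotent replay after a crash
                    server.apply(request)

        client, server = MetadataClient(), MetadataServer()
        client.send(server, ("create", "/data/run01", {"owner": "alice"}))

        server = MetadataServer()                  # simulate a crash: server state is lost
        client.replay(server)                      # recovery from client-side backups
        print(server.namespace)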

  12. Evolution in Metadata Quality: Common Metadata Repository's Role in NASA Curation Efforts

    Science.gov (United States)

    Gilman, Jason; Shum, Dana; Baynes, Katie

    2016-01-01

    Metadata Quality is one of the chief drivers of discovery and use of NASA EOSDIS (Earth Observing System Data and Information System) data. Issues with metadata such as lack of completeness, inconsistency, and use of legacy terms directly hinder data use. As the central metadata repository for NASA Earth Science data, the Common Metadata Repository (CMR) has a responsibility to its users to ensure the quality of CMR search results. This poster covers how we use humanizers, a technique for dealing with the symptoms of metadata issues, as well as our plans for future metadata validation enhancements. The CMR currently indexes 35K collections and 300M granules.

  13. COMPLEX QUERY AND METADATA

    OpenAIRE

    Nakatoh, Tetsuya; Omori, Keisuke; Yamada, Yasuhiro; Hirokawa, Sachio

    2003-01-01

    We are developing a search system, DAISEn, which integrates multiple search engines and generates a metasearch engine automatically. The target search engines of DAISEn are not general search engines, but search engines specialized in some area. Integration of such engines yields efficiency and quality. There are search engines of a new type which accept complex queries and return structured data. Integration of such search engines is much harder than that of simple search engines which accept ...

  14. Adapting the serial Alpgen event generator to simulate LHC collisions on millions of parallel threads

    CERN Document Server

    Childers, J T; LeCompte, T J; Papka, M E; Benjamin, D P

    2015-01-01

    As the LHC moves to higher energies and luminosity, the demand for computing resources increases accordingly and will soon outpace the growth of the Worldwide LHC Computing Grid. To meet this greater demand, event generation Monte Carlo was targeted for adaptation to run on Mira, the supercomputer at the Argonne Leadership Computing Facility. Alpgen is a Monte Carlo event generation application that is used by LHC experiments in the simulation of collisions that take place in the Large Hadron Collider. This paper details the process by which Alpgen was adapted from a single-processor serial-application to a large-scale parallel-application and the performance that was achieved.

  15. SCEC Community Modeling Environment (SCEC/CME) - Data and Metadata Management Issues

    Science.gov (United States)

    Minster, J.; Faerman, M.; Ely, G.; Maechling, P.; Gupta, A.; Xin, Q.; Kremenek, G.; Shkoller, B.; Olsen, K.; Day, S.; Moore, R.

    2003-12-01

    One of the goals of the SCEC Community Modeling Environment is to facilitate the execution of substantial collections of large numerical simulations. Since such simulations are resource-intensive and can generate extremely large outputs, implementing this concept raises a host of data and metadata management challenges. Due to the high computational cost involved in running these simulations, one must balance the cost of repeating such simulations against the burden of archiving the produced datasets and making them accessible for future use, such as post-processing or visualization, without the need for re-computation. Further, a carefully selected collection of such data sets might be used as benchmarks for assessing accuracy and performance of future simulations, developing post-processing software such as visualization tools, and testing data and metadata management strategies. The problem is rapidly compounded if one contemplates the possibility of computing ensemble averages for simulations of complex nonlinear systems. The definition and organization of a complete set of metadata to describe fully any given simulation is a surprisingly complex task, which we approach from the point of view of developing a community digital library, which provides the means to organize the material, as well as standard metadata attributes. Web-based discovery mechanisms are then used to support browsing and retrieval of data. A key component is the selection of appropriate descriptive metadata. We compare existing metadata standards from the digital library community, federal standards, and discipline-specific metadata attributes. The digital library community has developed a standard for organizing metadata, called the Metadata Encoding and Transmission Standard (METS). This schema supports descriptive (provenance), administrative (location), structural (component relationships), and behavioral (display and manipulation applications) metadata. The organization can be augmented with

  16. Adaptive Backstepping Control Based on Floating Offshore High Temperature Superconductor Generator for Wind Turbines

    Directory of Open Access Journals (Sweden)

    Feng Yang

    2014-01-01

    Full Text Available With the rapid development of offshore wind power, the doubly fed induction generator and the permanent magnet synchronous generator cannot meet the increasing demand for power capacity. Therefore, a superconducting generator should be used instead of these traditional machines, which can improve generator efficiency, reduce the weight of wind turbines, and increase system reliability. This paper mainly focuses on nonlinear control of an offshore wind power system consisting of a wind turbine and a high temperature superconductor generator. The proposed control approach is based on the adaptive backstepping method. Its main purpose is to regulate the rotor speed and generator voltage, thereby achieving maximum power point tracking (MPPT), improving the efficiency of the wind turbine, and enhancing the system's stability and robustness under large disturbances. The control approach ensures high precision of generator speed tracking, which is confirmed by both the theoretical analysis and numerical simulation.

  17. Multi Agent System Based Adaptive Protection for Dispersed Generation Integrated Distribution Systems

    DEFF Research Database (Denmark)

    Liu, Leo; Rather, Zakir Hussain; Bak, Claus Leth

    2013-01-01

    The increasing penetration of dispersed generation (DG) brings challenges to conventional protection approaches in distribution systems, mainly due to bi-directional power flow and variable fault current contribution from DG units based on different generation technologies. Moreover, the trend of allow......) is proposed. The adaptive protection intelligently adopts suitable settings for the variation of fault current from diversified DG units. Furthermore, the structure of the mobile MAS, with its additional flexibility, is capable of adapting to changes of system topology in a short period, e.g. radial/meshed, grid...

  18. Ontology Based Metadata Management for National Healthcare Data Dictionary

    Directory of Open Access Journals (Sweden)

    Yasemin Yüksek

    2012-02-01

    Full Text Available Ontology-based metadata relies on ontologies that give formal semantics to information at the content level. In this study, an ontology-based metadata management approach, intended for the metadata modeling developed for the National Health Data Dictionary (NHDD), is proposed. The NHDD is used as a reference by all health institutions in Turkey and makes a great contribution in terms of terminology. The proposed ontology-based metadata management approach was realized using a modeling methodology for metadata requirements. This methodology includes determining the metadata beneficiaries, listing the metadata requirements for each beneficiary, identifying the sources of metadata, categorizing the metadata, and building a metamodel.

  19. Adaptive H-infinity control of synchronous generators with steam valve via Hamiltonian function method

    Institute of Scientific and Technical Information of China (English)

    Shujuan LI; Yuzhen WANG

    2006-01-01

    Based on a Hamiltonian formulation, this paper proposes a design approach to nonlinear feedback excitation control of synchronous generators with steam valve control, disturbances and unknown parameters. It is shown that the dynamics of the synchronous generators can be expressed as a dissipative Hamiltonian system, based on which an adaptive H-infinity controller is then designed for the systems by using the structural properties of dissipative Hamiltonian systems. Simulations show that the controller obtained in this paper is very effective.

  20. PSYCHO-EMOTIONAL CHARACTERISTIC OF YOUNG GENERATION WITH A VARYING ADAPTATION POTENTIAL

    Directory of Open Access Journals (Sweden)

    Анатолий Степанович Пуликов

    2013-08-01

    Full Text Available Goal: to study certain characteristics of the emotional constitution of the young generation in Siberia and their dependence on adaptive capability. Method and methodology: 124 apparently healthy young male students of the Zheleznogorsk branch of Krasnoyarsk State Pedagogical University named after V.P. Astafyev were examined on a volunteer basis upon their informed consent. Up-to-date conventional techniques were used for making anthropometric measurements and conducting the functional study. The personality questionnaire by G.Y. Eysenck was used for assigning students to emotional types. Results: the body weight of young males from Zheleznogorsk increases from asthenic to normosthenic and picnic types of build, while height decreases from normosthenic and asthenic to picnic types. About 40% of young males are of normosthenic build and about 30% each can be referred to picnic or asthenic types. As far as adaptation potential is concerned, in the majority of young males strain of adaptation mechanisms or poor adaptation (90-92% is present, and only 8-10% have normal adaptation characteristics. Parameters of adaptation capabilities in characteristics of emotional status adequately reflect the functional activity of central links of regulation and adaptation ‘resources’. Among the characteristic features of the emotional constitution of young males from Zheleznogorsk are moderate and considerable introversion, a high degree of emotional stability in situations when adaptation mechanisms are strained, and rather high emotional instability among young males with normal adaptation characteristics and even higher among those with poor adaptation mechanisms. All these are probably connected with the radioecological situation at Zheleznogorsk. Scope of application of results: medicine, psychology, age-specific physiology, anthropology, neurology. DOI: http://dx.doi.org/10.12731/2218-7405-2013-6-21

  1. XML for catalogers and metadata librarians

    CERN Document Server

    Cole, Timothy W

    2013-01-01

    How are today's librarians to manage and describe the ever-expanding volumes of resources, in both digital and print formats? The use of XML in cataloging and metadata workflows can improve metadata quality, the consistency of cataloging workflows, and adherence to standards. This book is intended to enable current and future catalogers and metadata librarians to progress beyond a bare surface-level acquaintance with XML, thereby enabling them to integrate XML technologies more fully into their cataloging workflows. Building on the wealth of work on library descriptive practices, cataloging, and metadata, XML for Catalogers and Metadata Librarians explores the use of XML to serialize, process, share, and manage library catalog and metadata records. The authors' expert treatment of the topic is written to be accessible to those with little or no prior practical knowledge of or experience with how XML is used. Readers will gain an educated appreciation of the nuances of XML and grasp the benefit of more advanced ...

  2. Security in a Replicated Metadata Catalogue

    CERN Document Server

    Koblitz, B

    2007-01-01

    The gLite-AMGA metadata catalogue has been developed by NA4 to provide simple relational metadata access for the EGEE user community. As advanced features, which will be the focus of this presentation, AMGA provides very fine-grained security, also in connection with the built-in support for replication and federation of metadata. AMGA is extensively used by the biomedical community to store medical image metadata, for digital libraries, in HEP for logging and bookkeeping data, and in the climate community. The biomedical community intends to deploy a distributed metadata system for medical images consisting of various sites, which range from hospitals to computing centres. Only the safe sharing of the highly sensitive metadata as provided in AMGA makes such a scenario possible. Other scenarios are digital libraries, which federate copyright-protected (meta-)data into a common catalogue. The biomedical and digital library applications have been deployed using a centralized structure already for some time. They now intend to decentralize ...

  3. A Distributed Infrastructure for Metadata about Metadata: The HDMM Architectural Style and PORTAL-DOORS System

    Directory of Open Access Journals (Sweden)

    Carl Taswell

    2010-06-01

    Full Text Available Both the IRIS-DNS System and the PORTAL-DOORS System share a common architectural style for pervasive metadata networks that operate as distributed metadata management systems with hierarchical authorities for entity registering and attribute publishing. Hierarchical control of metadata redistribution throughout the registry-directory networks constitutes an essential characteristic of this architectural style, called Hierarchically Distributed Mobile Metadata (HDMM), with its focus on moving the metadata for the who, what and where as fast as possible from servers in response to requests from clients. The novel concept of multilevel metadata about metadata has also been defined for the PORTAL-DOORS System with the use of entity, record, infoset, representation and message metadata. Other new features implemented include the use of aliases, priorities and metaresources.

  4. On the communication of scientific data: The Full-Metadata Format

    DEFF Research Database (Denmark)

    Riede, Moritz; Schueppel, Rico; Sylvester-Hvid, Kristian O.

    2010-01-01

    In this paper, we introduce a scientific format for text-based data files, which facilitates storing and communicating tabular data sets. The so-called Full-Metadata Format builds on the widely used INI-standard and is based on four principles: readable self-documentation, flexible structure, fail......-safe compatibility, and searchability. As a consequence, all metadata required to interpret the tabular data are stored in the same file, allowing for the automated generation of publication-ready tables and graphs and the semantic searchability of data file collections. The Full-Metadata Format is introduced...
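
    The self-documenting idea described above, keeping INI-style metadata and the tabular data in one text file, can be illustrated with a short Python sketch. The section names ([*metadata], [*data]) and the column layout below are hypothetical placeholders for illustration, not the actual Full-Metadata Format specification.

        # Minimal sketch: one file holding INI-style metadata plus a data table.
        # Section names and layout are assumptions for illustration only.
        import configparser
        import textwrap

        text = textwrap.dedent("""\
            [*metadata]
            experiment = solar cell IV sweep
            operator = A. Example
            units = V, mA

            [*data]
            0.0  0.01
            0.1  0.12
            0.2  0.45
            """)

        header, _, table = text.partition("[*data]")

        parser = configparser.ConfigParser()
        parser.read_string(header)
        metadata = dict(parser["*metadata"])

        rows = [tuple(float(v) for v in line.split())
                for line in table.strip().splitlines()]

        print(metadata)   # {'experiment': 'solar cell IV sweep', ...}
        print(rows)       # [(0.0, 0.01), (0.1, 0.12), (0.2, 0.45)]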

  5. Metadata salad at the Cordoba Observatory

    CERN Document Server

    Lencinas, Verónica

    2016-01-01

    The Plate Archive of the Cordoba Observatory includes 20,000 photographs and spectra on glass plates dating from 1893 to 1983. This contribution describes the work performed since the plate archive was transferred to the Observatory Library in 2011. In 2014 an interdisciplinary team was assembled and a research grant from the National University of Cordoba was obtained with the objectives of preserving the glass plates and generating public access for astronomers and other audiences. The preservation work not only includes practical intervention to improve conservation conditions for the whole archive, but also a diagnosis of the preservation conditions of the plates and identification of best practices for cleaning the plates. The access envisioned through digitization requires not only the scanning of all the plates, but also careful definition and provision of metadata. In this regard, each institutional level involved - in this case: archive, library, astronomical observatory and public university - demands ...

  6. The Development of Group Interaction Patterns: How Groups become Adaptive, Generative, and Transformative Learners

    Science.gov (United States)

    London, Manuel; Sessa, Valerie I.

    2007-01-01

    This article integrates the literature on group interaction process analysis and group learning, providing a framework for understanding how patterns of interaction develop. The model proposes how adaptive, generative, and transformative learning processes evolve and vary in their functionality. Environmental triggers for learning, the group's…

  7. Critical Metadata for Spectroscopy Field Campaigns

    Directory of Open Access Journals (Sweden)

    Barbara A. Rasaiah

    2014-04-01

    Full Text Available A field spectroscopy metadata standard is defined as those data elements that explicitly document the spectroscopy dataset and field protocols, sampling strategies, instrument properties and environmental and logistical variables. Standards for field spectroscopy metadata affect the quality, completeness, reliability, and usability of datasets created in situ. Currently there is no standardized methodology for documentation of in situ spectroscopy data or metadata. This paper presents results of an international experiment comprising a web-based survey and expert panel evaluation that investigated critical metadata in field spectroscopy. The survey participants were a diverse group of scientists experienced in gathering spectroscopy data across a wide range of disciplines. Overall, respondents were in agreement about a core metadata set for generic campaign metadata, allowing a prioritization of critical metadata elements to be proposed, including those relating to viewing geometry, location, general target and sampling properties, illumination, instrument properties, reference standards, calibration, hyperspectral signal properties, atmospheric conditions, and general project details. Consensus was greatest among individual expert groups in specific application domains. The results allow the identification of a core set of metadata fields that enforce long-term data storage and serve as a foundation for a metadata standard. This paper is part one in a series about the core elements of a robust and flexible field spectroscopy metadata standard.
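
    The core element groups named above translate naturally into a structured campaign record. The sketch below is a hypothetical illustration of such a record in Python; the class name, field names and example values are assumptions drawn from the categories listed in the abstract, not the standard itself.

        # Hypothetical field-spectroscopy campaign record; field names are
        # illustrative only and follow the metadata categories listed above.
        from dataclasses import dataclass, asdict

        @dataclass
        class SpectroscopyCampaignMetadata:
            project: str
            location: tuple              # (latitude, longitude) in decimal degrees
            target_description: str
            instrument: str
            viewing_geometry: dict       # e.g. sensor zenith/azimuth, height above target
            illumination: str
            reference_standard: str      # e.g. white reference panel used
            calibration_date: str
            atmospheric_conditions: str
            sampling_notes: str = ""

        record = SpectroscopyCampaignMetadata(
            project="Example grassland campaign",
            location=(-37.81, 144.96),
            target_description="perennial grass canopy, 1 m x 1 m plot",
            instrument="portable spectroradiometer, 350-2500 nm",
            viewing_geometry={"sensor_zenith_deg": 0, "height_above_target_m": 1.2},
            illumination="direct sun, clear sky",
            reference_standard="Spectralon white panel",
            calibration_date="2014-03-15",
            atmospheric_conditions="clear, low aerosol",
        )

        print(asdict(record))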

  8. Evaluating the privacy properties of telephone metadata.

    Science.gov (United States)

    Mayer, Jonathan; Mutchler, Patrick; Mitchell, John C

    2016-05-17

    Since 2013, a stream of disclosures has prompted reconsideration of surveillance law and policy. One of the most controversial principles, both in the United States and abroad, is that communications metadata receives substantially less protection than communications content. Several nations currently collect telephone metadata in bulk, including on their own citizens. In this paper, we attempt to shed light on the privacy properties of telephone metadata. Using a crowdsourcing methodology, we demonstrate that telephone metadata is densely interconnected, can trivially be reidentified, and can be used to draw sensitive inferences.

  9. Power-generation system vulnerability and adaptation to changes in climate and water resources

    Science.gov (United States)

    van Vliet, Michelle T. H.; Wiberg, David; Leduc, Sylvain; Riahi, Keywan

    2016-04-01

    Hydropower and thermoelectric power together contribute 98% of the world’s electricity generation at present. These power-generating technologies both strongly depend on water availability, and water temperature for cooling also plays a critical role for thermoelectric power generation. Climate change and resulting changes in water resources will therefore affect power generation while energy demands continue to increase with economic development and a growing world population. Here we present a global assessment of the vulnerability of the world’s current hydropower and thermoelectric power-generation system to changing climate and water resources, and test adaptation options for sustainable water-energy security during the twenty-first century. Using a coupled hydrological-electricity modelling framework with data on 24,515 hydropower and 1,427 thermoelectric power plants, we show reductions in usable capacity for 61-74% of the hydropower plants and 81-86% of the thermoelectric power plants worldwide for 2040-2069. However, adaptation options such as increased plant efficiencies, replacement of cooling system types and fuel switches are effective alternatives to reduce the assessed vulnerability to changing climate and freshwater resources. Transitions in the electricity sector with a stronger focus on adaptation, in addition to mitigation, are thus highly recommended to sustain water-energy security in the coming decades.

  10. Adaptive Practice: Next Generation Evidence-Based Practice in Digital Environments.

    Science.gov (United States)

    Kennedy, Margaret Ann

    2016-01-01

    Evidence-based practice in nursing is considered foundational to safe, competent care. To date, rigid traditional perceptions of what constitutes 'evidence' have constrained the recognition and use of practice-based evidence and the exploitation of novel forms of evidence from data rich environments. Advancements such as the conceptualization of clinical intelligence, the prevalence of increasingly sophisticated digital health information systems, and the advancement of the Big Data phenomenon have converged to generate a new contemporary context. In today's dynamic data-rich environments, clinicians have new sources of valid evidence, and need a new paradigm supporting clinical practice that is adaptive to information generated by diverse electronic sources. This opinion paper presents adaptive practice as the next generation of evidence-based practice in contemporary evidence-rich environments and provides recommendations for the next phase of evolution.

  11. Research on Power Control of Wind Power Generation Based on Neural Network Adaptive Control

    Institute of Scientific and Technical Information of China (English)

    Hai-ying DONG; Chuan-hua SUN

    2010-01-01

    For a wind power generation system, which is multivariable, nonlinear and stochastic, neural network PID adaptive control is adopted in this paper. The pitch angle is adjusted in time to improve the performance of power control. The PID parameters are corrected by the gradient descent method, and a Radial Basis Function (RBF) neural network is used as the system identifier in this method. Simulation results show that, by using the neural adaptive PID controller, the influence of wind speed variation on the generator output power can be effectively suppressed. The dynamic performance and robustness of the controlled system are good, and the performance of the wind power system is improved.
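
    A highly simplified sketch of the scheme described above is given below: an RBF network acts as an online identifier whose output sensitivity approximates the plant Jacobian, and the PID gains are corrected by gradient descent on the tracking error. The toy plant model, network size, gains and learning rates are illustrative assumptions, not the parameters used in the paper.

        # Simplified sketch of RBF-identifier-based adaptive PID control.
        # Plant model, gains and learning rates are illustrative assumptions.
        import numpy as np

        rng = np.random.default_rng(0)

        # RBF identifier: input x = [u, y, y_prev], Gaussian hidden layer, linear output
        centers = rng.uniform(-1, 1, size=(6, 3))
        widths = np.full(6, 1.5)
        w = rng.normal(0, 0.1, size=6)

        def rbf_hidden(x):
            return np.exp(-np.sum((x - centers) ** 2, axis=1) / (2 * widths ** 2))

        def plant(y, u):
            # toy first-order surrogate for the pitch/power loop (assumption)
            return 0.8 * y + 0.3 * np.tanh(u)

        kp, ki, kd = 0.5, 0.1, 0.05          # PID gains, adapted online
        eta_pid, eta_id = 0.05, 0.2          # learning rates
        y = y_prev = u = u_prev = 0.0
        e_prev = e_prev2 = 0.0
        setpoint = 1.0                       # normalized power reference

        for k in range(200):
            e = setpoint - y
            # incremental PID control law
            du = kp * (e - e_prev) + ki * e + kd * (e - 2 * e_prev + e_prev2)
            u = u_prev + du

            y_next = plant(y, u)

            # identifier update (gradient descent on one-step prediction error)
            x = np.array([u, y, y_prev])
            h = rbf_hidden(x)
            y_model = w @ h
            w += eta_id * (y_next - y_model) * h

            # Jacobian dy/du estimated from the RBF model
            dy_du = np.sum(w * h * (centers[:, 0] - u) / widths ** 2)

            # gradient-descent correction of the PID gains
            kp += eta_pid * e * dy_du * (e - e_prev)
            ki += eta_pid * e * dy_du * e
            kd += eta_pid * e * dy_du * (e - 2 * e_prev + e_prev2)

            y_prev, y = y, y_next
            u_prev = u
            e_prev2, e_prev = e_prev, e

        print(f"final output {y:.3f} (setpoint {setpoint}), gains kp={kp:.3f} ki={ki:.3f} kd={kd:.3f}")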

  12. Research on Community Competition and Adaptive Genetic Algorithm for Automatic Generation of Tang Poetry

    Directory of Open Access Journals (Sweden)

    Wujian Yang

    2016-01-01

    Full Text Available Among the many lines of research on traditional Tang poetry, the automatic generation of Tang poetry has aroused great interest in recent years. This study presents a community-based competition and adaptive genetic algorithm for automatically generating Tang poetry. Community-based competition is added to the algorithm to maintain the diversity of genes during evolution; meanwhile, adaptation means that the probabilities of crossover and mutation are varied according to the fitness values of the Tang poems, to prevent premature convergence and to generate better poems more quickly. Analysis of the experimental results shows that the improved algorithm is superior to the conventional method.
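
    The adaptive part of the scheme, varying crossover and mutation probabilities with fitness, can be sketched in a few lines. The rule below follows the common Srinivas-Patnaik style of fitness-adaptive probabilities and is an illustrative assumption rather than the exact formula used in the paper; the fitness values are invented.

        # Sketch of fitness-adaptive crossover/mutation probabilities (assumed
        # Srinivas-Patnaik style rule, not necessarily the paper's exact formula).
        def adaptive_probabilities(f, f_max, f_avg,
                                   pc_max=0.9, pc_min=0.6,
                                   pm_max=0.1, pm_min=0.01):
            """Return (p_crossover, p_mutation) for an individual with fitness f.

            Fitter-than-average individuals get lower probabilities (they are
            protected), below-average individuals get the maximum probabilities,
            which preserves diversity and helps avoid premature convergence.
            """
            if f_max == f_avg:          # degenerate population, fall back to maxima
                return pc_max, pm_max
            if f >= f_avg:
                scale = (f_max - f) / (f_max - f_avg)
                pc = pc_min + (pc_max - pc_min) * scale
                pm = pm_min + (pm_max - pm_min) * scale
                return pc, pm
            return pc_max, pm_max

        # Example: population of candidate poems scored by some fitness function
        fitnesses = [0.42, 0.55, 0.61, 0.78, 0.90]
        f_max, f_avg = max(fitnesses), sum(fitnesses) / len(fitnesses)
        for f in fitnesses:
            pc, pm = adaptive_probabilities(f, f_max, f_avg)
            print(f"fitness={f:.2f}  pc={pc:.2f}  pm={pm:.3f}")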

  13. Network Structure, Metadata, and the Prediction of Missing Nodes and Annotations

    Science.gov (United States)

    Hric, Darko; Peixoto, Tiago P.; Fortunato, Santo

    2016-07-01

    The empirical validation of community detection methods is often based on available annotations on the nodes that serve as putative indicators of the large-scale network structure. Most often, the suitability of the annotations as topological descriptors itself is not assessed, and without this it is not possible to ultimately distinguish between actual shortcomings of the community detection algorithms, on one hand, and the incompleteness, inaccuracy, or structured nature of the data annotations themselves, on the other. In this work, we present a principled method to access both aspects simultaneously. We construct a joint generative model for the data and metadata, and a nonparametric Bayesian framework to infer its parameters from annotated data sets. We assess the quality of the metadata not according to their direct alignment with the network communities, but rather in their capacity to predict the placement of edges in the network. We also show how this feature can be used to predict the connections to missing nodes when only the metadata are available, as well as predicting missing metadata. By investigating a wide range of data sets, we show that while there are seldom exact agreements between metadata tokens and the inferred data groups, the metadata are often informative of the network structure nevertheless, and can improve the prediction of missing nodes. This shows that the method uncovers meaningful patterns in both the data and metadata, without requiring or expecting a perfect agreement between the two.

  14. Design of nonlinear adaptive steam valve controllers for a turbo-generator system

    Energy Technology Data Exchange (ETDEWEB)

    Bekiaris-Liberis, N.K.; Paraskevopoulos, P.N. [National Technical Univ. of Athens Zographou, Athens (Greece); Boglou, A.K. [Technology Education Inst. of Kavala Agios Loukas, Kavala (Greece); Arvanitis, K.G.; Pasgianos, G.D. [Agricultural Univ. of Athens, Athens (Greece)

    2008-07-01

    This paper reported on a study that investigated the control of power systems consisting of interconnected networks of transmission lines linking generators and loads. Improving both small and large perturbation stability and dynamic performance is important because power systems have become less stable in the past 15 years due to the use of controllers that have been designed on the basis of linearized synchronous generators and turbine models. The high nonlinear nature of power system models and the resulting disturbances render conventional linear controller design techniques obsolete for use in power systems control. Power system engineers are becoming aware of the role of turbine steam valves in improving the dynamic stability of power systems and damping low frequency oscillations. Advanced nonlinear control strategies are needed since the conventional steam valve control theory cannot guarantee transient stability in cases where operational conditions and parameters vary considerably. A design approach to a nonlinear adaptive control system with unknown parameters was developed and applied to the turbine main steam valve control of a power system. A fourth order machine model was used along with an adaptive backstepping method to construct the Lyapunov function in order to obtain a nonlinear adaptive controller to solve the turbine fast valving nonlinear control problem. The newly designed nonlinear adaptive controller can make the resulting adaptive system asymptotically stable. The proposed controller is accompanied by a dynamic estimator of parameters and includes nonlinear damping terms, which guarantee input-output stability even without the use of the adaptive law. Simulation results showed that the proposed nonlinear adaptive controller performs better than other turbine main steam valve control techniques. It can face large parametric uncertainty and results in a closed-loop system that is able to face large and smaller disturbances, providing a

  15. Spike-frequency adaptation generates intensity invariance in a primary auditory interneuron.

    Science.gov (United States)

    Benda, Jan; Hennig, R Matthias

    2008-04-01

    Adaptation of the spike-frequency response to constant stimulation, as observed on various timescales in many neurons, reflects high-pass filter properties of a neuron's transfer function. Adaptation in general, however, is not sufficient to make a neuron's response independent of the mean intensity of a sensory stimulus, since low frequency components of the stimulus are still transmitted, although with reduced gain. We here show, based on an analytically tractable model, that the response of a neuron is intensity invariant, if the fully adapted steady-state spike-frequency response to constant stimuli is independent of stimulus intensity. Electrophysiological recordings from the AN1, a primary auditory interneuron of crickets, show that for intensities above 60 dB SPL (sound pressure level) the AN1 adapted with a time-constant of approximately 40 ms to a steady-state firing rate of approximately 100 Hz. Using identical random amplitude-modulation stimuli we verified that the AN1's spike-frequency response is indeed invariant to the stimulus' mean intensity above 60 dB SPL. The transfer function of the AN1 is a band pass, resulting from a high-pass filter (cutoff frequency at 4 Hz) due to adaptation and a low-pass filter (100 Hz) determined by the steady-state spike frequency. Thus, fast spike-frequency adaptation can generate intensity invariance already at the first level of neural processing.
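
    The mechanism described above can be illustrated with a textbook firing-rate model with subtractive adaptation: a slow adaptation variable grows until the rate settles near the intensity-independent steady-state value, so slow stimulus components are filtered out. The model form and parameter values below are illustrative assumptions, not the fitted AN1 model from the paper.

        # Toy rate model with subtractive spike-frequency adaptation (illustrative
        # parameters; not the fitted AN1 model from the paper).
        import numpy as np

        dt = 0.001            # s
        tau_adapt = 0.040     # adaptation time constant, roughly 40 ms
        gain = 5.0            # Hz per dB above threshold (assumption)
        r_steady = 100.0      # approximate fully adapted steady-state rate, Hz

        def simulate(intensity_db):
            """Firing-rate response to a constant stimulus stepped on at t=0."""
            t = np.arange(0.0, 0.5, dt)
            a = 0.0                       # adaptation state (in dB units)
            rate = np.zeros_like(t)
            for i, _ in enumerate(t):
                drive = max(intensity_db - a, 0.0)
                rate[i] = min(gain * drive, 400.0)          # onset response
                # adaptation grows until the rate settles near r_steady
                a += dt / tau_adapt * ((rate[i] - r_steady) / gain)
                a = max(a, 0.0)
            return t, rate

        for db in (65, 75, 85):
            t, r = simulate(db)
            print(f"{db} dB: onset {r[0]:.0f} Hz, adapted {r[-1]:.0f} Hz")
        # The adapted rate is roughly the same for all intensities above threshold,
        # which illustrates the intensity invariance discussed above.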

  16. Adapting to new threats: the generation of memory by CRISPR-Cas immune systems.

    Science.gov (United States)

    Heler, Robert; Marraffini, Luciano A; Bikard, David

    2014-07-01

    Clustered, regularly interspaced, short palindromic repeats (CRISPR) loci and their associated genes (cas) confer bacteria and archaea with adaptive immunity against phages and other invading genetic elements. A fundamental requirement of any immune system is the ability to build a memory of past infections in order to deal more efficiently with recurrent infections. The adaptive feature of CRISPR-Cas immune systems relies on their ability to memorize DNA sequences of invading molecules and integrate them in between the repetitive sequences of the CRISPR array in the form of 'spacers'. The transcription of a spacer generates a small antisense RNA that is used by RNA-guided Cas nucleases to cleave the invading nucleic acid in order to protect the cell from infection. The acquisition of new spacers allows the CRISPR-Cas immune system to rapidly adapt against new threats and is therefore termed 'adaptation'. Recent studies have begun to elucidate the genetic requirements for adaptation and have demonstrated that rather than being a stochastic process, the selection of new spacers is influenced by several factors. We review here our current knowledge of the CRISPR adaptation mechanism.

  17. A Metadata Standard for Hydroinformatic Data Conforming to International Standards

    Science.gov (United States)

    Notay, Vikram; Carstens, Georg; Lehfeldt, Rainer

    2017-04-01

    The affordable availability of computing power and digital storage has been a boon for the scientific community. The hydroinformatics community has also benefitted from the so-called digital revolution, which has enabled the tackling of more and more complex physical phenomena using hydroinformatic models, instruments, sensors, etc. With models getting more and more complex, computational domains getting larger and the resolution of computational grids and measurement data getting finer, a large amount of data is generated and consumed in any hydroinformatics related project. The ubiquitous availability of internet also contributes to this phenomenon with data being collected through sensor networks connected to telecommunications networks and the internet long before the term Internet of Things existed. Although generally good, this exponential increase in the number of available datasets gives rise to the need to describe this data in a standardised way to not only be able to get a quick overview about the data but to also facilitate interoperability of data from different sources. The Federal Waterways Engineering and Research Institute (BAW) is a federal authority of the German Federal Ministry of Transport and Digital Infrastructure. BAW acts as a consultant for the safe and efficient operation of the German waterways. As part of its consultation role, BAW operates a number of physical and numerical models for sections of inland and marine waterways. In order to uniformly describe the data produced and consumed by these models throughout BAW and to ensure interoperability with other federal and state institutes on the one hand and with EU countries on the other, a metadata profile for hydroinformatic data has been developed at BAW. The metadata profile is composed in its entirety using the ISO 19115 international standard for metadata related to geographic information. Due to the widespread use of the ISO 19115 standard in the existing geodata infrastructure

  18. Metadata and API Based Environment Aware Content Delivery Architecture

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    One of the limitations of current content delivery networks is the lack of support for environment-aware content delivery. This paper first discusses the requirements of such support, and proposes a new metadata-gateway-based environment-aware content delivery architecture. The paper discusses in some detail the key functions and technologies of the environment-aware content delivery architecture, including its APIs and control policies. Finally, the paper presents an application to illustrate the advantages of the environment-aware content delivery architecture in the context of next-generation networks.

  19. Adaptive sliding mode control of interleaved parallel boost converter for fuel cell energy generation system

    DEFF Research Database (Denmark)

    El Fadil, H.; Giri, F.; Guerrero, Josep M.

    2013-01-01

    This paper deals with the problem of controlling energy generation systems including fuel cells (FCs) and interleaved boost power converters. The proposed nonlinear adaptive controller is designed using the sliding mode control (SMC) technique based on the system nonlinear model. The latter accounts for the boost converter large-signal dynamics as well as for the fuel-cell nonlinear characteristics. The adaptive nonlinear controller involves online estimation of the DC bus impedance 'seen' by the converter. The control objective is threefold: (i) asymptotic stability of the closed loop system, (ii) output voltage regulation under bus impedance uncertainties and (iii) equal current sharing between modules. It is formally shown, using theoretical analysis and simulations, that the developed adaptive controller actually meets its control objectives.

  20. APPLICATION OF RESTART COVARIANCE MATRIX ADAPTATION EVOLUTION STRATEGY (RCMA-ES) TO GENERATION EXPANSION PLANNING PROBLEM

    Directory of Open Access Journals (Sweden)

    K. Karthikeyan

    2012-10-01

    Full Text Available This paper describes the application of an evolutionary algorithm, the Restart Covariance Matrix Adaptation Evolution Strategy (RCMA-ES), to the Generation Expansion Planning (GEP) problem. RCMA-ES is a class of continuous Evolutionary Algorithm (EA) derived from the concept of self-adaptation in evolution strategies, which adapts the covariance matrix of a multivariate normal search distribution. The original GEP problem is modified by incorporating a Virtual Mapping Procedure (VMP). The GEP problem of synthetic test systems for 6-year, 14-year and 24-year planning horizons having five types of candidate units is considered. Two different constraint-handling methods are incorporated and the impact of each method has been compared. In addition, comparison and validation have also been made with the dynamic programming method.
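
    The restart mechanism that gives RCMA-ES its name can be sketched independently of the CMA-ES internals: the strategy is rerun with a larger population whenever a run finishes, keeping the best plan found so far. In the sketch below, run_cma_es is a hypothetical stand-in (a crude random search) used only so the restart loop is runnable, and the GEP cost function is a stub; neither is the authors' implementation.

        # Sketch of the restart wrapper around CMA-ES (IPOP-style: population size
        # is doubled at each restart).  run_cma_es is a hypothetical stand-in for a
        # real CMA-ES implementation; the GEP cost function is a stub.
        import numpy as np

        def gep_cost(x):
            # Placeholder for the generation-expansion-planning cost of a candidate
            # plan x (investment plus operation cost, with constraint penalties).
            return float(np.sum((x - 3.0) ** 2))

        def run_cma_es(cost, x0, sigma0, popsize, max_iters=200, tol=1e-10):
            # Stand-in optimizer: random search around a drifting mean, used only
            # so the restart loop below runs end to end.
            rng = np.random.default_rng(len(x0) + popsize)
            mean, best_x, best_f = np.array(x0, float), np.array(x0, float), cost(x0)
            for _ in range(max_iters):
                cands = mean + sigma0 * rng.standard_normal((popsize, len(x0)))
                fvals = [cost(c) for c in cands]
                i = int(np.argmin(fvals))
                if fvals[i] < best_f:
                    best_x, best_f = cands[i], fvals[i]
                    mean = cands[i]
                else:
                    sigma0 *= 0.9                      # crude stagnation response
                if sigma0 < tol:
                    break
            return best_x, best_f

        def restart_cma_es(cost, dim, n_restarts=4, popsize0=8):
            best_x, best_f = None, np.inf
            popsize = popsize0
            for restart in range(n_restarts):
                x0 = np.random.default_rng(restart).uniform(0.0, 10.0, dim)
                x, f = run_cma_es(cost, x0, sigma0=2.0, popsize=popsize)
                if f < best_f:
                    best_x, best_f = x, f
                popsize *= 2                           # enlarge population on restart
            return best_x, best_f

        x, f = restart_cma_es(gep_cost, dim=5)
        print("best cost:", round(f, 6))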

  1. Leveraging Metadata to Create Better Web Services

    Science.gov (United States)

    Mitchell, Erik

    2012-01-01

    Libraries have been increasingly concerned with data creation, management, and publication. This increase is partly driven by shifting metadata standards in libraries and partly by the growth of data and metadata repositories being managed by libraries. In order to manage these data sets, libraries are looking for new preservation and discovery…

  2. Metadata for Content-Based Image Retrieval

    Directory of Open Access Journals (Sweden)

    Adrian Sterca

    2010-12-01

    Full Text Available This paper presents an image retrieval technique that combines content based image retrieval with pre-computed metadata-based image retrieval. The resulting system will have the advantages of both approaches: the speed/efficiency of metadata-based image retrieval and the accuracy/power of content-based image retrieval.

  3. GlamMap : Visualizing library metadata

    NARCIS (Netherlands)

    Betti, Arianna; Gerrits, Dirk; Speckmann, Bettina; van den Berg, Hein

    2014-01-01

    Libraries provide access to large amounts of library metadata. Unfortunately, many libraries only offer textual interfaces for searching and browsing their holdings. Visualisations provide simpler, faster, and more efficient ways to navigate, search and study large quantities of metadata. This paper

  4. A Metadata-Rich File System

    Energy Technology Data Exchange (ETDEWEB)

    Ames, S; Gokhale, M B; Maltzahn, C

    2009-01-07

    Despite continual improvements in the performance and reliability of large scale file systems, the management of file system metadata has changed little in the past decade. The mismatch between the size and complexity of large scale data stores and their ability to organize and query their metadata has led to a de facto standard in which raw data is stored in traditional file systems, while related, application-specific metadata is stored in relational databases. This separation of data and metadata requires considerable effort to maintain consistency and can result in complex, slow, and inflexible system operation. To address these problems, we have developed the Quasar File System (QFS), a metadata-rich file system in which files, metadata, and file relationships are all first class objects. In contrast to hierarchical file systems and relational databases, QFS defines a graph data model composed of files and their relationships. QFS includes Quasar, an XPATH-extended query language for searching the file system. Results from our QFS prototype show the effectiveness of this approach. Compared to the de facto standard, the QFS prototype shows superior ingest performance and comparable query performance on user metadata-intensive operations and superior performance on normal file metadata operations.

  5. A Dynamic Metadata Community Profile for CUAHSI

    Science.gov (United States)

    Bermudez, L.; Piasecki, M.

    2004-12-01

    Common metadata standards typically lack domain-specific elements, have limited extensibility and do not always resolve semantic heterogeneities that may occur in the annotations. To facilitate the use and extension of metadata specifications, a methodology called Dynamic Community Profiles (DCP) is presented. The methodology allows element definitions to be overwritten and core elements to be specified as metadata tree paths. DCP uses the Web Ontology Language (OWL), the Resource Description Framework (RDF) and XML syntax to formalize specifications and to create controlled vocabularies in ontologies, which enhances interoperability. This methodology was employed to create a metadata profile for the Consortium of Universities for the Advancement of Hydrologic Science, Inc. (CUAHSI). The profile was created by extending the ISO 19115:2003 geographic metadata standard and restricting the permissible values of some elements. The values used as controlled vocabularies were inferred from hydrologic keywords found in the Global Change Master Directory (GCMD) and from measurement units found in the Hydrologic Handbook. A core metadata set for CUAHSI was also formally expressed as tree paths, containing the ISO core set plus additional elements. Finally, a tool was developed to test the extension and to allow the creation of metadata instances in RDF/XML that conform to the profile. This tool is also able to export the core elements to other schema formats such as Metadata Template Files (MTF).
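
    A toy illustration of expressing profile terms as RDF/XML with the rdflib library is given below; the namespace URI, class and property names are hypothetical and are not the actual CUAHSI profile vocabulary.

        # Toy RDF/XML metadata instance built with rdflib.  The namespace, class
        # and property names are hypothetical, not the actual CUAHSI profile.
        from rdflib import Graph, Literal, Namespace, RDF

        CUAHSI = Namespace("http://example.org/cuahsi-profile#")

        g = Graph()
        g.bind("cuahsi", CUAHSI)

        dataset = CUAHSI["streamflow-site-42"]
        g.add((dataset, RDF.type, CUAHSI.HydrologicDataset))
        g.add((dataset, CUAHSI.observedVariable, Literal("discharge")))
        g.add((dataset, CUAHSI.measurementUnit, Literal("cubic meters per second")))
        g.add((dataset, CUAHSI.gcmdKeyword, Literal("TERRESTRIAL HYDROSPHERE > SURFACE WATER")))

        print(g.serialize(format="xml"))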

  6. Incorporating ISO Metadata Using HDF Product Designer

    Science.gov (United States)

    Jelenak, Aleksandar; Kozimor, John; Habermann, Ted

    2016-01-01

    The need to store increasing amounts of metadata of varying complexity in HDF5 files is rapidly outgrowing the capabilities of the Earth science metadata conventions currently in use. Until now, data producers have not had much choice but to come up with ad hoc solutions to this challenge. Such solutions, in turn, pose a wide range of issues for data managers, distributors, and, ultimately, data users. The HDF Group is experimenting with a novel approach of using ISO 19115 metadata objects as a catch-all container for all the metadata that cannot be fitted into the current Earth science data conventions. This presentation showcases how the HDF Product Designer software can be utilized to help data producers include various ISO metadata objects in their products.
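
    One straightforward way to carry an ISO 19115 object inside an HDF5 product is to store the serialized XML as a string dataset with a small amount of descriptive attribution. The sketch below uses h5py to do exactly that; the group name, attribute name and XML fragment are illustrative assumptions, not the layout produced by HDF Product Designer.

        # Store an ISO 19115-style XML fragment inside an HDF5 file with h5py.
        # Group/attribute names and the XML snippet are illustrative assumptions.
        import h5py

        iso_xml = """<gmd:MD_Metadata xmlns:gmd="http://www.isotc211.org/2005/gmd">
          <gmd:fileIdentifier>example-granule-0001</gmd:fileIdentifier>
        </gmd:MD_Metadata>"""

        with h5py.File("example_product.h5", "w") as f:
            grp = f.create_group("metadata/iso19115")
            # variable-length UTF-8 string dataset holding the serialized record
            grp.create_dataset("record", data=iso_xml, dtype=h5py.string_dtype())
            grp.attrs["schema"] = "ISO 19115"

        with h5py.File("example_product.h5", "r") as f:
            stored = f["metadata/iso19115/record"][()]
            print(stored.decode() if isinstance(stored, bytes) else stored)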

  7. Collection Metadata Solutions for Digital Library Applications

    Science.gov (United States)

    Hill, Linda L.; Janee, Greg; Dolin, Ron; Frew, James; Larsgaard, Mary

    1999-01-01

    Within a digital library, collections may range from an ad hoc set of objects that serve a temporary purpose to established library collections intended to persist through time. The objects in these collections vary widely, from library and data center holdings to pointers to real-world objects, such as geographic places, and the various metadata schemas that describe them. The key to integrated use of such a variety of collections in a digital library is collection metadata that represents the inherent and contextual characteristics of a collection. The Alexandria Digital Library (ADL) Project has designed and implemented collection metadata for several purposes: in XML form, the collection metadata "registers" the collection with the user interface client; in HTML form, it is used for user documentation; eventually, it will be used to describe the collection to network search agents; and it is used for internal collection management, including mapping the object metadata attributes to the common search parameters of the system.

  8. Adaptive Control and Multi-agent Interface for Infotelecommunication Systems of New Generation

    OpenAIRE

    Timofeev, Adil

    2004-01-01

    Problems of intellectualization of the man-machine interface and methods of self-organization of network control in multi-agent infotelecommunication systems are discussed. The architecture and principles of construction of network and neural agents for telecommunication systems of the new generation are suggested. Methods of adaptive and multi-agent routing of information flows in response to requests from external agents - users of global telecommunication systems and computer network...

  9. Lattice QCD Data and Metadata Archives at Fermilab and the International Lattice Data Grid

    CERN Document Server

    Neilsen, E H; Simone, James

    2005-01-01

    The lattice gauge theory community produces large volumes of data. Because the data produced by completed computations form the basis for future work, the maintenance of archives of existing data and metadata describing the provenance, generation parameters, and derived characteristics of that data is essential not only as a reference, but also as a basis for future work. Development of these archives according to uniform standards both in the data and metadata formats provided and in the software interfaces to the component services could greatly simplify collaborations between institutions and enable the dissemination of meaningful results. This paper describes the progress made in the development of a set of such archives at the Fermilab lattice QCD facility. We are coordinating the development of the interfaces to these facilities and the formats of the data and metadata they provide with the efforts of the international lattice data grid (ILDG) metadata and middleware working groups, whose goals are to d...

  10. Recipes for Semantic Web Dog Food — The ESWC and ISWC Metadata Projects

    Science.gov (United States)

    Möller, Knud; Heath, Tom; Handschuh, Siegfried; Domingue, John

    Semantic Web conferences such as ESWC and ISWC offer prime opportunities to test and showcase semantic technologies. Conference metadata about people, papers and talks is diverse in nature and neither too small to be uninteresting nor too big to be unmanageable. Many metadata-related challenges that may arise in the Semantic Web at large are also present here. Metadata must be generated from sources which are often unstructured and hard to process, and may originate from many different players, therefore suitable workflows must be established. Moreover, the generated metadata must use appropriate formats and vocabularies, and be served in a way that is consistent with the principles of linked data. This paper reports on the metadata efforts from ESWC and ISWC, identifies specific issues and barriers encountered during the projects, and discusses how these were approached. Recommendations are made as to how these may be addressed in the future, and we discuss how these solutions may generalize to metadata production for the Semantic Web at large.

  11. Metadata and Knowledge Management driven Web-based Learning Information System towards Web/e-Learning 3.0

    Directory of Open Access Journals (Sweden)

    Hugo Rego

    2010-06-01

    Full Text Available The main aim of the AHKME e-learning system is to provide a modular and extensible system with adaptive and knowledge management abilities for students and teachers. This system is based on the IMS specifications, representing information through metadata and granting semantics to all its contents, giving them meaning. Metadata is used to satisfy requirements like reusability, interoperability and multipurpose use. The system provides authoring tools to define learning methods with adaptive characteristics, and tools to create courses allowing users with different roles, promoting several types of collaborative and group learning. It is also endowed with tools to retrieve, import and evaluate learning objects based on metadata, so that students can use quality educational contents fitting their characteristics, and teachers have the possibility of using quality educational contents to structure their courses. Metadata management and evaluation play an important role in obtaining the best results in the teaching/learning process.

  12. Network structure, metadata and the prediction of missing nodes

    CERN Document Server

    Hric, Darko; Fortunato, Santo

    2016-01-01

    The empirical validation of community detection methods is often based on available annotations on the nodes that serve as putative indicators of the large-scale network structure. Most often, the suitability of the annotations as topological descriptors itself is not assessed, and without this it is not possible to ultimately distinguish between actual shortcomings of the community detection algorithms on one hand, and the incompleteness, inaccuracy or structured nature of the data annotations themselves on the other. In this work we present a principled method to access both aspects simultaneously. We construct a joint generative model for the data and metadata, and a non-parametric Bayesian framework to infer its parameters from annotated datasets. We assess the quality of the metadata not according to its direct alignment with the network communities, but rather in its capacity to predict the placement of edges in the network. We also show how this feature can be used to predict the connections to missing...

  13. Internet experiments: methods, guidelines, metadata

    Science.gov (United States)

    Reips, Ulf-Dietrich

    2009-02-01

    The Internet experiment is now a well-established and widely used method. The present paper describes guidelines for the proper conduct of Internet experiments, e.g. handling of dropout, unobtrusive naming of materials, and pre-testing. Several methods are presented that further increase the quality of Internet experiments and help to avoid frequent errors. These methods include the "seriousness check", "warm-up," "high hurdle," and "multiple site entry" techniques, control of multiple submissions, and control of motivational confounding. Finally, metadata from sites like WEXTOR (http://wextor.org) and the web experiment list (http://genpsylab-wexlist.uzh.ch/) are reported that show the current state of Internet-based research in terms of the distribution of fields, topics, and research designs used.

  14. A Method for Adaptive Mesh Generation Taking into Account the Continuity Requirements of Magnetic Field

    Science.gov (United States)

    Ishikawa, Takeo; Matsunami, Michio

    This paper proposes a method to adaptively generate 2D and 3D finite element meshes taking into account the continuity requirements of the magnetic field at the interface between two neighboring elements. First, the paper proposes a new error estimator that includes the Zienkiewicz-Zhu error norm estimator and the boundary rules of the electromagnetic field. Using a simple 2D model, the two parameters of the proposed estimator are determined. Next, the paper presents a 3D mesh generation method based on the Voronoi-Delaunay theory, which ensures that the bounding surface of the domain is contained in the triangulation. The method has the capability to decrease the amount of information on the connectivity of boundary nodes by generating nodes not only in the interior of the domain but also on its surface. Two simple magnetostatic field problems are provided to illustrate the usefulness of the proposed method.
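
    In two dimensions, the basic adapt-by-refinement loop implied above (estimate element error, insert nodes where the error is large, re-triangulate) can be illustrated with scipy's Delaunay triangulation. The error indicator below is a crude stand-in for a Zienkiewicz-Zhu-type estimator, and the field, threshold and refinement rule are illustrative assumptions rather than the paper's method.

        # Illustrative 2D adaptive refinement loop with Delaunay re-triangulation.
        # The error indicator is a stand-in for a Zienkiewicz-Zhu-type estimator.
        import numpy as np
        from scipy.spatial import Delaunay

        def field(p):
            # Toy scalar field standing in for a computed magnetic potential.
            return np.exp(-20.0 * ((p[:, 0] - 0.5) ** 2 + (p[:, 1] - 0.5) ** 2))

        def element_error(points, tri):
            # Crude indicator: spread of nodal values within each triangle,
            # weighted by element area (assumption, not the paper's estimator).
            errs = []
            for simplex in tri.simplices:
                verts = points[simplex]
                vals = field(verts)
                d1, d2 = verts[1] - verts[0], verts[2] - verts[0]
                area = 0.5 * abs(d1[0] * d2[1] - d1[1] * d2[0])
                errs.append((vals.max() - vals.min()) * area)
            return np.array(errs)

        # start from a coarse grid of nodes on the unit square
        xs, ys = np.meshgrid(np.linspace(0, 1, 5), np.linspace(0, 1, 5))
        points = np.column_stack([xs.ravel(), ys.ravel()])

        for it in range(4):
            tri = Delaunay(points)
            errs = element_error(points, tri)
            threshold = errs.mean() + errs.std()
            refine = tri.simplices[errs > threshold]
            if len(refine) == 0:
                break
            # insert centroids of the worst elements and re-triangulate
            centroids = points[refine].mean(axis=1)
            points = np.vstack([points, centroids])
            print(f"iteration {it}: {len(refine)} elements refined, {len(points)} nodes")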

  15. Evolution of the Architecture of the ATLAS Metadata Interface (AMI)

    CERN Document Server

    Odier, Jerome; The ATLAS collaboration; Fulachier, Jerome; Lambert, Fabian

    2015-01-01

    The ATLAS Metadata Interface (AMI) is now a mature application. Over the years, the number of users and the number of provided functions has dramatically increased. It is necessary to adapt the hardware infrastructure in a seamless way so that the quality of service remains high. We describe the evolution from the beginning of the application life, using one server with a MySQL backend database, to the current state in which a cluster of virtual machines on the French Tier 1 cloud at Lyon, an Oracle database also at Lyon, with replication to Oracle at CERN and a back-up server are used.

  16. Adaptive interpretation of gas well deliverability tests with generating data of the IPR curve

    Science.gov (United States)

    Sergeev, V. L.; Phuong, Nguyen T. H.; Krainov, A. I.

    2017-01-01

    The paper considers topical issues of improving accuracy of estimated parameters given by data obtained from gas well deliverability tests, decreasing test time, and reducing gas emissions into the atmosphere. The aim of the research is to develop the method of adaptive interpretation of gas well deliverability tests with a resulting IPR curve and using a technique of generating data, which allows taking into account additional a priori information, improving accuracy of determining formation pressure and flow coefficients, reducing test time. The present research is based on the previous theoretical and practical findings in the spheres of gas well deliverability tests, systems analysis, system identification, function optimization and linear algebra. To test the method, the authors used the field data of deliverability tests of two wells, run in the Urengoy gas and condensate field, Tyumen Oblast. The authors suggest the method of adaptive interpretation of gas well deliverability tests with the resulting IPR curve and the possibility of generating data of bottomhole pressure and a flow rate at different test stages. The suggested method allows defining the estimates of the formation pressure and flow coefficients, optimal in terms of preassigned measures of quality, and setting the adequate number of test stages in the course of well testing. The case study of IPR curve data processing has indicated that adaptive interpretation provides more accurate estimates on the formation pressure and flow coefficients, as well as reduces the number of test stages.
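
    For context, flow coefficients in deliverability analysis are commonly obtained by fitting a laminar-inertial-turbulent (LIT) type relation, p_r^2 - p_wf^2 = A*q + B*q^2, to the test points. The sketch below shows such a least-squares fit with numpy; the relation form, the synthetic data and the assumption of a known reservoir pressure are illustrative only and do not reproduce the adaptive identification procedure proposed in the paper, which also estimates the formation pressure and updates estimates stage by stage.

        # Least-squares fit of a back-pressure (LIT) deliverability relation
        #   p_r**2 - p_wf**2 = A*q + B*q**2
        # Synthetic test data and the known p_r are assumptions for illustration.
        import numpy as np

        p_r = 250.0                                    # reservoir pressure, bar (assumed known here)
        q = np.array([50.0, 100.0, 150.0, 200.0])      # flow rates
        p_wf = np.array([243.0, 234.0, 223.0, 210.0])  # bottomhole flowing pressures, bar

        delta_p2 = p_r ** 2 - p_wf ** 2                # left-hand side of the LIT relation
        X = np.column_stack([q, q ** 2])               # design matrix for [A, B]

        (A, B), *_ = np.linalg.lstsq(X, delta_p2, rcond=None)
        print(f"A = {A:.3f}, B = {B:.5f}")

        # Predicted IPR curve and absolute open flow (p_wf = 0)
        q_grid = np.linspace(0, 400, 5)
        p_wf_pred = np.sqrt(np.clip(p_r ** 2 - (A * q_grid + B * q_grid ** 2), 0, None))
        aof = (-A + np.sqrt(A ** 2 + 4 * B * p_r ** 2)) / (2 * B)
        print("predicted p_wf along IPR:", np.round(p_wf_pred, 1))
        print(f"absolute open flow estimate: {aof:.0f}")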

  17. Handbook of metadata, semantics and ontologies

    CERN Document Server

    Sicilia, Miguel-Angel

    2013-01-01

    Metadata research has emerged as a discipline cross-cutting many domains, focused on the provision of distributed descriptions (often called annotations) to Web resources or applications. Such associated descriptions are supposed to serve as a foundation for advanced services in many application areas, including search and location, personalization, federation of repositories and automated delivery of information. Indeed, the Semantic Web is in itself a concrete technological framework for ontology-based metadata. For example, Web-based social networking requires metadata describing people and

  18. CanCore: Metadata for Learning Objects

    Directory of Open Access Journals (Sweden)

    Norm Friesen

    2002-10-01

    Full Text Available The vision of reusable digital learning resources or objects, made accessible through coordinated repository architectures and metadata technologies, has gained considerable attention within distance education and training communities. However, the pivotal role of metadata in this vision raises important and longstanding issues about classification, description and meaning. The purpose of this paper is to provide an overview of this vision, focusing specifically on issues of semantics. It will describe the CanCore Learning Object Metadata Application Profile as an important first step in addressing these issues in the context of the discovery, reuse and management of learning resources or objects.

  19. Generating a Domain Specific Inspection Evaluation Method through an Adaptive Framework

    Directory of Open Access Journals (Sweden)

    Roobaea AlRoobaea

    2013-07-01

    Full Text Available The electronic information revolution and the use of computers as an essential part of everyday life are now more widespread than ever before, as the Internet is exploited for the speedy transfer of data and business. Social networking sites (SNSs), such as LinkedIn, Ecademy and Google+ are growing in use worldwide, and they present popular business channels on the Internet. However, they need to be continuously evaluated and monitored to measure their levels of efficiency, effectiveness and user satisfaction, ultimately to improve quality. Nearly all previous studies have used Heuristic Evaluation (HE) and User Testing (UT) methodologies, which have become the accepted methods for the usability evaluation of User Interface Design (UID); however, the former is general, and unlikely to encompass all usability attributes for all website domains. The latter is expensive, time consuming and misses consistency problems. To address this need, a new evaluation method is developed using traditional evaluations (HE and UT) in novel ways. The lack of an adaptive methodological framework that can be used to generate a domain-specific evaluation method, which can then be used to improve the usability assessment process for a product in any chosen domain, represents a missing area in usability testing. This paper proposes an adaptive framework that is readily capable of adaptation to any domain, and then evaluates it by generating an evaluation method for assessing and improving the usability of products in a particular domain. The evaluation method is called Domain Specific Inspection (DSI), and it is empirically, analytically and statistically tested by applying it on three websites in the social networks domain. Our experiments show that the adaptive framework is able to build a formative and summative evaluation method that provides optimal results with regard to our newly identified set of comprehensive usability problem areas as well as relevant usability

  20. Multi-neuronal refractory period adapts centrally generated behaviour to reward.

    Directory of Open Access Journals (Sweden)

    Christopher A Harris

    Full Text Available Oscillating neuronal circuits, known as central pattern generators (CPGs), are responsible for generating rhythmic behaviours such as walking, breathing and chewing. The CPG model alone, however, does not account for the ability of animals to adapt their future behaviour to changes in the sensory environment that signal reward. Here, using multi-electrode array (MEA) recording in an established experimental model of centrally generated rhythmic behaviour, we show that the feeding CPG of Lymnaea stagnalis is itself associated with another, and hitherto unidentified, oscillating neuronal population. This extra-CPG oscillator is characterised by high population-wide activity alternating with population-wide quiescence. During the quiescent periods the CPG is refractory to activation by food-associated stimuli. Furthermore, the duration of the refractory period predicts the timing of the next activation of the CPG, which may be minutes into the future. Rewarding food stimuli and dopamine accelerate the frequency of the extra-CPG oscillator and reduce the duration of its quiescent periods. These findings indicate that dopamine adapts future feeding behaviour to the availability of food by significantly reducing the refractory period of the brain's feeding circuitry.

  1. Parallel paving: An algorithm for generating distributed, adaptive, all-quadrilateral meshes on parallel computers

    Energy Technology Data Exchange (ETDEWEB)

    Lober, R.R.; Tautges, T.J.; Vaughan, C.T.

    1997-03-01

    Paving is an automated mesh generation algorithm which produces all-quadrilateral elements. It can additionally generate these elements in varying sizes such that the resulting mesh adapts to a function distribution, such as an error function. While powerful, conventional paving is a very serial algorithm in its operation. Parallel paving is the extension of serial paving into parallel environments to perform the same meshing functions as conventional paving, only on distributed, discretized models. This extension allows large, adaptive, parallel finite element simulations to take advantage of paving's meshing capabilities for h-remap remeshing. A significantly modified version of the CUBIT mesh generation code has been developed to host the parallel paving algorithm and demonstrate its capabilities on both two dimensional and three dimensional surface geometries, and to compare the resulting parallel produced meshes to conventionally paved meshes for mesh quality and algorithm performance. Sandia's "tiling" dynamic load balancing code has also been extended to work with the paving algorithm to retain parallel efficiency as subdomains undergo iterative mesh refinement.

  2. USGS Digital Orthophoto Quad (DOQ) Metadata

    Data.gov (United States)

    Minnesota Department of Natural Resources — Metadata for the USGS DOQ Orthophoto Layer. Each orthophoto is represented by a Quarter 24k Quad tile polygon. The polygon attributes contain the quarter-quad tile...

  3. Mining Building Metadata by Data Stream Comparison

    DEFF Research Database (Denmark)

    Holmegaard, Emil; Kjærgaard, Mikkel Baun

    2016-01-01

    ... ways to annotate sensor and actuation points. This makes it difficult to create intuitive queries for retrieving data streams from points. Another problem is the amount of insufficient or missing metadata. We introduce Metafier, a tool for extracting metadata from comparing data streams. Metafier enables a semi-automatic labeling of metadata to building instrumentation. Metafier annotates points with metadata by comparing the data from a set of validated points with unvalidated points. Metafier has three different algorithms to compare points with based on their data. The three algorithms ... to handle data streams with only slightly similar patterns. We have evaluated Metafier with points and data from one building located in Denmark. We have evaluated Metafier with 903 points, and the overall accuracy, with only 3 known examples, was 94.71%. Furthermore we found that using DTW for mining...
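
    One of the stream-comparison ideas mentioned above, dynamic time warping (DTW), can be written in a few lines: an unvalidated point inherits the metadata label of the validated point whose stream it matches most closely. The DTW implementation below is a generic textbook version and the tiny example streams are invented; this is not Metafier's actual algorithms or data.

        # Generic dynamic time warping (DTW) distance plus nearest-neighbour
        # labelling of an unvalidated point; illustrative only, not Metafier itself.
        import numpy as np

        def dtw_distance(a, b):
            """Classic O(len(a)*len(b)) DTW with absolute-difference local cost."""
            n, m = len(a), len(b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = abs(a[i - 1] - b[j - 1])
                    D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            return D[n, m]

        # Validated points: data stream -> known metadata label (toy examples)
        validated = {
            "room temperature": np.array([21.0, 21.4, 22.1, 22.8, 22.5, 21.9]),
            "CO2 level":        np.array([410.0, 455.0, 520.0, 600.0, 580.0, 500.0]),
        }

        # An unvalidated point whose readings resemble a temperature sensor
        unknown = np.array([20.5, 21.0, 21.8, 22.6, 22.3, 21.7])

        label, dist = min(
            ((name, dtw_distance(unknown, stream)) for name, stream in validated.items()),
            key=lambda item: item[1],
        )
        print(f"suggested metadata label: {label} (DTW distance {dist:.1f})")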

  4. FSA 2003-2004 Digital Orthophoto Metadata

    Data.gov (United States)

    Minnesota Department of Natural Resources — Metadata for the 2003-2004 FSA Color Orthophotos Layer. Each orthophoto is represented by a Quarter 24k Quad tile polygon. The polygon attributes contain the...

  5. Mining Building Metadata by Data Stream Comparison

    DEFF Research Database (Denmark)

    Holmegaard, Emil; Kjærgaard, Mikkel Baun

    2017-01-01

    ... ways to annotate sensor and actuation points. This makes it difficult to create intuitive queries for retrieving data streams from points. Another problem is the amount of insufficient or missing metadata. We introduce Metafier, a tool for extracting metadata from comparing data streams. Metafier enables a semi-automatic labeling of metadata to building instrumentation. Metafier annotates points with metadata by comparing the data from a set of validated points with unvalidated points. Metafier has three different algorithms to compare points with based on their data. The three algorithms ... to handle data streams with only slightly similar patterns. We have evaluated Metafier with points and data from one building located in Denmark. We have evaluated Metafier with 903 points, and the overall accuracy, with only 3 known examples, was 94.71%. Furthermore we found that using DTW for mining...

  6. Design of computer-generated beam-shaping holograms by iterative finite-element mesh adaption.

    Science.gov (United States)

    Dresel, T; Beyerlein, M; Schwider, J

    1996-12-10

    Computer-generated phase-only holograms can be used for laser beam shaping, i.e., for focusing a given aperture with intensity and phase distributions into a pregiven intensity pattern in their focal planes. A numerical approach based on iterative finite-element mesh adaption permits the design of appropriate phase functions for the task of focusing into two-dimensional reconstruction patterns. Both the hologram aperture and the reconstruction pattern are covered by mesh mappings. An iterative procedure delivers meshes with intensities equally distributed over the constituting elements. This design algorithm adds new elementary focuser functions to what we call object-oriented hologram design. Some design examples are discussed.

  7. Organ sample generator for expected treatment dose construction and adaptive inverse planning optimization

    Energy Technology Data Exchange (ETDEWEB)

    Nie Xiaobo; Liang Jian; Yan Di [Department of Radiation Oncology, Beaumont Health System, Royal Oak, Michigan 48073 (United States)

    2012-12-15

    Purpose: To create an organ sample generator (OSG) for expected treatment dose construction and adaptive inverse planning optimization. The OSG generates random samples of organs of interest from a distribution obeying the patient-specific organ variation probability density function (PDF) during the course of adaptive radiotherapy. Methods: Principal component analysis (PCA) and a time-varying least-squares regression (LSR) method were used on patient-specific geometric variations of organs of interest manifested on multiple daily volumetric images obtained during the treatment course. The construction of the OSG includes the determination of eigenvectors of the organ variation using PCA, and the determination of the corresponding coefficients using time-varying LSR. The coefficients can be either random variables or random functions of the elapsed treatment days depending on the characteristics of organ variation as a stationary or a nonstationary random process. The LSR method with time-varying weighting parameters was applied to the precollected daily volumetric images to determine the functional form of the coefficients. Eleven head-and-neck (H&N) cancer patients with 30 daily cone beam CT images each were included in the evaluation of the OSG. The evaluation was performed using a total of 18 organs of interest, including 15 organs at risk and 3 targets. Results: Geometric variations of organs of interest during H&N cancer radiotherapy can be represented using the first 3 to 4 eigenvectors. These eigenvectors were variable during treatment, and need to be updated using new daily images obtained during the treatment course. The OSG generates random samples of organs of interest from the estimated organ variation PDF of the individual. The accuracy of the estimated PDF can be improved recursively using extra daily image feedback during the treatment course. The average deviations in the estimation of the mean and standard deviation of the organ variation PDF for h
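
    The PCA part of the construction described above can be sketched briefly: stack the organ's daily geometric variation vectors, extract the leading eigenvectors, fit a distribution to the coefficients, and draw random organ samples from it. The synthetic data, the fixed number of modes and the stationary Gaussian coefficient model below are simplifying assumptions; the paper additionally uses time-varying least-squares regression to handle nonstationary variation.

        # Sketch of an organ sample generator: PCA of daily shape variations,
        # then random sampling of PCA coefficients.  Synthetic data and the
        # stationary Gaussian coefficient model are simplifying assumptions.
        import numpy as np

        rng = np.random.default_rng(1)

        n_days, n_points = 30, 200                  # daily images, surface points
        mean_shape = rng.normal(size=3 * n_points)  # flattened (x, y, z) coordinates

        # Synthetic daily shapes = mean + two latent deformation modes + noise
        true_modes = rng.normal(size=(2, 3 * n_points))
        daily_coeffs = rng.normal(scale=[3.0, 1.0], size=(n_days, 2))
        daily_shapes = mean_shape + daily_coeffs @ true_modes \
                       + 0.05 * rng.normal(size=(n_days, 3 * n_points))

        # PCA of the variation about the mean of the observed days
        deviations = daily_shapes - daily_shapes.mean(axis=0)
        U, s, Vt = np.linalg.svd(deviations, full_matrices=False)
        n_modes = 3                                  # first 3-4 modes suffice (see above)
        eigvecs = Vt[:n_modes]                       # dominant deformation modes
        coeffs = deviations @ eigvecs.T              # per-day coefficients

        # Fit a Gaussian to the coefficients and draw random organ samples
        mu, sigma = coeffs.mean(axis=0), coeffs.std(axis=0)
        random_coeffs = rng.normal(mu, sigma, size=(5, n_modes))
        organ_samples = daily_shapes.mean(axis=0) + random_coeffs @ eigvecs

        print("generated organ samples:", organ_samples.shape)   # (5, 3 * n_points)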

  8. Generating an Ordered Data Set from an OCR Text File

    Directory of Open Access Journals (Sweden)

    Jon Crump

    2014-11-01

    Full Text Available This tutorial illustrates strategies for taking raw OCR output from a scanned text, parsing it to isolate and correct essential elements of metadata, and generating an ordered data set (a Python dictionary) from it. These illustrations are specific to a particular text, but the overall strategy, and some of the individual procedures, can be adapted to organize any scanned text, even if it doesn’t look like this one.
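
    In the same spirit as the tutorial described above, the sketch below takes a few lines of noisy OCR-like text, isolates a simple piece of metadata per entry with a regular expression, applies a small correction table for common character confusions, and builds an ordered Python dictionary. The entry layout and the corrections are invented for illustration and do not reproduce the tutorial's source text or code.

        # Toy version of "OCR text -> ordered dictionary": the entry layout and the
        # OCR-error correction table are invented for illustration.
        import re
        from collections import OrderedDict

        raw_ocr = """\
        No. l2.  SMITH, John, carpenter, 14 High St.
        No. 13.  BR0WN, Mary, teacher, 2 Mill Lane.
        No. 14.  JONES, Ann , weaver, 7 Bridge Rd.
        """

        # common OCR confusions we want to undo in the entry numbers
        digit_fixes = str.maketrans({"l": "1", "O": "0", "o": "0"})

        entry_pattern = re.compile(
            r"No\.\s*(?P<num>\S+)\.\s+(?P<surname>[A-Z0-9]+),\s*(?P<given>\w+)\s*,"
            r"\s*(?P<occupation>[^,]+),\s*(?P<address>.+?)\.?$"
        )

        records = OrderedDict()
        for line in raw_ocr.splitlines():
            m = entry_pattern.match(line.strip())
            if not m:
                continue                      # skip lines the pattern cannot parse
            num = int(m.group("num").translate(digit_fixes))
            records[num] = {
                "surname": m.group("surname").replace("0", "O"),   # BR0WN -> BROWN
                "given": m.group("given"),
                "occupation": m.group("occupation").strip(),
                "address": m.group("address").strip(),
            }

        for num, rec in records.items():
            print(num, rec)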

  9. Implementing Metadata that Guide Digital Preservation Services

    Directory of Open Access Journals (Sweden)

    Angela Dappert

    2011-03-01

    Full Text Available Effective digital preservation depends on a set of preservation services that work together to ensure that digital objects can be preserved for the long-term. These services need digital preservation metadata, in particular, descriptions of the properties that digital objects may have and descriptions of the requirements that guide digital preservation services. This paper analyzes how these services interact and use these metadata and develops a data dictionary to support them.

  10. Science friction: data, metadata, and collaboration.

    Science.gov (United States)

    Edwards, Paul N; Mayernik, Matthew S; Batcheller, Archer L; Bowker, Geoffrey C; Borgman, Christine L

    2011-10-01

    When scientists from two or more disciplines work together on related problems, they often face what we call 'science friction'. As science becomes more data-driven, collaborative, and interdisciplinary, demand increases for interoperability among data, tools, and services. Metadata--usually viewed simply as 'data about data', describing objects such as books, journal articles, or datasets--serve key roles in interoperability. Yet we find that metadata may be a source of friction between scientific collaborators, impeding data sharing. We propose an alternative view of metadata, focusing on its role in an ephemeral process of scientific communication, rather than as an enduring outcome or product. We report examples of highly useful, yet ad hoc, incomplete, loosely structured, and mutable, descriptions of data found in our ethnographic studies of several large projects in the environmental sciences. Based on this evidence, we argue that while metadata products can be powerful resources, usually they must be supplemented with metadata processes. Metadata-as-process suggests the very large role of the ad hoc, the incomplete, and the unfinished in everyday scientific work.

  11. What Metadata Principles Apply to Scientific Data?

    Science.gov (United States)

    Mayernik, M. S.

    2014-12-01

    Information researchers and professionals based in the library and information science fields often approach their work through developing and applying defined sets of principles. For example, for over 100 years, the evolution of library cataloging practice has largely been driven by debates (which are still ongoing) about the fundamental principles of cataloging and how those principles should manifest in rules for cataloging. Similarly, the development of archival research and practices over the past century has proceeded hand-in-hand with the emergence of principles of archival arrangement and description, such as maintaining the original order of records and documenting provenance. This project examines principles related to the creation of metadata for scientific data. The presentation will outline: 1) how understandings and implementations of metadata can range broadly depending on the institutional context, and 2) how metadata principles developed by the library and information science community might apply to metadata developments for scientific data. The development and formalization of such principles would contribute to the development of metadata practices and standards in a wide range of institutions, including data repositories, libraries, and research centers. Shared metadata principles would potentially be useful in streamlining data discovery and integration, and would also benefit the growing efforts to formalize data curation education.

  12. MECHANICAL DYNAMICS ANALYSIS OF PM GENERATOR USING H-ADAPTIVE REFINEMENT

    Directory of Open Access Journals (Sweden)

    AJAY KUMAR

    2010-03-01

    Full Text Available This paper describes the dynamic analysis of a permanent magnet (PM) rotor generator using COMSOL Multiphysics, a Finite Element Analysis (FEA)-based package, and Simulink, a system simulation program. A model of the PM rotor generator is developed for its mechanical dynamics and the computation of torque resulting from magnetic force. For the model, the mesh is constructed using first order Lagrange quadratic elements, and an h-adaptive refinement technique based upon bank bisection is used to improve the accuracy of the model. The effect of the rotor moment of inertia (MI) on the winding resistance and winding inductance has been studied by using Simulink. It is shown that the system MI has a significant effect on the optimal winding resistance and inductance needed to achieve steady-state operation in the shortest period of time.

  13. Generalized Monge-Kantorovich optimization for grid generation and adaptation in LP

    Energy Technology Data Exchange (ETDEWEB)

    Delzanno, G L [Los Alamos National Laboratory; Finn, J M [Los Alamos National Laboratory

    2009-01-01

    The Monge-Kantorovich grid generation and adaptation scheme is generalized from a variational principle based on L_2 to a variational principle based on L_p. A generalized Monge-Ampere (MA) equation is derived and its properties are discussed. Results for p > 1 are obtained and compared in terms of the quality of the resulting grid. We conclude that for the grid generation application, the formulation based on L_p for p close to unity leads to serious problems associated with the boundary. Results for 1.5 ≲ p ≲ 2.5 are quite good, but there is a fairly narrow range around p = 2 where the results are close to optimal with respect to grid distortion. Furthermore, the Newton-Krylov methods used to solve the generalized MA equation perform best for p = 2.
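
    For orientation, the familiar L_2 case can be written down explicitly (a standard optimal-transport formulation, included here as background and not reproduced from the paper): one seeks a map x'(x) that minimises the squared displacement while matching a prescribed grid density, and the optimal map is the gradient of a convex potential u satisfying the classical Monge-Ampere equation.

      \min_{x'} \int_\Omega \lVert x'(x) - x \rVert^{2} \,\mathrm{d}x
      \quad \text{subject to} \quad
      \rho_1\!\bigl(x'(x)\bigr)\,\det \nabla x'(x) = \rho_0(x),
      \qquad
      x'(x) = \nabla u(x)
      \;\Longrightarrow\;
      \rho_1\!\bigl(\nabla u(x)\bigr)\,\det D^{2}u(x) = \rho_0(x).

    The paper's generalisation replaces the squared displacement with an L_p cost, which changes the form of the resulting generalized MA equation and, as the abstract notes, the behaviour of the Newton-Krylov solver.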

  14. Developing a Metadata Infrastructure to facilitate data driven science gateway and to provide Inspire/GEMINI compliance for CLIPC

    Science.gov (United States)

    Mihajlovski, Andrej; Plieger, Maarten; Som de Cerff, Wim; Page, Christian

    2016-04-01

    The CLIPC project is developing a portal to provide a single point of access for scientific information on climate change. This is made possible through the Copernicus Earth Observation Programme for Europe, which will deliver a new generation of environmental measurements of climate quality. The data about the physical environment which are used to inform climate change policy and adaptation measures come from several categories: satellite measurements, terrestrial observing systems, model projections and simulations, and re-analyses (syntheses of all available observations constrained with numerical weather prediction systems). These data categories are managed by different communities: CLIPC will provide a single point of access for the whole range of data. The CLIPC portal will provide a number of indicators showing impacts on specific sectors which have been generated using a range of factors selected through structured expert consultation. It will also, as part of the transformation services, allow users to explore the consequences of using different combinations of driving factors which they consider to be of particular relevance to their work or life. The portal will provide information on the scientific quality and pitfalls of such transformations to prevent misleading usage of the results. The CLIPC project will develop an end-to-end processing chain (indicator tool kit), from comprehensive information on the climate state through to highly aggregated, decision-relevant products. Indicators of climate change and climate change impact will be provided, and a tool kit to update and post-process the collection of indicators will be integrated into the portal. The CLIPC portal has a distributed architecture, making use of OGC services provided by, e.g., climate4impact.eu and CEDA. CLIPC has two themes: 1. Harmonized access to climate datasets derived from models, observations and re-analyses 2. A climate impact tool kit to evaluate, rank and aggregate

  15. Social tagging in the life sciences: characterizing a new metadata resource for bioinformatics

    Directory of Open Access Journals (Sweden)

    Tennis Joseph T

    2009-09-01

    Full Text Available Abstract Background Academic social tagging systems, such as Connotea and CiteULike, provide researchers with a means to organize personal collections of online references with keywords (tags) and to share these collections with others. One of the side-effects of the operation of these systems is the generation of large, publicly accessible metadata repositories describing the resources in the collections. In light of the well-known expansion of information in the life sciences and the need for metadata to enhance its value, these repositories present a potentially valuable new resource for application developers. Here we characterize the current contents of two scientifically relevant metadata repositories created through social tagging. This investigation helps to establish how such socially constructed metadata might be used as it stands currently and to suggest ways that new social tagging systems might be designed that would yield better aggregate products. Results We assessed the metadata that users of CiteULike and Connotea associated with citations in PubMed with the following metrics: coverage of the document space, density of metadata (tags per document), rates of inter-annotator agreement, and rates of agreement with MeSH indexing. CiteULike and Connotea were very similar on all of the measurements. In comparison to PubMed, document coverage and per-document metadata density were much lower for the social tagging systems. Inter-annotator agreement within the social tagging systems and the agreement between the aggregated social tagging metadata and MeSH indexing was low, though the latter could be increased through voting. Conclusion The most promising uses of metadata from current academic social tagging repositories will be those that find ways to utilize the novel relationships between users, tags, and documents exposed through these systems. For more traditional kinds of indexing-based applications (such as keyword-based search to
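
    Two of the reported metrics, per-document tag density and inter-annotator agreement, are easy to illustrate. The sketch below (Python; not the authors' code) assumes a hypothetical export mapping each PubMed document to the tag sets assigned by individual users, and uses Jaccard overlap as one plausible agreement measure.

      from itertools import combinations

      # Hypothetical export: document id -> {user id -> set of tags assigned by that user}
      assignments = {
          "pmid:10000001": {"userA": {"ontology", "bioinformatics"},
                            "userB": {"ontology", "annotation"}},
          "pmid:10000002": {"userA": {"text-mining"}},
      }

      def tag_density(assignments):
          """Average number of distinct tags per document."""
          totals = [len(set.union(*users.values())) for users in assignments.values()]
          return sum(totals) / len(totals)

      def mean_pairwise_agreement(assignments):
          """Mean Jaccard overlap between the tag sets of user pairs on the same document."""
          scores = []
          for users in assignments.values():
              for (_, a), (_, b) in combinations(users.items(), 2):
                  scores.append(len(a & b) / len(a | b))
          return sum(scores) / len(scores) if scores else float("nan")

      print(tag_density(assignments), mean_pairwise_agreement(assignments))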

  16. 3D-SoftChip: A Novel Architecture for Next-Generation Adaptive Computing Systems

    Directory of Open Access Journals (Sweden)

    Lee Mike Myung-Ok

    2006-01-01

    Full Text Available This paper introduces a novel architecture for next-generation adaptive computing systems, which we term 3D-SoftChip. The 3D-SoftChip is a 3-dimensional (3D) vertically integrated adaptive computing system combining state-of-the-art processing and 3D interconnection technology. It comprises the vertical integration of two chips (a configurable array processor and an intelligent configurable switch) through an indium bump interconnection array (IBIA). The configurable array processor (CAP) is an array of heterogeneous processing elements (PEs), while the intelligent configurable switch (ICS) comprises a switch block, 32-bit dedicated RISC processor for control, on-chip program/data memory, data frame buffer, along with a direct memory access (DMA) controller. This paper introduces the novel 3D-SoftChip architecture for real-time communication and multimedia signal processing as a next-generation computing system. The paper further describes the advanced HW/SW codesign and verification methodology, including high-level system modeling of the 3D-SoftChip using SystemC, being used to determine the optimum hardware specification in the early design stage.

  17. An adaptive random search for short term generation scheduling with network constraints.

    Science.gov (United States)

    Marmolejo, J A; Velasco, Jonás; Selley, Héctor J

    2017-01-01

    This paper presents an adaptive random search approach to address a short-term generation scheduling problem with network constraints, which determines the startup and shutdown schedules of thermal units over a given planning horizon. In this model, we consider the transmission network through capacity limits and line losses. The mathematical model is stated in the form of a Mixed Integer Non Linear Problem with binary variables. The proposed heuristic is a population-based method that generates a set of new potential solutions via a random search strategy. The random search is based on the Markov Chain Monte Carlo method. The key feature of the proposed method is that the noise level of the random search is adaptively controlled in order to explore and exploit the entire search space. In order to improve the solutions, we consider coupling a local search into the random search process. Several test systems are presented to evaluate the performance of the proposed heuristic. We use a commercial optimizer to compare the quality of the solutions provided by the proposed method. The solution of the proposed algorithm showed a significant reduction in computational effort with respect to the full-scale outer approximation commercial solver. Numerical results show the potential and robustness of our approach.
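
    The core loop described in the abstract, random perturbation with an adaptively controlled noise level, can be sketched generically as below (Python). The toy objective, acceptance rule and adaptation constants are illustrative assumptions and stand in for the full network-constrained scheduling model.

      import random

      def adaptive_random_search(objective, x0, iters=2000, sigma=1.0):
          """Minimise `objective` by Gaussian random search with an adaptive noise level."""
          x, fx = list(x0), objective(x0)
          for _ in range(iters):
              cand = [xi + random.gauss(0.0, sigma) for xi in x]
              fc = objective(cand)
              if fc < fx:            # accept an improvement and widen the search (exploration)
                  x, fx = cand, fc
                  sigma *= 1.1
              else:                  # otherwise contract the noise level (exploitation)
                  sigma *= 0.98
          return x, fx

      # Toy quadratic objective standing in for the scheduling cost function.
      best_x, best_f = adaptive_random_search(lambda v: sum((vi - 3.0) ** 2 for vi in v), [0.0, 0.0])
      print(best_x, best_f)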

  18. Ontology-based geographic information semantic metadata integration

    Science.gov (United States)

    Zhan, Qin; Li, Deren; Zhang, Xia; Xia, Yu

    2009-10-01

    Metadata is important to facilitate data sharing among Geospatial Information Communities in a distributed environment. For a shared understanding and standard production of metadata annotations, metadata specifications such as the Geographic Information Metadata Standard (ISO19115-2003) and the Content Standard for Digital Geospatial Metadata (CSDGM) have been documented. Though these specifications provide frameworks for the description of geographic data, two problems impede effective data sharing. One problem is that the specifications lack domain-specific semantics. Another is that the specifications cannot always resolve semantic heterogeneities. To solve the former problem, an ontology-based geographic information metadata extension framework is proposed which can incorporate domain-specific semantics. For the latter problem, a metadata integration mechanism based on the proposed extension is studied. In this paper, integration of metadata is realized through integration of ontologies, so integration of ontologies is also discussed. Through ontology-based geographic information semantic metadata integration, sharing of geographic data is realized more efficiently.

  19. Exposing and Harvesting Metadata Using the OAI Metadata Harvesting Protocol A Tutorial

    CERN Document Server

    Warner, Simeon

    2001-01-01

    In this article I outline the ideas behind the Open Archives Initiative metadata harvesting protocol (OAIMH), and attempt to clarify some common misconceptions. I then consider how the OAIMH protocol can be used to expose and harvest metadata. Perl code examples are given as practical illustration.
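
    The article's worked examples are in Perl; for readers who prefer Python, a comparably minimal harvester can be written against any OAI-PMH endpoint with only the standard library, as sketched below. The base URL is a placeholder and error handling is omitted.

      import urllib.parse
      import urllib.request
      import xml.etree.ElementTree as ET

      OAI = "{http://www.openarchives.org/OAI/2.0/}"
      BASE_URL = "https://example.org/oai"   # placeholder repository endpoint

      def harvest(base_url, metadata_prefix="oai_dc"):
          """Yield <record> elements from ListRecords, following resumption tokens."""
          params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
          while True:
              with urllib.request.urlopen(base_url + "?" + urllib.parse.urlencode(params)) as resp:
                  root = ET.fromstring(resp.read())
              for record in root.iter(OAI + "record"):
                  yield record
              token = root.find(".//" + OAI + "resumptionToken")
              if token is None or not (token.text or "").strip():
                  break
              params = {"verb": "ListRecords", "resumptionToken": token.text.strip()}

      for rec in harvest(BASE_URL):
          header = rec.find(OAI + "header/" + OAI + "identifier")
          print(header.text if header is not None else "(record without identifier)")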

  20. Mapping metadata for SWHi : Aligning schemas with library metadata for a historical ontology

    NARCIS (Netherlands)

    Zhang, Junte; Fahmi, Ismail; Ellermann, Henk; Bouma, Gosse; Weske, M; Hacid, MS; Godart, C

    2007-01-01

    What are the possibilities of Semantic Web technologies for organizations which traditionally have lots of structured data, such as metadata, available? A library is such a particular organization. We mapped a digital library's descriptive (bibliographic) metadata for a large historical document col

  1. Exposing and harvesting metadata using the OAI metadata harvesting protocol: A tutorial

    CERN Document Server

    Warner, Simeon

    2001-01-01

    In this article I outline the ideas behind the Open Archives Initiative metadata harvesting protocol (OAIMH), and attempt to clarify some common misconceptions. I then consider how the OAIMH protocol can be used to expose and harvest metadata. Perl code examples are given as practical illustration.

  2. Streamlining geospatial metadata in the Semantic Web

    Science.gov (United States)

    Fugazza, Cristiano; Pepe, Monica; Oggioni, Alessandro; Tagliolato, Paolo; Carrara, Paola

    2016-04-01

    In the geospatial realm, data annotation and discovery rely on a number of ad-hoc formats and protocols. These have been created to enable domain-specific use cases for which generalized search is not feasible. Metadata are at the heart of the discovery process; nevertheless, they are often neglected or encoded in formats that either are not aimed at efficient retrieval of resources or are plainly outdated. In particular, the quantum leap represented by the Linked Open Data (LOD) movement has so far not induced a consistent, interlinked baseline in the geospatial domain. In a nutshell, datasets, the scientific literature related to them, and ultimately the researchers behind these products are only loosely connected; the corresponding metadata are intelligible only to humans, duplicated across different systems, and seldom consistent. Instead, our workflow for metadata management envisages i) editing via customizable web-based forms, ii) encoding of records in any XML application profile, iii) translation into RDF (involving the semantic lift of metadata records), and finally iv) storage of the metadata as RDF and back-translation into the original XML format with added semantics-aware features. Phase iii) hinges on relating resource metadata to RDF data structures that represent keywords from code lists and controlled vocabularies, toponyms, researchers, institutes, and virtually any description one can retrieve (or directly publish) in the LOD Cloud. In the context of a distributed Spatial Data Infrastructure (SDI) built on free and open-source software, we detail phases iii) and iv) of our workflow for the semantics-aware management of geospatial metadata.
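
    A condensed illustration of phase iii), the semantic lift of an XML record into RDF, is given below using the rdflib library (Python). The input snippet, its element names and the choice of Dublin Core/DCAT terms are assumptions made for the example and do not reproduce the authors' application profile.

      import xml.etree.ElementTree as ET
      from rdflib import Graph, Literal, Namespace, URIRef
      from rdflib.namespace import DCTERMS, RDF

      DCAT = Namespace("http://www.w3.org/ns/dcat#")

      xml_record = """<metadata>
        <identifier>https://example.org/dataset/42</identifier>
        <title>Lake surface temperature, 2010-2015</title>
        <keyword>limnology</keyword>
      </metadata>"""

      root = ET.fromstring(xml_record)
      dataset = URIRef(root.findtext("identifier"))

      g = Graph()
      g.bind("dcterms", DCTERMS)
      g.bind("dcat", DCAT)
      g.add((dataset, RDF.type, DCAT.Dataset))
      g.add((dataset, DCTERMS.title, Literal(root.findtext("title"))))
      # In a real lift the keyword would be resolved to a controlled-vocabulary URI in the LOD Cloud.
      g.add((dataset, DCAT.keyword, Literal(root.findtext("keyword"))))

      print(g.serialize(format="turtle"))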

  3. Parallel file system with metadata distributed across partitioned key-value store c

    Energy Technology Data Exchange (ETDEWEB)

    Bent, John M.; Faibish, Sorin; Grider, Gary; Torres, Aaron

    2017-09-19

    Improved techniques are provided for storing metadata associated with a plurality of sub-files associated with a single shared file in a parallel file system. The shared file is generated by a plurality of applications executing on a plurality of compute nodes. A compute node implements a Parallel Log Structured File System (PLFS) library to store at least one portion of the shared file generated by an application executing on the compute node and metadata for the at least one portion of the shared file on one or more object storage servers. The compute node is also configured to implement a partitioned data store for storing a partition of the metadata for the shared file, wherein the partitioned data store communicates with partitioned data stores on other compute nodes using a message passing interface. The partitioned data store can be implemented, for example, using Multidimensional Data Hashing Indexing Middleware (MDHIM).

  4. Reduced short term adaptation to robot generated dynamic environment in children affected by Cerebral Palsy

    Directory of Open Access Journals (Sweden)

    Di Rosa Giuseppe

    2011-05-01

    Full Text Available Abstract Background It is known that healthy adults can quickly adapt to a novel dynamic environment, generated by a robotic manipulandum as a structured disturbing force field. We suggest that it may be of clinical interest to evaluate to which extent this kind of motor learning capability is impaired in children affected by cerebral palsy. Methods We adapted the protocol already used with adults, which employs a velocity-dependent viscous field, and compared the performance of a group of subjects affected by Cerebral Palsy (CP group, 7 subjects) with a Control group of unimpaired age-matched children. The protocol included a familiarization phase (FA), during which no force was applied, a force field adaptation phase (CF), and a wash-out phase (WO) in which the field was removed. During the CF phase the field was shut down in a number of randomly selected "catch" trials, which were used in order to evaluate the "learning index" for each single subject and the two groups. Lateral deviation, speed and acceleration peaks and average speed were evaluated for each trajectory; a directional analysis was performed in order to inspect the role of the limb's inertial anisotropy in the different experimental phases. Results During the FA phase the movements of the CP subjects were more curved, displaying greater and variable directional error; over the course of the CF phase both groups showed a decreasing trend in the lateral error and an after-effect at the beginning of the wash-out, but the CP group had a non-significant adaptation rate and a lower learning index, suggesting that CP subjects have a reduced ability to learn to compensate for the external force. Moreover, a directional analysis of trajectories confirms that the control group is able to better predict the force field by tuning the kinematic features of the movements along different directions in order to account for the inertial anisotropy of the arm. Conclusions Spatial abnormalities in children affected

  5. Reduced short term adaptation to robot generated dynamic environment in children affected by Cerebral Palsy

    Science.gov (United States)

    2011-01-01

    Background It is known that healthy adults can quickly adapt to a novel dynamic environment, generated by a robotic manipulandum as a structured disturbing force field. We suggest that it may be of clinical interest to evaluate to which extent this kind of motor learning capability is impaired in children affected by cerebral palsy. Methods We adapted the protocol already used with adults, which employs a velocity-dependent viscous field, and compared the performance of a group of subjects affected by Cerebral Palsy (CP group, 7 subjects) with a Control group of unimpaired age-matched children. The protocol included a familiarization phase (FA), during which no force was applied, a force field adaptation phase (CF), and a wash-out phase (WO) in which the field was removed. During the CF phase the field was shut down in a number of randomly selected "catch" trials, which were used in order to evaluate the "learning index" for each single subject and the two groups. Lateral deviation, speed and acceleration peaks and average speed were evaluated for each trajectory; a directional analysis was performed in order to inspect the role of the limb's inertial anisotropy in the different experimental phases. Results During the FA phase the movements of the CP subjects were more curved, displaying greater and variable directional error; over the course of the CF phase both groups showed a decreasing trend in the lateral error and an after-effect at the beginning of the wash-out, but the CP group had a non-significant adaptation rate and a lower learning index, suggesting that CP subjects have a reduced ability to learn to compensate for the external force. Moreover, a directional analysis of trajectories confirms that the control group is able to better predict the force field by tuning the kinematic features of the movements along different directions in order to account for the inertial anisotropy of the arm. Conclusions Spatial abnormalities in children affected by cerebral palsy may be

  6. Reduced short term adaptation to robot generated dynamic environment in children affected by Cerebral Palsy.

    Science.gov (United States)

    Masia, Lorenzo; Frascarelli, Flaminia; Morasso, Pietro; Di Rosa, Giuseppe; Petrarca, Maurizio; Castelli, Enrico; Cappa, Paolo

    2011-05-21

    It is known that healthy adults can quickly adapt to a novel dynamic environment, generated by a robotic manipulandum as a structured disturbing force field. We suggest that it may be of clinical interest to evaluate to which extent this kind of motor learning capability is impaired in children affected by cerebral palsy. We adapted the protocol already used with adults, which employs a velocity-dependent viscous field, and compared the performance of a group of subjects affected by Cerebral Palsy (CP group, 7 subjects) with a Control group of unimpaired age-matched children. The protocol included a familiarization phase (FA), during which no force was applied, a force field adaptation phase (CF), and a wash-out phase (WO) in which the field was removed. During the CF phase the field was shut down in a number of randomly selected "catch" trials, which were used in order to evaluate the "learning index" for each single subject and the two groups. Lateral deviation, speed and acceleration peaks and average speed were evaluated for each trajectory; a directional analysis was performed in order to inspect the role of the limb's inertial anisotropy in the different experimental phases. During the FA phase the movements of the CP subjects were more curved, displaying greater and variable directional error; over the course of the CF phase both groups showed a decreasing trend in the lateral error and an after-effect at the beginning of the wash-out, but the CP group had a non-significant adaptation rate and a lower learning index, suggesting that CP subjects have a reduced ability to learn to compensate for the external force. Moreover, a directional analysis of trajectories confirms that the control group is able to better predict the force field by tuning the kinematic features of the movements along different directions in order to account for the inertial anisotropy of the arm. Spatial abnormalities in children affected by cerebral palsy may be related not only to disturbance in

  7. Automatic meta-data collection of STP observation data

    Science.gov (United States)

    Ishikura, S.; Kimura, E.; Murata, K.; Kubo, T.; Shinohara, I.

    2006-12-01

    DIME-Attachment. By introducing the DLAgent-WS, we overcame the problem that the data management policies of each data site are independent. Another important issue to be overcome is how to collect the meta-data of observation data files. So far, STARS-DB managers have added new records to the meta-database and updated them manually. We have had a great deal of trouble maintaining the meta-database because observation data are generated every day and the quantity of data files increases explosively. For that purpose, we have attempted to automate collection of the meta-data. In this research, we adopted RSS 1.0 (RDF Site Summary) as a format to exchange meta-data in the STP fields. RSS is an RDF vocabulary that provides a multipurpose extensible meta-data description and is suitable for syndication of meta-data. Most of the data in the present study are described in the CDF (Common Data Format), which is a self-describing data format. We have converted meta-information extracted from the CDF data files into RSS files. The program to generate the RSS files is executed on the data site server once a day, and the RSS files provide information about new data files. The RSS files are collected by the RSS collection server once a day and the meta-data are stored in the STARS-DB.
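
    The kind of RSS 1.0 (RDF) item such a daily job might emit for a newly created observation file can be sketched as follows (Python, standard library only); the file URL, element choice and metadata values are placeholders rather than the project's actual output.

      import xml.etree.ElementTree as ET

      RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
      RSS = "http://purl.org/rss/1.0/"
      DC = "http://purl.org/dc/elements/1.1/"

      def rss_item(url, title, start_time):
          """Build one RSS 1.0 <item> describing an observation data file."""
          item = ET.Element("{%s}item" % RSS, {"{%s}about" % RDF: url})
          ET.SubElement(item, "{%s}title" % RSS).text = title
          ET.SubElement(item, "{%s}link" % RSS).text = url
          ET.SubElement(item, "{%s}date" % DC).text = start_time  # observation start, ISO 8601
          return item

      root = ET.Element("{%s}RDF" % RDF)
      root.append(rss_item("https://example.org/data/ace_mag_20061201.cdf",
                           "ACE magnetometer data 2006-12-01",
                           "2006-12-01T00:00:00Z"))
      print(ET.tostring(root, encoding="unicode"))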

  8. Data catalog project—A browsable, searchable, metadata system

    Energy Technology Data Exchange (ETDEWEB)

    Stillerman, Joshua, E-mail: jas@psfc.mit.edu [MIT Plasma Science and Fusion Center, Cambridge, MA (United States); Fredian, Thomas; Greenwald, Martin [MIT Plasma Science and Fusion Center, Cambridge, MA (United States); Manduchi, Gabriele [Consorzio RFX, Euratom-ENEA Association, Corso Stati Uniti 4, Padova 35127 (Italy)

    2016-11-15

    Modern experiments are typically conducted by large, extended groups, where researchers rely on other team members to produce much of the data they use. The experiments record very large numbers of measurements that can be difficult for users to find, access and understand. We are developing a system for users to annotate their data products with structured metadata, providing data consumers with a discoverable, browsable data index. Machine understandable metadata captures the underlying semantics of the recorded data, which can then be consumed by both programs, and interactively by users. Collaborators can use these metadata to select and understand recorded measurements. The data catalog project is a data dictionary and index which enables users to record general descriptive metadata, use cases and rendering information as well as providing them a transparent data access mechanism (URI). Users describe their diagnostic including references, text descriptions, units, labels, example data instances, author contact information and data access URIs. The list of possible attribute labels is extensible, but limiting the vocabulary of names increases the utility of the system. The data catalog is focused on the data products and complements process-based systems like the Metadata Ontology Provenance project [Greenwald, 2012; Schissel, 2015]. This system can be coupled with MDSplus to provide a simple platform for data driven display and analysis programs. Sites which use MDSplus can describe tree branches, and if desired create ‘processed data trees’ with homogeneous node structures for measurements. Sites not currently using MDSplus can either use the database to reference local data stores, or construct an MDSplus tree whose leaves reference the local data store. A data catalog system can provide a useful roadmap of data acquired from experiments or simulations making it easier for researchers to find and access important data and understand the meaning of the

  9. ncISO Facilitating Metadata and Scientific Data Discovery

    Science.gov (United States)

    Neufeld, D.; Habermann, T.

    2011-12-01

    Increasing the usability and availability of climate and oceanographic datasets for environmental research requires improved metadata and tools to rapidly locate and access relevant information for an area of interest. Because of the distributed nature of most environmental geospatial data, a common approach is to use catalog services that support queries on metadata harvested from remote map and data services. A key component to effectively using these catalog services is the availability of high-quality metadata associated with the underlying data sets. In this presentation, we examine the use of ncISO and Geoportal as open-source tools that can be used to document and facilitate access to ocean and climate data available from Thematic Realtime Environmental Distributed Data Services (THREDDS) data services. Many atmospheric and oceanographic spatial data sets are stored in the Network Common Data Format (netCDF) and served through the Unidata THREDDS Data Server (TDS). NetCDF and THREDDS are becoming increasingly accepted in both the scientific and geographic research communities, as demonstrated by the recent adoption of netCDF as an Open Geospatial Consortium (OGC) standard. One important source for ocean and atmospheric data sets is NOAA's Unified Access Framework (UAF), which serves over 3000 gridded data sets from across NOAA and NOAA-affiliated partners. Due to the large number of datasets, browsing the data holdings to locate data is impractical. Working with Unidata, we have created a new service for the TDS called "ncISO", which allows automatic generation of ISO 19115-2 metadata from attributes and variables in TDS datasets. The ncISO metadata records can be harvested by catalog services such as ESSI-labs GI-Cat catalog service and ESRI's Geoportal, which supports query through a number of services, including OpenSearch and Catalog Services for the Web (CSW). ESRI's Geoportal Server provides a number of user-friendly search capabilities for end users
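
    The general idea, deriving discovery metadata from attributes already present in a netCDF dataset, can be illustrated with the netCDF4 Python library. The file name and the mapping from ACDD-style attributes (title, summary, keywords) to ISO-like fields below are assumptions for the example, not ncISO's actual implementation.

      from netCDF4 import Dataset

      def discovery_fields(path):
          """Collect a few ISO-19115-style discovery fields from global attributes and variables."""
          with Dataset(path) as ds:
              attrs = {name: getattr(ds, name) for name in ds.ncattrs()}
              return {
                  "title": attrs.get("title", "(untitled)"),
                  "abstract": attrs.get("summary", ""),
                  "keywords": [k.strip() for k in attrs.get("keywords", "").split(",") if k.strip()],
                  "variables": sorted(ds.variables.keys()),
              }

      # Hypothetical file served through a THREDDS Data Server.
      print(discovery_fields("sst_analysis.nc"))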

  10. Imagery metadata development based on ISO/TC 211 standards

    Directory of Open Access Journals (Sweden)

    Rong Xie

    2007-04-01

    Full Text Available This paper reviews the present status and major problems of the existing ISO standards related to imagery metadata. An imagery metadata model is proposed to facilitate the development of imagery metadata on the basis of conformance to these standards and in combination with other ISO standards related to imagery. The model presents an integrated metadata structure and content description for any imagery data, supporting both data discovery and data integration. Using the application of satellite data integration in CEOP as an example, satellite imagery metadata is developed, and the resulting satellite metadata list is given.

  11. Adaptive scallop height tool path generation for robot-based incremental sheet metal forming

    Science.gov (United States)

    Seim, Patrick; Möllensiep, Dennis; Störkle, Denis Daniel; Thyssen, Lars; Kuhlenkötter, Bernd

    2016-10-01

    Incremental sheet metal forming is an emerging process for the production of individualized products or prototypes in low batch sizes and with short times to market. In these processes, the desired shape is produced by the incremental inward motion of the workpiece-independent forming tool in the depth direction and its movement along the contour in the lateral direction. Based on this shape production, the tool path generation is a key factor for, e.g., the resulting geometric accuracy, the resulting surface quality, and the working time. This paper presents an innovative tool path generation based on a commercial milling CAM package considering the surface quality and working time. This approach offers the ability to define a specific scallop height as an indicator of the surface quality for specific faces of a component. Moreover, it decreases the required working time for the production of the entire component compared to the use of a commercial software package without this adaptive approach. Different forming experiments have been performed to verify the newly developed tool path generation. Above all, this approach serves to resolve the existing conflict between working time and surface quality within the process of incremental sheet metal forming.
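
    For a ball-nose tool of radius R moving over a locally flat face, a commonly used geometric relation (a simplification stated here for illustration, not necessarily the relation implemented in the presented CAM approach) connects the scallop height h to the lateral step-over s:

      h = R - \sqrt{R^{2} - \left(\tfrac{s}{2}\right)^{2}}
      \qquad\Longleftrightarrow\qquad
      s = 2\sqrt{h\,(2R - h)}.

    Prescribing a per-face scallop height therefore fixes the step-over, which is exactly the trade-off between surface quality and the number of passes (and hence working time) that the adaptive tool path generation exploits.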

  12. Metadata aided run selection at ATLAS

    Science.gov (United States)

    Buckingham, R. M.; Gallas, E. J.; C-L Tseng, J.; Viegas, F.; Vinek, E.; ATLAS Collaboration

    2011-12-01

    Management of the large volume of data collected by any large scale scientific experiment requires the collection of coherent metadata quantities, which can be used by reconstruction or analysis programs and/or user interfaces, to pinpoint collections of data needed for specific purposes. In the ATLAS experiment at the LHC, we have collected metadata from systems storing non-event-wise data (Conditions) into a relational database. The Conditions metadata (COMA) database tables not only contain conditions known at the time of event recording, but also allow for the addition of conditions data collected as a result of later analysis of the data (such as improved measurements of beam conditions or assessments of data quality). A new web based interface called "runBrowser" makes these Conditions Metadata available as a Run based selection service. runBrowser, based on PHP and JavaScript, uses jQuery to present selection criteria and report results. It not only facilitates data selection by conditions attributes, but also gives the user information at each stage about the relationship between the conditions chosen and the remaining conditions criteria available. When a set of COMA selections are complete, runBrowser produces a human readable report as well as an XML file in a standardized ATLAS format. This XML can be saved for later use or refinement in a future runBrowser session, shared with physics/detector groups, or used as input to ELSSI (event level Metadata browser) or other ATLAS run or event processing services.

  13. NetCDF4/HDF5 and Linked Data in the Real World - Enriching Geoscientific Metadata without Bloat

    Science.gov (United States)

    Ip, Alex; Car, Nicholas; Druken, Kelsey; Poudjom-Djomani, Yvette; Butcher, Stirling; Evans, Ben; Wyborn, Lesley

    2017-04-01

    geoscientific data, much of which is being translated from proprietary formats to netCDF at NCI Australia. This data is made available through the NCI National Environmental Research Data Interoperability Platform (NERDIP) for programmatic access and interdisciplinary analysis. The netCDF files contain both scientific data variables (e.g. gravity, magnetic or radiometric values), but also domain-specific operational values (e.g. specific instrument parameters) best described fully in formal vocabularies. Our ncskos codebase provides access to multiple stores of detailed external metadata in a standardised fashion. Geophysical datasets are generated from a "survey" event, and GA maintains corporate databases of all surveys and their associated metadata. It is impractical to replicate the full source survey metadata into each netCDF dataset so, instead, we link the netCDF files to survey metadata using public Linked Data URIs. These URIs link to Survey class objects which we model as a subclass of Activity objects as defined by the PROV Ontology, and we provide URI resolution for them via a custom Linked Data API which draws current survey metadata from GA's in-house databases. We have demonstrated that Linked Data is a practical way to associate netCDF data with detailed, external metadata. This allows us to ensure that catalogued metadata is kept consistent with metadata points-of-truth, and we can infer complex conceptual relationships not possible with netCDF key-value attributes alone.
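
    A minimal sketch of the linking pattern described above, storing a resolvable survey URI in a netCDF global attribute and dereferencing it on demand, is shown below in Python. The attribute name, URI, file name and use of the requests library are illustrative assumptions and do not reproduce the ncskos code.

      import requests
      from netCDF4 import Dataset

      SURVEY_URI = "https://pid.example.org/survey/GA-0195"   # placeholder Linked Data URI

      # Write the link once, when the dataset is produced or translated to netCDF.
      with Dataset("magnetics_grid.nc", "a") as ds:
          ds.setncattr("survey_uri", SURVEY_URI)

      # A consumer later resolves the URI to the authoritative survey metadata.
      with Dataset("magnetics_grid.nc") as ds:
          response = requests.get(ds.getncattr("survey_uri"),
                                  headers={"Accept": "application/ld+json"}, timeout=30)
          print(response.status_code, response.headers.get("Content-Type"))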

  14. BRBN-T validation: adaptation of the Selective Reminding Test and Word List Generation

    Directory of Open Access Journals (Sweden)

    Mariana Rigueiro Neves

    2015-10-01

    Full Text Available Objective This study aims to present the Selective Reminding Test (SRT) and Word List Generation (WLG) adaptation to the Portuguese population, within the validation of the Brief Repeatable Battery of Neuropsychological Tests (BRBN-T) for multiple sclerosis (MS) patients. Method 66 healthy participants (54.5% female) recruited from the community volunteered to participate in this study. Results A combination of procedures from Classical Test Theory (CTT) and Item Response Theory (IRT) was applied to item analysis and selection. For each SRT list, 12 words were selected and 3 letters were chosen for WLG to constitute the final versions of these tests for the Portuguese population. Conclusion The combination of CTT and IRT maximized the decision-making process in the adaptation of the SRT and WLG to a different culture and language (Portuguese). The relevance of this study lies in the production of reliable standardized neuropsychological tests, so that they can be used to facilitate a more rigorous monitoring of the evolution of MS, as well as any therapeutic effects and cognitive rehabilitation.

  15. Generation of a Tropically Adapted Energy Performance Certificate for Residential Buildings

    Directory of Open Access Journals (Sweden)

    Karl Wagner

    2014-11-01

    Full Text Available Since the 1990s, national green building certification indices have emerged around the globe as promising measurement tools for environmentally friendly housing. Since 2008, tools for countries in the Northern “colder” hemisphere have been adapted to tropical countries. In contrast, the Tropically Adapted Energy Performance Certificate (TEPC), established in 2012, translates the United Nations’ triple bottom line principle into green building sustainability (planet), thermal comfort (people) and affordability (profit). The tool has been especially developed and revamped for affordable green building assessment, helping to reduce global warming. Hence, by the comparably simple and transparent energy audit it provides, the TEPC examines buildings for their: (1) contribution to reducing CO2; (2) transmission rate in shielding a building’s envelope against the effects of the tropical heat; (3) generation of thermal comfort; and (4) total cost of ownership required to green the building further. All four dimensions are measured on the rainbow colour scale in compliance with national energy regulations. Accordingly, this research examines the tool’s implementation in tropical countries. Exemplary tropical case studies in residential areas seek to demonstrate the practicability of the approach and to derive a holistic certification by an internationally accredited certification board.

  16. Increasing the international visibility of research data by a joint metadata schema

    Science.gov (United States)

    Svoboda, Nikolai; Zoarder, Muquit; Gärtner, Philipp; Hoffmann, Carsten; Heinrich, Uwe

    2017-04-01

    The BonaRes Project ("Soil as a sustainable resource for the bioeconomy") was launched in 2015 to promote sustainable soil management and to avoid fragmentation of efforts (Wollschläger et al., 2016). For this purpose, an IT infrastructure is being developed to upload, manage, store, and provide research data and its associated metadata. The research data provided by the BonaRes data centre are, in principle, not subject to any restrictions on reuse. For all research data, standardized metadata are the key enabler for the effective use of these data. Providing proper metadata is often viewed as an extra burden that consumes further work and resources. In our lecture we underline the benefits of structured and interoperable metadata, such as accessibility of data, discovery of data, interpretation of data, linking of data, and several more, and we weigh these advantages against the effort in time, personnel and further costs. Building on this, we describe the framework of metadata in BonaRes, combining the standards of OGC for description, visualization, exchange and discovery of geodata as well as the schema of DataCite for the publication and citation of this research data. This enables the generation of a DOI, a unique identifier that provides a permanent link to the citable research data. By using OGC standards, data and metadata become interoperable with numerous research data provided via INSPIRE. It enables further services like CSW for harvesting, WMS for visualization and WFS for downloading. We explain the mandatory fields that result from our approach and we give a general overview of our metadata architecture implementation. Literature: Wollschläger, U.; Helming, K.; Heinrich, U.; Bartke, S.; Kögel-Knabner, I.; Russell, D.; Eberhardt, E. & Vogel, H.-J.: The BonaRes Centre - A virtual institute for soil research in the context of a sustainable bio-economy. Geophysical Research Abstracts, Vol. 18, EGU2016-9087, 2016.

  17. Simulation of tsunamis generated by landslides using adaptive mesh refinement on GPU

    Science.gov (United States)

    de la Asunción, M.; Castro, M. J.

    2017-09-01

    Adaptive mesh refinement (AMR) is a widely used technique to accelerate computationally intensive simulations, which consists of dynamically increasing the spatial resolution of the areas of interest of the domain as the simulation advances. In recent years, many publications have tackled the implementation of AMR-based applications on GPUs in order to take advantage of their massively parallel architecture. In this paper we present the first AMR-based application implemented on a GPU for the simulation of tsunamis generated by landslides, using a two-layer shallow water system. We also propose a new strategy for the interpolation and projection of the values of the fine cells in the AMR algorithm based on the fluctuations of the state values instead of the usual approach of considering the current state values. Numerical experiments on artificial and realistic problems show the validity and efficiency of the solver.

  18. A GPU implementation of adaptive mesh refinement to simulate tsunamis generated by landslides

    Science.gov (United States)

    de la Asunción, Marc; Castro, Manuel J.

    2016-04-01

    In this work we propose a CUDA implementation for the simulation of landslide-generated tsunamis using a two-layer Savage-Hutter type model and adaptive mesh refinement (AMR). The AMR method consists of dynamically increasing the spatial resolution of the regions of interest of the domain while keeping the rest of the domain at low resolution, thus obtaining better runtimes and similar results compared to increasing the spatial resolution of the entire domain. Our AMR implementation uses a patch-based approach, it supports up to three levels, power-of-two ratios of refinement, different refinement criteria and also several user parameters to control the refinement and clustering behaviour. A strategy based on the variation of the cell values during the simulation is used to interpolate and propagate the values of the fine cells. Several numerical experiments using artificial and realistic scenarios are presented.

  19. Evaluating functional roles of phase resetting in generation of adaptive human bipedal walking with a physiologically based model of the spinal pattern generator.

    Science.gov (United States)

    Aoi, Shinya; Ogihara, Naomichi; Funato, Tetsuro; Sugimoto, Yasuhiro; Tsuchiya, Kazuo

    2010-05-01

    The central pattern generators (CPGs) in the spinal cord strongly contribute to locomotor behavior. To achieve adaptive locomotion, locomotor rhythm generated by the CPGs is suggested to be functionally modulated by phase resetting based on sensory afferents or perturbations. Although phase resetting has been investigated during fictive locomotion in cats, its functional roles in actual locomotion have not been clarified. Recently, simulation studies have been conducted to examine the roles of phase resetting during human bipedal walking, assuming that locomotion is generated based on prescribed kinematics and feedback control. However, such kinematically based modeling cannot be used to fully elucidate the mechanisms of adaptation. In this article we proposed a more physiologically based mathematical model of the neural system for locomotion and investigated the functional roles of phase resetting. We constructed a locomotor CPG model based on a two-layered hierarchical network model of the rhythm generator (RG) and pattern formation (PF) networks. The RG model produces rhythm information using phase oscillators and regulates it by phase resetting based on foot-contact information. The PF model creates feedforward command signals based on rhythm information, which consists of the combination of five rectangular pulses based on previous analyses of muscle synergy. Simulation results showed that our model establishes adaptive walking against perturbing forces and variations in the environment, with phase resetting playing important roles in increasing the robustness of responses, suggesting that this mechanism of regulation may contribute to the generation of adaptive human bipedal locomotion.
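
    The rhythm-generator idea can be caricatured with a single phase oscillator whose phase is reset at foot contact, as in the sketch below (Python). The constants, the reset rule and the simulated contact events are illustrative assumptions and are far simpler than the article's two-layered RG/PF model.

      import math

      OMEGA = 2 * math.pi        # nominal locomotor frequency (1 Hz)
      PHI_CONTACT = math.pi      # nominal phase at which foot contact is expected
      DT = 0.001

      def step(phi, foot_contact):
          """Advance the oscillator one time step; reset the phase on a contact event."""
          if foot_contact:
              phi = PHI_CONTACT  # phase resetting based on foot-contact information
          return (phi + OMEGA * DT) % (2 * math.pi)

      phi, t = 0.0, 0.0
      while t < 2.0:
          # Hypothetical sensor: a perturbation makes the second contact arrive early.
          contact = abs(t - 0.5) < DT / 2 or abs(t - 1.45) < DT / 2
          phi = step(phi, contact)
          t += DT
      print("final phase:", phi)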

  20. Building a Disciplinary Metadata Standards Directory

    Directory of Open Access Journals (Sweden)

    Alexander Ball

    2014-07-01

    Full Text Available The Research Data Alliance (RDA) Metadata Standards Directory Working Group (MSDWG) is building a directory of descriptive, discipline-specific metadata standards. The purpose of the directory is to promote the discovery, access and use of such standards, thereby improving the state of research data interoperability and reducing duplicative standards development work. This work builds upon the UK Digital Curation Centre's Disciplinary Metadata Catalogue, a resource created with much the same aim in mind. The first stage of the MSDWG's work was to update and extend the information contained in the catalogue. In the current, second stage, a new platform is being developed in order to extend the functionality of the directory beyond that of the catalogue, and to make it easier to maintain and sustain. Future work will include making the directory more amenable to use by automated tools.

  1. Transgenerational epimutations induced by multi-generation drought imposition mediate rice plant’s adaptation to drought condition

    Science.gov (United States)

    Zheng, Xiaoguo; Chen, Liang; Xia, Hui; Wei, Haibin; Lou, Qiaojun; Li, Mingshou; Li, Tiemei; Luo, Lijun

    2017-01-01

    Epigenetic mechanisms are crucial mediators of appropriate plant reactions to adverse environments, but their involvement in long-term adaptation is less clear. Here, we established two rice epimutation accumulation lines by applying drought conditions to 11 successive generations of two rice varieties. We took advantage of recent technical advances to examine the role of DNA methylation variations on rice adaptation to drought stress. We found that multi-generational drought improved the drought adaptability of offspring in upland fields. At single-base resolution, we discovered non-random appearance of drought-induced epimutations. Moreover, we found that a high proportion of drought-induced epimutations maintained their altered DNA methylation status in advanced generations. In addition, genes related to transgenerational epimutations directly participated in stress-responsive pathways. Analysis based on a cluster of drought-responsive genes revealed that their DNA methylation patterns were affected by multi-generational drought. These results suggested that epigenetic mechanisms play important roles in rice adaptations to upland growth conditions. Epigenetic variations have morphological, physiological and ecological consequences and are heritable across generations, suggesting that epigenetics can be considered an important regulatory mechanism in plant long-term adaptation and evolution under adverse environments. PMID:28051176

  2. U.S. EPAs Public Geospatial Metadata Service

    Data.gov (United States)

    U.S. Environmental Protection Agency — EPA's public geospatial metadata service provides external parties (Data.gov, GeoPlatform.gov, and the general public) with access to EPA's geospatial metadata...

  3. Evaluating functional roles of phase resetting in generation of adaptive human bipedal walking with a physiologically based model of the spinal pattern generator.

    OpenAIRE

    Aoi, Shinya; Ogihara, Naomichi; Funato, Tetsuro; Sugimoto, Yasuhiro; Tsuchiya, Kazuo

    2010-01-01

    The central pattern generators (CPGs) in the spinal cord strongly contribute to locomotor behavior. To achieve adaptive locomotion, locomotor rhythm generated by the CPGs is suggested to be functionally modulated by phase resetting based on sensory afferents or perturbations. Although phase resetting has been investigated during fictive locomotion in cats, its functional roles in actual locomotion have not been clarified. Recently, simulation studies have been conducted to examine the roles of...

  4. Foam Multi-Dimensional General Purpose Monte Carlo Generator With Self-Adapting Symplectic Grid

    CERN Document Server

    Jadach, Stanislaw

    2000-01-01

    A new general-purpose Monte Carlo event generator with a self-adapting grid consisting of simplices is described. In the process of initialization, the simplex-shaped cells divide into daughter subcells in such a way that: (a) cell density is highest in areas where the integrand is peaked, (b) cells elongate themselves along hyperspaces where the integrand is enhanced/singular. The grid is anisotropic, i.e. memory of the axis directions of the primary reference frame is lost. In particular, the algorithm is capable of dealing with distributions featuring strong correlation among variables (like a ridge along the diagonal). The presented algorithm is complementary to others known and commonly used in Monte Carlo event generators. It is, in principle, more effective than any other for distributions with very complicated patterns of singularities - the price to pay is that it is memory-hungry. It is therefore aimed at a small number of integration dimensions (<10). It should be combined with other methods for higher ...
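
    The cell-splitting idea can be sketched in one dimension, independently of Foam's simplex geometry and its actual implementation: repeatedly bisect the cell that contributes most to the integral estimate, so that cells concentrate where the integrand is peaked, then integrate by stratified sampling over the adapted grid. All numerical choices below are illustrative.

      import random

      def build_grid(f, a, b, n_cells=64, probes=16):
          """Adaptively bisect [a, b]: always split the cell with the largest estimated contribution."""
          cells = [(a, b)]
          while len(cells) < n_cells:
              contrib = [(hi - lo) * sum(f(random.uniform(lo, hi)) for _ in range(probes)) / probes
                         for lo, hi in cells]
              lo, hi = cells.pop(contrib.index(max(contrib)))
              mid = 0.5 * (lo + hi)
              cells += [(lo, mid), (mid, hi)]
          return cells

      def stratified_estimate(f, cells, samples_per_cell=200):
          """Stratified Monte Carlo estimate of the integral over the adapted grid."""
          return sum((hi - lo) * sum(f(random.uniform(lo, hi)) for _ in range(samples_per_cell))
                     / samples_per_cell for lo, hi in cells)

      peak = lambda x: 1.0 / (1e-4 + (x - 0.5) ** 2)   # sharply peaked integrand
      grid = build_grid(peak, 0.0, 1.0)                # cells cluster around x = 0.5
      print(stratified_estimate(peak, grid))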

  5. A Critical Review of Sentinel-3 Metadata for Scientific and Operational Applications

    Science.gov (United States)

    Pons Fernandez, Xavier; Zabala Torres, Alaitz; Domingo Marimon, Cristina

    2015-12-01

    Sentinel-3 is a mission designed for Copernicus/GMES to ensure long-term collection of data of uniform quality, generated and delivered in an operational manner for several sea and land applications. This paper makes a critical review of the data and metadata which will be distributed as Sentinel-3 OLCI, SLSTR and SYN products, evaluating this information according to the specifications, guidelines and characteristics described by the International Organization for Standardization, ISO. The paper reviews the data and metadata currently included in the Test Data Set provided by ESA and offers recommendations both to increase metadata usability and to avoid metadata misunderstanding. Moreover, some recommendations on how these data and metadata should be encoded are included in the paper, with special emphasis on “ISO-19115-1: Fundamentals”, “ISO-19115-2: Extensions for imagery and gridded data”, “ISO-19139: XML schema implementation” and “ISO-19157: Data quality” (quality elements). Proposals related to quality derived from the GeoViQua FP7 project are also indicated.

  6. Metadata Schema Used in OCLC Sampled Web Pages

    OpenAIRE

    Fei Yu

    2005-01-01

    The tremendous growth of Web resources has made information organization and retrieval more and more difficult. As one approach to this problem, metadata schemas have been developed to characterize Web resources. However, many questions have been raised about the use of metadata schemas such as which metadata schemas have been used on the Web? How did they describe Web accessible information? What is the distribution of these metadata schemas among Web pages? Do certain schemas dominate the o...

  7. INSPIRE: Managing Metadata in a Global Digital Library for High-Energy Physics

    CERN Document Server

    Martin Montull, Javier

    2011-01-01

    Four leading laboratories in the High-Energy Physics (HEP) field are collaborating to roll out the next-generation scientific information portal: INSPIRE. The goal of this project is to replace the popular 40-year-old SPIRES database. INSPIRE already provides access to about 1 million records and includes services such as fulltext search, automatic keyword assignment, ingestion and automatic display of LaTeX, citation analysis, automatic author disambiguation, metadata harvesting, extraction of figures from fulltext and search in figure captions. In order to achieve high-quality metadata, both automatic processing and manual curation are needed. The different tools available in the system use modern web technologies to provide the curators with maximum efficiency while dealing with the MARC standard format. The project is under heavy development in order to provide new features including semantic analysis, crowdsourcing of metadata curation, user tagging, recommender systems, integration of OAIS standards a...

  8. From CLARIN Component Metadata to Linked Open Data

    NARCIS (Netherlands)

    Durco, M.; Windhouwer, Menzo

    2014-01-01

    In the European CLARIN infrastructure a growing number of resources are described with Component Metadata. In this paper we describe a transformation to make this metadata available as linked data. After this first step it becomes possible to connect the CLARIN Component Metadata with other valuable

  9. Handling multiple metadata streams regarding digital learning material

    NARCIS (Netherlands)

    Roes, J.B.M.; Vuuren, J. van; Verbeij, N.; Nijstad, H.

    2010-01-01

    This paper presents the outcome of a study performed in the Netherlands on handling multiple metadata streams regarding digital learning material. The paper describes the present metadata architecture in the Netherlands, the present suppliers and users of metadata and digital learning materials. It

  10. Multimedia Learning Systems Based on IEEE Learning Object Metadata (LOM).

    Science.gov (United States)

    Holzinger, Andreas; Kleinberger, Thomas; Muller, Paul

    One of the "hottest" topics in recent information systems and computer science is metadata. Learning Object Metadata (LOM) appears to be a very powerful mechanism for representing metadata, because of the great variety of LOM Objects. This is on of the reasons why the LOM standard is repeatedly cited in projects in the field of eLearning…

  11. Adaptive changes in early and late blind: a FMRI study of verb generation to heard nouns.

    Science.gov (United States)

    Burton, H; Snyder, A Z; Diamond, J B; Raichle, M E

    2002-12-01

    Literacy for blind people requires learning Braille. Along with others, we have shown that reading Braille activates visual cortex. This includes striate cortex (V1), i.e., banks of calcarine sulcus, and several higher visual areas in lingual, fusiform, cuneus, lateral occipital, inferior temporal, and middle temporal gyri. The spatial extent and magnitude of magnetic resonance (MR) signals in visual cortex is greatest for those who became blind early in life. Individuals who lost sight as adults, and subsequently learned Braille, still exhibited activity in some of the same visual cortex regions, especially V1. These findings suggest these visual cortex regions become adapted to processing tactile information and that this cross-modal neural change might support Braille literacy. Here we tested the alternative hypothesis that these regions directly respond to linguistic aspects of a task. Accordingly, language task performance by blind persons should activate the same visual cortex regions regardless of input modality. Specifically, visual cortex activity in blind people ought to arise during a language task involving heard words. Eight early blind, six late blind, and eight sighted subjects were studied using functional magnetic resonance imaging (fMRI) during covert generation of verbs to heard nouns. The control task was passive listening to indecipherable sounds (reverse words) matched to the nouns in sound intensity, duration, and spectral content. Functional responses were analyzed at the level of individual subjects using methods based on the general linear model and at the group level, using voxel based ANOVA and t-test analyses. Blind and sighted subjects showed comparable activation of language areas in left inferior frontal, dorsolateral prefrontal, and left posterior superior temporal gyri. The main distinction was bilateral, left dominant activation of the same visual cortex regions previously noted with Braille reading in all blind subjects. The

  12. Using a linked data approach to aid development of a metadata portal to support Marine Strategy Framework Directive (MSFD) implementation

    Science.gov (United States)

    Wood, Chris

    2016-04-01

    Under the Marine Strategy Framework Directive (MSFD), EU Member States are mandated to achieve or maintain 'Good Environmental Status' (GES) in their marine areas by 2020, through a series of Programmes of Measures (PoMs). The Celtic Seas Partnership (CSP), an EU LIFE+ project, aims to support policy makers, special-interest groups, users of the marine environment, and other interested stakeholders on MSFD implementation in the Celtic Seas geographical area. As part of this support, a metadata portal has been built to provide a signposting service to datasets that are relevant to MSFD within the Celtic Seas. To ensure that the metadata has the widest possible reach, a linked data approach was employed to construct the database. Although the metadata are stored in a traditional RDBMS, the metadata are exposed as linked data via the D2RQ platform, allowing virtual RDF graphs to be generated. SPARQL queries can be executed against the end-point, allowing any user to manipulate the metadata. D2RQ's mapping language, based on Turtle, was used to map a wide range of relevant ontologies to the metadata (e.g. the Provenance Ontology (prov-o), Ocean Data Ontology (odo), Dublin Core Elements and Terms (dc & dcterms), Friend of a Friend (foaf), and Geospatial ontologies (geo)), allowing users to browse the metadata either via SPARQL queries or by using D2RQ's HTML interface. The metadata were further enhanced by mapping relevant parameters to the NERC Vocabulary Server, itself built on a SPARQL endpoint. Additionally, a custom web front-end was built to enable users to browse the metadata and express queries through an intuitive graphical user interface that requires no prior knowledge of SPARQL. As well as providing means to browse the data via MSFD-related parameters (Descriptor, Criteria, and Indicator), the metadata records include the dataset's country of origin, the list of organisations involved in the management of the data, and links to any relevant INSPIRE
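
    Programmatic access of the kind described, issuing SPARQL against the portal's D2RQ endpoint, might look as follows in Python with SPARQLWrapper. The endpoint URL and the properties used in the query are placeholders and do not reproduce the portal's actual vocabulary mappings.

      from SPARQLWrapper import SPARQLWrapper, JSON

      # Placeholder endpoint; the real portal exposes its own D2RQ SPARQL service.
      sparql = SPARQLWrapper("https://example.org/celtic-seas/sparql")
      sparql.setReturnFormat(JSON)
      sparql.setQuery("""
          PREFIX dcterms: <http://purl.org/dc/terms/>
          SELECT ?dataset ?title WHERE {
              ?dataset dcterms:title ?title ;
                       dcterms:subject ?subject .
              FILTER(CONTAINS(LCASE(STR(?subject)), "eutrophication"))
          } LIMIT 20
      """)

      for row in sparql.query().convert()["results"]["bindings"]:
          print(row["dataset"]["value"], "-", row["title"]["value"])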

  13. System for Earth Sample Registration SESAR: Services for IGSN Registration and Sample Metadata Management

    Science.gov (United States)

    Chan, S.; Lehnert, K. A.; Coleman, R. J.

    2011-12-01

    SESAR, the System for Earth Sample Registration, is an online registry for physical samples collected for Earth and environmental studies. SESAR generates and administers the International Geo Sample Number (IGSN), a unique identifier for samples that is dramatically advancing interoperability amongst information systems for sample-based data. SESAR was developed to provide the complete range of registry services, including definition of IGSN syntax and metadata profiles, registration and validation of name spaces requested by users, tools for users to submit and manage sample metadata, validation of submitted metadata, generation and validation of the unique identifiers, archiving of sample metadata, and public or private access to the sample metadata catalog. With the development of SESAR v3, we placed particular emphasis on creating enhanced tools that make metadata submission easier and more efficient for users, and that provide superior functionality for users to manage metadata of their samples in their private workspace MySESAR. For example, SESAR v3 includes a module where users can generate custom spreadsheet templates to enter metadata for their samples, then upload these templates online for sample registration. Once the content of the template is uploaded, it is displayed online in an editable grid format. Validation rules are executed in real-time on the grid data to ensure data integrity. Other new features of SESAR v3 include the capability to transfer ownership of samples to other SESAR users, the ability to upload and store images and other files in a sample metadata profile, and the tracking of changes to sample metadata profiles. In the next version of SESAR (v3.5), we will further improve the discovery, sharing, and registration of samples. For example, we are developing a more comprehensive suite of web services that will allow discovery and registration access to SESAR from external systems. Both batch and individual registrations will be possible

  14. Situational variations in ethnic identity across immigration generations: Implications for acculturative change and cross-cultural adaptation.

    Science.gov (United States)

    Noels, Kimberly A; Clément, Richard

    2015-12-01

    This study examined whether the acculturation of ethnic identity is first evident in more public situations with greater opportunity for intercultural interaction and eventually penetrates more intimate situations. It also investigated whether situational variations in identity are associated with cross-cultural adaptation. First-generation (G1), second-generation (G2) and mixed-parentage second-generation (G2.5) young adult Canadians (n = 137, n = 169, and n = 91, respectively) completed a questionnaire assessing their heritage and Canadian identities across four situational domains (family, friends, university and community), global heritage identity and cross-cultural adaptation. Consistent with the acculturation penetration hypothesis, the results showed Canadian identity was stronger than heritage identity in public domains, but the converse was true in the family domain; moreover, the difference between the identities in the family domain was attenuated in later generations. Situational variability indicated better adaptation for the G1 cohort, but poorer adaptation for the G2.5 cohort. For the G2 cohort, facets of global identity moderated the relation, such that those with a weaker global identity experienced greater difficulties and hassles with greater identity variability but those with a stronger identity did not. These results are interpreted in light of potential interpersonal issues implied by situational variation for each generation cohort.

  15. MODS: The Metadata Object Description Schema.

    Science.gov (United States)

    Guenther, Rebecca S.

    2003-01-01

    Focuses on the Metadata Object Description Schema (MODS) developed by the Library of Congress' Network Development and MARC Standards Office. Discusses reasons for MODS development; advantages of MODS; features of MODS; prospective uses for MODS; relationship with MARC and MARCXML; comparison with Dublin Core element set; and experimentation with…

  16. Distributed Version Control and Library Metadata

    Directory of Open Access Journals (Sweden)

    Galen M. Charlton

    2008-06-01

    Full Text Available Distributed version control systems (DVCSs are effective tools for managing source code and other artifacts produced by software projects with multiple contributors. This article describes DVCSs and compares them with traditional centralized version control systems, then describes extending the DVCS model to improve the exchange of library metadata.

  17. Digital Preservation and Metadata: History, Theory, Practice.

    Science.gov (United States)

    Lazinger, Susan S.

    This book addresses critical issues of digital preservation, providing guidelines for protecting resources that range from dealing with obsolescence to responsibilities, methods of preservation, cost, and metadata formats. It also describes numerous national and international institutions that provide frameworks for digital libraries and archives. The first…

  18. The Metadata Approach to Accessing Government Information.

    Science.gov (United States)

    Moen, William E.

    2001-01-01

    Provides an overview of the articles in this issue, includes a history of the development of GILS (Government Information Locator Service), and offers perspectives on the importance of metadata for resource description and resource discovery. Presents interoperability as a challenge in integrating access to government information locator services.…

  19. Metadata Exporter for Scientific Photography Management

    Science.gov (United States)

    Staudigel, D.; English, B.; Delaney, R.; Staudigel, H.; Koppers, A.; Hart, S.

    2005-12-01

    Photographs have become an increasingly important medium, especially with the advent of digital cameras. It has become inexpensive to take photographs and quickly post them on a website. However informative photos may be, they still need to be displayed in a convenient way, and be cataloged in such a manner that makes them easily locatable. Managing the great number of photographs that digital cameras allow and creating a format for efficient dissemination of the information related to the photos is a tedious task. Products such as Apple's iPhoto have greatly eased the task of managing photographs. However, they often have limitations. Un-customizable metadata fields and poor metadata extraction tools limit their scientific usefulness. A solution to this persistent problem is a customizable metadata exporter. On the ALIA expedition, we successfully managed the thousands of digital photos we took. We did this with iPhoto and a version of the exporter that is now available to the public under the name "CustomHTMLExport" (http://www.versiontracker.com/dyn/moreinfo/macosx/27777), currently undergoing formal beta testing. This software allows the use of customized metadata fields (including description, time, date, GPS data, etc.), which are exported along with the photo. It can also produce webpages with this data straight from iPhoto, in a much more flexible way than is already allowed. With this tool it becomes very easy to manage and distribute scientific photos.
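
    CustomHTMLExport itself is an iPhoto plug-in; as a rough illustration of the underlying idea — pulling camera metadata out of each image so it can travel with the photo — here is a small Python sketch using the Pillow library. The file name is hypothetical.

      from PIL import Image, ExifTags

      def photo_metadata(path):
          """Return the EXIF tags of one photo with human-readable names."""
          exif = Image.open(path).getexif()
          return {ExifTags.TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

      meta = photo_metadata("ALIA_dive042_0123.jpg")   # hypothetical file name
      print(meta.get("DateTime"), meta.get("Model"))   # e.g. capture time and camera model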

  20. Metadata Effectiveness in Internet Discovery: An Analysis of Digital Collection Metadata Elements and Internet Search Engine Keywords

    Science.gov (United States)

    Yang, Le

    2016-01-01

    This study analyzed digital item metadata and keywords from Internet search engines to learn what metadata elements actually facilitate discovery of digital collections through Internet keyword searching and how significantly each metadata element affects the discovery of items in a digital repository. The study found that keywords from Internet…

  1. Making Interoperability Easier with NASA's Metadata Management Tool (MMT)

    Science.gov (United States)

    Shum, Dana; Reese, Mark; Pilone, Dan; Baynes, Katie

    2016-01-01

    While the ISO-19115 collection level metadata format meets many users' needs for interoperable metadata, it can be cumbersome to create it correctly. Through the MMT's simple UI experience, metadata curators can create and edit collections which are compliant with ISO-19115 without full knowledge of the NASA Best Practices implementation of ISO-19115 format. Users are guided through the metadata creation process through a forms-based editor, complete with field information, validation hints and picklists. Once a record is completed, users can download the metadata in any of the supported formats with just 2 clicks.

  2. Metadata in Chaos: how researchers tag radio broadcasts

    DEFF Research Database (Denmark)

    Lykke, Marianne; Lund, Haakon; Skov, Mette

    2015-01-01

    CHAOS (Cultural Heritage Archive Open System) provides streaming access to more than 500,000 broadcasts by the Danish Broadcasting Corporation from 1931 onwards. The archive is part of the LARM project, whose purpose is to enable researchers to search, annotate, and interact with recordings. … To optimally support the researchers, a user-centred approach was taken to develop the platform and the related metadata scheme. Based on the requirements, a three-level metadata scheme was developed: (1) core archival metadata, (2) LARM metadata, and (3) project-specific metadata. The paper analyses how researchers … LARM.fm's strength in providing streaming access to a large, shared corpus of broadcasts.

  3. Stable Adaptive Inertial Control of a Doubly-Fed Induction Generator

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Moses; Muljadi, Eduard; Hur, Kyeon; Kang, Yong Cheol

    2016-11-01

    This paper proposes a stable adaptive inertial control scheme for a doubly-fed induction generator. The proposed power reference is defined in two sections: the deceleration period and the acceleration period. The power reference in the deceleration period consists of a constant and the reference for maximum power point tracking (MPPT) operation. The latter contributes to preventing a second frequency dip (SFD) in this period because its reduction rate is large at the early stage of an event but quickly decreases with time. To improve the frequency nadir (FN), the constant value is set to be proportional to the rotor speed prior to an event. The reference ensures that the rotor speed converges to a stable operating region. To accelerate the rotor speed while causing a small SFD, when the rotor speed converges, the power reference is reduced by a small amount and maintained until it meets the MPPT reference. The results show that the scheme causes a small SFD while improving the FN and the rate of change of frequency under any wind conditions, even in a grid that has a high penetration of wind power.
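
    As a rough numerical illustration of the two-part deceleration reference described above (a constant term set from the pre-event rotor speed plus an MPPT term), consider the sketch below. It assumes per-unit quantities and the common cubic MPPT law; the gains are illustrative, not the values used in the paper.

      K_MPPT = 0.512   # assumed MPPT gain, so that p_mppt = K_MPPT * w_r**3 (per unit)
      ALPHA  = 0.10    # assumed proportionality constant for the constant term

      def power_reference_deceleration(w_r, w_r0):
          """Power reference during the deceleration period.

          w_r  -- current rotor speed (p.u.)
          w_r0 -- rotor speed measured just before the frequency event (p.u.)
          """
          delta_p = ALPHA * w_r0        # constant component, frozen at the event
          p_mppt  = K_MPPT * w_r ** 3   # MPPT component, shrinks as the rotor slows
          return delta_p + p_mppt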

  4. Adaptive Wavelets Based on Second Generation Wavelet Transform and Their Applications to Trend Analysis and Prediction

    Institute of Scientific and Technical Information of China (English)

    DUAN Chen-dong; JIANG Hong-kai; HE Zheng-jia

    2004-01-01

    In order to perform trend analysis and prediction on acquisition data in a mechanical equipment condition monitoring system, a new method of trend feature extraction and prediction is proposed, which constructs an adaptive wavelet on the acquisition data by means of the second generation wavelet transform (SGWT). Firstly, taking the vanishing moment number of the predictor as a constraint, the linear predictor and updater are designed from the acquisition data using a symmetrical interpolating scheme. The trend of the data is then obtained through SGWT decomposition, threshold processing, and SGWT reconstruction. Secondly, under the constraint of the vanishing moment number of the predictor, another predictor based on the acquisition data is devised to predict the future trend of the data using a non-symmetrical interpolating scheme. A one-step prediction algorithm is presented to predict the future evolution trend from historical data. The proposed method achieved a desirable effect in peak-to-peak value trend analysis for a machine set in an oil refinery.
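
    For readers unfamiliar with second generation wavelets, the lifting idea (split, predict, update) can be sketched in a few lines. The sketch below uses a fixed linear predictor and updater with simplified boundary handling, whereas the paper designs both operators adaptively from the acquisition data.

      def lifting_one_level(x):
          """One decomposition level of a lifting-scheme (second generation) wavelet."""
          even, odd = x[0::2], x[1::2]
          n = min(len(even) - 1, len(odd))   # interior samples only; boundaries are ignored here
          # Predict: detail = odd sample minus the average of its two even neighbours.
          detail = [odd[i] - 0.5 * (even[i] + even[i + 1]) for i in range(n)]
          # Update: lift the even samples so the approximation keeps the local mean.
          approx = list(even)
          for i in range(n):
              approx[i] += 0.25 * detail[i]
              approx[i + 1] += 0.25 * detail[i]
          return approx, detail

      approx, detail = lifting_one_level([3.0, 4.0, 6.0, 5.0, 7.0, 9.0, 8.0, 6.0])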

  5. Where does transcription start? 5'-RACE adapted to next-generation sequencing.

    Science.gov (United States)

    Leenen, Fleur A D; Vernocchi, Sara; Hunewald, Oliver E; Schmitz, Stephanie; Molitor, Anne M; Muller, Claude P; Turner, Jonathan D

    2016-04-07

    The variability and complexity of the transcription initiation process were examined by adapting RNA ligase-mediated rapid amplification of 5' cDNA ends (5'-RACE) to Next-Generation Sequencing (NGS). We oligo-labelled 5'-m(7)G-capped mRNA from two genes, the simple mono-exonic Beta-2-Adrenoceptor (ADRB2R) and the complex multi-exonic Glucocorticoid Receptor (GR, NR3C1), and detected a variability in TSS location that has received little attention up to now. Transcription was not initiated at a fixed TSS, but from loci of 4 to 10 adjacent nucleotides. Individual TSSs within a locus were used at varying frequencies among the capped transcripts. ADRB2R used a single locus consisting of 4 adjacent TSSs. Unstimulated, the GR used a total of 358 TSSs distributed throughout 38 loci that were principally in the 5' UTRs and were spliced using established donor and acceptor sites. Complete demethylation of the epigenetically sensitive GR promoter with 5-azacytidine induced one new locus and 127 TSSs, 12 of which were unique. We induced GR transcription with dexamethasone and Interferon-γ, adding one new locus and 185 additional TSSs distributed throughout the promoter region. In vitro, the TSS microvariability regulated mRNA translation efficiency and the relative abundance of the different GR N-terminal protein isoform levels.
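
    A small sketch of the kind of post-processing implied here — collapsing individual 5' read start positions into loci of adjacent nucleotides — is shown below; the positions and the adjacency threshold are made up for the example.

      def group_tss_into_loci(positions, max_gap=1):
          """Group transcription start positions into loci of adjacent nucleotides."""
          loci, current = [], []
          for p in sorted(positions):
              if current and p - current[-1] > max_gap:
                  loci.append(current)
                  current = []
              current.append(p)
          if current:
              loci.append(current)
          return loci

      print(group_tss_into_loci([1204, 1205, 1206, 1207, 1350, 1351]))
      # -> [[1204, 1205, 1206, 1207], [1350, 1351]]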

  6. FLEXBAR—Flexible Barcode and Adapter Processing for Next-Generation Sequencing Platforms

    Directory of Open Access Journals (Sweden)

    Matthias Dodt

    2012-12-01

    Full Text Available Quantitative and systems biology approaches benefit from the unprecedented depth of next-generation sequencing. A typical experiment yields millions of short reads, which oftentimes carry particular sequence tags. These tags may be (a) specific to the sequencing platform and library construction method (e.g., adapter sequences), (b) introduced by experimental design (e.g., sample barcodes), or (c) part of some biological signal (e.g., splice leader sequences in nematodes). Our software FLEXBAR enables accurate recognition, sorting and trimming of sequence tags with maximal flexibility, based on exact overlap sequence alignment. The software supports data formats from all current sequencing platforms, including color-space reads. FLEXBAR maintains read pairings and processes separate barcode reads on demand. Our software facilitates the fine-grained adjustment of sequence tag detection parameters and search regions. FLEXBAR is multi-threaded software that combines speed with precision. Even complex read processing scenarios can be executed with a single command line call. We demonstrate the utility of the software in terms of read mapping applications, library demultiplexing and splice leader detection. FLEXBAR and additional information are available for academic use from the website: http://sourceforge.net/projects/flexbar/.
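
    The core of overlap-based 3' adapter removal can be illustrated with a few lines of Python; this toy sketch allows no mismatches and ignores paired reads, barcodes and quality values, so it is only a caricature of what FLEXBAR actually does.

      def trim_3prime_adapter(read, adapter, min_overlap=3):
          """Cut a read at the leftmost position where it starts matching the adapter."""
          for i in range(len(read) - min_overlap + 1):
              n = min(len(read) - i, len(adapter))
              if read[i:i + n] == adapter[:n]:
                  return read[:i]
          return read

      print(trim_3prime_adapter("ACGTACGTAGATCGGAAG", "AGATCGGAAGAGC"))  # -> ACGTACGT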

  7. Title, Description, and Subject are the Most Important Metadata Fields for Keyword Discoverability

    Directory of Open Access Journals (Sweden)

    Laura Costello

    2016-09-01

    Full Text Available A Review of: Yang, L. (2016). Metadata effectiveness in internet discovery: An analysis of digital collection metadata elements and internet search engine keywords. College & Research Libraries, 77(1), 7-19. http://doi.org/10.5860/crl.77.1.7 Objective – To determine which metadata elements best facilitate discovery of digital collections. Design – Case study. Setting – A public research university serving over 32,000 graduate and undergraduate students in the Southwestern United States of America. Subjects – A sample of 22,559 keyword searches leading to the institution’s digital repository between August 1, 2013, and July 31, 2014. Methods – The author used Google Analytics to analyze 73,341 visits to the institution’s digital repository. He determined that 22,559 of these visits were due to keyword searches. Using Random Integer Generator, the author identified a random sample of 378 keyword searches. The author then matched the keywords with the Dublin Core and VRA Core metadata elements on the landing page in the digital repository to determine which metadata field had drawn the keyword searcher to that particular page. Many of these keywords matched more than one metadata field, so the author also analyzed the metadata elements that generated unique keyword hits and those fields that were frequently matched together. Main Results – Title was the most matched metadata field with 279 matched keywords from searches. Description and Subject were also significant fields with 208 and 79 matches respectively. Slightly more than half of the results, 195 keywords, matched the institutional repository in one field only. Both Title and Description had significant match rates both independently and in conjunction with other elements, but Subject keywords were the sole match in only three of the sampled cases. Conclusion – The Dublin Core elements of Title, Description, and Subject were the most frequently matched fields in keyword

  8. Efficient processing of MPEG-21 metadata in the binary domain

    Science.gov (United States)

    Timmerer, Christian; Frank, Thomas; Hellwagner, Hermann; Heuer, Jörg; Hutter, Andreas

    2005-10-01

    XML-based metadata is widely adopted across the different communities and plenty of commercial and open source tools for processing and transforming are available on the market. However, all of these tools have one thing in common: they operate on plain text encoded metadata which may become a burden in constrained and streaming environments, i.e., when metadata needs to be processed together with multimedia content on the fly. In this paper we present an efficient approach for transforming such kind of metadata which are encoded using MPEG's Binary Format for Metadata (BiM) without additional en-/decoding overheads, i.e., within the binary domain. Therefore, we have developed an event-based push parser for BiM encoded metadata which transforms the metadata by a limited set of processing instructions - based on traditional XML transformation techniques - operating on bit patterns instead of cost-intensive string comparisons.

  9. Towards Data Value-Level Metadata for Clinical Studies.

    Science.gov (United States)

    Zozus, Meredith Nahm; Bonner, Joseph

    2017-01-01

    While several standards for metadata describing clinical studies exist, comprehensive metadata to support traceability of data from clinical studies has not been articulated. We examine uses of metadata in clinical studies. We examine and enumerate seven sources of data value-level metadata in clinical studies inclusive of research designs across the spectrum of the National Institutes of Health definition of clinical research. The sources of metadata inform categorization in terms of metadata describing the origin of a data value, the definition of a data value, and operations to which the data value was subjected. The latter is further categorized into information about changes to a data value, movement of a data value, retrieval of a data value, and data quality checks, constraints or assessments to which the data value was subjected. The implications of tracking and managing data value-level metadata are explored.
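
    One way to picture the categorization is as a per-value record; the sketch below only illustrates the categories named above, and the field names are assumptions rather than any proposed standard.

      from dataclasses import dataclass, field
      from typing import List

      @dataclass
      class DataValueMetadata:
          origin: str                                              # where the value came from
          definition: str                                          # what the value means (variable, units, codelist)
          changes: List[str] = field(default_factory=list)         # edits applied to the value
          movements: List[str] = field(default_factory=list)       # transfers between systems
          retrievals: List[str] = field(default_factory=list)      # extracts/queries that used it
          quality_checks: List[str] = field(default_factory=list)  # checks, constraints, assessments

      v = DataValueMetadata(origin="CRF page 4, site 012", definition="systolic BP, mmHg")
      v.quality_checks.append("range check 60-250 passed")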

  10. Optimal reactive power and voltage control in distribution networks with distributed generators by fuzzy adaptive hybrid particle swarm optimisation method

    DEFF Research Database (Denmark)

    Chen, Shuheng; Hu, Weihao; Su, Chi

    2015-01-01

    A new and efficient methodology for optimal reactive power and voltage control of distribution networks with distributed generators based on fuzzy adaptive hybrid PSO (FAHPSO) is proposed. The objective is to minimize comprehensive cost, consisting of power loss and operation cost of transformers and capacitors, and subject to constraints such as minimum and maximum reactive power limits of distributed generators, maximum deviation of bus voltages, maximum allowable daily switching operation number (MADSON). Particle swarm optimization (PSO) is used to solve the corresponding mixed integer non-linear programming problem (MINLP) and the hybrid PSO method (HPSO), consisting of three PSO variants, is presented. In order to mitigate the local convergence problem, fuzzy adaptive inference is used to improve the searching process and the final fuzzy adaptive inference based hybrid PSO is proposed. The proposed...
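
    For orientation, the underlying particle swarm update is only a few lines; the sketch below shows one plain PSO step with a fixed inertia weight, whereas in the proposed FAHPSO the inertia weight and other coefficients would be adjusted by fuzzy inference and the hybrid-variant logic described in the paper.

      import random

      def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
          """One velocity/position update for a single particle (fixed inertia weight)."""
          new_v = [w * vi
                   + c1 * random.random() * (pb - xi)
                   + c2 * random.random() * (gb - xi)
                   for xi, vi, pb, gb in zip(x, v, pbest, gbest)]
          new_x = [xi + vi for xi, vi in zip(x, new_v)]
          return new_x, new_v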

  11. Recognizing the Effects of Comprehension Language Barriers and Adaptability Cultural Barriers on Selected First-Generation Undergraduate Vietnamese Students

    Science.gov (United States)

    Phan, Christian Phuoc-Lanh

    2009-01-01

    This investigation is about recognizing the effects of comprehension language barriers and adaptability cultural barriers on selected first-generation Vietnamese undergraduate students in the Puget Sound region of Washington State. Most Vietnamese students know little or no English before immigrating to the United States; as such, language and…

  12. Adaptive Hysteresis Band Current Control (AHB) with PLL of Grid Side Converter-Based Wind Power Generation System

    DEFF Research Database (Denmark)

    Guo, Yougui; Zeng, Ping; Li, Lijuan

    2011-01-01

    In this paper, adaptive hysteresis band current control (AHB CC) is used to control the three-phase grid currents by means of the grid-side converter in a wind power generation system. AHB achieves its goal in combination with a PLL (phase-locked loop). First, the mathematical models of each part are given...

  14. Migration of the ATLAS Metadata Interface (AMI) to Web 2.0 and cloud

    Science.gov (United States)

    Odier, J.; Albrand, S.; Fulachier, J.; Lambert, F.

    2015-12-01

    The ATLAS Metadata Interface (AMI), a mature application with more than 10 years of existence, is currently being adapted to recently available technologies. The web interfaces, which previously manipulated XML documents using XSL transformations, are being migrated to Asynchronous JavaScript (AJAX). Web development is considerably simplified by the introduction of a framework based on jQuery and Twitter Bootstrap. Finally, the AMI services are being migrated to an OpenStack cloud infrastructure.

  15. Migration of the ATLAS Metadata Interface (AMI) to Web 2.0 and cloud

    CERN Document Server

    Odier, Jerome; The ATLAS collaboration; Fulachier, Jerome; Lambert, Fabian

    2015-01-01

    The ATLAS Metadata Interface (AMI), a mature application with more than 10 years of existence, is currently being adapted to recently available technologies. The web interfaces, which previously manipulated XML documents using XSL transformations, are being migrated to Asynchronous JavaScript (AJAX). Web development is considerably simplified by the introduction of a framework based on jQuery and Twitter Bootstrap. Finally, the AMI services are being migrated to an OpenStack cloud infrastructure.

  16. ARIADNE: a Tracking System for Relationships in LHCb Metadata

    Science.gov (United States)

    Shapoval, I.; Clemencic, M.; Cattaneo, M.

    2014-06-01

    The data processing model of the LHCb experiment implies handling of an evolving set of heterogeneous metadata entities and relationships between them. The entities range from software and databases states to architecture specificators and software/data deployment locations. For instance, there is an important relationship between the LHCb Conditions Database (CondDB), which provides versioned, time dependent geometry and conditions data, and the LHCb software, which is the data processing applications (used for simulation, high level triggering, reconstruction and analysis of physics data). The evolution of CondDB and of the LHCb applications is a weakly-homomorphic process. It means that relationships between a CondDB state and LHCb application state may not be preserved across different database and application generations. These issues may lead to various kinds of problems in the LHCb production, varying from unexpected application crashes to incorrect data processing results. In this paper we present Ariadne - a generic metadata relationships tracking system based on the novel NoSQL Neo4j graph database. Its aim is to track and analyze many thousands of evolving relationships for cases such as the one described above, and several others, which would otherwise remain unmanaged and potentially harmful. The highlights of the paper include the system's implementation and management details, infrastructure needed for running it, security issues, first experience of usage in the LHCb production and potential of the system to be applied to a wider set of LHCb tasks.

  17. Toward More Transparent and Reproducible Omics Studies Through a Common Metadata Checklist and Data Publications

    Science.gov (United States)

    Özdemir, Vural; Martens, Lennart; Hancock, William; Anderson, Gordon; Anderson, Nathaniel; Aynacioglu, Sukru; Baranova, Ancha; Campagna, Shawn R.; Chen, Rui; Choiniere, John; Dearth, Stephen P.; Feng, Wu-Chun; Ferguson, Lynnette; Fox, Geoffrey; Frishman, Dmitrij; Grossman, Robert; Heath, Allison; Higdon, Roger; Hutz, Mara H.; Janko, Imre; Jiang, Lihua; Joshi, Sanjay; Kel, Alexander; Kemnitz, Joseph W.; Kohane, Isaac S.; Kolker, Natali; Lancet, Doron; Lee, Elaine; Li, Weizhong; Lisitsa, Andrey; Llerena, Adrian; MacNealy-Koch, Courtney; Marshall, Jean-Claude; Masuzzo, Paola; May, Amanda; Mias, George; Monroe, Matthew; Montague, Elizabeth; Mooney, Sean; Nesvizhskii, Alexey; Noronha, Santosh; Omenn, Gilbert; Rajasimha, Harsha; Ramamoorthy, Preveen; Sheehan, Jerry; Smarr, Larry; Smith, Charles V.; Smith, Todd; Snyder, Michael; Rapole, Srikanth; Srivastava, Sanjeeva; Stanberry, Larissa; Stewart, Elizabeth; Toppo, Stefano; Uetz, Peter; Verheggen, Kenneth; Voy, Brynn H.; Warnich, Louise; Wilhelm, Steven W.; Yandl, Gregory

    2014-01-01

    Biological processes are fundamentally driven by complex interactions between biomolecules. Integrated high-throughput omics studies enable multifaceted views of cells, organisms, or their communities. With the advent of new post-genomics technologies, omics studies are becoming increasingly prevalent; yet the full impact of these studies can only be realized through data harmonization, sharing, meta-analysis, and integrated research. These essential steps require consistent generation, capture, and distribution of metadata. To ensure transparency, facilitate data harmonization, and maximize reproducibility and usability of life sciences studies, we propose a simple common omics metadata checklist. The proposed checklist is built on the rich ontologies and standards already in use by the life sciences community. The checklist will serve as a common denominator to guide experimental design, capture important parameters, and be used as a standard format for stand-alone data publications. The omics metadata checklist and data publications will create efficient linkages between omics data and knowledge-based life sciences innovation and, importantly, allow for appropriate attribution to data generators and infrastructure science builders in the post-genomics era. We ask that the life sciences community test the proposed omics metadata checklist and data publications and provide feedback for their use and improvement. PMID:24456465

  18. Towards more transparent and reproducible omics studies through a common metadata checklist and data publications

    Energy Technology Data Exchange (ETDEWEB)

    Kolker, Eugene; Ozdemir, Vural; Martens, Lennart; Hancock, William S.; Anderson, Gordon A.; Anderson, Nathaniel; Aynacioglu, Sukru; Baranova, Ancha; Campagna, Shawn R.; Chen, Rui; Choiniere, John; Dearth, Stephen P.; Feng, Wu-Chun; Ferguson, Lynnette; Fox, Geoffrey; Frishman, Dmitrij; Grossman, Robert; Heath, Allison; Higdon, Roger; Hutz, Mara; Janko, Imre; Jiang, Lihua; Joshi, Sanjay; Kel, Alexander; Kemnitz, Joseph W.; Kohane, Isaac; Kolker, Natali; Lancet, Doron; Lee, Elaine; Li, Weizhong; Lisitsa, Andrey; Llerena, Adrian; MacNealy-Koch, Courtney; Marshall, Jean-Claude; Masuzzo, Paola; May, Amanda; Mias, George; Monroe, Matthew E.; Montague, Elizabeth; Mooney, Sean; Nesvizhskii, Alexey; Noronha, Santosh; Omenn, Gilbert; Rajasimha, Harsha; Ramamoorthy, Preveen; Sheehan, Jerry; Smarr, Larry; Smith, Charles V.; Smith, Todd; Snyder, Michael; Rapole, Srikanth; Srivastava, Sanjeeva; Stanberry, Larissa; Stewart, Elizabeth; Toppo, Stefano; Uetz, Peter; Verheggen, Kenneth; Voy, Brynn H.; Warnich, Louise; Wilhelm, Steven W.; Yandl, Gregory

    2014-01-01

    Biological processes are fundamentally driven by complex interactions between biomolecules. Integrated high-throughput omics studies enable multifaceted views of cells, organisms, or their communities. With the advent of new post-genomics technologies, omics studies are becoming increasingly prevalent; yet the full impact of these studies can only be realized through data harmonization, sharing, meta-analysis, and integrated research. These essential steps require consistent generation, capture, and distribution of metadata. To ensure transparency, facilitate data harmonization, and maximize reproducibility and usability of life sciences studies, we propose a simple common omics metadata checklist. The proposed checklist is built on the rich ontologies and standards already in use by the life sciences community. The checklist will serve as a common denominator to guide experimental design, capture important parameters, and be used as a standard format for stand-alone data publications. This omics metadata checklist and data publications will create efficient linkages between omics data and knowledge-based life sciences innovation and, importantly, allow for appropriate attribution to data generators and infrastructure science builders in the post-genomics era. We ask that the life sciences community test the proposed omics metadata checklist and data publications and provide feedback for their use and improvement.

  19. Python, Google Sheets, and the Thesaurus for Graphic Materials for Efficient Metadata Project Workflows

    Directory of Open Access Journals (Sweden)

    Jeremy Bartczak

    2017-01-01

    Full Text Available In 2017, the University of Virginia (U.Va.) will launch a two-year initiative to celebrate the bicentennial anniversary of the University’s founding in 1819. The U.Va. Library is participating in this event by digitizing some 20,000 photographs and negatives that document student life on the U.Va. grounds in the 1960s and 1970s. Metadata librarians and archivists are well-versed in the challenges associated with generating digital content and accompanying description within the context of limited resources. This paper describes how technology and new approaches to metadata design have enabled the University of Virginia’s Metadata Analysis and Design Department to rapidly and successfully generate accurate description for these digital objects. Python’s pandas module improves efficiency by cleaning and repurposing data recorded at digitization, while the lxml module builds MODS XML programmatically from CSV tables. A simplified technique for subject heading selection and assignment in Google Sheets provides a collaborative environment for streamlined metadata creation and data quality control.
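
    A compressed sketch of that pandas-plus-lxml workflow is shown below; the CSV layout, column names and output paths are assumptions for illustration, not the department's actual scripts.

      import pandas as pd
      from lxml import etree

      MODS_NS = "http://www.loc.gov/mods/v3"
      M = "{%s}" % MODS_NS

      # Hypothetical spreadsheet recorded at digitization time.
      df = pd.read_csv("digitization_log.csv").dropna(subset=["title"])
      df["title"] = df["title"].str.strip()

      for row in df.itertuples():
          mods = etree.Element(M + "mods", nsmap={None: MODS_NS})
          title_info = etree.SubElement(mods, M + "titleInfo")
          etree.SubElement(title_info, M + "title").text = row.title
          origin = etree.SubElement(mods, M + "originInfo")
          etree.SubElement(origin, M + "dateCreated").text = str(row.date)
          etree.ElementTree(mods).write(f"{row.filename}.mods.xml", pretty_print=True,
                                        xml_declaration=True, encoding="UTF-8")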

  20. Intersegmental coordination of cockroach locomotion: adaptive control of centrally coupled pattern generator circuits

    Directory of Open Access Journals (Sweden)

    Einat eFuchs

    2011-01-01

    Full Text Available Animals’ ability to demonstrate both stereotyped and adaptive locomotor behavior is largely dependent on the interplay between centrally-generated motor patterns and the sensory inputs that shape them. We utilized a combined experimental and theoretical approach to investigate the relative importance of CPG interconnections vs. intersegmental afferents in the cockroach: an animal that is renowned for rapid and stable locomotion. We simultaneously recorded coxal levator and depressor motor neurons (MN) in the thoracic ganglia of Periplaneta americana, while sensory feedback was completely blocked or allowed only from one intact stepping leg. In the absence of sensory feedback, we observed a coordination pattern with consistent phase relationship that shares similarities with a double tripod gait, suggesting central, feedforward control. This intersegmental coordination pattern was then reinforced in the presence of sensory feedback from a single stepping leg. Specifically, we report on transient stabilization of phase differences between activity recorded in the middle and hind thoracic MN following individual front-leg steps, suggesting a role for afferent phasic information in the coordination of motor circuits at the different hemiganglia. Data were further analyzed using stochastic models of coupled oscillators and maximum likelihood techniques to estimate underlying physiological parameters, such as uncoupled endogenous frequencies of hemisegmental oscillators and coupling strengths and directions. We found that descending ipsilateral coupling is stronger than ascending coupling, while left-right coupling in both the meso- and meta-thoracic ganglia appear to be symmetrical. We discuss our results in comparison with recent findings in stick insects that share similar neural and body architectures, and argue that the two species may exemplify opposite extremes of a fast-slow locomotion continuum, mediated through different intersegmental

  1. MFE revisited : part 1: adaptive grid-generation using the heat equation

    NARCIS (Netherlands)

    Zegeling, P.A.

    2001-01-01

    In this paper the moving-finite-element method (MFE) is used to solve the heat equation, with an artificial time component, to give a non-uniform (steady-state) grid that is adapted to a given profile. It is known from theory and experiments that MFE, applied to parabolic PDEs, gives adaptive grids which

  2. Adaptive Control and Parameter Identification of a Doubly-Fed Induction Generator for Wind Power

    Science.gov (United States)

    2011-09-01

    Ioannou and J. Sun, Robust Adaptive Control, Prentice Hall, 1996. [23] K. J. Astrom and B. Wittenmark, Adaptive Control, Second Edition, Dover, 1989. [25] J. J. E. Slotine and W. Li, Applied Nonlinear Control, Prentice Hall, New Jersey, 1991. [26] K. J. Astrom and B. Wittenmark…

  3. MFE revisited : part 1: adaptive grid-generation using the heat equation

    NARCIS (Netherlands)

    Zegeling, P.A.

    1996-01-01

    In this paper the moving-finite-element method (MFE) is used to solve the heat equation, with an artificial time component, to give a non-uniform (steady-state) grid that is adapted to a given profile. It is known from theory and experiments that MFE, applied to parabolic PDEs, gives adaptive grids which

  4. OSCAR/Surface: Metadata for the WMO Integrated Observing System WIGOS

    Science.gov (United States)

    Klausen, Jörg; Pröscholdt, Timo; Mannes, Jürg; Cappelletti, Lucia; Grüter, Estelle; Calpini, Bertrand; Zhang, Wenjian

    2016-04-01

    The World Meteorological Organization (WMO) Integrated Global Observing System (WIGOS) is a key WMO priority underpinning all WMO Programs and new initiatives such as the Global Framework for Climate Services (GFCS). It does this by better integrating WMO and co-sponsored observing systems, as well as partner networks. For this, an important aspect is the description of the observational capabilities by way of structured metadata. The 17th Congress of the World Meteorological Organization (Cg-17) has endorsed the semantic WIGOS metadata standard (WMDS) developed by the Task Team on WIGOS Metadata (TT-WMD). The standard comprises a set of metadata classes that are considered to be of critical importance for the interpretation of observations and the evolution of observing systems relevant to WIGOS. The WMDS serves all recognized WMO Application Areas, and its use for all internationally exchanged observational data generated by WMO Members is mandatory. The standard will be introduced in three phases between 2016 and 2020. The Observing Systems Capability Analysis and Review (OSCAR) platform operated by MeteoSwiss on behalf of WMO is the official repository of WIGOS metadata and an implementation of the WMDS. OSCAR/Surface deals with all surface-based observations from land, air and oceans, combining metadata managed by a number of complementary, more domain-specific systems (e.g., GAWSIS for the Global Atmosphere Watch, JCOMMOPS for the marine domain, the WMO Radar database). It is a modern, web-based client-server application with extended information search, filtering and mapping capabilities including a fully developed management console to add and edit observational metadata. In addition, a powerful application programming interface (API) is being developed to allow machine-to-machine metadata exchange. The API is based on an ISO/OGC-compliant XML schema for the WMDS using the Observations and Measurements (ISO19156) conceptual model. The purpose of the

  5. Testing Metadata Existence of Web Map Services

    Directory of Open Access Journals (Sweden)

    Jan Růžička

    2011-05-01

    Full Text Available For a general user it is quite common to use data sources available on the WWW. Almost all GIS software allows the use of data sources available via the Web Map Service (an ISO/OGC standard interface). The opportunity to use different sources and combine them brings a lot of problems that have been discussed many times at conferences and in journal papers. One of the problems is the non-existence of metadata for published sources. The question was: were the discussions effective? The article is partly based on a comparison of the metadata situation between 2007 and 2010. The second part of the article focuses only on the situation in 2010. The paper was created in the context of research on intelligent map systems that can be used for automatic or semi-automatic map creation or map evaluation.
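
    Such a metadata-existence test can be automated by fetching each service's GetCapabilities document and checking whether the basic description elements are filled in; the endpoint below is hypothetical and the element names follow the WMS capabilities schema.

      import requests
      import xml.etree.ElementTree as ET

      def wms_metadata_fields(base_url):
          """Report which basic service-metadata elements a WMS actually provides."""
          params = {"SERVICE": "WMS", "REQUEST": "GetCapabilities"}
          root = ET.fromstring(requests.get(base_url, params=params, timeout=30).content)
          found = {tag: False for tag in ("Title", "Abstract", "KeywordList", "ContactInformation")}
          for elem in root.iter():
              tag = elem.tag.split("}")[-1]              # ignore namespaces (WMS 1.1.1 vs 1.3.0)
              if tag in found and ((elem.text or "").strip() or len(elem) > 0):
                  found[tag] = True
          return found

      print(wms_metadata_fields("http://example.org/geoserver/wms"))  # hypothetical endpoint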

  6. PIMMS tools for capturing metadata about simulations

    Science.gov (United States)

    Pascoe, Charlotte; Devine, Gerard; Tourte, Gregory; Pascoe, Stephen; Lawrence, Bryan; Barjat, Hannah

    2013-04-01

    PIMMS (Portable Infrastructure for the Metafor Metadata System) provides a method for consistent and comprehensive documentation of modelling activities that enables the sharing of simulation data and model configuration information. The aim of PIMMS is to package the metadata infrastructure developed by Metafor for CMIP5 so that it can be used by climate modelling groups in UK Universities. PIMMS tools capture information about simulations from the design of experiments to the implementation of experiments via simulations that run models. PIMMS uses the Metafor methodology which consists of a Common Information Model (CIM), Controlled Vocabularies (CV) and software tools. PIMMS software tools provide for the creation and consumption of CIM content via a web services infrastructure and portal developed by the ES-DOC community. PIMMS metadata integrates with the ESGF data infrastructure via the mapping of vocabularies onto ESGF facets. There are three paradigms of PIMMS metadata collection: Model Intercomparison Projects (MIPs) where a standard set of questions is asked of all models which perform standard sets of experiments. Disciplinary level metadata collection where a standard set of questions is asked of all models but experiments are specified by users. Bespoke metadata creation where the users define questions about both models and experiments. Examples will be shown of how PIMMS has been configured to suit each of these three paradigms. In each case PIMMS allows users to provide additional metadata beyond that which is asked for in an initial deployment. The primary target for PIMMS is the UK climate modelling community where it is common practice to reuse model configurations from other researchers. This culture of collaboration exists in part because climate models are very complex with many variables that can be modified. Therefore it has become common practice to begin a series of experiments by using another climate model configuration as a starting

  7. Metadata Analysis at the Command-Line

    Directory of Open Access Journals (Sweden)

    Mark Phillips

    2013-01-01

    Full Text Available Over the past few years the University of North Texas Libraries' Digital Projects Unit (DPU has developed a set of metadata analysis tools, processes, and methodologies aimed at helping to focus limited quality control resources on the areas of the collection where they might have the most benefit. The key to this work lies in its simplicity: records harvested from OAI-PMH-enabled digital repositories are transformed into a format that makes them easily parsable using traditional Unix/Linux-based command-line tools. This article describes the overall methodology, introduces two simple open-source tools developed to help with the aforementioned harvesting and breaking, and provides example commands to demonstrate some common metadata analysis requests. All software tools described in the article are available with an open-source license via the author's GitHub account.
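
    The flatten-then-pipe idea can be imitated in a few lines: turn each harvested Dublin Core record into element<TAB>value lines that grep, sort and uniq can work on. The sketch below is an independent illustration, not the DPU's published tools.

      import sys
      import xml.etree.ElementTree as ET

      DC = "{http://purl.org/dc/elements/1.1/}"

      def flatten(record_xml):
          """Print one harvested record as tab-separated element/value lines."""
          root = ET.fromstring(record_xml)
          for elem in root.iter():
              if elem.tag.startswith(DC) and (elem.text or "").strip():
                  print(f"{elem.tag[len(DC):]}\t{elem.text.strip()}")

      flatten(sys.stdin.read())   # e.g.  python flatten.py < record.xml | sort | uniq -c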

  8. GraphMeta: Managing HPC Rich Metadata in Graphs

    Energy Technology Data Exchange (ETDEWEB)

    Dai, Dong; Chen, Yong; Carns, Philip; Jenkins, John; Zhang, Wei; Ross, Robert

    2016-01-01

    High-performance computing (HPC) systems face increasingly critical metadata management challenges, especially in the approaching exascale era. These challenges arise not only from exploding metadata volumes, but also from increasingly diverse metadata, which contains data provenance and arbitrary user-defined attributes in addition to traditional POSIX metadata. This ‘rich’ metadata is becoming critical to supporting advanced data management functionality such as data auditing and validation. In our prior work, we identified a graph-based model as a promising solution to uniformly manage HPC rich metadata due to its flexibility and generality. However, at the same time, graph-based HPC rich metadata management also introduces significant challenges to the underlying infrastructure. In this study, we first identify the challenges on the underlying infrastructure to support scalable, high-performance rich metadata management. Based on that, we introduce GraphMeta, a graph-based engine designed for this use case. It achieves performance scalability by introducing a new graph partitioning algorithm and a write-optimal storage engine. We evaluate GraphMeta under both synthetic and real HPC metadata workloads, compare it with other approaches, and demonstrate its advantages in terms of efficiency and usability for rich metadata management in HPC systems.
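
    The graph model itself is easy to picture; the toy sketch below uses networkx purely for illustration (GraphMeta is a purpose-built engine, and all node, edge and attribute names here are made up).

      import networkx as nx

      g = nx.DiGraph()
      g.add_node("/scratch/run42/output.h5", kind="file", size_bytes=8_589_934_592)
      g.add_node("sim_job_1881", kind="job", app="climate-model")
      g.add_node("alice", kind="user")
      g.add_edge("sim_job_1881", "/scratch/run42/output.h5", relation="wrote")   # provenance
      g.add_edge("alice", "sim_job_1881", relation="submitted")                  # user-defined

      # Provenance query: which job produced this file?
      producers = [u for u, _, d in g.in_edges("/scratch/run42/output.h5", data=True)
                   if d["relation"] == "wrote"]
      print(producers)   # ['sim_job_1881']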

  9. Evaluating non-relational storage technology for HEP metadata and meta-data catalog

    Science.gov (United States)

    Grigorieva, M. A.; Golosova, M. V.; Gubin, M. Y.; Klimentov, A. A.; Osipova, V. V.; Ryabinkin, E. A.

    2016-10-01

    Large-scale scientific experiments produce vast volumes of data. These data are stored, processed and analyzed in a distributed computing environment. The life cycle of an experiment is managed by specialized software such as Distributed Data Management and Workload Management Systems. In order to be interpreted and mined, experimental data must be accompanied by auxiliary metadata, which are recorded at each data processing step. Metadata describe scientific data and represent scientific objects or results of scientific experiments, allowing them to be shared by various applications, recorded in databases, or published via the Web. Processing and analysis of the constantly growing volume of auxiliary metadata is a challenging task, no simpler than the management and processing of the experimental data itself. Furthermore, metadata sources are often loosely coupled and may lead to end-user inconsistencies in combined information queries. To aggregate and synthesize a range of primary metadata sources, and enhance them with flexible schema-less addition of aggregated data, we are developing the Data Knowledge Base architecture serving as the intelligence behind GUIs and APIs.

  10. HIS Central and the Hydrologic Metadata Catalog

    Science.gov (United States)

    Whitenack, T.; Zaslavsky, I.; Valentine, D. W.

    2008-12-01

    The CUAHSI Hydrologic Information System project maintains a comprehensive workflow for publishing hydrologic observations data and registering them to the common Hydrologic Metadata Catalog. Once the data are loaded into a database instance conformant with the CUAHSI HIS Observations Data Model (ODM), the user configures an ODM web service template to point to the new database. After this, the hydrologic data become available via the standard CUAHSI HIS web service interface, which includes both data discovery (GetSites, GetVariables, GetSiteInfo, GetVariableInfo) and data retrieval (GetValues) methods. The observations data can then be further exposed via the global semantics-based search engine called Hydroseek. To register the published observations networks to the global search engine, users can now use the HIS Central application (new in HIS 1.1). With this online application, the WaterML-compliant web services can be submitted to the online catalog of data services, along with network metadata and a desired network symbology. Registering services to the HIS Central application triggers a harvester which uses the services to retrieve additional network metadata from the underlying ODM (information about stations, variables, and periods of record). The next step in the HIS Central application is mapping variable names from the newly registered network to the terms used in the global search ontology. Once these steps are completed, the new observations network is added to the map and becomes available for searching and querying. The number of observation networks registered to the Hydrologic Metadata Catalog at SDSC is constantly growing. At the time of submission, the catalog contains 51 registered networks, with an estimated 1.7 million stations.

  11. Metadata Management System for Healthcare Information Systems

    OpenAIRE

    Patil, Ketan Shripat

    2011-01-01

    The Utah Department of Health (UDOH) uses multiple and diverse healthcare information systems for managing, maintaining, and sharing health information. Keeping track of the important details about these information systems, such as operational details, data semantics, data exchange standards, and the personnel responsible for maintaining and managing them, is a monumental task with several limitations. This report describes the design and implementation of the Metadata Management System (MD...

  12. Publishers and Libraries: Sharing Metadata Between Communities

    OpenAIRE

    2014-01-01

    A project team dubbed the Author Names Project has been working on an ambitious effort that aims to have a major impact on how libraries and publishers exchange data in support of discovery of new authors and their scholarly and creative content. The project team has been developing a proof-of-concept system to enable publishers to exchange Author Names/Identity metadata with libraries. This web application, which we are calling OAQ (Online Author Questionnaire), is open source and will utili...

  13. Ontology-Based Search of Genomic Metadata.

    Science.gov (United States)

    Fernandez, Javier D; Lenzerini, Maurizio; Masseroli, Marco; Venco, Francesco; Ceri, Stefano

    2016-01-01

    The Encyclopedia of DNA Elements (ENCODE) is a huge and still expanding public repository of more than 4,000 experiments and 25,000 data files, assembled by a large international consortium since 2007; unknown biological knowledge can be extracted from these huge and largely unexplored data, leading to data-driven genomic, transcriptomic, and epigenomic discoveries. Yet, search of relevant datasets for knowledge discovery is limitedly supported: metadata describing ENCODE datasets are quite simple and incomplete, and not described by a coherent underlying ontology. Here, we show how to overcome this limitation, by adopting an ENCODE metadata searching approach which uses high-quality ontological knowledge and state-of-the-art indexing technologies. Specifically, we developed S.O.S. GeM (http://www.bioinformatics.deib.polimi.it/SOSGeM/), a system supporting effective semantic search and retrieval of ENCODE datasets. First, we constructed a Semantic Knowledge Base by starting with concepts extracted from ENCODE metadata, matched to and expanded on biomedical ontologies integrated in the well-established Unified Medical Language System. We prove that this inference method is sound and complete. Then, we leveraged the Semantic Knowledge Base to semantically search ENCODE data from arbitrary biologists' queries. This allows correctly finding more datasets than those extracted by a purely syntactic search, as supported by the other available systems. We empirically show the relevance of found datasets to the biologists' queries.

  14. The role of metadata in managing large environmental science datasets. Proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Melton, R.B.; DeVaney, D.M. [eds.] [Pacific Northwest Lab., Richland, WA (United States); French, J. C. [Univ. of Virginia, (United States)

    1995-06-01

    The purpose of this workshop was to bring together computer science researchers and environmental sciences data management practitioners to consider the role of metadata in managing large environmental sciences datasets. The objectives included: establishing a common definition of metadata; identifying categories of metadata; defining problems in managing metadata; and defining problems related to linking metadata with primary data.

  15. NERIES: Seismic Data Gateways and User Composed Datasets Metadata Management

    Science.gov (United States)

    Spinuso, Alessandro; Trani, Luca; Kamb, Linus; Frobert, Laurent

    2010-05-01

    One of the main objectives of the NERIES EC project is to establish and improve the networking of seismic waveform data exchange and access among four main data centers in Europe: INGV, GFZ, ORFEUS and IPGP. Besides the implementation of the data backbone, several investigations and developments have been conducted in order to offer users the data available from this network, either programmatically or interactively. One of the challenges is to understand how to enable users' activities such as discovering, aggregating, describing and sharing datasets, in order to decrease the replication of similar data queries towards the network and spare the data centers from having to guess at and create useful pre-packed products. We've started to transfer this task more and more towards the user community, where the users' composed data products could be extensively re-used. The main link to the data is represented by a centralized webservice (SeismoLink) acting like a single access point to the whole data network. Users can download either waveform data or seismic station inventories directly from their own software routines by connecting to this webservice, which routes the request to the data centers. The provenance of the data is maintained and transferred to the users in the form of URIs that identify the dataset and implicitly refer to the data provider. SeismoLink, combined with other webservices (e.g., the EMSC-QuakeML earthquake catalog service), is used by a community gateway such as the NERIES web portal (http://www.seismicportal.eu). Here the user interacts with a map-based portlet which allows the dynamic composition of a data product, binding a seismic event's parameters with a set of seismic stations. The requested data are collected by the back-end processes of the portal, preserved, and offered to the user in a personal data cart, where metadata can be generated interactively on demand. The metadata, expressed in RDF, can also be remotely ingested. They offer rating

  16. An emergent theory of digital library metadata enrich then filter

    CERN Document Server

    Stevens, Brett

    2015-01-01

    An Emergent Theory of Digital Library Metadata is a reaction to the current digital library landscape that is being challenged with growing online collections and changing user expectations. The theory provides the conceptual underpinnings for a new approach which moves away from expert defined standardised metadata to a user driven approach with users as metadata co-creators. Moving away from definitive, authoritative, metadata to a system that reflects the diversity of users’ terminologies, it changes the current focus on metadata simplicity and efficiency to one of metadata enriching, which is a continuous and evolving process of data linking. From predefined description to information conceptualised, contextualised and filtered at the point of delivery. By presenting this shift, this book provides a coherent structure in which future technological developments can be considered.

  17. A CONCEPTUAL METADATA FRAMEWORK FOR SPATIAL DATA WAREHOUSE

    Directory of Open Access Journals (Sweden)

    M.Laxmaiah

    2013-05-01

    Full Text Available Metadata represent information about the data to be stored in Data Warehouses, and are a mandatory element in building an efficient Data Warehouse. Metadata help in data integration, lineage, data quality, and populating transformed data into the data warehouse. Spatial data warehouses are based on spatial data mostly collected from Geographical Information Systems (GIS) and the transactional systems that are specific to an application or enterprise. Metadata design and deployment is the most critical phase in building a data warehouse, where it is mandatory to bring spatial information and data modeling together. In this paper, we present a holistic metadata framework that drives metadata creation for a spatial data warehouse. Theoretically, the proposed metadata framework improves the efficiency of accessing data in response to frequent queries on SDWs. In other words, the proposed framework decreases the response time of queries, and accurate information, including the spatial information, is fetched from the Data Warehouse

  18. Integrating Semantic Information in Metadata Descriptions for a Geoscience-wide Resource Inventory.

    Science.gov (United States)

    Zaslavsky, I.; Richard, S. M.; Gupta, A.; Valentine, D.; Whitenack, T.; Ozyurt, I. B.; Grethe, J. S.; Schachne, A.

    2016-12-01

    Integrating semantic information into legacy metadata catalogs is a challenging issue and so far has been mostly done on a limited scale. We present experience of CINERGI (Community Inventory of Earthcube Resources for Geoscience Interoperability), an NSF Earthcube Building Block project, in creating a large cross-disciplinary catalog of geoscience information resources to enable cross-domain discovery. The project developed a pipeline for automatically augmenting resource metadata, in particular generating keywords that describe metadata documents harvested from multiple geoscience information repositories or contributed by geoscientists through various channels including surveys and domain resource inventories. The pipeline examines available metadata descriptions using text parsing, vocabulary management and semantic annotation and graph navigation services of GeoSciGraph. GeoSciGraph, in turn, relies on a large cross-domain ontology of geoscience terms, which bridges several independently developed ontologies or taxonomies including SWEET, ENVO, YAGO, GeoSciML, GCMD, SWO, and CHEBI. The ontology content enables automatic extraction of keywords reflecting science domains, equipment used, geospatial features, measured properties, methods, processes, etc. We specifically focus on issues of cross-domain geoscience ontology creation, resolving several types of semantic conflicts among component ontologies or vocabularies, and constructing and managing facets for improved data discovery and navigation. The ontology and keyword generation rules are iteratively improved as pipeline results are presented to data managers for selective manual curation via a CINERGI Annotator user interface. We present lessons learned from applying CINERGI metadata augmentation pipeline to a number of federal agency and academic data registries, in the context of several use cases that require data discovery and integration across multiple earth science data catalogs of varying quality

  19. A model for generating several adaptive phenotypes from a single genetic event

    DEFF Research Database (Denmark)

    Møller, Henrik D; Andersen, Kaj S; Regenberg, Birgitte

    2013-01-01

    Microbial populations adapt to environmental fluctuations through random switching of fitness-related traits in individual cells. This increases the likelihood that a subpopulation will be adaptive in a future milieu. However, populations are particularly challenged when several environmental factors … energy recruitment by trehalose mobilization and, in some cases, adherent biofilm growth. Our proposed model of a hub-switch locus enhances the bet-hedging model of population dynamics.

  20. ARIADNE: a Tracking System for Relationships in LHCb Metadata

    CERN Document Server

    Shapoval, I; Cattaneo, M

    2014-01-01

    The data processing model of the LHCb experiment implies handling of an evolving set of heterogeneous metadata entities and relationships between them. The entities range from software and databases states to architecture specificators and software/data deployment locations. For instance, there is an important relationship between the LHCb Conditions Database (CondDB), which provides versioned, time dependent geometry and conditions data, and the LHCb software, which is the data processing applications (used for simulation, high level triggering, reconstruction and analysis of physics data). The evolution of CondDB and of the LHCb applications is a weakly-homomorphic process. It means that relationships between a CondDB state and LHCb application state may not be preserved across different database and application generations. These issues may lead to various kinds of problems in the LHCb production, varying from unexpected application crashes to incorrect data processing results. In this paper we present Ari...

  1. Aggregation and Linking of Observational Metadata in the ADS

    CERN Document Server

    Accomazzi, Alberto; Henneken, Edwin A; Grant, Carolyn S; Thompson, Donna M; Chyla, Roman; Holachek, Alexandra; Elliott, Jonathan

    2016-01-01

    We discuss current efforts behind the curation of observing proposals, archive bibliographies, and data links in the NASA Astrophysics Data System (ADS). The primary data in the ADS is the bibliographic content from scholarly articles in Astronomy and Physics, which ADS aggregates from publishers, arXiv and conference proceeding sites. This core bibliographic information is then further enriched by ADS via the generation of citations and usage data, and through the aggregation of external resources from astronomy data archives and libraries. Important sources of such additional information are the metadata describing observing proposals and high level data products, which, once ingested in ADS, become easily discoverable and citeable by the science community. Bibliographic studies have shown that the integration of links between data archives and the ADS provides greater visibility to data products and increased citations to the literature associated with them.

  2. Review of small-angle coronagraphic techniques in the wake of ground-based second-generation adaptive optics systems

    CERN Document Server

    Mawet, Dimitri; Lawson, Peter; Mugnier, Laurent; Traub, Wesley; Boccaletti, Anthony; Trauger, John; Gladysz, Szymon; Serabyn, Eugene; Milli, Julien; Belikov, Ruslan; Kasper, Markus; Baudoz, Pierre; Macintosh, Bruce; Marois, Christian; Oppenheimer, Ben; Barrett, Harrison; Beuzit, Jean-Luc; Devaney, Nicolas; Girard, Julien; Guyon, Olivier; Krist, John; Mennesson, Bertrand; Mouillet, David; Murakami, Naoshi; Poyneer, Lisa; Savransky, Dmitri; Vérinaud, Christophe; Wallace, James K

    2012-01-01

    Small-angle coronagraphy is technically and scientifically appealing because it enables the use of smaller telescopes, allows covering wider wavelength ranges, and potentially increases the yield and completeness of circumstellar environment - exoplanets and disks - detection and characterization campaigns. However, opening up this new parameter space is challenging. Here we will review the four posts of high contrast imaging and their intricate interactions at very small angles (within the first 4 resolution elements from the star). The four posts are: choice of coronagraph, optimized wavefront control, observing strategy, and post-processing methods. After detailing each of the four foundations, we will present the lessons learned from the 10+ years of operations of zeroth and first-generation adaptive optics systems. We will then tentatively show how informative the current integration of second-generation adaptive optics system is, and which lessons can already be drawn from this fresh experience. Then, w...

  3. From Gutenberg to Berners-Lee: the Need for Metadata

    OpenAIRE

    Simons, Eduard

    2010-01-01

    Keynote at the 1st Workshop on CRIS, CERIF and Institutional Repositories (23 slides). Metadata allow us to describe and classify research information in a systematic way, and as such they are indispensable for searching and finding academic publications and other results of research. In order to make full use of the information discovery potential of the Internet, the 'formal' and 'content' metadata commonly used in repositories should be supplemented with the 'context' metadata as stored...

  4. A metadata-driven approach to data repository design.

    Science.gov (United States)

    Harvey, Matthew J; McLean, Andrew; Rzepa, Henry S

    2017-01-01

    The design and use of a metadata-driven data repository for research data management is described. Metadata is collected automatically during the submission process whenever possible and is registered with DataCite in accordance with their current metadata schema, in exchange for a persistent digital object identifier. Two examples of data preview are illustrated, including the demonstration of a method for integration with commercial software that confers rich domain-specific data analytics without introducing customisation into the repository itself.
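
    A hedged sketch of the registration step described above follows: metadata collected at submission time are assembled into a DataCite-style payload in exchange for a persistent DOI. The endpoint, credentials and attribute names are illustrative only; the current DataCite metadata schema is authoritative.

```python
# Minimal sketch: assemble DataCite-style metadata collected at submission time
# and (optionally) register it for a DOI. Endpoint, credentials and attribute
# names are illustrative; consult the current DataCite schema for the real fields.
import json
# import requests  # needed only for the commented-out registration call below

def build_datacite_payload(title, creators, publisher, year, doi):
    return {
        "data": {
            "type": "dois",
            "attributes": {
                "doi": doi,
                "titles": [{"title": title}],
                "creators": [{"name": c} for c in creators],
                "publisher": publisher,
                "publicationYear": year,
                "types": {"resourceTypeGeneral": "Dataset"},
            },
        }
    }

payload = build_datacite_payload(
    title="NMR spectra for compound 42",
    creators=["Harvey, Matthew J."],
    publisher="Example Repository",
    year=2017,
    doi="10.5072/example-doi",  # 10.5072 is a conventional test prefix
)
# resp = requests.post("https://api.test.datacite.org/dois",
#                      json=payload, auth=("REPO_ID", "PASSWORD"),
#                      headers={"Content-Type": "application/vnd.api+json"})
print(json.dumps(payload, indent=2))
```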

  5. Semantic Web Technologies for the Adaptive Web

    DEFF Research Database (Denmark)

    Dolog, Peter

    2007-01-01

    Ontologies and reasoning are the key terms brought into focus by the semantic web community. Formal representation of ontologies in a common data model on the web can be taken as a foundation for adaptive web technologies as well. This chapter describes how ontologies shared on the semantic web...... means for deciding which links to show, annotate, hide, generate, and reorder. The semantic web technologies provide means to formalize the domain ontologies and metadata created from them. The formalization enables reasoning for personalization decisions. This chapter describes which components...... are crucial to be formalized by the semantic web ontologies for adaptive web. We use examples from an eLearning domain to illustrate the principles which are broadly applicable to any information domain on the web....

  6. Grid generation and adaptation for the Direct Simulation Monte Carlo Method. [for complex flows past wedges and cones

    Science.gov (United States)

    Olynick, David P.; Hassan, H. A.; Moss, James N.

    1988-01-01

    A grid generation and adaptation procedure based on the method of transfinite interpolation is incorporated into the Direct Simulation Monte Carlo Method of Bird. In addition, time is advanced based on a local criterion. The resulting procedure is used to calculate steady flows past wedges and cones. Five chemical species are considered. In general, the modifications result in a reduced computational effort. Moreover, preliminary results suggest that the simulation method is time step dependent if requirements on cell sizes are not met.
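
    The grid generation step rests on transfinite interpolation, which blends the four boundary curves of a block into an interior grid. A minimal sketch of the generic 2-D formula follows; the wedge geometry and the adaptation criterion of the paper are not reproduced.

```python
# Hedged sketch of 2-D transfinite interpolation: given parametrizations of the
# four boundary curves, fill the interior with an algebraic grid.
import numpy as np

def tfi_grid(bottom, top, left, right, ni=21, nj=11):
    """bottom/top/left/right map a parameter in [0, 1] to an (x, y) point."""
    xi = np.linspace(0.0, 1.0, ni)
    eta = np.linspace(0.0, 1.0, nj)
    grid = np.zeros((ni, nj, 2))
    c00, c10 = np.array(bottom(0.0)), np.array(bottom(1.0))
    c01, c11 = np.array(top(0.0)), np.array(top(1.0))
    for i, s in enumerate(xi):
        for j, t in enumerate(eta):
            grid[i, j] = ((1 - t) * np.array(bottom(s)) + t * np.array(top(s))
                          + (1 - s) * np.array(left(t)) + s * np.array(right(t))
                          - ((1 - s) * (1 - t) * c00 + s * (1 - t) * c10
                             + (1 - s) * t * c01 + s * t * c11))
    return grid

# Example: grid between a 10-degree wedge surface and a flat outer boundary.
wedge = lambda s: (s, 0.176 * s)                       # lower boundary along the wedge
outer = lambda s: (s, 1.0)                             # upper (far-field) boundary
inflow = lambda t: (0.0, t)                            # left boundary
outflow = lambda t: (1.0, 0.176 + t * (1.0 - 0.176))   # right boundary
g = tfi_grid(wedge, outer, inflow, outflow)
print(g.shape)  # (21, 11, 2)
```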

  7. A design of vertical axis wind power generating system combined with Darrieus-Savonius for adaptation of variable wind speed

    Energy Technology Data Exchange (ETDEWEB)

    Seo, Young Taek; Oh, Chul Soo [Kyung Pook National University, Taegu (Korea, Republic of)

    1996-02-01

    This paper presents a design of a vertical-axis Darrieus wind turbine combined with a Savonius turbine for a wind-power generating system adapted to variable wind speed. The wind turbine consists of two troposkien blades and four Savonius blades. The Darrieus turbine is designed with a diameter of 9.4[m], a chord length of 380[mm] and a tip speed ratio of 5. The Savonius turbine is designed with a diameter of 1.8[m], a height of 2[m] and a tip speed ratio of 0.95. The design is based on a rated wind speed of 10[m/s] and a turbine speed of 101.4[rpm]. The generated power is estimated at a maximum of 20[kWh], and is fed to the commercial power line by means of a three-phase synchronous generator-inverter system. The generating system is designed for operation under VSVF (variable speed variable frequency) conditions with a constant voltage system. (author). 11 refs., 14 figs.
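
    The quoted design figures are internally consistent: with a tip speed ratio of 5, a rated wind speed of 10 m/s and a Darrieus radius of 4.7 m, the rotor speed works out to roughly the stated 101.4 rpm, as the short check below shows.

```python
# Quick consistency check of the quoted design numbers: for a tip speed ratio
# lambda = omega * R / V, the rotor speed in rpm follows directly.
import math

tip_speed_ratio = 5.0      # Darrieus design value from the record
wind_speed = 10.0          # rated wind speed [m/s]
radius = 9.4 / 2.0         # Darrieus rotor radius [m]

omega = tip_speed_ratio * wind_speed / radius   # [rad/s]
rpm = omega * 60.0 / (2.0 * math.pi)
print(f"rotor speed ~ {rpm:.1f} rpm")           # ~101.6 rpm, close to the quoted 101.4 rpm
```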

  8. Content Metadata Standards for Marine Science: A Case Study

    Science.gov (United States)

    Riall, Rebecca L.; Marincioni, Fausto; Lightsom, Frances L.

    2004-01-01

    The U.S. Geological Survey developed a content metadata standard to meet the demands of organizing electronic resources in the marine sciences for a broad, heterogeneous audience. These metadata standards are used by the Marine Realms Information Bank project, a Web-based public distributed library of marine science from academic institutions and government agencies. The development and deployment of this metadata standard serve as a model, complete with lessons about mistakes, for the creation of similarly specialized metadata standards for digital libraries.

  9. CMO: Cruise Metadata Organizer for JAMSTEC Research Cruises

    Science.gov (United States)

    Fukuda, K.; Saito, H.; Hanafusa, Y.; Vanroosebeke, A.; Kitayama, T.

    2011-12-01

    JAMSTEC's Data Research Center for Marine-Earth Sciences manages and distributes a wide variety of observational data and samples obtained from JAMSTEC research vessels and deep sea submersibles. Generally, metadata are essential to identify how data and samples were obtained. In JAMSTEC, cruise metadata include cruise information such as cruise ID, name of vessel and research theme, and diving information such as dive number, name of submersible and position of diving point. They are submitted by the chief scientists of research cruises in Microsoft Excel® spreadsheet format and registered into a data management database to confirm receipt of observational data files, cruise summaries, and cruise reports. The cruise metadata are also published via "JAMSTEC Data Site for Research Cruises" within two months after the end of a cruise. Furthermore, these metadata are distributed with observational data, images and samples via several data and sample distribution websites after a publication moratorium period. However, there are two operational issues in the metadata publishing process. One is duplication of effort and asynchronous metadata across multiple distribution websites, due to manual metadata entry into individual websites by administrators. The other is inconsistent data types and representations of metadata across websites. To solve those problems, we have developed a cruise metadata organizer (CMO) which allows cruise metadata to be propagated from the data management database to several distribution websites. CMO is comprised of three components: an Extensible Markup Language (XML) database, Enterprise Application Integration (EAI) software, and a web-based interface. The XML database is used because of its flexibility for any change of metadata. Daily differential updates of metadata from the data management database to the XML database are automatically processed via the EAI software. Some metadata are entered into the XML database using the web

  10. Question Generation and Adaptation Using a Bayesian Network of the Learner’s Achievements

    NARCIS (Netherlands)

    Wißner, M.; Linnebank, F.; Liem, J.; Bredeweg, B.; André, E.; Lane, H.C.; Yacef, K.; Mostow, J.; Pavlik, P.

    2013-01-01

    This paper presents a domain independent question generation and interaction procedure that automatically generates multiple-choice questions for conceptual models created with Qualitative Reasoning vocabulary. A Bayesian Network is deployed that captures the learning progress based on the answers

  11. Web Log Pre-processing and Analysis for Generation of Learning Profiles in Adaptive E-learning

    Directory of Open Access Journals (Sweden)

    Radhika M. Pai

    2016-04-01

    Full Text Available Adaptive E-learning Systems (AESs) enhance the efficiency of online courses in education by providing personalized content and user interfaces that change according to learners' requirements and usage patterns. This paper presents an approach to generate a learning profile for each learner, which helps to identify learning styles and provide an Adaptive User Interface that includes adaptive learning components and learning material. The proposed method analyzes the captured web usage data to identify the learning profile of the learners. The learning profiles are identified by an algorithmic approach that is based on the frequency of accessing the materials and the time spent on the various learning components on the portal. The captured log data is pre-processed and converted into standard XML format to generate learners' sequence data corresponding to the different sessions and time spent. The learning style model adopted in this approach is the Felder-Silverman Learning Style Model (FSLSM). This paper also presents the analysis of learners' activities, pre-processed XML files and generated sequences.

  12. Web Log Pre-processing and Analysis for Generation of Learning Profiles in Adaptive E-learning

    Directory of Open Access Journals (Sweden)

    Radhika M. Pai

    2016-03-01

    Full Text Available Adaptive E-learning Systems (AESs) enhance the efficiency of online courses in education by providing personalized content and user interfaces that change according to learners' requirements and usage patterns. This paper presents an approach to generate a learning profile for each learner, which helps to identify learning styles and provide an Adaptive User Interface that includes adaptive learning components and learning material. The proposed method analyzes the captured web usage data to identify the learning profile of the learners. The learning profiles are identified by an algorithmic approach that is based on the frequency of accessing the materials and the time spent on the various learning components on the portal. The captured log data is pre-processed and converted into standard XML format to generate learners' sequence data corresponding to the different sessions and time spent. The learning style model adopted in this approach is the Felder-Silverman Learning Style Model (FSLSM). This paper also presents the analysis of learners' activities, pre-processed XML files and generated sequences.
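
    A hedged sketch of the pre-processing step described above follows: raw web-log hits are grouped into per-learner sessions and turned into (component, time-spent) sequences. The field names and the 30-minute session timeout are assumptions, and the FSLSM mapping itself is not reproduced.

```python
# Hedged sketch: sessionize raw web-log hits per learner and derive
# (component, seconds-spent) sequences. Field names and timeout are assumed.
from collections import defaultdict
from datetime import datetime, timedelta

SESSION_TIMEOUT = timedelta(minutes=30)

log = [  # (learner, timestamp, learning component accessed)
    ("u1", "2016-01-10 09:00:00", "video"),
    ("u1", "2016-01-10 09:04:00", "example"),
    ("u1", "2016-01-10 09:12:00", "exercise"),
    ("u1", "2016-01-10 11:00:00", "text"),   # new session (gap > timeout)
]

def sessionize(entries):
    sessions = defaultdict(list)          # learner -> list of sessions
    for learner, ts, component in entries:
        t = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S")
        user_sessions = sessions[learner]
        if not user_sessions or t - user_sessions[-1][-1][0] > SESSION_TIMEOUT:
            user_sessions.append([])
        user_sessions[-1].append((t, component))
    # convert each session into (component, seconds spent until the next hit)
    result = {}
    for learner, user_sessions in sessions.items():
        seqs = []
        for s in user_sessions:
            seq = [(c, int((s[i + 1][0] - t).total_seconds()))
                   for i, (t, c) in enumerate(s[:-1])] + [(s[-1][1], 0)]
            seqs.append(seq)
        result[learner] = seqs
    return result

print(sessionize(log))
```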

  13. Power-generation system vulnerability and adaptation to changes in climate and water resources

    NARCIS (Netherlands)

    Vliet, Van Michelle T.H.; Wiberg, David; Leduc, Sylvain; Riahi, Keywan

    2016-01-01

    Hydropower and thermoelectric power together contribute 98% of the world's electricity generation at present. These power-generating technologies both strongly depend on water availability, and water temperature for cooling also plays a critical role for thermoelectric power generation. Clima

  14. Adaptive Controller for Vehicle Active Suspension Generated Through LMS Filter Algorithms

    Institute of Scientific and Technical Information of China (English)

    SUN Jianmin; SHU Gequn

    2006-01-01

    The least mean squares (LMS) adaptive filter algorithm was used in an active suspension system. By adjusting the weights of the adaptive filter, the minimum quadratic performance index was obtained. For a two-degree-of-freedom vehicle suspension model, an LMS adaptive controller was designed. The acceleration of the sprung mass, the dynamic tyre load between wheels and road, and the dynamic deflection between the sprung and unsprung masses were chosen as the evaluation targets of suspension performance. For the LMS adaptive control suspension, compared with a passive suspension, the power spectral density of the sprung mass acceleration under the road input model decreased 8-10 times in the high-frequency and low-frequency resonance bands. The simulation results show that LMS adaptive control is simple and remarkably effective. It further proves that the active control suspension system can improve both riding comfort and handling safety in various operating conditions, and that the method is suitable for active control of the suspension system.
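
    For reference, the textbook LMS update that such a controller is built around is sketched below; the suspension model and the choice of reference and desired signals used in the paper are not reproduced.

```python
# Generic LMS adaptive filter (x: reference input, d: desired signal).
# This is the textbook algorithm only, not the paper's suspension controller.
import numpy as np

def lms_filter(x, d, n_taps=8, mu=0.01):
    w = np.zeros(n_taps)                 # adaptive weights
    y = np.zeros(len(x))                 # filter output
    e = np.zeros(len(x))                 # error signal driving the adaptation
    for n in range(n_taps, len(x)):
        x_vec = x[n - n_taps:n][::-1]    # most recent samples first
        y[n] = w @ x_vec
        e[n] = d[n] - y[n]
        w += 2 * mu * e[n] * x_vec       # steepest-descent weight update
    return y, e, w

# Example: adapt to a delayed, scaled copy of a noisy road-like input.
rng = np.random.default_rng(0)
x = rng.standard_normal(2000)
d = 0.5 * np.roll(x, 3)
_, e, _ = lms_filter(x, d)
print(f"residual error power: {np.mean(e[-500:] ** 2):.4f}")
```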

  15. Informing and Evaluating a Metadata Initiative: Usability and Metadata Studies in Minnesota's "Foundations" Project.

    Science.gov (United States)

    Quam, Eileen

    2001-01-01

    Explains Minnesota's Foundations Project, a multiagency collaboration to improve access to environmental and natural resources information. Discusses the use of the Dublin core metadata standard for Web resources and describes three studies that included needs assessment, Bridges Web site user interface, and usability of controlled vocabulary in…

  16. Metadata: A user's view

    Energy Technology Data Exchange (ETDEWEB)

    Bretherton, F.P. [Univ. of Wisconsin, Madison, WI (United States); Singley, P.T. [Oak Ridge National Lab., TN (United States)

    1994-12-31

    An analysis is presented of the uses of metadata from four aspects of database operations: (1) search, query, retrieval; (2) ingest, quality control, processing; (3) application-to-application transfer; (4) storage, archive. Typical degrees of database functionality, ranging from simple file retrieval to interdisciplinary global query with metadatabase-user dialog and involving many distributed autonomous databases, are ranked in approximate order of increasing sophistication of the required knowledge representation. An architecture is outlined for implementing such functionality in many different disciplinary domains, utilizing a variety of off-the-shelf database management subsystems and processor software, each specialized to a different abstract data model.

  17. Metadata For Identity Management of Population Registers

    Directory of Open Access Journals (Sweden)

    Olivier Glassey

    2011-04-01

    Full Text Available A population register is an inventory of residents within a country, with their characteristics (date of birth, sex, marital status, etc.) and other socio-economic data, such as occupation or education. However, data on population are also stored in numerous other public registers such as tax, land, building and housing, military, foreigners, vehicles, etc. Altogether they contain vast amounts of personal and sensitive information. Access to public information is granted by law in many countries, but this transparency is generally subject to tensions with data protection laws. This paper proposes a framework to analyze data access (or protection) requirements, as well as a model of metadata for data exchange.

  18. Information resource description creating and managing metadata

    CERN Document Server

    Hider, Philip

    2012-01-01

    An overview of the field of information organization that examines resource description as both a product and process of the contemporary digital environment. This timely book employs the unifying mechanism of the semantic web and the resource description framework to integrate the various traditions and practices of information and knowledge organization. Uniquely, it covers both the domain-specific traditions and practices and the practices of the 'metadata movement' through a single lens: that of resource description in the broadest, semantic web sense. This approach more readily accommodate

  19. An Approach for Automatic Generation of Adaptive Hypermedia in Education with Multilingual Knowledge Discovery Techniques

    Science.gov (United States)

    Alfonseca, Enrique; Rodriguez, Pilar; Perez, Diana

    2007-01-01

    This work describes a framework that combines techniques from Adaptive Hypermedia and Natural Language processing in order to create, in a fully automated way, on-line information systems from linear texts in electronic format, such as textbooks. The process is divided into two steps: an "off-line" processing step, which analyses the source text,…

  20. Fetal hemodynamic adaptive changes related to intrauterine growth: the Generation R Study

    NARCIS (Netherlands)

    B.O. Verburg (Bero Olof); V.W.V. Jaddoe (Vincent); J.W. Wladimiroff (Juriy); A. Hofman (Albert); J.C.M. Witteman (Jacqueline); R.P.M. Steegers-Theunissen (Régine)

    2008-01-01

    Background: It has been suggested that an adverse fetal environment increases susceptibility to hypertension and cardiovascular disease in adult life. This increased risk may result from suboptimal development of the heart and main arteries in utero and from adaptive cardiovascular change

  1. Analysis list - ChIP-Atlas | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available A list of metadata used to generate file paths of analyses provided on ChIP-Atlas.

  2. Ready to put metadata on the post-2015 development agenda? Linking data publications to responsible innovation and science diplomacy.

    Science.gov (United States)

    Özdemir, Vural; Kolker, Eugene; Hotez, Peter J; Mohin, Sophie; Prainsack, Barbara; Wynne, Brian; Vayena, Effy; Coşkun, Yavuz; Dereli, Türkay; Huzair, Farah; Borda-Rodriguez, Alexander; Bragazzi, Nicola Luigi; Faris, Jack; Ramesar, Raj; Wonkam, Ambroise; Dandara, Collet; Nair, Bipin; Llerena, Adrián; Kılıç, Koray; Jain, Rekha; Reddy, Panga Jaipal; Gollapalli, Kishore; Srivastava, Sanjeeva; Kickbusch, Ilona

    2014-01-01

    Metadata refer to descriptions about data or, as some put it, "data about data." Metadata capture what happens on the backstage of science, on the trajectory from study conception, design, funding, implementation, and analysis to reporting. Definitions of metadata vary, but they can include the context information surrounding the practice of science, or data generated as one uses a technology, including transactional information about the user. As the pursuit of knowledge broadens in the 21st century from traditional "science of whats" (data) to include "science of hows" (metadata), we analyze the ways in which metadata serve as a catalyst for responsible and open innovation, and by extension, science diplomacy. In 2015, the United Nations Millennium Development Goals (MDGs) will formally come to an end. Therefore, we propose that metadata, as an ingredient of responsible innovation, can help achieve the Sustainable Development Goals (SDGs) on the post-2015 agenda. Such responsible innovation, as a collective learning process, has become a key component, for example, of the European Union's 80 billion Euro Horizon 2020 R&D Program from 2014-2020. Looking ahead, OMICS: A Journal of Integrative Biology, is launching an initiative for a multi-omics metadata checklist that is flexible yet comprehensive, and will enable more complete utilization of single and multi-omics data sets through data harmonization and greater visibility and accessibility. The generation of metadata that shed light on how omics research is carried out, by whom and under what circumstances, will create an "intervention space" for integration of science with its socio-technical context. This will go a long way to addressing responsible innovation for a fairer and more transparent society. If we believe in science, then such reflexive qualities and commitments attained by availability of omics metadata are preconditions for a robust and socially attuned science, which can then remain broadly

  3. An Adaptive Neuro-Fuzzy Inference System for Sea Level Prediction Considering Tide-Generating Forces and Oceanic Thermal Expansion

    Directory of Open Access Journals (Sweden)

    Li-Ching Lin; Hsien-Kuo Chang

    2008-01-01

    Full Text Available The paper presents an adaptive neuro-fuzzy inference system for predicting sea level, considering tide-generating forces and oceanic thermal expansion and assuming a model of sea level dependence on sea surface temperature. The proposed model, named TGFT-FN (Tide-Generating Forces considering sea surface Temperature and Fuzzy Neuro-network) system, is applied to predict tides at five tide gauge sites located in Taiwan and has a root-mean-square error of about 7.3-15.0 cm. The capability of the TGFT-FN model in sea level prediction is superior to that of the previous TGF-NN model developed by Chang and Lin (2006), which considers the tide-generating forces only. The TGFT-FN model is employed to train and predict the sea level of Hua-Lien station, and is also appropriate for the same prediction at tide gauge sites next to Hua-Lien station.

  4. Investigation of methods for user adapted visualisation of information in a hypermedia generation system

    NARCIS (Netherlands)

    J. Werner

    2005-01-01

    A literature review of user interaction to support creative processes is given. A design for an authoring system for semi-automatically generated hypermedia presentations is developed. The system designed is called SampLe (a Semi-Automatic Multimedia Presentation generation Environment)

  5. Investigation of methods for user adapted visualisation of information in a hypermedia generation system

    NARCIS (Netherlands)

    Werner, J.

    2005-01-01

    A literature review of user interaction to support creative processes is given. A design for an authoring system for semi-automatically generated hypermedia presentations is developed. The system designed is called SampLe (a Semi-Automatic Multimedia Presentation generation Environment)

  6. Adapting to a new workplace : Generational differences in work needs and values

    NARCIS (Netherlands)

    Lub, X.D.; Blomme, R.J.

    2009-01-01

    Hospitality businesses are experiencing high staff turnover and seem to have particular problems retaining a new generation of employees. This study explores generational differences as a possible reason for this problem. This phenomenon is widely reported in the popular press but has received very

  7. Semantic Web Technologies for the Adaptive Web

    DEFF Research Database (Denmark)

    Dolog, Peter

    2007-01-01

    Ontologies and reasoning are the key terms brought into focus by the semantic web community. Formal representation of ontologies in a common data model on the web can be taken as a foundation for adaptive web technologies as well. This chapter describes how ontologies shared on the semantic web...... provide conceptualization for the links which are a main vehicle to access information on the web. The subject domain ontologies serve as constraints for generating only those links which are relevant for the domain a user is currently interested in. Furthermore, user model ontologies provide additional...... means for deciding which links to show, annotate, hide, generate, and reorder. The semantic web technologies provide means to formalize the domain ontologies and metadata created from them. The formalization enables reasoning for personalization decisions. This chapter describes which components...

  8. Forensic devices for activism: Metadata tracking and public proof

    NARCIS (Netherlands)

    van der Velden, L.

    2015-01-01

    The central topic of this paper is a mobile phone application, ‘InformaCam’, which turns metadata from a surveillance risk into a method for the production of public proof. InformaCam allows one to manage and delete metadata from images and videos in order to diminish surveillance risks related to o

  9. Metadata as a means for correspondence on digital media

    NARCIS (Netherlands)

    Stouffs, R.; Kooistra, J.; Tuncer, B.

    2004-01-01

    Metadata derive their action from their association to data and from the relationship they maintain with this data. An interpretation of this action is that the metadata lays claim to the data collection to which it is associated, where the claim is successful if the data collection gains quality as

  10. Learning Object Metadata in a Web-Based Learning Environment

    NARCIS (Netherlands)

    Avgeriou, Paris; Koutoumanos, Anastasios; Retalis, Symeon; Papaspyrou, Nikolaos

    2000-01-01

    The plethora and variance of learning resources embedded in modern web-based learning environments require a mechanism to enable their structured administration. This goal can be achieved by defining metadata on them and constructing a system that manages the metadata in the context of the learning

  11. Shared Geospatial Metadata Repository for Ontario University Libraries: Collaborative Approaches

    Science.gov (United States)

    Forward, Erin; Leahey, Amber; Trimble, Leanne

    2015-01-01

    Successfully providing access to special collections of digital geospatial data in academic libraries relies upon complete and accurate metadata. Creating and maintaining metadata using specialized standards is a formidable challenge for libraries. The Ontario Council of University Libraries' Scholars GeoPortal project, which created a shared…

  12. Metadata in the Collaboratory for Multi-Scale Chemical Science

    Energy Technology Data Exchange (ETDEWEB)

    Pancerella, Carmen M.; Hewson, John; Koegler, Wendy S.; Leahy, David; Lee, Michael; Rahn, Larry; Yang, Christine; Myers, James D.; Didier, Brett T.; McCoy, Renata; Schuchardt, Karen L.; Stephan, Eric G.; Windus, Theresa L.; Amin, Kaizer; Bittner, Sandra; Lansing, Carina S.; Minkoff, Michael; Nijsure, Sandeep; von Laszewski, Gregor; Pinzon, Reinhardt; Ruscic, Branko; Wagner, Albert F.; Wang, Baoshan; Pitz, William; Ho, Yen-Ling; Montoya, David W.; Xu, Lili; Allison, Thomas C.; Green, William H.; Frenklach, Michael

    2003-10-02

    The goal of the Collaboratory for the Multi-scale Chemical Sciences (CMCS) [1] is to develop an informatics-based approach to synthesizing multi-scale chemistry information to create knowledge in the chemical sciences. CMCS is using a portal and metadata-aware content store as a base for building a system to support inter-domain knowledge exchange in chemical science. Key aspects of the system include configurable metadata extraction and translation, a core schema for scientific pedigree, and a suite of tools for managing data and metadata and visualizing pedigree relationships between data entries. CMCS metadata is represented using Dublin Core with metadata extensions that are useful to both the chemical science community and the science community in general. CMCS is working with several chemistry groups who are using the system to collaboratively assemble and analyze existing data to derive new chemical knowledge. In this paper we discuss the project’s metadata-related requirements, the relevant software infrastructure, core metadata schema, and tools that use the metadata to enhance science
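
    As a rough illustration of the approach, the sketch below serializes a record that combines Dublin Core elements with a pedigree-style extension; the namespace URI and extension element names are placeholders, not the CMCS core schema.

```python
# Hedged sketch of serializing a record that combines Dublin Core elements with
# a domain-specific extension. Namespace and extension names are illustrative.
import xml.etree.ElementTree as ET

DC = "http://purl.org/dc/elements/1.1/"
EXT = "http://example.org/cmcs-extensions"     # placeholder namespace
ET.register_namespace("dc", DC)
ET.register_namespace("ext", EXT)

record = ET.Element("record")
for name, value in [("title", "Enthalpy of formation, CH3 radical"),
                    ("creator", "Ruscic, Branko"),
                    ("date", "2003-10-02")]:
    el = ET.SubElement(record, f"{{{DC}}}{name}")
    el.text = value

# Pedigree-style extension: record which upstream entry this value was derived from.
pedigree = ET.SubElement(record, f"{{{EXT}}}derivedFrom")
pedigree.text = "cmcs:entry/atct/ch3-v1"       # hypothetical identifier

print(ET.tostring(record, encoding="unicode"))
```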

  13. Generating Initial Data in General Relativity using Adaptive Finite Element Methods

    CERN Document Server

    Aksoylu, Burak; Bond, Stephen; Holst, Michael

    2008-01-01

    The conformal formulation of the Einstein constraint equations is first reviewed, and we then consider the design, analysis, and implementation of adaptive multilevel finite element-type numerical methods for the resulting coupled nonlinear elliptic system. We derive weak formulations of the coupled constraints, and review some new developments in the solution theory for the constraints in the cases of constant mean extrinsic curvature (CMC) data, near-CMC data, and arbitrarily prescribed mean extrinsic curvature data. We then outline some recent results on a priori and a posteriori error estimates for a broad class of Galerkin-type approximation methods for this system which includes techniques such as finite element, wavelet, and spectral methods. We then use these estimates to construct an adaptive finite element method (AFEM) for solving this system numerically, and outline some new convergence and optimality results. We then describe in some detail an implementation of the methods using the FETK software...

  14. Metadata Laws, Journalism and Resistance in Australia

    Directory of Open Access Journals (Sweden)

    Benedetta Brevini

    2017-03-01

    Full Text Available The intelligence leaks from Edward Snowden in 2013 unveiled the sophistication and extent of data collection by the United States’ National Security Agency and major global digital firms prompting domestic and international debates about the balance between security and privacy, openness and enclosure, accountability and secrecy. It is difficult not to see a clear connection with the Snowden leaks in the sharp acceleration of new national security legislations in Australia, a long term member of the Five Eyes Alliance. In October 2015, the Australian federal government passed controversial laws that require telecommunications companies to retain the metadata of their customers for a period of two years. The new acts pose serious threats for the profession of journalism as they enable government agencies to easily identify and pursue journalists’ sources. Bulk data collections of this type of information deter future whistleblowers from approaching journalists, making the performance of the latter’s democratic role a challenge. After situating this debate within the scholarly literature at the intersection between surveillance studies and communication studies, this article discusses the political context in which journalists are operating and working in Australia; assesses how metadata laws have affected journalism practices and addresses the possibility for resistance.

  15. Local genetic adaptation generates latitude-specific effects of warming on predator-prey interactions.

    Science.gov (United States)

    De Block, Marjan; Pauwels, Kevin; Van Den Broeck, Maarten; De Meester, Luc; Stoks, Robby

    2013-03-01

    Temperature effects on predator-prey interactions are fundamental to better understand the effects of global warming. Previous studies never considered local adaptation of both predators and prey at different latitudes, and ignored the novel population combinations of the same predator-prey species system that may arise because of northward dispersal. We set up a common garden warming experiment to study predator-prey interactions between Ischnura elegans damselfly predators and Daphnia magna zooplankton prey from three source latitudes spanning >1500 km. Damselfly foraging rates showed thermal plasticity and strong latitudinal differences consistent with adaptation to local time constraints. Relative survival was higher at 24 °C than at 20 °C in southern Daphnia and higher at 20 °C than at 24 °C in northern Daphnia, indicating local thermal adaptation of the Daphnia prey. Yet, this thermal advantage disappeared when they were confronted with the damselfly predators of the same latitude, reflecting also a signal of local thermal adaptation in the damselfly predators. Our results further suggest the invasion success of northward-moving predators as well as prey to be latitude-specific. We advocate the novel common garden experimental approach using predators and prey obtained from natural temperature gradients spanning the predicted temperature increase in the northern populations as a powerful approach to gain mechanistic insights into how community modules will be affected by global warming. It can be used as a space-for-time substitution to inform how predator-prey interactions may gradually evolve under long-term warming.

  16. Generating relevant climate adaptation science tools in concert with local natural resource agencies

    Science.gov (United States)

    Micheli, L.; Flint, L. E.; Veloz, S.; Heller, N. E.

    2015-12-01

    To create a framework for adapting to climate change, decision makers operating at the urban-wildland interface need to define climate vulnerabilities in the context of site-specific opportunities and constraints relative to water supply, land use suitability, wildfire risks, ecosystem services and quality of life. Pepperwood's TBC3.org is crafting customized climate vulnerability assessments with selected water and natural resource agencies of California's Sonoma, Marin, Napa and Mendocino counties under the auspices of Climate Ready North Bay, a public-private partnership funded by the California Coastal Conservancy. Working directly with managers from the very start of the process to define resource-specific information needs, we are developing high-resolution, spatially-explicit data products to help local governments and agency staff implement informed and effective climate adaptation strategies. Key preliminary findings for the region using the USGS' Basin Characterization Model (at a 270 m spatial resolution) include a unidirectional trend, independent of greater or lesser precipitation, towards increasing climatic water deficits across model scenarios. Therefore a key message is that managers will be facing an increasingly arid environment. Companion models translate the impacts of shifting climate and hydrology on vegetation composition and fire risks. The combination of drought stress on water supplies and native vegetation with an approximate doubling of fire risks may demand new approaches to watershed planning. Working with agencies we are exploring how to build capacity for protection and enhancement of key watershed functions with a focus on groundwater recharge, facilitating greater drought tolerance in forest and rangeland systems, and considering more aggressive approaches to management of fuel loads. Lessons learned about effective engagement include the need for extended in-depth dialog, translation of key climate adaptation questions into

  17. Developing Cyberinfrastructure Tools and Services for Metadata Quality Evaluation

    Science.gov (United States)

    Mecum, B.; Gordon, S.; Habermann, T.; Jones, M. B.; Leinfelder, B.; Powers, L. A.; Slaughter, P.

    2016-12-01

    Metadata and data quality are at the core of reusable and reproducible science. While great progress has been made over the years, much of the metadata collected only addresses data discovery, covering concepts such as titles and keywords. Improving metadata beyond the discoverability plateau means documenting detailed concepts within the data such as sampling protocols, instrumentation used, and variables measured. Given that metadata commonly do not describe their data at this level, how might we improve the state of things? Giving scientists and data managers easy to use tools to evaluate metadata quality that utilize community-driven recommendations is the key to producing high-quality metadata. To achieve this goal, we created a set of cyberinfrastructure tools and services that integrate with existing metadata and data curation workflows which can be used to improve metadata and data quality across the sciences. These tools work across metadata dialects (e.g., ISO19115, FGDC, EML, etc.) and can be used to assess aspects of quality beyond what is internal to the metadata such as the congruence between the metadata and the data it describes. The system makes use of a user-friendly mechanism for expressing a suite of checks as code in popular data science programming languages such as Python and R. This reduces the burden on scientists and data managers to learn yet another language. We demonstrated these services and tools in three ways. First, we evaluated a large corpus of datasets in the DataONE federation of data repositories against a metadata recommendation modeled after existing recommendations such as the LTER best practices and the Attribute Convention for Dataset Discovery (ACDD). Second, we showed how this service can be used to display metadata and data quality information to data producers during the data submission and metadata creation process, and to data consumers through data catalog search and access tools. Third, we showed how the centrally
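
    The abstract's central point, expressing quality checks as small pieces of code, can be illustrated with a couple of toy Python checks; the check names and thresholds below are examples rather than the community recommendations actually evaluated in the paper.

```python
# Minimal illustration of metadata-quality checks as small Python functions.
def check_title_length(meta, minimum=7):
    """Discovery-level check: titles should be reasonably descriptive."""
    words = meta.get("title", "").split()
    return ("title_length", len(words) >= minimum, f"{len(words)} words")

def check_variables_documented(meta):
    """Beyond discovery: every variable needs a definition and a unit."""
    problems = [v["name"] for v in meta.get("variables", [])
                if not v.get("definition") or not v.get("unit")]
    return ("variables_documented", not problems, problems)

metadata = {
    "title": "Soil respiration fluxes, boreal forest plots, 2010-2015",
    "variables": [
        {"name": "Rs", "definition": "soil respiration", "unit": "umol m-2 s-1"},
        {"name": "Ts", "definition": "", "unit": "degC"},   # fails the check
    ],
}
for check in (check_title_length, check_variables_documented):
    print(check(metadata))
```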

  18. Blood Vessels Extraction in Retinal Image Using New Generation Curvelet Transform and Adaptive Weighted Morphology Operators

    Directory of Open Access Journals (Sweden)

    Saleh Shahbeig

    2013-02-01

    Full Text Available Given the many medical and biometric applications of retinal images, the automatic and accurate extraction of the retinal blood vessels is very important. In this paper, an effective method is introduced to extract the blood vessels from the background of colored retinal images. In this algorithm, by applying an equalizer function to the retinal images, the brightness of the images is made considerably more uniform. Because of the high ability of the Curvelet transform to represent image edges at various scales and directions, the edges, and consequently the contrast, of retinal images can be enhanced. Therefore, the enhanced retinal image can be prepared for the extraction of blood vessels by improving the Curvelet coefficients of the retinal images adaptively and locally. Since the blood vessels in retinal images are distributed in various directions, we use adaptive weighted morphology operators to extract the blood vessels from the retinal images. Morphology operators based on reconstruction are used to properly remove spurious structures smaller than arterioles from the images. Finally, by analyzing the connected components in the images and applying an adaptive filter to the components locally, all residual artifacts are removed from the images. The proposed algorithm has been evaluated on the images in the DRIVE database. The results show that the blood vessels are extracted from the background of the retinal images of the DRIVE database with a high accuracy of 96.15%, which in turn shows the high ability of the proposed algorithm in extracting the retinal blood vessels.

  19. Generating an Educational Domain Checklist through an Adaptive Framework for Evaluating Educational Systems

    Directory of Open Access Journals (Sweden)

    Roobaea S. AlRoobaea

    2013-09-01

    Full Text Available The growth of the Internet and related technologies has enabled the development of a new breed of dynamic websites that is growing rapidly in use and that has had a huge impact on many businesses. One type of website that has been widely adopted is the educational website. There are many forms of educational websites, such as free online websites and Web-based server software. This creates challenges regarding their continuing evaluation and monitoring in order to measure their efficiency and effectiveness, to assess user satisfaction and, ultimately, to improve their quality. The lack of an adaptive usability checklist for improving the usability assessment process for educational systems represents a missing piece in 'usability testing'. This paper presents an adaptive Domain-Specific Inspection (DSI) checklist as a tool for evaluating the usability of educational systems. The results show that the adaptive educational usability checklist helped evaluators to facilitate the evaluation process. It also provides an opportunity for website owners to choose the usability area(s) that they think need to be evaluated. Moreover, this method was more efficient and effective than the user testing (UT) and heuristic evaluation (HE) methods.

  20. Making the Case for Embedded Metadata in Digital Images

    DEFF Research Database (Denmark)

    Smith, Kari R.; Saunders, Sarah; Kejser, U.B.

    2014-01-01

    This paper discusses the standards, methods, use cases, and opportunities for using embedded metadata in digital images. In this paper we explain the past and current work engaged with developing specifications, standards for embedding metadata of different types, and the practicalities of data...... exchange in heritage institutions and the culture sector. Our examples and findings support the case for embedded metadata in digital images and the opportunities for such use more broadly in non-heritage sectors as well. We encourage the adoption of embedded metadata by digital image content creators...... and curators as well as those developing software and hardware that support the creation or re-use of digital images. We conclude that the usability of born digital images as well as physical objects that are digitized can be extended and the files preserved more readily with embedded metadata....

  1. A Spatialization-based Method for Checking and Updating Metadata

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    In this paper the application of spatialization technology to metadata quality checking and updating is discussed. A new method based on spatialization is proposed for checking and updating metadata, to overcome the deficiencies of text-based methods, using the powerful spatial query and analysis functions provided by GIS software. This method employs spatialization to transform metadata into a coordinate space and uses the spatial analysis functions of GIS to check and update spatial metadata in a visual environment. The basic principle and technical flow of this method are explained in detail, and an example implementation using the ArcMap GIS software is illustrated with a metadata set of digital raster maps. The result shows that the new method, with its support for interaction between graphics and text, is much more intuitive and convenient than the ordinary text-based method, and can fully utilize the GIS spatial query and analysis functions with greater accuracy and efficiency.

  2. Managing ebook metadata in academic libraries taming the tiger

    CERN Document Server

    Frederick, Donna E

    2016-01-01

    Managing ebook Metadata in Academic Libraries: Taming the Tiger tackles the topic of ebooks in academic libraries, a trend that has been welcomed by students, faculty, researchers, and library staff. However, at the same time, the reality of acquiring ebooks, making them discoverable, and managing them presents library staff with many new challenges. Traditional methods of cataloging and managing library resources are no longer relevant where the purchasing of ebooks in packages and demand driven acquisitions are the predominant models for acquiring new content. Most academic libraries have a complex metadata environment wherein multiple systems draw upon the same metadata for different purposes. This complexity makes the need for standards-based interoperable metadata more important than ever. In addition to complexity, the nature of the metadata environment itself typically varies slightly from library to library making it difficult to recommend a single set of practices and procedures which would be releva...

  3. A Distributed Metadata Management, Data Discovery and Access System

    CERN Document Server

    Palanisamy, Giriprakash; Green, Jim; Wilson, Bruce

    2010-01-01

    Mercury is a federated metadata harvesting, search and retrieval tool based on both open source and software developed at Oak Ridge National Laboratory. It was originally developed for NASA, and the Mercury development consortium now includes funding from NASA, USGS, and DOE. A major new version of Mercury was developed during 2007. This new version provides orders of magnitude improvements in search speed, support for additional metadata formats, integration with Google Maps for spatial queries, support for RSS delivery of search results, among other features. Mercury provides a single portal to information contained in disparate data management systems. It collects metadata and key data from contributing project servers distributed around the world and builds a centralized index. The Mercury search interfaces then allow the users to perform simple, fielded, spatial and temporal searches across these metadata sources. This centralized repository of metadata with distributed data sources provides extremely fa...
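
    A toy sketch of the federated-harvest idea follows: records pulled from several sources are normalized into one central index that fielded searches run against. Real Mercury harvests XML metadata over the network; the in-memory sources below are stand-ins.

```python
# Toy sketch: harvest records from several sources and build one central index.
from collections import defaultdict

SOURCES = {
    "daac": [{"id": "d1", "title": "Net primary productivity, Amazon basin"}],
    "usgs": [{"id": "u7", "title": "Stream gauge data, Colorado basin"}],
}

def harvest_and_index(sources):
    index = defaultdict(set)             # term -> set of (source, record id)
    for source, records in sources.items():
        for rec in records:
            for term in rec["title"].lower().replace(",", " ").split():
                index[term].add((source, rec["id"]))
    return index

index = harvest_and_index(SOURCES)
print(sorted(index["basin"]))            # hits from both sources
```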

  4. Interpreting the ASTM 'content standard for digital geospatial metadata'

    Science.gov (United States)

    Nebert, Douglas D.

    1996-01-01

    ASTM and the Federal Geographic Data Committee have developed a content standard for spatial metadata to facilitate documentation, discovery, and retrieval of digital spatial data using vendor-independent terminology. Spatial metadata elements are identifiable quality and content characteristics of a data set that can be tied to a geographic location or area. Several Office of Management and Budget Circulars and initiatives have been issued that specify improved cataloguing of and accessibility to federal data holdings. An Executive Order further requires the use of the metadata content standard to document digital spatial data sets. Collection and reporting of spatial metadata for field investigations performed for the federal government is an anticipated requirement. This paper provides an overview of the draft spatial metadata content standard and a description of how the standard could be applied to investigations collecting spatially-referenced field data.

  5. Statistical Metadata: a Unified Approach to Management and Dissemination

    Directory of Open Access Journals (Sweden)

    Signore Marina

    2015-06-01

    Full Text Available This article illustrates a unified conceptual approach to metadata, whereby metadata describing the information content and structure of data and those describing the statistical process are managed jointly with metadata arising from administrative and support activities. Many different actors may benefit from this approach: internal users who are given the option to reuse information; internal management that is supported in the decision-making process, process industrialisation and standardisation as well as performance assessment; external users who are provided with data and process-related metadata as well as quality measures to retrieve data and use them properly. In the article, a general model useful for metadata representation is illustrated and its application presented. Relationships to existing frameworks and standards are also discussed and enhancements proposed.

  6. EXIF Custom: Automatic image metadata extraction for Scratchpads and Drupal

    Directory of Open Access Journals (Sweden)

    Ed Baker

    2013-09-01

    Full Text Available Many institutions and individuals use embedded metadata to aid in the management of their image collections. Many desktop image management solutions such as Adobe Bridge and online tools such as Flickr also make use of embedded metadata to describe, categorise and license images. Until now Scratchpads (a data management system and virtual research environment for biodiversity) have not made use of these metadata, and users have had to manually re-enter this information if they have wanted to display it on their Scratchpad site. The Drupal module described here allows users to map metadata embedded in their images to the associated fields in the Scratchpads image form using one or more customised mappings. The module works seamlessly with the bulk image uploader used on Scratchpads, and it is therefore possible to upload hundreds of images easily with automatic metadata (EXIF, XMP and IPTC) extraction and mapping.
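
    Outside Drupal, the same idea can be sketched in a few lines with Pillow: read the embedded EXIF tags and map selected ones onto repository form fields. The field mapping below is illustrative and is not the Scratchpads module configuration.

```python
# Hedged sketch: read EXIF tags with Pillow and map selected embedded fields
# onto repository form fields. The mapping is an illustrative assumption.
from PIL import Image
from PIL.ExifTags import TAGS

FIELD_MAP = {                       # EXIF tag name -> repository field name
    "Artist": "creator",
    "Copyright": "licence",
    "DateTime": "date_captured",
    "ImageDescription": "caption",
}

def extract_mapped_metadata(path):
    exif = Image.open(path).getexif()
    named = {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}
    return {FIELD_MAP[k]: v for k, v in named.items() if k in FIELD_MAP}

# print(extract_mapped_metadata("specimen_0001.jpg"))  # hypothetical file
```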

  7. METADATA EXPANDED SEMANTICALLY BASED RESOURCE SEARCH IN EDUCATION GRID

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    With the rapid increase of educational resources, how to search for necessary educational resources quickly is one of the most important issues. Educational resources are distributed and heterogeneous, which are the same characteristics as those of Grid resources. Therefore, the technology of Grid resource search was adopted to implement educational resource search. Motivated by the insufficiency of current metadata-based resource search methods, a method of extracting semantic relations between the words constituting metadata is proposed. We mainly focus on acquiring synonymy, hyponymy, hypernymy and parataxis relations. In our schema, we extract texts related to the metadata to be expanded from the text space through text extraction templates. Next, metadata are obtained through metadata extraction templates. Finally, we compute semantic similarity to eliminate false relations and construct a semantic expansion knowledge base. The proposed method has been applied on the education grid.
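
    The relation-acquisition step can be illustrated with WordNet as a convenient stand-in for the paper's own templates and knowledge base; the sketch below gathers synonymy, hypernymy and hyponymy relations for a term.

```python
# Sketch of term expansion using WordNet as a stand-in knowledge source
# (the paper builds its own templates and knowledge base instead).
from nltk.corpus import wordnet as wn   # requires: nltk.download("wordnet")

def expand_term(term, max_per_relation=5):
    expansions = {"synonyms": set(), "hypernyms": set(), "hyponyms": set()}
    for synset in wn.synsets(term):
        expansions["synonyms"].update(l.replace("_", " ") for l in synset.lemma_names())
        for hyper in synset.hypernyms():
            expansions["hypernyms"].update(l.replace("_", " ") for l in hyper.lemma_names())
        for hypo in synset.hyponyms():
            expansions["hyponyms"].update(l.replace("_", " ") for l in hypo.lemma_names())
    return {k: sorted(v)[:max_per_relation] for k, v in expansions.items()}

print(expand_term("lecture"))
```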

  8. Generation and characterization of a cold-adapted attenuated live H3N2 subtype influenza virus vaccine candidate

    Institute of Scientific and Technical Information of China (English)

    AN Wen-qi; LIU Xiu-fan; WANG Xi-liang; YANG Peng-hui; DUAN Yue-qiang; LUO De-yan; TANG Chong; JIA Wei-hong; XING Li; SHI Xin-fu; ZHANG Yu-jing

    2009-01-01

    Background H3N2 subtype influenza A viruses have been identified in humans worldwide, raising concerns about their pandemic potential and prompting the development of candidate vaccines to protect humans against this subtype of influenza A virus. The aim of this study was to establish a system for rescuing a cold-adapted, high-yielding H3N2 subtype human influenza virus by reverse genetics. Methods In order to generate better and safer vaccine candidate viruses, a cold-adapted, high-yielding reassortant H3N2 influenza A virus was genetically constructed by reverse genetics and was designated rgAA-H3N2. The rgAA-H3N2 virus contained HA and NA genes from an epidemic strain A/Wisconsin/67/2005 (H3N2) in a background of internal genes derived from the master donor virus (MDV), the cold-adapted (ca), temperature-sensitive (ts), live attenuated influenza virus strain A/Ann Arbor/6/60 (MDV-A). Results In this study, the virus HA titer of rgAA-H3N2 in the allantoic fluid from infected embryonated eggs was as high as 1:1024. A fluorescent focus assay (FFU) was performed 24-36 hours post-infection using a specific antibody, and bright staining was used for determining the virus titer. The allantoic fluid containing the recovered influenza virus was analyzed in a hemagglutination inhibition (HI) test and specific inhibition was found. Conclusion The results mentioned above demonstrate that a cold-adapted, attenuated reassortant H3N2 subtype influenza A virus was successfully generated, which lays a good foundation for further related research.

  9. The ANSS Station Information System: A Centralized Station Metadata Repository for Populating, Managing and Distributing Seismic Station Metadata

    Science.gov (United States)

    Thomas, V. I.; Yu, E.; Acharya, P.; Jaramillo, J.; Chowdhury, F.

    2015-12-01

    Maintaining and archiving accurate site metadata is critical for seismic network operations. The Advanced National Seismic System (ANSS) Station Information System (SIS) is a repository of seismic network field equipment, equipment response, and other site information. Currently, there are 187 different sensor models and 114 data-logger models in SIS. SIS has a web-based user interface that allows network operators to enter information about seismic equipment and assign response parameters to it. It allows users to log entries for sites, equipment, and data streams. Users can also track when equipment is installed, updated, and/or removed from sites. When seismic equipment configurations change for a site, SIS computes the overall gain of a data channel by combining the response parameters of the underlying hardware components. Users can then distribute this metadata in standardized formats such as FDSN StationXML or dataless SEED. One powerful advantage of SIS is that existing data in the repository can be leveraged: e.g., new instruments can be assigned response parameters from the Incorporated Research Institutions for Seismology (IRIS) Nominal Response Library (NRL), or from a similar instrument already in the inventory, thereby reducing the amount of time needed to determine parameters when new equipment (or models) are introduced into a network. SIS is also useful for managing field equipment that does not produce seismic data (e.g., power systems, telemetry devices or GPS receivers) and gives the network operator a comprehensive view of site field work. SIS allows users to generate field logs to document activities and inventory at sites. Thus, operators can also use SIS reporting capabilities to improve planning and maintenance of the network. Queries such as how many sensors of a certain model are installed or what pieces of equipment have active problem reports are just a few examples of the type of information that is available to SIS users.
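
    The response bookkeeping that SIS automates boils down to multiplying the gains of the cascaded hardware stages of a channel; a trivial illustration with made-up stage values follows.

```python
# Simple illustration of overall channel sensitivity as the product of the
# gains of its hardware stages (sensor, preamplifier, digitizer). Values are made up.
stages = [
    ("sensor", 1500.0),        # V per m/s
    ("preamp", 2.0),           # V/V
    ("digitizer", 419430.0),   # counts per V
]

overall_gain = 1.0
for name, gain in stages:
    overall_gain *= gain

print(f"overall channel gain: {overall_gain:.3e} counts per m/s")
```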

  10. ClinData Express--a metadata driven clinical research data management system for secondary use of clinical data.

    Science.gov (United States)

    Li, Zuofeng; Wen, Jingran; Zhang, Xiaoyan; Wu, Chunxiao; Li, Zuogao; Liu, Lei

    2012-01-01

    Aiming to ease the secondary use of clinical data in clinical research, we introduce a metadata-driven web-based clinical research data management system named ClinData Express. ClinData Express is made up of two parts: 1) m-designer, standalone software for metadata definition; and 2) a web-based data warehouse system for data management. With ClinData Express, all the researchers need to do is define the metadata and data model in m-designer. The web interface for data collection and the specific database for data storage are automatically generated. The standards used in the system and the data export module ensure data reuse. The system has been tested on seven disease data collections in Chinese and one form from dbGaP. The flexibility of the system gives it great potential for use in clinical research. The system is available at http://code.google.com/p/clindataexpress.
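
    A hedged sketch of the metadata-driven idea follows: a data model defined as metadata drives both the storage schema and the entry form, so no per-study coding is needed. The field definitions and the generated SQL/HTML below are invented for illustration, not ClinData Express output.

```python
# Hedged sketch: a metadata definition drives both table creation and form rendering.
FORM_METADATA = [
    {"name": "patient_id", "label": "Patient ID", "type": "text", "required": True},
    {"name": "age", "label": "Age at diagnosis", "type": "integer", "required": True},
    {"name": "stage", "label": "Tumor stage", "type": "choice",
     "choices": ["I", "II", "III", "IV"], "required": False},
]

SQL_TYPES = {"text": "VARCHAR(255)", "integer": "INTEGER", "choice": "VARCHAR(32)"}

def create_table_sql(table, fields):
    cols = [f'{f["name"]} {SQL_TYPES[f["type"]]}'
            + (" NOT NULL" if f["required"] else "") for f in fields]
    return f"CREATE TABLE {table} ({', '.join(cols)});"

def render_form(fields):
    lines = []
    for f in fields:
        if f["type"] == "choice":
            opts = "".join(f'<option>{c}</option>' for c in f["choices"])
            lines.append(f'<label>{f["label"]}</label><select name="{f["name"]}">{opts}</select>')
        else:
            lines.append(f'<label>{f["label"]}</label><input name="{f["name"]}">')
    return "\n".join(lines)

print(create_table_sql("breast_cancer_study", FORM_METADATA))
print(render_form(FORM_METADATA))
```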

  11. Mesh Generation and Adaption for High Reynolds Number RANS Computations Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The innovation of our Phase II STTR program is to develop and provide to NASA automatic mesh generation software for the simulation of fluid flows using...

  12. Mesh Generation and Adaption for High Reynolds Number RANS Computations Project

    Data.gov (United States)

    National Aeronautics and Space Administration — This proposal offers to provide NASA with an automatic mesh generator for the simulation of aerodynamic flows using Reynolds-Averages Navier-Stokes (RANS) models....

  13. Unified Science Information Model for SoilSCAPE using the Mercury Metadata Search System

    Science.gov (United States)

    Devarakonda, Ranjeet; Lu, Kefa; Palanisamy, Giri; Cook, Robert; Santhana Vannan, Suresh; Moghaddam, Mahta; Clewley, Dan; Silva, Agnelo; Akbar, Ruzbeh

    2013-12-01

    SoilSCAPE (Soil moisture Sensing Controller And oPtimal Estimator) introduces a new concept for a smart wireless sensor web technology for optimal measurements of surface-to-depth profiles of soil moisture using in-situ sensors. The objective is to enable a guided and adaptive sampling strategy for the in-situ sensor network to meet the measurement validation objectives of spaceborne soil moisture sensors such as the Soil Moisture Active Passive (SMAP) mission. This work is being carried out at the University of Michigan, the Massachusetts Institute of Technology, University of Southern California, and Oak Ridge National Laboratory. At Oak Ridge National Laboratory we are using the Mercury metadata search system [1] to build a Unified Information System for the SoilSCAPE project. This unified portal primarily comprises three key pieces: Distributed Search/Discovery; Data Collections and Integration; and Data Dissemination. Mercury, federally funded software for metadata harvesting, indexing, and searching, is used for this module. Soil moisture data sources identified as part of this activity, such as SoilSCAPE and FLUXNET (in-situ sensors), AirMOSS (airborne retrieval), and SMAP (spaceborne retrieval), are being indexed and maintained by Mercury. Mercury would be the central repository of data sources for cal/val for soil moisture studies and would provide a mechanism to identify additional data sources. Relevant metadata from existing inventories such as ORNL DAAC, USGS Clearinghouse, ARM, NASA ECHO, GCMD, etc. would be brought into this soil-moisture data search/discovery module. The SoilSCAPE [2] metadata records will also be published in broader metadata repositories such as GCMD and data.gov. Mercury can be configured to provide a single portal to soil moisture information contained in disparate data management systems located anywhere on the Internet. Mercury is able to extract metadata systematically from HTML pages or XML files using a variety of
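    The harvesting and indexing workflow mentioned above can be illustrated with a minimal sketch: fetch a provider's XML record, extract a few fields, and add them to a central index. This is not Mercury code; the URL, element names, and index structure are assumptions for illustration.

        # Sketch of systematic metadata extraction from an XML record for a
        # central index (not Mercury code; tags and URL are hypothetical).
        import urllib.request
        import xml.etree.ElementTree as ET

        def extract_fields(xml_text):
            root = ET.fromstring(xml_text)
            return {
                "title":    root.findtext("title", default=""),
                "abstract": root.findtext("abstract", default=""),
                "keywords": [k.text for k in root.findall("keyword")],
            }

        def harvest(url, index):
            with urllib.request.urlopen(url) as resp:  # fetch one provider record
                record = extract_fields(resp.read())
            index[url] = record                        # add it to the central index
            return record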

  14. Performance assessment of electric power generations using an adaptive neural network algorithm and fuzzy DEA

    Energy Technology Data Exchange (ETDEWEB)

    Javaheri, Zahra

    2010-09-15

    Modeling, evaluating, and analyzing the performance of Iranian thermal power plants is the main goal of this study, which is based on multivariate analysis methods. These methods include fuzzy DEA and an adaptive neural network algorithm. First, the indicators are determined and the data are collected; values of ranking and efficiency are then obtained by fuzzy DEA. The case study is thermal power plants. Given that the investment required to establish a power plant is very high, that power plant maintenance is expensive, and that the use of fossil fuels affects the environment, optimal production from existing power plants is important.

  15. LHCb: Dynamically Adaptive Header Generator and Front-End Source Emulator for a 100 Gbps FPGA Based DAQ

    CERN Multimedia

    Srikanth, S

    2014-01-01

    The proposed upgrade for the LHCb experiment envisages a system of 500 data sources, each generating data at 100 Gbps, the acquisition and processing of which is a big challenge even for current state-of-the-art FPGAs. This requires an FPGA DAQ module that not only handles the data generated by the experiment but is also versatile enough to dynamically adapt to potential inadequacies of other components like the network and PCs. Such a module needs to maintain real-time operation while at the same time maintaining system stability and overall data integrity. This also creates a need for a Front-end Source Emulator capable of generating the various data patterns, which acts as a testbed to validate the functionality and performance of the Header Generator. The rest of the abstract briefly describes these modules and their implementation. The Header Generator is used to packetize the streaming data from the detectors before it is sent to the PCs for further processing. This is achieved by continuously scannin...
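    The packetization role of the Header Generator can be sketched schematically as below; the header layout (sequence number, source id, payload length) is an assumption for illustration, not the actual LHCb header format.

        # Schematic packetization sketch (not the LHCb header format): prepend
        # a fixed header with sequence number, source id and payload length.
        import struct

        HEADER = struct.Struct(">IHI")  # seq (uint32), source id (uint16), length (uint32)

        def packetize(seq, source_id, payload):
            return HEADER.pack(seq, source_id, len(payload)) + payload

        def parse(packet):
            seq, source_id, length = HEADER.unpack_from(packet)
            body = packet[HEADER.size:HEADER.size + length]
            return seq, source_id, body

        pkt = packetize(42, 7, b"detector fragment")
        print(parse(pkt))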

  16. Comparison of Different Strategies for Selection/Adaptation of Mixed Microbial Cultures Able to Ferment Crude Glycerol Derived from Second-Generation Biodiesel

    DEFF Research Database (Denmark)

    Varrone, Cristiano; Heggeset, T. M. B.; Le, S. B.

    2015-01-01

    The objective of this study was the selection and adaptation of mixed microbial cultures (MMCs) able to ferment crude glycerol generated from animal fat-based biodiesel and to produce building blocks and green chemicals. Various adaptation strategies have been investigated for the enrichment of suitable...

  17. Adaptive Generation and Diagnostics of Linear Few-Cycle Light Bullets

    Directory of Open Access Journals (Sweden)

    Martin Bock

    2013-02-01

    Full Text Available Recently we introduced the class of highly localized wavepackets (HLWs) as a generalization of optical Bessel-like needle beams. Here we report on the progress in this field. In contrast to pulsed Bessel beams and Airy beams, ultrashort-pulsed HLWs propagate with high stability in both spatial and temporal domain, are nearly paraxial (supercollimated), have fringe-less spatial profiles and thus represent the best possible approximation to linear “light bullets”. Like Bessel beams and Airy beams, HLWs show self-reconstructing behavior. Adaptive HLWs can be shaped by ultraflat three-dimensional phase profiles (generalized axicons) which are programmed via calibrated grayscale maps of liquid-crystal-on-silicon spatial light modulators (LCoS-SLMs). Light bullets of even higher complexity can either be freely formed from quasi-continuous phase maps or discretely composed from addressable arrays of identical nondiffracting beams. The characterization of few-cycle light bullets requires spatially resolved measuring techniques. In our experiments, wavefront, pulse and phase were detected with a Shack-Hartmann wavefront sensor, 2D-autocorrelation and spectral phase interferometry for direct electric-field reconstruction (SPIDER). The combination of the unique propagation properties of light bullets with the flexibility of adaptive optics opens new prospects for applications of structured light like optical tweezers, microscopy, data transfer and storage, laser fusion, plasmon control or nonlinear spectroscopy.

  18. Metadata caching algorithm of cloud storage based on life cycle

    Institute of Scientific and Technical Information of China (English)

    牛德姣; 蔡涛; 詹永照; 鞠时光

    2012-01-01

    To address the performance bottleneck caused by the metadata management subsystem in cloud storage, a metadata caching algorithm for cloud storage was studied. Based on an analysis of the access characteristics of metadata, the concept of a metadata caching life cycle was proposed. Rules for calculating the metadata caching life cycle were designed according to the characteristics of cloud storage, and life-cycle-based policies for metadata eviction and write-back were presented, improving the efficiency of cloud storage metadata management. The adaptability of the life-cycle-based metadata caching algorithm to user access characteristics was analyzed, and how to guarantee metadata consistency when using the algorithm was discussed. A metadata caching prototype system was implemented based on the proposed algorithm and was tested and analyzed using a general-purpose dataset and test tools. The results show that the algorithm improves the I/O rate by 15% and the operation processing rate by 16% in cloud storage.
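    The life-cycle idea can be sketched as a cache whose entries expire after a computed lifetime and are written back on eviction. The lifetime rule below (hotter entries live longer) is a stand-in assumption; the paper's actual calculation rules are not reproduced here.

        # Sketch of a life-cycle-based metadata cache (the lifetime formula and
        # policies below are simplified stand-ins, not the paper's exact rules).
        import time

        class LifeCycleCache:
            def __init__(self, base_lifetime=30.0):
                self.base = base_lifetime
                self.entries = {}  # key -> (value, expiry, dirty)

            def _lifetime(self, hits):
                return self.base * (1 + hits)  # assumed rule: hot entries live longer

            def put(self, key, value, hits=0, dirty=False):
                self.entries[key] = (value, time.time() + self._lifetime(hits), dirty)

            def get(self, key):
                value, expiry, _ = self.entries.get(key, (None, 0.0, False))
                return value if time.time() < expiry else None

            def evict_expired(self, write_back):
                now = time.time()
                for key, (value, expiry, dirty) in list(self.entries.items()):
                    if now >= expiry:
                        if dirty:
                            write_back(key, value)  # write-back policy on expiry
                        del self.entries[key]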

  19. Spacecraft Formation Flying near Sun-Earth L2 Lagrange Point: Trajectory Generation and Adaptive Full-State Feedback Control

    Science.gov (United States)

    Wong, Hong; Kapila, Vikram

    2004-01-01

    In this paper, we present a method for trajectory generation and adaptive full-state feedback control to facilitate spacecraft formation flying near the Sun-Earth L2 Lagrange point. Specifically, the dynamics of a spacecraft in the neighborhood of a Halo orbit reveals that there exist quasi-periodic orbits surrounding the Halo orbit. Thus, a spacecraft formation is created by placing a leader spacecraft on a desired Halo orbit and placing follower spacecraft on desired quasi-periodic orbits. To produce a formation maintenance controller, we first develop the nonlinear dynamics of a follower spacecraft relative to the leader spacecraft. We assume that the leader spacecraft is on a desired Halo orbit trajectory and the follower spacecraft is to track a desired quasi-periodic orbit surrounding the Halo orbit. Then, we design an adaptive, full-state feedback position tracking controller for the follower spacecraft providing an adaptive compensation for the unknown mass of the follower spacecraft. The proposed control law is simulated for the case of the leader and follower spacecraft pair and is shown to yield global, asymptotic convergence of the relative position tracking errors.
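    A schematic one-dimensional analogue of such an adaptive tracking law (not the paper's controller, which is derived for the full relative dynamics near L2) is sketched below: the follower's unknown mass is estimated on-line while tracking a reference trajectory. Gains and the reference trajectory are illustrative.

        # Schematic 1-D analogue of adaptive position tracking with unknown mass
        # m (plant: m * x_dd = u), in a Slotine-Li style. Gains and the desired
        # trajectory are illustrative, not values from the paper.
        import math

        def simulate(m=2.0, lam=1.0, k=4.0, gamma=0.5, dt=0.001, T=20.0):
            x = v = 0.0
            m_hat = 1.0                           # initial (wrong) mass estimate
            t = 0.0
            while t < T:
                xd, vd, ad = math.sin(t), math.cos(t), -math.sin(t)
                e, edot = x - xd, v - vd
                s = edot + lam * e                # composite tracking error
                a_r = ad - lam * edot             # reference acceleration
                u = m_hat * a_r - k * s           # certainty-equivalence control
                m_hat += -gamma * s * a_r * dt    # adaptation law for the mass estimate
                v += (u / m) * dt                 # true (unknown-mass) plant
                x += v * dt
                t += dt
            return abs(e), m_hat

        print(simulate())  # small final tracking error, adapted mass estimate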

  20. Adapting Training to Meet the Preferred Learning Styles of Different Generations

    Science.gov (United States)

    Urick, Michael

    2017-01-01

    This article considers how training professionals can respond to differences in training preferences between generational groups. It adopts two methods. First, it surveys the existing research and finds generally that preferences for training approaches can differ between groups and specifically that younger employees are perceived to leverage…

  1. Taxonomic names, metadata, and the Semantic Web

    Directory of Open Access Journals (Sweden)

    Roderic D. M. Page

    2006-01-01

    Full Text Available Life Science Identifiers (LSIDs) offer an attractive solution to the problem of globally unique identifiers for digital objects in biology. However, I suggest that in the context of taxonomic names, the most compelling benefit of adopting these identifiers comes from the metadata associated with each LSID. By using existing vocabularies wherever possible, and using a simple vocabulary for taxonomy-specific concepts we can quickly capture the essential information about a taxonomic name in the Resource Description Framework (RDF) format. This opens up the prospect of using technologies developed for the Semantic Web to add "taxonomic intelligence" to biodiversity databases. This essay explores some of these ideas in the context of providing a taxonomic framework for the phylogenetic database TreeBASE.
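    A minimal sketch of this kind of RDF description, using rdflib and reusing Dublin Core where possible, is shown below; the LSID and the taxon-specific predicate names are illustrative placeholders, not an established vocabulary.

        # Minimal sketch: describing a taxonomic name in RDF with rdflib,
        # reusing Dublin Core where possible. The LSID and the taxon-specific
        # predicates are illustrative placeholders.
        from rdflib import Graph, Literal, Namespace, URIRef
        from rdflib.namespace import DC, RDF

        TN = Namespace("http://example.org/taxonname#")    # placeholder vocabulary
        name = URIRef("urn:lsid:example.org:names:12345")  # hypothetical LSID

        g = Graph()
        g.add((name, RDF.type, TN.TaxonName))
        g.add((name, TN.nameComplete, Literal("Homo sapiens Linnaeus, 1758")))
        g.add((name, TN.rank, Literal("species")))
        g.add((name, DC.publisher, Literal("Example Nomenclator")))

        print(g.serialize(format="turtle"))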

  2. Integrating Structured Metadata with Relational Affinity Propagation

    CERN Document Server

    Plangprasopchok, Anon; Getoor, Lise

    2010-01-01

    Structured and semi-structured data describing entities, taxonomies and ontologies appears in many domains. There is a huge interest in integrating structured information from multiple sources; however, integrating structured data to infer complex common structures is a difficult task because the integration must aggregate similar structures while avoiding structural inconsistencies that may appear when the data is combined. In this work, we study the integration of structured social metadata: shallow personal hierarchies specified by many individual users on the Social Web, and focus on inferring a collection of integrated, consistent taxonomies. We frame this task as an optimization problem with structural constraints. We propose a new inference algorithm, which we refer to as Relational Affinity Propagation (RAP), that extends affinity propagation (Frey and Dueck 2007) by introducing structural constraints. We validate the approach on a real-world social media dataset, collected from the photosharing website ...

  3. A Highly Available Grid Metadata Catalog

    DEFF Research Database (Denmark)

    Jensen, Henrik Thostrup; Kleist, Joshva

    2009-01-01

    This article presents a metadata catalog, intended for use in grids. The catalog provides high availability by replication across several hosts. The replicas are kept consistent using a replication protocol based on the Paxos algorithm. A majority of the replicas must be available in order for the system to function. The data model used in the catalog is RDF, which allows users to create their own namespaces and schemas. Querying is performed using SPARQL. Additionally, the catalog can be used as a synchronization mechanism, by utilizing a compare-and-swap operation. The catalog is accessed using HTTP with proxy certificates, and uses GACL for flexible access control. The performance of the catalog is tested in several ways, including a distributed setup between geographically separated sites.

  4. Design and Implementation of a Metadata-rich File System

    Energy Technology Data Exchange (ETDEWEB)

    Ames, S; Gokhale, M B; Maltzahn, C

    2010-01-19

    Despite continual improvements in the performance and reliability of large scale file systems, the management of user-defined file system metadata has changed little in the past decade. The mismatch between the size and complexity of large scale data stores and their ability to organize and query their metadata has led to a de facto standard in which raw data is stored in traditional file systems, while related, application-specific metadata is stored in relational databases. This separation of data and semantic metadata requires considerable effort to maintain consistency and can result in complex, slow, and inflexible system operation. To address these problems, we have developed the Quasar File System (QFS), a metadata-rich file system in which files, user-defined attributes, and file relationships are all first class objects. In contrast to hierarchical file systems and relational databases, QFS defines a graph data model composed of files and their relationships. QFS incorporates Quasar, an XPATH-extended query language for searching the file system. Results from our QFS prototype show the effectiveness of this approach. Compared to the de facto standard, the QFS prototype shows superior ingest performance and comparable query performance on user metadata-intensive operations and superior performance on normal file metadata operations.

  5. Kaiser Permanente's "metadata-driven" national clinical intranet.

    Science.gov (United States)

    Dolin, R H; Boles, M; Dolin, R; Green, S; Hanifin, S; Hochhalter, B; Inglesis, R; Ivory, M; Levy, D; Nadspal, K; Rae, M A; Rucks, C J; Snyder, A; Stibolt, T; Stiefel, M; Travis, V

    2001-01-01

    This paper describes the approach taken to build Kaiser Permanente's national clinical intranet. A primary objective for the site is to facilitate resource discovery, which is enabled by the use of "metadata", or data (fields and field values) that describe the various resources available. Users can perform full text queries and/or fielded searching against the metadata. Metadata serves as the organizing principle of the site--it is used to index documents, sort search results, and structure the site's table of contents. The site's use of metadata--what it is, how it is created, how it is applied to documents, how it is indexed, how it is presented to the user in the search and the search results interface, and how it is used to construct the table of contents for the web site--will be discussed in detail. The result is that KP's national clinical intranet has coupled the power of Internet-like full text search engines with the power of MedLine-like fielded searching in order to maximize search precision and recall. Organizing content on the site in accordance with the metadata promotes overall consistency. Issues currently under investigation include how to better exploit the power of the controlled terminology within the metadata; whether the value gained is worth the cost of collecting metadata; and how automatic classification algorithms might obviate the need for manual document indexing.

  6. The diversity-generating benefits of a prokaryotic adaptive immune system.

    Science.gov (United States)

    van Houte, Stineke; Ekroth, Alice K E; Broniewski, Jenny M; Chabas, Hélène; Ashby, Ben; Bondy-Denomy, Joseph; Gandon, Sylvain; Boots, Mike; Paterson, Steve; Buckling, Angus; Westra, Edze R

    2016-04-21

    Prokaryotic CRISPR-Cas adaptive immune systems insert spacers derived from viruses and other parasitic DNA elements into CRISPR loci to provide sequence-specific immunity. This frequently results in high within-population spacer diversity, but it is unclear if and why this is important. Here we show that, as a result of this spacer diversity, viruses can no longer evolve to overcome CRISPR-Cas by point mutation, which results in rapid virus extinction. This effect arises from synergy between spacer diversity and the high specificity of infection, which greatly increases overall population resistance. We propose that the resulting short-lived nature of CRISPR-dependent bacteria-virus coevolution has provided strong selection for the evolution of sophisticated virus-encoded anti-CRISPR mechanisms.

  7. Semantic Metadata for Heterogeneous Spatial Planning Documents

    Science.gov (United States)

    Iwaniak, A.; Kaczmarek, I.; Łukowicz, J.; Strzelecki, M.; Coetzee, S.; Paluszyński, W.

    2016-09-01

    Spatial planning documents contain information about the principles and rights of land use in different zones of a local authority. They are the basis for administrative decision making in support of sustainable development. In Poland these documents are published on the Web according to a prescribed non-extendable XML schema, designed for optimum presentation to humans in HTML web pages. There is no document standard, and limited functionality exists for adding references to external resources. The text in these documents is discoverable and searchable by general-purpose web search engines, but the semantics of the content cannot be discovered or queried. The spatial information in these documents is geographically referenced but not machine-readable. Major manual efforts are required to integrate such heterogeneous spatial planning documents from various local authorities for analysis, scenario planning and decision support. This article presents results of an implementation using machine-readable semantic metadata to identify relationships among regulations in the text, spatial objects in the drawings and links to external resources. A spatial planning ontology was used to annotate different sections of spatial planning documents with semantic metadata in the Resource Description Framework in Attributes (RDFa). The semantic interpretation of the content, links between document elements and links to external resources were embedded in XHTML pages. An example and use case from the spatial planning domain in Poland is presented to evaluate its efficiency and applicability. The solution enables the automated integration of spatial planning documents from multiple local authorities to assist decision makers with understanding and interpreting spatial planning information. The approach is equally applicable to legal documents from other countries and domains, such as cultural heritage and environmental management.

  8. SEMANTIC METADATA FOR HETEROGENEOUS SPATIAL PLANNING DOCUMENTS

    Directory of Open Access Journals (Sweden)

    A. Iwaniak

    2016-09-01

    Full Text Available Spatial planning documents contain information about the principles and rights of land use in different zones of a local authority. They are the basis for administrative decision making in support of sustainable development. In Poland these documents are published on the Web according to a prescribed non-extendable XML schema, designed for optimum presentation to humans in HTML web pages. There is no document standard, and limited functionality exists for adding references to external resources. The text in these documents is discoverable and searchable by general-purpose web search engines, but the semantics of the content cannot be discovered or queried. The spatial information in these documents is geographically referenced but not machine-readable. Major manual efforts are required to integrate such heterogeneous spatial planning documents from various local authorities for analysis, scenario planning and decision support. This article presents results of an implementation using machine-readable semantic metadata to identify relationships among regulations in the text, spatial objects in the drawings and links to external resources. A spatial planning ontology was used to annotate different sections of spatial planning documents with semantic metadata in the Resource Description Framework in Attributes (RDFa). The semantic interpretation of the content, links between document elements and links to external resources were embedded in XHTML pages. An example and use case from the spatial planning domain in Poland is presented to evaluate its efficiency and applicability. The solution enables the automated integration of spatial planning documents from multiple local authorities to assist decision makers with understanding and interpreting spatial planning information. The approach is equally applicable to legal documents from other countries and domains, such as cultural heritage and environmental management.

  9. Development of active/adaptive lightweight optics for the next generation of telescopes

    Science.gov (United States)

    Ghigo, M.; Basso, S.; Citterio, O.; Mazzoleni, F.; Vernani, D.

    2006-02-01

    Future large optical telescopes will have such large dimensions as to require innovative technical solutions in both the engineering and optical fields. Their optics will have dimensions ranging from 30 to 100 m and will be segmented. It is necessary to develop a cost-effective industrial process, fast and efficient, to create the thousands of segments needed to assemble the mirrors of these instruments. INAF-OAB (Astronomical Observatory of Brera) is developing, with INAF-Arcetri (Florence Astronomical Observatory), a method of production of lightweight glass optics that is suitable for the manufacturing of these segments. These optics will probably also be active, and therefore the segments have to be thin, light and relatively flexible. The same requirements are valid also for the secondary adaptive mirrors foreseen for these telescopes, which will therefore benefit from the same technology. The technique under investigation foresees the thermal slumping of thin glass segments using a high quality ceramic mold (master). The sheet of glass is placed onto the mold and then, by means of a suitable thermal cycle, the glass is softened and its shape is changed, copying the master shape. At the end of the slumping, the correction of the remaining errors will be performed using the Ion Beam Figuring technique, a non-contact deterministic technique. To reduce the time spent for the correction it will be necessary to have shape errors on the segments as small as possible. A very preliminary series of experiments already performed on reduced-size segments has shown that it is possible to copy a master shape with high accuracy (few microns PV) and it is very likely that copy accuracies of 1 micron or less are possible. The paper presents in detail the concepts of the proposed process and describes our current efforts aimed at the production of a scaled demonstrative adaptive segment 50 cm in diameter.

  10. A Pan-European and Cross-Discipline Metadata Portal

    Science.gov (United States)

    Widmann, Heinrich; Thiemann, Hannes; Lautenschlager, Michael

    2014-05-01

    In recent years, significant investments have been made to create a pan-European e-infrastructure supporting multiple and diverse research communities. This led to the establishment of the community-driven European Data Infrastructure (EUDAT) project that implements services to tackle the specific challenges of international and interdisciplinary research data management. The EUDAT metadata service B2FIND plays a central role in this context as a repository and a search portal for the diverse metadata collected from heterogeneous sources. For this we built up a comprehensive joint metadata catalogue and an open data portal and offer support for new communities interested in publishing their data within EUDAT. The implemented metadata ingestion workflow consists of three steps. First, the metadata records - provided either by various research communities or via other EUDAT services - are harvested. Afterwards the raw metadata records are converted and mapped to unified key-value dictionaries. The semantic mapping of the non-uniform, community-specific metadata to homogeneous structured datasets is hereby the most subtle and challenging task. Finally, the mapped records are uploaded as datasets to the catalogue and displayed in the portal. The homogenisation of the different community-specific data models and vocabularies enables not only the unified presentation of these datasets as tables of field-value pairs but also faceted, spatial and temporal search in the B2FIND metadata portal. Furthermore the service provides transparent access to the scientific data objects through the given references in the metadata. We present here the functionality and the features of the B2FIND service and give an outlook on further developments.
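    The "convert and map to unified key-value dictionaries" step can be sketched as a table of per-community field mappings applied to each harvested record; the mapping tables and field names below are hypothetical, not the actual B2FIND configuration.

        # Sketch of semantic mapping: community-specific records are converted
        # into a unified key-value dictionary (mappings are hypothetical).
        COMMUNITY_MAPPINGS = {
            "communityA": {"Title": "title", "Creator": "author", "StartDate": "temporal_start"},
            "communityB": {"name": "title",  "pi": "author",      "begin": "temporal_start"},
        }

        def map_record(community, raw_record):
            mapping = COMMUNITY_MAPPINGS[community]
            return {mapping[k]: v for k, v in raw_record.items() if k in mapping}

        print(map_record("communityB",
                         {"name": "Soil cores 2013", "pi": "J. Doe", "begin": "2013-06-01"}))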

  11. Metadata Creation, Management and Search System for your Scientific Data

    Science.gov (United States)

    Devarakonda, R.; Palanisamy, G.

    2012-12-01

    Mercury Search Systems is a set of tools for creating, searching, and retrieving biogeochemical metadata. The Mercury toolset provides orders-of-magnitude improvements in search speed, support for any metadata format, integration with Google Maps for spatial queries, multi-faceted search, search suggestions, support for RSS (Really Simple Syndication) delivery of search results, and enhanced customization to meet the needs of the multiple projects that use Mercury. Mercury's metadata editor provides an easy way to create metadata, and Mercury's search interface provides a single portal to search for data and information contained in disparate data management systems, each of which may use any metadata format including FGDC, ISO-19115, Dublin-Core, Darwin-Core, DIF, ECHO, and EML. Mercury harvests metadata and key data from contributing project servers distributed around the world and builds a centralized index. The search interfaces then allow users to perform a variety of fielded, spatial, and temporal searches across these metadata sources. This centralized repository of metadata with distributed data sources provides extremely fast search results to the user, while allowing data providers to advertise the availability of their data and maintain complete control and ownership of that data. Mercury is being used by more than 14 different projects across 4 federal agencies. It was originally developed for NASA, with continuing development funded by NASA, USGS, and DOE for a consortium of projects. Mercury search won NASA's Earth Science Data Systems Software Reuse Award in 2008. References: R. Devarakonda, G. Palanisamy, B.E. Wilson, and J.M. Green, "Mercury: reusable metadata management data discovery and access system", Earth Science Informatics, vol. 3, no. 1, pp. 87-94, May 2010. R. Devarakonda, G. Palanisamy, J.M. Green, B.E. Wilson, "Data sharing and retrieval using OAI-PMH", Earth Science Informatics, DOI: 10.1007/s12145-010-0073-0, (2010);

  12. Migration of the ATLAS Metadata Interface (AMI) to Web 2.0 and cloud

    CERN Document Server

    Odier, Jerome; The ATLAS collaboration; Fulachier, Jerome; Lambert, Fabian

    2015-01-01

    The ATLAS Metadata Interface (AMI) can be considered to be a mature application because it has existed for at least 10 years. Over the last year, we have been adapting the application to some recently available technologies. The web interface, which previously manipulated XML documents using XSL transformations, has been migrated to Asynchronous JavaScript (AJAX). Web development has been considerably simplified by the development of a framework for AMI based on jQuery and Twitter Bootstrap. Finally, there has been a major upgrade of the Python web service client.

  13. GEO Label Web Services for Dynamic and Effective Communication of Geospatial Metadata Quality

    Science.gov (United States)

    Lush, Victoria; Nüst, Daniel; Bastin, Lucy; Masó, Joan; Lumsden, Jo

    2014-05-01

    We present demonstrations of the GEO label Web services and their integration into a prototype extension of the GEOSS portal (http://scgeoviqua.sapienzaconsulting.com/web/guest/geo_home), the GMU portal (http://gis.csiss.gmu.edu/GADMFS/) and a GeoNetwork catalog application (http://uncertdata.aston.ac.uk:8080/geonetwork/srv/eng/main.home). The GEO label is designed to communicate, and facilitate interrogation of, geospatial quality information with a view to supporting efficient and effective dataset selection on the basis of quality, trustworthiness and fitness for use. The GEO label which we propose was developed and evaluated according to a user-centred design (UCD) approach in order to maximise the likelihood of user acceptance once deployed. The resulting label is dynamically generated from producer metadata in ISO or FGDC format, and incorporates user feedback on dataset usage, ratings and discovered issues, in order to supply a highly informative summary of metadata completeness and quality. The label was easily incorporated into a community portal as part of the GEO Architecture Implementation Programme (AIP-6) and has been successfully integrated into a prototype extension of the GEOSS portal, as well as the popular metadata catalog and editor, GeoNetwork. The design of the GEO label was based on 4 user studies conducted to: (1) elicit initial user requirements; (2) investigate initial user views on the concept of a GEO label and its potential role; (3) evaluate prototype label visualizations; and (4) evaluate and validate physical GEO label prototypes. The results of these studies indicated that users and producers support the concept of a label with drill-down interrogation facility, combining eight geospatial data informational aspects, namely: producer profile, producer comments, lineage information, standards compliance, quality information, user feedback, expert reviews, and citations information. These are delivered as eight facets of a wheel

  14. Generating code adapted for interlinking legacy scalar code and extended vector code

    Science.gov (United States)

    Gschwind, Michael K

    2013-06-04

    Mechanisms for intermixing code are provided. Source code is received for compilation using an extended Application Binary Interface (ABI) that extends a legacy ABI and uses a different register configuration than the legacy ABI. First compiled code is generated based on the source code, the first compiled code comprising code for accommodating the difference in register configurations used by the extended ABI and the legacy ABI. The first compiled code and second compiled code are intermixed to generate intermixed code, the second compiled code being compiled code that uses the legacy ABI. The intermixed code comprises at least one call instruction that is one of a call from the first compiled code to the second compiled code or a call from the second compiled code to the first compiled code. The code for accommodating the difference in register configurations is associated with the at least one call instruction.

  15. Sensorless Adaptive Output Feedback Control of Wind Energy Systems with PMS Generators

    OpenAIRE

    El Magri, Abdelmounime; Giri, Fouad; Besancon, Gildas; Elfadili, Abderrahim; Dugard, Luc; Chaoui, Fatima Zara

    2013-01-01

    This paper addresses the problem of controlling wind energy conversion (WEC) systems involving permanent magnet synchronous generator (PMSG) fed by IGBT-based buck-to-buck rectifier-inverter. The prime control objective is to maximize wind energy extraction which cannot be achieved without letting the wind turbine rotor operate in variable-speed mode. Interestingly, the present study features the achievement of the above energetic goal without resorting to sensors of w...

  16. Entice, engage, endure: adapting evidence-based retention strategies to a new generation of nurses

    OpenAIRE

    Broom, Catherine

    2010-01-01

    Catherine Broom, Catherine Broom Consulting LLC, Lake Forest Park, WA, USA. Abstract: Across the globe, the prolonged and expanding nursing shortage threatens dire consequences unless health care leaders can develop successful strategies to entice and engage a new generation of nurses. Over the past 3 decades investigational work regarding workforce attributes and the impact of organizational structures and processes has helped define professional nursing environments that successfully attract, s...

  17. Knowledge and Metadata Integration for Warehousing Complex Data

    CERN Document Server

    Ralaivao, Jean-Christian

    2008-01-01

    With the ever-growing availability of so-called complex data, especially on the Web, decision-support systems such as data warehouses must store and process data that are not only numerical or symbolic. Warehousing and analyzing such data requires the joint exploitation of metadata and domain-related knowledge, which must thereby be integrated. In this paper, we survey the types of knowledge and metadata that are needed for managing complex data, discuss the issue of knowledge and metadata integration, and propose a CWM-compliant integration solution that we incorporate into an XML complex data warehousing framework we previously designed.

  18. Publishing NASA Metadata as Linked Open Data for Semantic Mashups

    Science.gov (United States)

    Wilson, Brian; Manipon, Gerald; Hua, Hook

    2014-05-01

    Data providers are now publishing more metadata in more interoperable forms, e.g. Atom or RSS 'casts', as Linked Open Data (LOD), or as ISO Metadata records. A major effort on the part of NASA's Earth Science Data and Information System (ESDIS) project is the aggregation of metadata that enables greater data interoperability among scientific data sets regardless of source or application. Both the Earth Observing System (EOS) ClearingHOuse (ECHO) and the Global Change Master Directory (GCMD) repositories contain metadata records for NASA (and other) datasets and provided services. These records contain typical fields for each dataset (or software service) such as the source, creation date, cognizant institution, related access URLs, and domain and variable keywords to enable discovery. Under a NASA ACCESS grant, we demonstrated how to publish the ECHO and GCMD dataset and services metadata as LOD in the RDF format. Both sets of metadata are now queryable at SPARQL endpoints and available for integration into "semantic mashups" in the browser. It is straightforward to reformat sets of XML metadata, including ISO, into simple RDF and then later refine and improve the RDF predicates by reusing known namespaces such as Dublin Core, GeoRSS, etc. All scientific metadata should be part of the LOD world. In addition, we developed an "instant" drill-down and browse interface that provides faceted navigation so that the user can discover and explore the 25,000 datasets and 3000 services. The available facets and the free-text search box appear in the left panel, and the instantly updated results for the dataset search appear in the right panel. The user can constrain the value of a metadata facet simply by clicking on a word (or phrase) in the "word cloud" of values for each facet. The display section for each dataset includes the important metadata fields, a full description of the dataset, potentially some related URLs, and a "search" button that points to an Open
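    Once such metadata sits behind a SPARQL endpoint, it can be queried directly; the sketch below uses the SPARQLWrapper library with a hypothetical endpoint URL and assumes that dataset titles are exposed via the Dublin Core title predicate.

        # Sketch of querying dataset metadata at a SPARQL endpoint (endpoint URL
        # is a placeholder; the dc:title predicate is an assumption).
        from SPARQLWrapper import SPARQLWrapper, JSON

        sparql = SPARQLWrapper("http://example.org/nasa-metadata/sparql")  # hypothetical
        sparql.setQuery("""
            PREFIX dc: <http://purl.org/dc/elements/1.1/>
            SELECT ?dataset ?title WHERE {
                ?dataset dc:title ?title .
                FILTER regex(?title, "aerosol", "i")
            } LIMIT 10
        """)
        sparql.setReturnFormat(JSON)
        for row in sparql.query().convert()["results"]["bindings"]:
            print(row["dataset"]["value"], "-", row["title"]["value"])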

  19. Metadata Evaluation and Improvement: Evolving Analysis and Reporting

    Science.gov (United States)

    Habermann, Ted; Kozimor, John; Gordon, Sean

    2017-01-01

    ESIP Community members create and manage a large collection of environmental datasets that span multiple decades, the entire globe, and many parts of the solar system. Metadata are critical for discovering, accessing, using and understanding these data effectively and ESIP community members have successfully created large collections of metadata describing these data. As part of the White House Big Earth Data Initiative (BEDI), ESDIS has developed a suite of tools for evaluating these metadata in native dialects with respect to recommendations from many organizations. We will describe those tools and demonstrate evolving techniques for sharing results with data providers.

  20. Metadata in Chaos: how researchers tag radio broadcasts

    DEFF Research Database (Denmark)

    Lykke, Marianne; Lund, Haakon; Skov, Mette

    2015-01-01

    is to provide access to broadcasts and provide tools to segment and manage concrete segments of radio broadcasts. Although the assigned metadata are project-specific, they serve as invaluable access points for fellow researchers due to their factual and neutral nature. The researchers particularly stress LARM.fm... apply the metadata scheme in their research work. The study consists of two studies: a) a qualitative study of the subjects and vocabulary of the applied metadata and annotations, and b) five semi-structured interviews about goals for tagging. The findings clearly show that the primary role of LARM.fm...

  1. Design and Implementation of Two-Level Metadata Server in Small-Scale Cluster File System

    Institute of Scientific and Technical Information of China (English)

    LIU Yuling; YU Hongfen; SONG Weiwei

    2006-01-01

    The reliability and high performance of the metadata service are crucial to the storage architecture. A novel design of a two-level metadata server file system (TTMFS) is presented, which provides high reliability and performance. The merits of both centralized management and distributed management are considered simultaneously in our design. In this file system, the advanced-metadata server is responsible for managing directory metadata and the whole namespace, while the double-metadata server is responsible for maintaining file metadata. This paper uses a Markov return model to analyze the reliability of the two-level metadata server. The experimental data indicate that the design can provide high throughput.

  2. Video contents summary using the combination of multiple MPEG-7 metadata

    Science.gov (United States)

    Lee, Hee Kyung; Kim, Cheon S.; Jung, Yong J.; Nam, Je Ho; Kang, Kyeong O.; Ro, Yong M.

    2002-03-01

    We propose a content-based summary generation method using MPEG-7 metadata. In this paper, the important events of a video are defined and shot boundary detection is subsequently carried out. Then, we analyze the video content in each shot with multiple content features using multiple MPEG-7 descriptors. In experiments with a golf video, we combined motion activity, edge histogram, and homogeneous texture descriptors for event detection. Further, the extracted segments and key-frames of each event are described by an XML document. Experimental results show that the proposed method gives reliable summary generation with robust event detection.
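    The descriptor-combination step can be sketched as a weighted sum of normalized per-shot scores compared against a threshold; the weights and threshold below are illustrative, not the values used in the paper.

        # Sketch of combining several per-shot descriptor scores into one event
        # score (weights and threshold are illustrative).
        WEIGHTS = {"motion_activity": 0.5, "edge_histogram": 0.3, "homogeneous_texture": 0.2}
        THRESHOLD = 0.6

        def is_event(shot_scores):
            """shot_scores: dict of normalized descriptor scores in [0, 1]."""
            combined = sum(w * shot_scores.get(name, 0.0) for name, w in WEIGHTS.items())
            return combined, combined >= THRESHOLD

        print(is_event({"motion_activity": 0.9, "edge_histogram": 0.5, "homogeneous_texture": 0.4}))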

  3. Adapting a GIS-Based Multicriteria Decision Analysis Approach for Evaluating New Power Generating Sites

    Energy Technology Data Exchange (ETDEWEB)

    Omitaomu, Olufemi A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Blevins, Brandon R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Southern California, Edison, CA (United States); Jochem, Warren C. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Univ. of Colorado, Boulder, CO (United States); Mays, Gary T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Belles, Randy [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Hadley, Stanton W. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Harrison, Thomas J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Bhaduri, Budhendra L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Neish, Bradley S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Rose, Amy N. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2012-01-01

    There is a growing need to site new power generating plants that use cleaner energy sources due to increased regulations on air and water pollution and a sociopolitical desire to develop more clean energy sources. To assist utility and energy companies as well as policy-makers in evaluating potential areas for siting new plants in the contiguous United States, a geographic information system (GIS)-based multicriteria decision analysis approach is presented in this paper. The presented approach has led to the development of the Oak Ridge Siting Analysis for power Generation Expansion (OR-SAGE) tool. The tool takes inputs such as population growth, water availability, environmental indicators, and tectonic and geological hazards to provide an in-depth analysis for siting options. To the utility and energy companies, the tool can quickly and effectively provide feedback on land suitability based on technology specific inputs. However, the tool does not replace the required detailed evaluation of candidate sites. To the policy-makers, the tool provides the ability to analyze the impacts of future energy technology while balancing competing resource use.

  4. An atmospheric turbulence generator for dynamic tests with LINC-NIRVANA's adaptive optics system

    Science.gov (United States)

    Meschke, D.; Bizenberger, P.; Gaessler, W.; Zhang, X.; Mohr, L.; Baumeister, H.; Diolaiti, E.

    2010-07-01

    LINC-NIRVANA[1] (LN) is an instrument for the Large Binocular Telescope[2] (LBT). Its purpose is to combine the light coming from the two primary mirrors in a Fizeau-type interferometer. In order to compensate for turbulence-induced dynamic aberrations, the layer-oriented adaptive optics system of LN[3] consists of two major subsystems for each side: the Ground-Layer Wavefront sensor (GLWS) and the Mid- and High-Layer Wavefront sensor (MHLWS). The MHLWS is currently set up in a laboratory at the Max-Planck-Institute for Astronomy in Heidelberg. To test the multi-conjugate AO with multiple simulated stars in the laboratory and to develop the necessary control software, a dedicated light source is needed. For this reason, we designed an optical system, operating in visible as well as in infrared light, which imitates the telescope's optical train (f-ratio, pupil position and size, field curvature). By inserting rotating, surface-etched glass phase screens, artificial aberrations corresponding to the atmospheric turbulence are introduced. In addition, different turbulence altitudes can be simulated depending on the position of these screens along the optical axis. In this way, it is possible to comprehensively test the complete system, including electronics and software, in the laboratory before integration into the final LINC-NIRVANA setup. Combined with an atmospheric piston simulator, this effect can also be taken into account. Since we are building two identical sets, it is possible to feed the complete instrument with light for the interferometric combination during the assembly phase in the integration laboratory.

  5. Real-time auto-adaptive margin generation for MLC-tracked radiotherapy

    Science.gov (United States)

    Glitzner, M.; Fast, M. F.; de Senneville, B. Denis; Nill, S.; Oelfke, U.; Lagendijk, J. J. W.; Raaymakers, B. W.; Crijns, S. P. M.

    2017-01-01

    In radiotherapy, abdominal and thoracic sites are candidates for performing motion tracking. With real-time control it is possible to adjust the multileaf collimator (MLC) position to the target position. However, positions are not perfectly matched, and position errors arise from system delays and the complicated response of the electromechanical MLC system. Although it is possible to compensate parts of these errors by using predictors, residual errors remain and need to be compensated to retain target coverage. This work presents a method to statistically describe tracking errors and to automatically derive a patient-specific, per-segment margin to compensate the arising underdosage on-line, i.e. during plan delivery. The statistics of the geometric error between intended and actual machine position are derived using kernel density estimators. Subsequently, a margin is calculated on-line according to a selected coverage parameter, which determines the amount of accepted underdosage. The margin is then applied onto the actual segment to accommodate the positioning errors in the enlarged segment. The proof-of-concept was tested in an on-line tracking experiment and showed the ability to recover underdosages for two test cases, increasing V90% in the underdosed area by about 47% and 41%, respectively. The dose model used was able to predict the loss of dose due to tracking errors and could be used to infer the necessary margins. The implementation had a running time of 23 ms, which is compatible with the real-time requirements of MLC tracking systems. The auto-adaptivity to machine and patient characteristics makes the technique a generic yet intuitive candidate to avoid underdosages due to MLC tracking errors.
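    A schematic reading of the margin derivation is sketched below: estimate the error density with a kernel density estimator and grow a symmetric margin until its integrated probability reaches the selected coverage. The error samples, coverage value, and search step are illustrative, and the real method works per segment rather than on a single pooled 1-D error distribution.

        # Schematic sketch: derive a margin from tracking-error statistics via a
        # kernel density estimate and a coverage parameter (values illustrative).
        import numpy as np
        from scipy.stats import gaussian_kde

        def margin_for_coverage(errors_mm, coverage=0.95, step=0.05, max_margin=20.0):
            kde = gaussian_kde(errors_mm)
            margin = 0.0
            while margin < max_margin:
                if kde.integrate_box_1d(-margin, margin) >= coverage:
                    return margin
                margin += step
            return max_margin

        rng = np.random.default_rng(0)
        errors = rng.normal(loc=0.3, scale=1.2, size=2000)  # simulated position errors [mm]
        print(margin_for_coverage(errors))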

  6. Metadata and Tools for Integration and Preservation of Cultural Heritage 3D Information

    Directory of Open Access Journals (Sweden)

    Achille Felicetti

    2011-12-01

    Full Text Available In this paper we investigate many of the various storage, portability and interoperability issues arising among archaeologists and cultural heritage people when dealing with 3D technologies. On the one side, the available digital repositories often appear unable to guarantee affordable features in the management of 3D models and their metadata; on the other side, most of the available data formats for 3D encoding do not seem satisfactory for the portability required nowadays by 3D information across different systems. We propose a set of possible solutions to show how integration can be achieved through the use of well-known and widely accepted standards for data encoding and data storage. Using a set of 3D models acquired during various archaeological campaigns and a number of open source tools, we have implemented a straightforward encoding process to generate meaningful semantic data and metadata. We will also present the interoperability process carried out to integrate the encoded 3D models and the geographic features produced by the archaeologists. Finally we will report the preliminary (rather encouraging) development of a semantically enabled and persistent digital repository, where 3D models (but also any kind of digital data) and metadata can easily be stored, retrieved and shared with the content of other digital archives.

  7. Metadata and Metacognition: How can we stimulate reflection for learning?

    NARCIS (Netherlands)

    Specht, Marcus

    2012-01-01

    Specht, M. (2012, 12 September). Metadata and Metacognition: How can we stimulate reflection for learning? Invited presentation given at the seminar on awareness and reflection in learning at the University of Leuven, Leuven, Belgium.

  8. A framework for basic administrative metadata in digital libraries

    Institute of Scientific and Technical Information of China (English)

    ZHENG; Qiaoying; WANG; Shaoping

    2008-01-01

    Administrative metadata means the expansion of metadata research to the administrative level of resource development. Based on the basic administrative sections in the information resource lifecycle (IRL), the framework for basic administrative metadata (FBAM) is helpful in constructing open, interoperable platforms for the acquisition, processing and services of information resources in digital libraries. It facilitates seamless communication, cooperative construction and management, and the sharing of digital resources. The formulation of FBAM follows the principles of modularity and openness that promote interoperability in resource management. It also adopts the structured methodology of information system design, with which the FBAM data model is developed in conformity with … and PREMIS. The capabilities of FBAM are driven by a metadata repository with administrative information that is contained in FBAM records.

  9. Toward element-level interoperability in bibliographic metadata

    Directory of Open Access Journals (Sweden)

    Eric Childress

    2008-03-01

    Full Text Available This paper discusses an approach and set of tools for translating bibliographic metadata from one format to another. A computational model is proposed to formalize the notion of a 'crosswalk'. The translation process separates semantics from syntax, and specifies a crosswalk as machine-executable translation files which are focused on assertions of element equivalence and are closely associated with the underlying intellectual analysis of metadata translation. A data model developed by the authors, called Morfrom, serves as an internal generic metadata format. Translation logic is written in an XML scripting language designed by the authors called the Semantic Equivalence Expression Language (Seel). These techniques have been built into an OCLC software toolkit to manage large and diverse collections of metadata records, called the Crosswalk Web Service.
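    The notion of a crosswalk as machine-executable assertions of element equivalence can be sketched as a simple mapping applied to a record; the element names below are hypothetical, and this is not the Seel language or the Morfrom model, just the general idea.

        # Sketch of a crosswalk as element-equivalence assertions applied to a
        # record (element names are hypothetical; not Seel or Morfrom).
        CROSSWALK_DC_TO_MARC = {   # source element -> target element
            "title":   "245a",
            "creator": "100a",
            "date":    "260c",
        }

        def translate(record, crosswalk):
            return {crosswalk[k]: v for k, v in record.items() if k in crosswalk}

        dc_record = {"title": "Metadata crosswalks", "creator": "Doe, J.", "date": "2008"}
        print(translate(dc_record, CROSSWALK_DC_TO_MARC))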

  10. Metadata and Metacognition: How can we stimulate reflection for learning?

    NARCIS (Netherlands)

    Specht, Marcus

    2012-01-01

    Specht, M. (2012, 12 September). Metadata and Metacognition: How can we stimulate reflection for learning? Invited presentation given at the seminar on awareness and reflection in learning at the University of Leuven, Leuven, Belgium.

  11. Distributed metadata in a high performance computing environment

    Energy Technology Data Exchange (ETDEWEB)

    Bent, John M.; Faibish, Sorin; Zhang, Zhenhua; Liu, Xuezhao; Tang, Haiying

    2017-07-11

    A computer-executable method, system, and computer program product for managing metadata in a distributed storage system, wherein the distributed storage system includes one or more burst buffers enabled to operate with a distributed key-value store, the computer-executable method, system, and computer program product comprising receiving a request for metadata associated with a block of data stored in a first burst buffer of the one or more burst buffers in the distributed storage system, wherein the metadata is associated with a key-value, determining which of the one or more burst buffers stores the requested metadata, and upon determination that a first burst buffer of the one or more burst buffers stores the requested metadata, locating the key-value in a portion of the distributed key-value store accessible from the first burst buffer.
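    The "determining which burst buffer stores the requested metadata" step could, for example, be realized by hashing the key over the set of buffers; the sketch below uses that simple scheme purely as an assumption, since the abstract quoted here does not specify one.

        # Sketch of locating the burst buffer responsible for a metadata key by
        # hashing the key over the available buffers (a stand-in scheme).
        import hashlib

        BURST_BUFFERS = ["bb-node-0", "bb-node-1", "bb-node-2", "bb-node-3"]

        def owning_buffer(key):
            digest = int(hashlib.sha256(key.encode()).hexdigest(), 16)
            return BURST_BUFFERS[digest % len(BURST_BUFFERS)]

        def get_metadata(key, stores):
            node = owning_buffer(key)     # determine which buffer holds the key
            return stores[node].get(key)  # look up the key-value in that buffer

        print(owning_buffer("/scratch/run42/block-007"))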

  12. USGS 24k Digital Raster Graphic (DRG) Metadata

    Data.gov (United States)

    Minnesota Department of Natural Resources — Metadata for the scanned USGS 24k Topograpic Map Series (also known as 24k Digital Raster Graphic). Each scanned map is represented by a polygon in the layer and the...

  13. Requirements for multimedia metadata schemes in surveillance applications for security

    NARCIS (Netherlands)

    Rest, J.; Grootjen, F.A.; Grootjen, M.; Wijn, R.; Aarts, O.; Roelofs, M.L.; Burghouts, G.J.; Bouma, H.; Alic, L.; Kraaij, W.

    2014-01-01

    Surveillance for security requires communication between systems and humans, involves behavioural and multimedia research, and demands an objective benchmarking for the performance of system components. Metadata representation schemes are extremely important to facilitate (system) interoperability a

  14. Large geospatial images discovery: metadata model and technological framework

    Directory of Open Access Journals (Sweden)

    Lukáš Brůha

    2015-12-01

    Full Text Available The advancements in geospatial web technology triggered efforts for disclosure of valuable resources of historical collections. This paper focuses on the role of spatial data infrastructures (SDI) in such efforts. The work describes the interplay between SDI technologies and potential use cases in libraries such as cartographic heritage. The metadata model is introduced to link up the sources from these two distinct fields. To enhance the data search capabilities, the work focuses on the representation of the content-based metadata of raster images, which is the crucial prerequisite to target the search in a more effective way. The architecture of the prototype system for automatic raster data processing, storage, analysis and distribution is introduced. The architecture responds to the characteristics of input datasets, namely to the continuous flow of very large raster data and related metadata. Proposed solutions are illustrated on the case study of cartometric analysis of digitised early maps and related metadata encoding.

  15. Motion Planning Using an Impact-Based Hybrid Control for Trajectory Generation in Adaptive Walking

    Directory of Open Access Journals (Sweden)

    Umar Asif

    2011-09-01

    Full Text Available This paper aims to address a major drawback of walking robots, i.e., their inability to react to environmental disturbances while navigating natural rough terrain. This problem is reduced here by using hybrid force-position control for trajectory generation, taking impact dynamics into consideration, which compensates for stability variations and thus helps the robot react stably in the face of environmental disturbances. As a consequence, the proposed impact-based hybrid control achieves better and more stable motion planning than conventional position-based control algorithms. Dynamic simulations and real-world outdoor experiments performed on a six-legged hexapod robot show a relevant improvement in robot locomotion.

  16. Building a High Performance Metadata Broker using Clojure, NoSQL and Message Queues

    Science.gov (United States)

    Truslove, I.; Reed, S.

    2013-12-01

    In practice, Earth and Space Science Informatics often relies on getting more done with less: fewer hardware resources, less IT staff, fewer lines of code. As a capacity-building exercise focused on rapid development of high-performance geoinformatics software, the National Snow and Ice Data Center (NSIDC) built a prototype metadata brokering system using a new JVM language, modern database engines and virtualized or cloud computing resources. The metadata brokering system was developed with the overarching goals of (i) demonstrating a technically viable product with as little development effort as possible, (ii) using very new yet very popular tools and technologies in order to get the most value from the least legacy-encumbered code bases, and (iii) being a high-performance system by using scalable subcomponents, and implementation patterns typically used in web architectures. We implemented the system using the Clojure programming language (an interactive, dynamic, Lisp-like JVM language), Redis (a fast in-memory key-value store) as both the data store for original XML metadata content and as the provider for the message queueing service, and ElasticSearch for its search and indexing capabilities to generate search results. On evaluating the results of the prototyping process, we believe that the technical choices did in fact allow us to do more for less, due to the expressive nature of the Clojure programming language and its easy interoperability with Java libraries, and the successful reuse or re-application of high performance products or designs. This presentation will describe the architecture of the metadata brokering system, cover the tools and techniques used, and describe lessons learned, conclusions, and potential next steps.

  17. Metadata and Data Quality Problems in the Digital Library

    OpenAIRE

    Beall, Jeffrey

    2006-01-01

    This paper describes the main types of data quality errors that occur in digital libraries, both in full-text objects and in metadata. Studying these errors is important because they can block access to online documents and because digital libraries should eliminate errors where possible. Some types of common errors include typographical errors, scanning and data conversion errors, and find and replace errors. Errors in metadata can also hinder access in digital libraries. The paper also disc...

  18. Transforming and enhancing metadata for enduser discovery: a case study

    OpenAIRE

    Edward M. Corrado; Rachel Jaffe

    2014-01-01

    This paper describes the process developed by Binghamton University Libraries to extract embedded metadata from digital photographs and transform it into descriptive Dublin Core metadata for use in the Libraries’ digital preservation system. In 2011, Binghamton University Libraries implemented the Rosetta digital preservation system (from Ex Libris) to preserve digitized and born-digital materials. At the same time, the Libraries implemented the Primo discovery tool (from Ex Libris) to br...

  19. Massive Meta-Data: A New Data Mining Resource

    Science.gov (United States)

    Hugo, W.

    2012-04-01

    Worldwide standardisation, and interoperability initiatives such as GBIF, Open Access and GEOSS (to name but three of many) have led to the emergence of interlinked and overlapping meta-data repositories containing, potentially, tens of millions of entries collectively. This forms the backbone of an emerging global scientific data infrastructure that is both driven by changes in the way we work, and opens up new possibilities in management, research, and collaboration. Several initiatives are concentrated on building a generalised, shared, easily available, scalable, and indefinitely preserved scientific data infrastructure to aid future scientific work. This paper deals with the parallel aspect of the meta-data that will be used to support the global scientific data infrastructure. There are obvious practical issues (semantic interoperability and speed of discovery being the most important), but we are here more concerned with some of the less obvious conceptual questions and opportunities: 1. Can we use meta-data to assess, pinpoint, and reduce duplication of meta-data? 2. Can we use it to reduce overlaps of mandates in data portals, research collaborations, and research networks? 3. What possibilities exist for mining the relationships that exist implicitly in very large meta-data collections? 4. Is it possible to define an explicit 'scientific data infrastructure' as a complex, multi-relational network database, that can become self-maintaining and self-organising in true Web 2.0 and 'social networking' fashion? The paper provides a blueprint for a new approach to massive meta-data collections, and how this can be processed using established analysis techniques to answer the questions posed. It assesses the practical implications of working with standard meta-data definitions (such as ISO 19115, Dublin Core, and EML) in a meta-data mining context, and makes recommendations in respect of extension to support self-organising, semantically oriented 'networks of

  20. Real World Data in Adaptive Biomedical Innovation: A Framework for Generating Evidence Fit for Decision-Making.

    Science.gov (United States)

    Schneeweiss, S; Eichler, H-G; Garcia-Altes, A; Chinn, C; Eggimann, A-V; Garner, S; Goettsch, W; Lim, R; Löbker, W; Martin, D; Müller, T; Park, B J; Platt, R; Priddy, S; Ruhl, M; Spooner, A; Vannieuwenhuyse, B; Willke, R J

    2016-12-01

    Analyses of healthcare databases (claims, electronic health records [EHRs]) are useful supplements to clinical trials for generating evidence on the effectiveness, harm, use, and value of medical products in routine care. A constant stream of data from the routine operation of modern healthcare systems, which can be analyzed in rapid cycles, enables incremental evidence development to support accelerated and appropriate access to innovative medicines. Evidentiary needs by regulators, Health Technology Assessment, payers, clinicians, and patients after marketing authorization comprise (1) monitoring of medication performance in routine care, including the materialized effectiveness, harm, and value; (2) identifying new patient strata with added value or unacceptable harms; and (3) monitoring targeted utilization. Adaptive biomedical innovation (ABI) with rapid cycle database analytics is successfully enabled if evidence is meaningful, valid, expedited, and transparent. These principles will bring rigor and credibility to current efforts to increase research efficiency while upholding evidentiary standards required for effective decision-making in healthcare.

  1. Forensic devices for activism: Metadata tracking and public proof

    Directory of Open Access Journals (Sweden)

    Lonneke van der Velden

    2015-10-01

    Full Text Available The central topic of this paper is a mobile phone application, ‘InformaCam’, which turns metadata from a surveillance risk into a method for the production of public proof. InformaCam allows one to manage and delete metadata from images and videos in order to diminish surveillance risks related to online tracking. Furthermore, it structures and stores the metadata in such a way that the documentary material becomes better accommodated to evidentiary settings, if needed. In this paper I propose InformaCam should be interpreted as a ‘forensic device’. By using the conceptualization of forensics and work on socio-technical devices the paper discusses how InformaCam, through a range of interventions, rearranges metadata into a technology of evidence. InformaCam explicitly recognizes mobile phones as context aware, uses their sensors, and structures metadata in order to facilitate data analysis after images are captured. Through these modifications it invents a form of ‘sensory data forensics'. By treating data in this particular way, surveillance resistance does more than seeking awareness. It becomes engaged with investigatory practices. Considering the extent by which states conduct metadata surveillance, the project can be seen as a timely response to the unequal distribution of power over data.

  2. Forensic devices for activism: Metadata tracking and public proof

    Directory of Open Access Journals (Sweden)

    Lonneke van der Velden

    2015-10-01

    Full Text Available The central topic of this paper is a mobile phone application, ‘InformaCam’, which turns metadata from a surveillance risk into a method for the production of public proof. InformaCam allows one to manage and delete metadata from images and videos in order to diminish surveillance risks related to online tracking. Furthermore, it structures and stores the metadata in such a way that the documentary material becomes better accommodated to evidentiary settings, if needed. In this paper I propose InformaCam should be interpreted as a ‘forensic device’. By using the conceptualization of forensics and work on socio-technical devices the paper discusses how InformaCam, through a range of interventions, rearranges metadata into a technology of evidence. InformaCam explicitly recognizes mobile phones as context aware, uses their sensors, and structures metadata in order to facilitate data analysis after images are captured. Through these modifications it invents a form of ‘sensory data forensics'. By treating data in this particular way, surveillance resistance does more than seeking awareness. It becomes engaged with investigatory practices. Considering the extent by which states conduct metadata surveillance, the project can be seen as a timely response to the unequal distribution of power over data.

  3. Organizing Internet Resources and the Development of Metadata

    Directory of Open Access Journals (Sweden)

    Hsueh-Hua Chen

    1997-12-01

    Full Text Available There are many differences between information resources on the Internet and those in traditional libraries. To retrieve and utilize digital information effectively in the era of networked information, libraries have to explore how Internet resources are organized. Search engines and subject gateway services are two common ways to retrieve and utilize Internet resources. Search engines rely on robots to extract metadata automatically, which makes such metadata cheap to create. Subject gateway services add value through intellectual effort and are correspondingly expensive. Neither approach is complete, as users are interested in resources at various levels of granularity and aggregation that may not be served by either of these two simplified approaches. In order to use Internet resources effectively, the establishment of metadata is very important. This article describes the definitions and functions of metadata, the variety of metadata creators and sources, the different formats of metadata, the levels of structure and fullness of metadata, and finally the responses and reactions from people in the library field.[Article content in Chinese

  4. Surviving the Transition from FGDC to ISO Metadata Standards

    Science.gov (United States)

    Fox, C. G.; Milan, A.; Sylvester, D.; Habermann, T.; Kozimor, J.; Froehlich, D.

    2008-12-01

    The NOAA Metadata Manager and Repository (NMMR) has served a well-established group of data managers at NOAA's National Data Centers for over a decade. It provides a web interface for managing FGDC-compliant metadata and publishing that metadata to several large data discovery systems (GeoSpatial One-Stop, NASA's Global Change Master Directory, the Comprehensive Large-Array data Stewardship System, and FirstGov). The Data Centers are now faced with migrating these metadata to the new international metadata standards (ISO 19115, 19115-2, …). We would like to accomplish this migration while minimizing disruption to the current users and supporting significant new capabilities of the ISO standards. Our current approach involves relational ISO views on top of the existing XML database to convert FGDC content into ISO without changing the data manager interface. These views are the foundation for ISO-compliant XML metadata access via REST-like web services. Additionally, new database tables provide information required by ISO that is not included in the FGDC standard. This approach allows us to support the new standard without disrupting the current system.

  5. A Metadata Schema for Geospatial Resource Discovery Use Cases

    Directory of Open Access Journals (Sweden)

    Darren Hardy

    2014-07-01

    Full Text Available We introduce a metadata schema that focuses on GIS discovery use cases for patrons in a research library setting. Text search, faceted refinement, and spatial search and relevancy are among GeoBlacklight's primary use cases for federated geospatial holdings. The schema supports a variety of GIS data types and enables contextual, collection-oriented discovery applications as well as traditional portal applications. One key limitation of GIS resource discovery is the general lack of normative metadata practices, which has led to a proliferation of metadata schemas and duplicate records. The ISO 19115/19139 and FGDC standards specify metadata formats, but are intricate, lengthy, and not focused on discovery. Moreover, they require sophisticated authoring environments and cataloging expertise. Geographic metadata standards target preservation and quality measure use cases, but they do not provide for simple inter-institutional sharing of metadata for discovery use cases. To this end, our schema reuses elements from Dublin Core and GeoRSS to leverage their normative semantics, community best practices, open-source software implementations, and extensive examples already deployed in discovery contexts such as web search and mapping. Finally, we discuss a Solr implementation of the schema using a "geo" extension to MODS.

  6. Using Metadata to Build Geographic Information Sharing Environment on Internet

    Directory of Open Access Journals (Sweden)

    Chih-hong Sun

    1999-12-01

    Full Text Available The Internet provides a convenient environment for sharing geographic information. Web GIS (Geographic Information System) even provides users with direct access to geographic databases through the Internet. However, the complexity of geographic data makes it difficult for users to understand the real content and the limitations of geographic information. In some cases, users may misuse the geographic data and make wrong decisions. Meanwhile, geographic data are distributed across various government agencies, academic institutes, and private organizations, which makes it even more difficult for users to fully understand the content of these complex data. To overcome these difficulties, this research uses metadata as a guiding mechanism for users to fully understand the content and the limitations of geographic data. We introduce three metadata standards commonly used for geographic data and metadata authoring tools available in the US. We also review the current development of geographic metadata standards in Taiwan. Two metadata authoring tools are developed in this research, which will enable users to build their own geographic metadata easily.[Article content in Chinese

  7. The fluid dynamic approach to equidistribution methods for grid generation and adaptation

    Energy Technology Data Exchange (ETDEWEB)

    Delzanno, Gian Luca [Los Alamos National Laboratory; Finn, John M [Los Alamos National Laboratory

    2009-01-01

    The equidistribution methods based on L_p Monge-Kantorovich optimization [Finn and Delzanno, submitted to SISC, 2009] and on the deformation method [Moser, 1965; Dacorogna and Moser, 1990; Liao and Anderson, 1992] are analyzed primarily in the context of grid generation. It is shown that the first class of methods can be obtained from a fluid dynamic formulation based on time-dependent equations for the mass density and the momentum density, arising from a variational principle. In this context, deformation methods arise from a fluid formulation by making a specific assumption on the time evolution of the density (but with some degree of freedom for the momentum density). In general, deformation methods do not arise from a variational principle. However, it is possible to prescribe an optimal deformation method, related to L_1 Monge-Kantorovich optimization, by making a further assumption on the momentum density. Some applications of the L_p fluid dynamic formulation to imaging are also explored.
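
    For readers unfamiliar with the equidistribution idea, one common (generic) statement of the grid-adaptation problem is sketched below; the notation is illustrative and not taken verbatim from the cited papers.

    ```latex
    % Illustrative L_p Monge-Kantorovich grid adaptation: choose the map x(\xi)
    % that minimizes a transport cost while equidistributing a monitor density.
    % Generic notation, not copied from the cited papers.
    \begin{equation}
      \min_{x(\xi)} \int_{\Omega} \lvert x(\xi) - \xi \rvert^{p} \,\rho(\xi)\, d\xi
      \qquad \text{subject to} \qquad
      \rho\bigl(x(\xi)\bigr)\,\det\!\left(\frac{\partial x}{\partial \xi}\right) = \sigma ,
    \end{equation}
    ```

    where σ is a normalization constant chosen so that grid cells equidistribute the monitor density ρ.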

  8. The Adaptation of the Immigrant Second Generation in America: Theoretical Overview and Recent Evidence.

    Science.gov (United States)

    Portes, Alejandro; Fernández-Kelly, Patricia; Haller, William

    2009-01-01

    This paper summarises a research program on the new immigrant second generation initiated in the early 1990s and completed in 2006. The four field waves of the Children of Immigrants Longitudinal Study (CILS) are described and the main theoretical models emerging from it are presented and graphically summarised. After considering critical views of this theory, we present the most recent results from this longitudinal research program in the form of quantitative models predicting downward assimilation in early adulthood and qualitative interviews identifying ways to escape it by disadvantaged children of immigrants. Quantitative results strongly support the predicted effects of exogenous variables identified by segmented assimilation theory and identify the intervening factors during adolescence that mediate their influence on adult outcomes. Qualitative evidence gathered during the last stage of the study points to three factors that can lead to exceptional educational achievement among disadvantaged youths. All three indicate the positive influence of selective acculturation. Implications of these findings for theory and policy are discussed.

  9. Automated metadata--final project report

    Energy Technology Data Exchange (ETDEWEB)

    Schissel, David [General Atomics, San Diego, CA (United States)

    2016-04-01

    This report summarizes the work of the Automated Metadata, Provenance Cataloging, and Navigable Interfaces: Ensuring the Usefulness of Extreme-Scale Data Project (MPO Project) funded by the United States Department of Energy (DOE), Offices of Advanced Scientific Computing Research and Fusion Energy Sciences. Initially funded for three years starting in 2012, it was extended for 6 months with additional funding. The project was a collaboration between scientists at General Atomics, Lawrence Berkeley National Laboratory (LBNL), and Massachusetts Institute of Technology (MIT). The group leveraged existing computer science technology where possible, and extended or created new capabilities where required. The MPO project was able to successfully create a suite of software tools that can be used by a scientific community to automatically document their scientific workflows. These tools were integrated into workflows for fusion energy and climate research illustrating the general applicability of the project’s toolkit. Feedback was very positive on the project’s toolkit and the value of such automatic workflow documentation to the scientific endeavor.

  10. Automated metadata-final project report

    Energy Technology Data Exchange (ETDEWEB)

    Schissel, David [General Atomics, San Diego, CA (United States)

    2016-04-01

    This report summarizes the work of the Automated Metadata, Provenance Cataloging, and Navigable Interfaces: Ensuring the Usefulness of Extreme-Scale Data Project (MPO Project) funded by the United States Department of Energy (DOE), Offices of Advanced Scientific Computing Research and Fusion Energy Sciences. Initially funded for three years starting in 2012, it was extended for 6 months with additional funding. The project was a collaboration between scientists at General Atomics, Lawrence Berkeley National Laboratory (LBNL), and Massachusetts Institute of Technology (MIT). The group leveraged existing computer science technology where possible and extended or created new capabilities where required. The MPO project was able to successfully create a suite of software tools that can be used by a scientific community to automatically document their scientific workflows. These tools were integrated into workflows for fusion energy and climate research illustrating the general applicability of the project’s toolkit. Feedback was very positive on the project’s toolkit and the value of such automatic workflow documentation to the scientific endeavor.

  11. Better Living Through Metadata: Examining Archive Usage

    Science.gov (United States)

    Becker, G.; Winkelman, S.; Rots, A.

    2013-10-01

    The primary purpose of an observatory's archive is to provide access to the data through various interfaces. User interactions with the archive are recorded in server logs, which can be used to answer basic questions like: Who has downloaded dataset X? When did she do this? Which tools did she use? The answers to questions like these fill in patterns of data access (e.g., how many times dataset X has been downloaded in the past three years). Analysis of server logs provides metrics of archive usage and provides feedback on interface use which can be used to guide future interface development. The Chandra X-ray Observatory is fortunate in that a database to track data access and downloads has been continuously recording such transactions for years; however, it is overdue for an update. We will detail changes we hope to effect and the differences the changes may make to our usage metadata picture. We plan to gather more information about the geographic location of users without compromising privacy; create improved archive statistics; and track and assess the impact of web “crawlers” and other scripted access methods on the archive. With the improvements to our download tracking we hope to gain a better understanding of the dissemination of Chandra's data; how effectively it is being done; and perhaps discover ideas for new services.

  12. Educational Rationale Metadata for Learning Objects

    Directory of Open Access Journals (Sweden)

    Tom Carey

    2002-10-01

    Full Text Available Instructors searching for learning objects in online repositories will be guided in their choices by the content of the object, the characteristics of the learners addressed, and the learning process embodied in the object. We report here on a feasibility study for metadata to record process-oriented information about instructional approaches for learning objects, through a set of Educational Rationale [ER] tags which would allow authors to describe the critical elements in their design intent. The prototype ER tags describe activities which have been demonstrated to be of value in learning, and authors select the activities whose support was critical in their design decisions. The prototype ER tag set consists of descriptors of the instructional approach used in the design, plus optional sub-elements for Comments, Importance and Features which implement the design intent. The tag set was tested by creators of four learning object modules, three intended for post-secondary learners and one for K-12 students and their families. In each case the creators reported that the ER tag set allowed them to express succinctly the key instructional approaches embedded in their designs. These results confirmed the overall feasibility of the ER tag approach as a means of capturing design intent from creators of learning objects. Much work remains to be done before a usable ER tag set could be specified, including evaluating the impact of ER tags during design to improve instructional quality of learning objects.

  13. Enhancing performance of LCoS-SLM as adaptive optics by using computer-generated holograms modulation software

    Science.gov (United States)

    Tsai, Chun-Wei; Lyu, Bo-Han; Wang, Chen; Hung, Cheng-Chieh

    2017-05-01

    We have developed multi-function, easy-to-use modulation software based on the LabVIEW system. The software provides four main functions: computer-generated hologram (CGH) generation, CGH reconstruction, image trimming, and special phase distribution. Building on this CGH modulation software, we can enhance the performance of a liquid crystal on silicon spatial light modulator (LCoS-SLM) so that it behaves much like a diffractive optical element (DOE) and can be used in various adaptive optics (AO) applications. Through the special phase distribution function, we apply the LCoS-SLM with the CGH modulation software to AO technology such as optical microscope systems. When the LCoS-SLM panel is integrated into an optical microscope system, it can be placed in the illumination path or in the image-forming path. In either case, the LCoS-SLM provides a program-controllable liquid crystal array for the microscope: it dynamically changes the amplitude or phase of light and gives the obvious advantage, "Flexibility", to the system

  14. Differential effects of IL-15 on the generation, maintenance and cytotoxic potential of adaptive cellular responses induced by DNA vaccination.

    Science.gov (United States)

    Li, Jinyao; Valentin, Antonio; Ng, Sinnie; Beach, Rachel Kelly; Alicea, Candido; Bergamaschi, Cristina; Felber, Barbara K; Pavlakis, George N

    2015-02-25

    IL-15 is an important cytokine for the regulation of lymphocyte homeostasis. However, the role of IL-15 in the generation, maintenance and cytotoxic potential of antigen specific T cells is not fully understood. Because the route of antigenic delivery and the vaccine modality could influence the IL-15 requirement for mounting and preserving cytotoxic T cell responses, we have investigated the immunogenicity of DNA-based vaccines in IL-15 KO mice. DNA vaccination with SIV Gag induced antigen-specific CD4(+) and CD8(+) T cells in the absence of IL-15. However, the absolute number of antigen-specific CD8(+) T cells was decreased in IL-15 KO mice compared to WT animals, suggesting that IL-15 is important for the generation of maximal number of antigen-specific CD8(+) T cells. Interestingly, antigen-specific memory CD8 cells could be efficiently boosted 8 months after the final vaccination in both WT and KO strains of mice, suggesting that the maintenance of antigen-specific long-term memory T cells induced by DNA vaccination is comparable in the absence and presence of IL-15. Importantly, boosting by DNA 8-months after vaccination revealed severely reduced granzyme B content in CD8(+) T cells of IL-15 KO mice compared to WT mice. This suggests that the cytotoxic potential of the long-term memory CD8(+) T cells is impaired. These results suggest that IL-15 is not essential for the generation and maintenance of adaptive cellular responses upon DNA vaccination, but it is critical for the preservation of maximal numbers and for the activity of cytotoxic CD8(+) T cells. Published by Elsevier Ltd.

  15. Using XML to encode TMA DES metadata

    Directory of Open Access Journals (Sweden)

    Oliver Lyttleton

    2011-01-01

    Full Text Available Background: The Tissue Microarray Data Exchange Specification (TMA DES is an XML specification for encoding TMA experiment data. While TMA DES data is encoded in XML, the files that describe its syntax, structure, and semantics are not. The DTD format is used to describe the syntax and structure of TMA DES, and the ISO 11179 format is used to define the semantics of TMA DES. However, XML Schema can be used in place of DTDs, and another XML encoded format, RDF, can be used in place of ISO 11179. Encoding all TMA DES data and metadata in XML would simplify the development and usage of programs which validate and parse TMA DES data. XML Schema has advantages over DTDs such as support for data types, and a more powerful means of specifying constraints on data values. An advantage of RDF encoded in XML over ISO 11179 is that XML defines rules for encoding data, whereas ISO 11179 does not. Materials and Methods: We created an XML Schema version of the TMA DES DTD. We wrote a program that converted ISO 11179 definitions to RDF encoded in XML, and used it to convert the TMA DES ISO 11179 definitions to RDF. Results: We validated a sample TMA DES XML file that was supplied with the publication that originally specified TMA DES using our XML Schema. We successfully validated the RDF produced by our ISO 11179 converter with the W3C RDF validation service. Conclusions: All TMA DES data could be encoded using XML, which simplifies its processing. XML Schema allows datatypes and valid value ranges to be specified for CDEs, which enables a wider range of error checking to be performed using XML Schemas than could be performed using DTDs.

  16. Using XML to encode TMA DES metadata.

    Science.gov (United States)

    Lyttleton, Oliver; Wright, Alexander; Treanor, Darren; Lewis, Paul

    2011-01-01

    The Tissue Microarray Data Exchange Specification (TMA DES) is an XML specification for encoding TMA experiment data. While TMA DES data is encoded in XML, the files that describe its syntax, structure, and semantics are not. The DTD format is used to describe the syntax and structure of TMA DES, and the ISO 11179 format is used to define the semantics of TMA DES. However, XML Schema can be used in place of DTDs, and another XML encoded format, RDF, can be used in place of ISO 11179. Encoding all TMA DES data and metadata in XML would simplify the development and usage of programs which validate and parse TMA DES data. XML Schema has advantages over DTDs such as support for data types, and a more powerful means of specifying constraints on data values. An advantage of RDF encoded in XML over ISO 11179 is that XML defines rules for encoding data, whereas ISO 11179 does not. We created an XML Schema version of the TMA DES DTD. We wrote a program that converted ISO 11179 definitions to RDF encoded in XML, and used it to convert the TMA DES ISO 11179 definitions to RDF. We validated a sample TMA DES XML file that was supplied with the publication that originally specified TMA DES using our XML Schema. We successfully validated the RDF produced by our ISO 11179 converter with the W3C RDF validation service. All TMA DES data could be encoded using XML, which simplifies its processing. XML Schema allows datatypes and valid value ranges to be specified for CDEs, which enables a wider range of error checking to be performed using XML Schemas than could be performed using DTDs.
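
    As an illustration of the schema-validation step described in the two records above, the following minimal Python sketch validates an XML document against an XML Schema with lxml; the file names are hypothetical placeholders, and this is not the authors' code.

    ```python
    # Hedged sketch: validate a TMA DES-style XML file against an XML Schema
    # using lxml. File names are placeholders, not the actual published files.
    from lxml import etree

    schema = etree.XMLSchema(etree.parse("tma_des.xsd"))    # schema derived from the DTD
    document = etree.parse("tma_des_sample.xml")            # sample experiment data

    if schema.validate(document):
        print("document is valid")
    else:
        for error in schema.error_log:
            print(f"line {error.line}: {error.message}")
    ```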

  17. Mapping Large Scale Research Metadata to Linked Data: A Performance Comparison of HBase, CSV and XML

    OpenAIRE

    Vahdati, Sahar; Karim, Farah; Huang, Jyun-Yao; Lange, Christoph

    2015-01-01

    OpenAIRE, the Open Access Infrastructure for Research in Europe, comprises a database of all EC FP7 and H2020 funded research projects, including metadata of their results (publications and datasets). These data are stored in an HBase NoSQL database, post-processed, and exposed as HTML for human consumption, and as XML through a web service interface. As an intermediate format to facilitate statistical computations, CSV is generated internally. To interlink the OpenAIRE data with related data...

  18. Interoperable Solar Data and Metadata via LISIRD 3

    Science.gov (United States)

    Wilson, A.; Lindholm, D. M.; Pankratz, C. K.; Snow, M. A.; Woods, T. N.

    2015-12-01

    LISIRD 3 is a major upgrade of the LASP Interactive Solar Irradiance Data Center (LISIRD), which serves several dozen space based solar irradiance and related data products to the public. Through interactive plots, LISIRD 3 provides data browsing supported by data subsetting and aggregation. Incorporating a semantically enabled metadata repository, LISIRD 3 users see current, vetted, consistent information about the datasets offered. Users can now also search for datasets based on metadata fields such as dataset type and/or spectral or temporal range. This semantic database enables metadata browsing, so users can discover the relationships between datasets, instruments, spacecraft, mission and PI. The database also enables creation and publication of metadata records in a variety of formats, such as SPASE or ISO, making these datasets more discoverable. The database also enables the possibility of a public SPARQL endpoint, making the metadata browsable in an automated fashion. LISIRD 3's data access middleware, LaTiS, provides dynamic, on demand reformatting of data and timestamps, subsetting and aggregation, and other server side functionality via a RESTful OPeNDAP compliant API, enabling interoperability between LASP datasets and many common tools. LISIRD 3's templated front end design, coupled with the uniform data interface offered by LaTiS, allows easy integration of new datasets. Consequently the number and variety of datasets offered by LISIRD has grown to encompass several dozen, with many more to come. This poster will discuss design and implementation of LISIRD 3, including tools used, capabilities enabled, and issues encountered.

  19. The ground truth about metadata and community detection in networks.

    Science.gov (United States)

    Peel, Leto; Larremore, Daniel B; Clauset, Aaron

    2017-05-01

    Across many scientific domains, there is a common need to automatically extract a simplified view or coarse-graining of how a complex system's components interact. This general task is called community detection in networks and is analogous to searching for clusters in independent vector data. It is common to evaluate the performance of community detection algorithms by their ability to find so-called ground truth communities. This works well in synthetic networks with planted communities because these networks' links are formed explicitly based on those known communities. However, there are no planted communities in real-world networks. Instead, it is standard practice to treat some observed discrete-valued node attributes, or metadata, as ground truth. We show that metadata are not the same as ground truth and that treating them as such induces severe theoretical and practical problems. We prove that no algorithm can uniquely solve community detection, and we prove a general No Free Lunch theorem for community detection, which implies that there can be no algorithm that is optimal for all possible community detection tasks. However, community detection remains a powerful tool and node metadata still have value, so a careful exploration of their relationship with network structure can yield insights of genuine worth. We illustrate this point by introducing two statistical techniques that can quantify the relationship between metadata and community structure for a broad class of models. We demonstrate these techniques using both synthetic and real-world networks, and for multiple types of metadata and community structures.

  20. Semantic Representation of Temporal Metadata in a Virtual Observatory

    Science.gov (United States)

    Wang, H.; Rozell, E. A.; West, P.; Zednik, S.; Fox, P. A.

    2011-12-01

    The Virtual Solar-Terrestrial Observatory (VSTO) Portal at vsto.org provides a set of guided workflows to implement use cases designed for solar-terrestrial physics and upper atmospheric science. Semantics are used in VSTO to model abstract instrument and parameter classifications, providing data access to users who lack extended domain-specific vocabularies. The temporal restrictions used in the workflows are currently implemented via RESTful service calls to a remote system with access to a SQL-based metadata catalog. In order to provide a greater range of temporal reasoning and search capabilities for the user, we propose an alternative architecture design for the VSTO Portal, where the temporal metadata is integrated into the domain ontology. We achieve this integration by converting temporal metadata from the headers of raw data files into RDF using the OWL-Time vocabulary. This presentation covers our work with semantic temporal metadata, including our representation using OWL-Time, issues we have faced with persistent storage, and the performance and scalability of semantic queries. We conclude with a discussion of the significance of semantic temporal metadata in virtual observatories.
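
    The conversion step described above (file-header time coverage into RDF with the OWL-Time vocabulary) can be sketched as follows; this is a hedged illustration using rdflib, not the VSTO implementation, and the dataset URIs, the temporalCoverage property, and the timestamps are placeholders.

    ```python
    # Hedged sketch: encode a dataset's time coverage with OWL-Time terms using
    # rdflib. Dataset URIs, the temporalCoverage property, and the timestamp
    # values are placeholders, not taken from VSTO.
    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import RDF, XSD

    TIME = Namespace("http://www.w3.org/2006/time#")
    EX = Namespace("http://example.org/vsto#")   # placeholder namespace

    g = Graph()
    g.bind("time", TIME)

    dataset = URIRef("http://example.org/vsto/dataset/42")
    interval = URIRef("http://example.org/vsto/dataset/42/coverage")
    begin, end = EX["begin42"], EX["end42"]

    g.add((dataset, EX.temporalCoverage, interval))          # placeholder property
    g.add((interval, RDF.type, TIME.Interval))
    g.add((interval, TIME.hasBeginning, begin))
    g.add((interval, TIME.hasEnd, end))
    for node, stamp in ((begin, "1998-03-21T00:00:00Z"), (end, "2005-06-30T23:59:59Z")):
        g.add((node, RDF.type, TIME.Instant))
        g.add((node, TIME.inXSDDateTime, Literal(stamp, datatype=XSD.dateTime)))

    print(g.serialize(format="turtle"))
    ```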

  1. Methylcitrate cycle activation during adaptation of Fusarium solani and Fusarium verticillioides to propionyl-CoA-generating carbon sources.

    Science.gov (United States)

    Domin, Nicole; Wilson, Duncan; Brock, Matthias

    2009-12-01

    Propionyl-CoA is an inhibitor of both primary and secondary metabolism in Aspergillus species and a functional methylcitrate cycle is essential for the efficient removal of this potentially toxic metabolite. Although the genomes of most sequenced fungal species appear to contain genes coding for enzymes of the methylcitrate cycle, experimental confirmation of pathway activity in filamentous fungi has only been provided for Aspergillus nidulans and Aspergillus fumigatus. In this study we demonstrate that pathogenic Fusarium species also possess a functional methylcitrate cycle. Fusarium solani appears highly adapted to saprophytic growth as it utilized propionate with high efficiency, whereas Fusarium verticillioides grew poorly on this carbon source. In order to elucidate the mechanisms of propionyl-CoA detoxification, we first identified the genes coding for methylcitrate synthase from both species. Despite sharing 96 % amino acid sequence identity, analysis of the two purified enzymes demonstrated that their biochemical properties differed in several respects. Both methylcitrate synthases exhibited low K(m) values for propionyl-CoA, but that of F. verticillioides displayed significantly higher citrate synthase activity and greater thermal stability. Activity determinations from cell-free extracts of F. solani revealed a strong methylcitrate synthase activity during growth on propionate and to a lesser extent on Casamino acids, whereas activity by F. verticillioides was highest on Casamino acids. Further phenotypic analysis confirmed that these biochemical differences were reflected in the different growth behaviour of the two species on propionyl-CoA-generating carbon sources.

  2. Reviving legacy clay mineralogy data and metadata through the IEDA-CCNY Data Internship Program

    Science.gov (United States)

    Palumbo, R. V.; Randel, C.; Ismail, A.; Block, K. A.; Cai, Y.; Carter, M.; Hemming, S. R.; Lehnert, K.

    2016-12-01

    Reconstruction of past climate and ocean circulation using ocean sediment cores relies on the use of multiple climate proxies measured on well-studied cores. Preserving all the information collected on a sediment core is crucial for the success of future studies using these unique and important samples. Clay mineralogy is a powerful tool to study weathering processes and sedimentary provenance. In his pioneering dissertation, Pierre Biscaye (1964, Yale University) established the X-Ray Diffraction (XRD) method for quantitative clay mineralogy analyses in ocean sediments and presented data for 500 core-top samples throughout the Atlantic Ocean and its neighboring seas. Unfortunately, the data only exists in analog format, which has discouraged scientists from reusing the data, apart from replication of the published maps. Archiving and preserving this dataset and making it publicly available in a digital format, linked with the metadata from the core repository will allow the scientific community to use these data to generate new findings. Under the supervision of Sidney Hemming and members of the Interdisciplinary Earth Data Alliance (IEDA) team, IEDA-CCNY interns digitized the data and metadata from Biscaye's dissertation and linked them with additional sample metadata using IGSN (International Geo-Sample Number). After compilation and proper documentation of the dataset, it was published in the EarthChem Library where the dataset will be openly accessible, and citable with a persistent DOI (Digital Object Identifier). During this internship, the students read peer-reviewed articles, interacted with active scientists in the field and acquired knowledge about XRD methods and the data generated, as well as its applications. They also learned about existing and emerging best practices in data publication and preservation. Data rescue projects are a fun and interactive way for students to become engaged in the field.

  3. Research on metadata in manufacturing-oriented EAI

    Institute of Scientific and Technical Information of China (English)

    Wang Rui; Li Congxin

    2007-01-01

    Enterprise application integration (EAI) focuses on the collaboration and interconnection of various information systems, so the basic problem to be solved is how EAI guarantees that applications present data, messages and transactions consistently. Metadata methodology offers useful ideas here. First, a metadata description method for manufacturing information resources, transaction processes and message delivery is put forward on the basis of an operational analysis of manufacturing-oriented EAI; then the tree-structured XML schema of the corresponding objects is built and a framework for metadata application in discrete manufacturing-oriented EAI is established. Finally, a practical enterprise information integration system at Shanghai Tobacco Machine Co., Ltd. is presented as an example to show how the framework functions.

  4. A Generic Metadata Editor Supporting System Using Drupal CMS

    Science.gov (United States)

    Pan, J.; Banks, N. G.; Leggott, M.

    2011-12-01

    Metadata handling is a key factor in preserving and reusing scientific data. In recent years, standardized structural metadata has become widely used in Geoscience communities. However, there exist many different standards in Geosciences, such as the current version of the Federal Geographic Data Committee's Content Standard for Digital Geospatial Metadata (FGDC CSDGM), the Ecological Metadata Language (EML), the Geography Markup Language (GML), and the emerging ISO 19115 and related standards. In addition, there are many different subsets within the Geoscience subdomain such as the Biological Profile of the FGDC (CSDGM), or for geopolitical regions, such as the European Profile or the North American Profile in the ISO standards. It is therefore desirable to have a software foundation to support metadata creation and editing for multiple standards and profiles, without reinventing the wheel. We have developed a software module as a generic, flexible software system to do just that: to facilitate the support for multiple metadata standards and profiles. The software consists of a set of modules for the Drupal Content Management System (CMS), with minimal dependencies on other Drupal modules. There are two steps in using the system's metadata functions. First, an administrator can use the system to design a user form, based on an XML schema and its instances. The form definition is named and stored in the Drupal database as an XML blob. Second, users in an editor role can then use the persisted XML definition to render an actual metadata entry form, for creating or editing a metadata record. Behind the scenes, the form definition XML is transformed into a PHP array, which is then rendered via the Drupal Form API. When the form is submitted, the posted values are used to modify a metadata record. Drupal hooks can be used to perform custom processing on a metadata record before and after submission. It is trivial to store the metadata record as an actual XML file

  5. Metadata for fine-grained processing at ATLAS

    CERN Document Server

    Cranshaw, Jack; The ATLAS collaboration

    2016-01-01

    High energy physics experiments are implementing highly parallel solutions for event processing on resources that support concurrency at multiple levels. These range from the inherent large-scale parallelism of HPC resources to the multiprocessing and multithreading needed for effective use of multi-core and GPU-augmented nodes. Such modes of processing, and the efficient opportunistic use of transiently-available resources, lead to finer-grained processing of event data. Previously metadata systems were tailored to jobs that were atomic and processed large, well-defined units of data. The new environment requires a more fine-grained approach to metadata handling, especially with regard to bookkeeping. For opportunistic resources metadata propagation needs to work even if individual jobs are not finalized. This contribution describes ATLAS solutions to this problem in the context of the multiprocessing framework currently in use for LHC Run 2, development underway for the ATLAS multithreaded framework (Athena...

  6. Design of Scalable Distributed Metadata Management System

    Institute of Scientific and Technical Information of China (English)

    黄秋兰; 程耀东; 杜然; 陈刚

    2015-01-01

    To address the problems caused by the expanding storage scale of high energy physics mass storage systems, a scalable distributed metadata management system is designed, comprising metadata management, metadata service, cache service and a monitoring information collector. On this basis, a new Adaptive Directory Sub-tree Partition (ADSP) algorithm is proposed. ADSP divides the file system namespace into sub-trees at directory granularity and adjusts the sub-trees adaptively according to the load of the metadata cluster, achieving balanced storage and distribution of metadata across the cluster. Experimental results show that the algorithm improves metadata access and retrieval performance and provides scalable, dynamically load-balanced metadata service, ensuring that the availability, scalability and I/O performance of the metadata management system are not affected by growth in storage scale and that the system can meet the growing storage requirements of high energy physics experiments.
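
    A simplified illustration of the directory sub-tree partitioning idea described above is sketched below in Python; it is not the paper's ADSP algorithm, only a toy partitioner that pins sub-trees to metadata servers and sheds the hottest sub-tree from the busiest server when asked to rebalance.

    ```python
    # Toy directory sub-tree partitioner (illustrative only, not the ADSP
    # algorithm from the paper): each sub-tree is pinned to one metadata server,
    # per-sub-tree load is tracked, and rebalance() moves the hottest sub-tree
    # from the busiest server to the idlest one.
    from collections import defaultdict

    class SubtreePartitioner:
        def __init__(self, servers, depth=2):
            self.servers = list(servers)
            self.depth = depth                    # partition granularity (path components)
            self.assignment = {}                  # sub-tree -> server
            self.subtree_load = defaultdict(int)  # sub-tree -> request count

        def _subtree(self, path):
            parts = [p for p in path.split("/") if p]
            return "/" + "/".join(parts[:self.depth])

        def _server_load(self, server):
            return sum(n for t, n in self.subtree_load.items()
                       if self.assignment.get(t) == server)

        def locate(self, path):
            """Return the metadata server responsible for `path`."""
            subtree = self._subtree(path)
            if subtree not in self.assignment:
                # New sub-trees go to the currently least loaded server.
                self.assignment[subtree] = min(self.servers, key=self._server_load)
            self.subtree_load[subtree] += 1
            return self.assignment[subtree]

        def rebalance(self):
            """Move the hottest sub-tree from the busiest to the idlest server."""
            busiest = max(self.servers, key=self._server_load)
            idlest = min(self.servers, key=self._server_load)
            if busiest == idlest:
                return
            owned = [t for t in self.subtree_load if self.assignment[t] == busiest]
            if owned:
                self.assignment[max(owned, key=self.subtree_load.get)] = idlest

    p = SubtreePartitioner(["mds1", "mds2", "mds3"])
    print(p.locate("/hep/experiment1/run042/raw.root"))   # e.g. "mds1"
    ```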

  7. Comparison of Different Strategies for Selection/Adaptation of Mixed Microbial Cultures Able to Ferment Crude Glycerol Derived from Second-Generation Biodiesel

    DEFF Research Database (Denmark)

    Varrone, Cristiano; Heggeset, T. M. B.; Le, S. B.

    2015-01-01

    Objective of this study was the selection and adaptation of mixed microbial cultures (MMCs), able to ferment crude glycerol generated from animal fat-based biodiesel and produce building-blocks and green chemicals. Various adaptation strategies have been investigated for the enrichment of suitable and stable MMC, trying to overcome inhibition problems and enhance substrate degradation efficiency, as well as generation of soluble fermentation products. Repeated transfers in small batches and fed-batch conditions have been applied, comparing the use of different inoculum, growth media, and Kinetic Control. The adaptation of activated sludge inoculum was performed successfully and continued unhindered for several months. The best results showed a substrate degradation efficiency of almost 100% (about 10 g/L glycerol in 21 h) and different dominant metabolic products were obtained, depending on the selection strategy (mainly 1,3-propanediol, ethanol, or butyrate).

  8. An adaptive radial basis function neural network (RBFNN) control of energy storage system for output tracking of a permanent magnet wind generator

    Directory of Open Access Journals (Sweden)

    Abu H. M. A. Rahim

    2014-03-01

    Full Text Available The converters of a permanent magnet synchronous generator have to be properly controlled to achieve maximum transfer of energy from wind. To achieve this goal, this article employs an energy storage device consisting of an energy capacitor interfaced through a voltage source converter which is operated through a smart adaptive radial basis function neural network (RBFNN) controller. The proposed adaptive strategy employs online neural network training, as opposed to the conventional procedure requiring offline training on a large data set. The RBFNN controller was tested for various contingencies in the wind generator system. The adaptive online controller is observed to provide an excellent damping profile following low grid-voltage conditions as well as other large disturbances. The controlled converter DC capacitor voltage helps maintain a smooth flow of real and reactive power in the system.
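
    The online training idea mentioned in the abstract can be illustrated with a generic radial basis function network whose output weights are adapted at every control step; the sketch below is a toy NumPy example, not the authors' controller, and the centres, width, and learning rate are arbitrary.

    ```python
    # Generic RBF network with online weight adaptation (illustrative only, not
    # the authors' controller): Gaussian hidden units, a linear output layer,
    # and one gradient step on the tracking error at every time step.
    import numpy as np

    class OnlineRBFN:
        def __init__(self, centers, width=1.0, learning_rate=0.05):
            self.centers = np.asarray(centers, dtype=float)  # (n_units, n_inputs)
            self.width = width
            self.lr = learning_rate
            self.weights = np.zeros(len(self.centers))

        def _phi(self, x):
            # Gaussian activations of the hidden units.
            d2 = np.sum((self.centers - x) ** 2, axis=1)
            return np.exp(-d2 / (2.0 * self.width ** 2))

        def output(self, x):
            return float(self.weights @ self._phi(x))

        def adapt(self, x, error):
            # Online update: one gradient step on the squared tracking error.
            self.weights += self.lr * error * self._phi(x)

    # Toy usage: adapt the network online to track a reference signal.
    net = OnlineRBFN(centers=np.linspace(-1.0, 1.0, 9).reshape(-1, 1))
    for t in np.linspace(0.0, 2.0 * np.pi, 200):
        x = np.array([np.sin(t)])
        error = np.cos(t) - net.output(x)
        net.adapt(x, error)
    ```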

  9. Generations.

    Science.gov (United States)

    Chambers, David W

    2005-01-01

    Groups naturally promote their strengths and prefer values and rules that give them an identity and an advantage. This shows up as generational tensions across cohorts who share common experiences, including common elders. Dramatic cultural events in America since 1925 can help create an understanding of the differing value structures of the Silents, the Boomers, Gen Xers, and the Millennials. Differences in how these generations see motivation and values, fundamental reality, relations with others, and work are presented, as are some applications of these differences to the dental profession.

  10. Linked data for libraries, archives and museums how to clean, link and publish your metadata

    CERN Document Server

    Hooland, Seth van

    2014-01-01

    This highly practical handbook teaches you how to unlock the value of your existing metadata through cleaning, reconciliation, enrichment and linking and how to streamline the process of new metadata creation. Libraries, archives and museums are facing up to the challenge of providing access to fast growing collections whilst managing cuts to budgets. Key to this is the creation, linking and publishing of good quality metadata as Linked Data that will allow their collections to be discovered, accessed and disseminated in a sustainable manner. This highly practical handbook teaches you how to unlock the value of your existing metadata through cleaning, reconciliation, enrichment and linking and how to streamline the process of new metadata creation. Metadata experts Seth van Hooland and Ruben Verborgh introduce the key concepts of metadata standards and Linked Data and how they can be practically applied to existing metadata, giving readers the tools and understanding to achieve maximum results with limited re...

  11. Research and establishment of enterprise quality metadata standard

    Institute of Scientific and Technical Information of China (English)

    Jie LI; Genbao ZHANG; Han SONG

    2008-01-01

    Enabling quality managers to utilize and manage quality data efficiently under modern quality management circumstances is a primary issue for improving enterprise quality management. A concept of quality metadata is proposed in this paper, which can help quality managers gain a deeper understanding of various features of quality data and establish a more stable foundation for further use and management of such data. The procedure for establishing quality metadata standards is emphasized in the paper, and the content structure and description scheme are given. Finally, a summary is given and future work is outlined.

  12. openPDS: Protecting the Privacy of Metadata through SafeAnswers

    OpenAIRE

    2014-01-01

    The rise of smartphones and web services made possible the large-scale collection of personal metadata. Information about individuals' locations, phone call logs, or web searches is collected and used intensively by organizations and big data researchers. Metadata has however yet to realize its full potential. Privacy and legal concerns, as well as the lack of technical solutions for personal metadata management, are preventing metadata from being shared and reconciled under the control of the ...

  13. BrainBank Metadata Specification for the Human Brain Project and Neuroinformatics

    OpenAIRE

    Lianglin, Hu; Yufang, Hou; Jianhui, Li; Ling, Yin; Wenwen, Shi

    2007-01-01

    Many databases and platforms for human brain data have been established in China over the years, and metadata plays an important role in understanding and using them. The BrainBank Metadata Specification for the Human Brain Project and Neuroinformatics provides a structure for describing the context and content information of BrainBank databases and services. It includes six parts: identification, method, data schema, distribution of the database, metadata extension, and metadata reference Th...

  14. The CMIP5 Model Documentation Questionnaire: Development of a Metadata Retrieval System for the METAFOR Common Information Model

    Science.gov (United States)

    Pascoe, Charlotte; Lawrence, Bryan; Moine, Marie-Pierre; Ford, Rupert; Devine, Gerry

    2010-05-01

    The EU METAFOR Project (http://metaforclimate.eu) has created a web-based model documentation questionnaire to collect metadata from the modelling groups that are running simulations in support of the Coupled Model Intercomparison Project 5 (CMIP5). The CMIP5 model documentation questionnaire will retrieve information about the details of the models used, how the simulations were carried out, how the simulations conformed to the CMIP5 experiment requirements and details of the hardware used to perform the simulations. The metadata collected by the CMIP5 questionnaire will allow CMIP5 data to be compared in a scientifically meaningful way. This paper describes the life-cycle of the CMIP5 questionnaire development, which starts with relatively unstructured input from domain specialists and ends with formal XML documents that comply with the METAFOR Common Information Model (CIM). Each development step is associated with a specific tool. (1) Mind maps are used to capture information requirements from domain experts and build a controlled vocabulary, (2) a Python parser processes the XML files generated by the mind maps, (3) Django (Python) is used to generate the dynamic structure and content of the web-based questionnaire from the processed XML and the METAFOR CIM, (4) Python parsers ensure that information entered into the CMIP5 questionnaire is output as CIM-compliant XML, (5) CIM-compliant output allows automatic information capture tools to harvest questionnaire content into databases such as the Earth System Grid (ESG) metadata catalogue. This paper will focus on how Django (Python) and XML input files are used to generate the structure and content of the CMIP5 questionnaire. It will also address how the choice of development tools listed above provided a framework that enabled working scientists (whom we would never ordinarily get to interact with UML and XML) to be part of the iterative development process and ensure that the CMIP5 model documentation questionnaire
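
    The questionnaire-generation pipeline described above (controlled-vocabulary XML in, dynamic web form out) can be illustrated with a small stand-alone Python sketch; the element and attribute names below are hypothetical, and the real system renders the resulting field definitions with Django and the METAFOR CIM.

    ```python
    # Hedged sketch of steps (2)-(3): parse a controlled-vocabulary XML file
    # (as might be exported from the mind maps) into form-field definitions that
    # a web layer such as Django could render. Element names are hypothetical.
    import xml.etree.ElementTree as ET

    VOCAB_XML = """
    <vocabulary name="ModelComponent">
      <field name="component_name" type="text"/>
      <field name="time_step_units" type="choice">
        <value>seconds</value>
        <value>minutes</value>
        <value>hours</value>
      </field>
    </vocabulary>
    """

    def form_definition(xml_text):
        """Return a list of field specs: (name, type, allowed values)."""
        root = ET.fromstring(xml_text)
        fields = []
        for field in root.findall("field"):
            choices = [v.text for v in field.findall("value")]
            fields.append((field.get("name"), field.get("type"), choices))
        return fields

    for name, ftype, choices in form_definition(VOCAB_XML):
        print(name, ftype, choices or "(free text)")
    ```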

  15. Metadata squared: enhancing its usability for volunteered geographic information and the GeoWeb

    Science.gov (United States)

    Poore, Barbara S.; Wolf, Eric B.; Sui, Daniel Z.; Elwood, Sarah; Goodchild, Michael F.

    2013-01-01

    The Internet has brought many changes to the way geographic information is created and shared. One aspect that has not changed is metadata. Static spatial data quality descriptions were standardized in the mid-1990s and cannot accommodate the current climate of data creation where nonexperts are using mobile phones and other location-based devices on a continuous basis to contribute data to Internet mapping platforms. The usability of standard geospatial metadata is being questioned by academics and neogeographers alike. This chapter analyzes current discussions of metadata to demonstrate how the media shift that is occurring has affected requirements for metadata. Two case studies of metadata use are presented—online sharing of environmental information through a regional spatial data infrastructure in the early 2000s, and new types of metadata that are being used today in OpenStreetMap, a map of the world created entirely by volunteers. Changes in metadata requirements are examined for usability, the ease with which metadata supports coproduction of data by communities of users, how metadata enhances findability, and how the relationship between metadata and data has changed. We argue that traditional metadata associated with spatial data infrastructures is inadequate and suggest several research avenues to make this type of metadata more interactive and effective in the GeoWeb.

  16. Understanding the Protocol for Metadata Harvesting of the Open Archives Initiative.

    Science.gov (United States)

    Breeding, Marshall

    2002-01-01

    Explains the Open Archives Initiative (OAI) Protocol for Metadata Harvesting and its impact on digital libraries and information retrieval by transferring metadata from one server to another in a network of information systems. Highlights include data providers; service providers; value-added services; Dublin Core metadata; data transfer;…
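
    A minimal harvesting sketch in Python, assuming a hypothetical repository URL, shows how the protocol's standard ListRecords verb with the oai_dc metadata prefix returns Dublin Core records that a service provider can parse.

    ```python
    # Minimal OAI-PMH harvesting sketch: issue a ListRecords request for Dublin
    # Core (oai_dc) metadata and print identifiers and titles. The repository
    # base URL is a placeholder, not a real endpoint.
    import urllib.parse
    import urllib.request
    import xml.etree.ElementTree as ET

    BASE_URL = "https://example.org/oai"  # hypothetical data provider
    NS = {
        "oai": "http://www.openarchives.org/OAI/2.0/",
        "dc": "http://purl.org/dc/elements/1.1/",
    }

    params = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}
    with urllib.request.urlopen(BASE_URL + "?" + urllib.parse.urlencode(params)) as resp:
        root = ET.fromstring(resp.read())

    for record in root.findall(".//oai:record", NS):
        identifier = record.find(".//oai:identifier", NS)
        title = record.find(".//dc:title", NS)
        print(identifier.text if identifier is not None else "?",
              "-", title.text if title is not None else "(no title)")
    ```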

  17. Multiscale viscoacoustic waveform inversion with the second generation wavelet transform and adaptive time-space domain finite-difference method

    Science.gov (United States)

    Ren, Zhiming; Liu, Yang; Zhang, Qunshan

    2014-05-01

    Full waveform inversion (FWI) has the potential to provide superior subsurface model parameters. The main barrier to its application to real seismic data is its heavy computational cost. Numerical modelling methods are involved in both forward modelling and backpropagation of wavefield residuals, which account for most of the computational time in FWI. We develop a time-space domain finite-difference (FD) method with an adaptive variable-length spatial operator scheme for numerical simulation of the viscoacoustic equation and extend it to viscoacoustic FWI. Compared with conventional FD methods, different operator lengths are adopted for different velocities and quality factors, which reduces the amount of computation without reducing accuracy. Inversion algorithms also play a significant role in FWI. Conventional single-scale methods are likely to converge to local minima, especially when the initial model is far from the true model. To tackle this problem, we introduce the second generation wavelet transform to implement multiscale FWI. Compared to other multiscale methods, our method has the advantages of ease of implementation and better local time-frequency analysis. The L2 norm is widely used in FWI but gives invalid model estimates when the data are contaminated with strong non-uniform noise. We apply the L1-norm and Huber-norm criteria in time-domain FWI to improve its robustness to noise. Our strategies have been successfully applied in synthetic experiments to both onshore and offshore reflection seismic data. The results of the viscoacoustic Marmousi example indicate that our new FWI scheme consumes fewer computational resources. In addition, the viscoacoustic Overthrust example shows better convergence and more reasonable velocity and quality-factor structures. All these results demonstrate that our method can improve the inversion accuracy and computational efficiency of FWI.
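
    For reference, the misfit criteria mentioned in the abstract are commonly written as follows for a residual r = d_syn - d_obs; the normalization may differ from the paper's exact formulation.

    ```latex
    % Common forms of the L2, L1 and Huber misfit functions used in FWI
    % (normalizations may differ from the paper).
    \begin{align}
      E_{L_2} &= \tfrac{1}{2}\sum_i r_i^{2}, \qquad
      E_{L_1}  = \sum_i \lvert r_i \rvert, \\
      E_{\mathrm{Huber}} &= \sum_i
      \begin{cases}
        \dfrac{r_i^{2}}{2\varepsilon}, & \lvert r_i \rvert \le \varepsilon,\\[4pt]
        \lvert r_i \rvert - \dfrac{\varepsilon}{2}, & \lvert r_i \rvert > \varepsilon,
      \end{cases}
    \end{align}
    ```

    where ε is the threshold separating the quadratic and linear regimes.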

  18. Comparison of Different Strategies for Selection/Adaptation of Mixed Microbial Cultures Able to Ferment Crude Glycerol Derived from Second-Generation Biodiesel

    Directory of Open Access Journals (Sweden)

    C. Varrone

    2015-01-01

    Full Text Available Objective of this study was the selection and adaptation of mixed microbial cultures (MMCs), able to ferment crude glycerol generated from animal fat-based biodiesel and produce building-blocks and green chemicals. Various adaptation strategies have been investigated for the enrichment of suitable and stable MMC, trying to overcome inhibition problems and enhance substrate degradation efficiency, as well as generation of soluble fermentation products. Repeated transfers in small batches and fed-batch conditions have been applied, comparing the use of different inoculum, growth media, and Kinetic Control. The adaptation of activated sludge inoculum was performed successfully and continued unhindered for several months. The best results showed a substrate degradation efficiency of almost 100% (about 10 g/L glycerol in 21 h) and different dominant metabolic products were obtained, depending on the selection strategy (mainly 1,3-propanediol, ethanol, or butyrate). On the other hand, anaerobic sludge exhibited inactivation after a few transfers. To circumvent this problem, fed-batch mode was used as an alternative adaptation strategy, which led to effective substrate degradation and high 1,3-propanediol and butyrate production. Changes in microbial composition were monitored by means of Next Generation Sequencing, revealing a dominance of glycerol consuming species, such as Clostridium, Klebsiella, and Escherichia.

  19. Comparison of Different Strategies for Selection/Adaptation of Mixed Microbial Cultures Able to Ferment Crude Glycerol Derived from Second-Generation Biodiesel

    Science.gov (United States)

    Varrone, C.; Heggeset, T. M. B.; Le, S. B.; Haugen, T.; Markussen, S.; Skiadas, I. V.; Gavala, H. N.

    2015-01-01

    Objective of this study was the selection and adaptation of mixed microbial cultures (MMCs), able to ferment crude glycerol generated from animal fat-based biodiesel and produce building-blocks and green chemicals. Various adaptation strategies have been investigated for the enrichment of suitable and stable MMC, trying to overcome inhibition problems and enhance substrate degradation efficiency, as well as generation of soluble fermentation products. Repeated transfers in small batches and fed-batch conditions have been applied, comparing the use of different inoculum, growth media, and Kinetic Control. The adaptation of activated sludge inoculum was performed successfully and continued unhindered for several months. The best results showed a substrate degradation efficiency of almost 100% (about 10 g/L glycerol in 21 h) and different dominant metabolic products were obtained, depending on the selection strategy (mainly 1,3-propanediol, ethanol, or butyrate). On the other hand, anaerobic sludge exhibited inactivation after a few transfers. To circumvent this problem, fed-batch mode was used as an alternative adaptation strategy, which led to effective substrate degradation and high 1,3-propanediol and butyrate production. Changes in microbial composition were monitored by means of Next Generation Sequencing, revealing a dominance of glycerol consuming species, such as Clostridium, Klebsiella, and Escherichia. PMID:26509171

  20. Spatial Metadata in Africa and the Middle East

    CSIR Research Space (South Africa)

    Cooper, Antony K

    2005-01-01

    Full Text Available This chapter attempts to paint a broad picture of the spatial metadata in Africa and the Middle East, through describing briefly the activities in countries and regional bodies across the region and providing more detail in one or two countries...

  1. Competence Based Educational Metadata for Supporting Lifelong Competence Development Programmes

    NARCIS (Netherlands)

    Sampson, Demetrios; Fytros, Demetrios

    2008-01-01

    Sampson, D., & Fytros, D. (2008). Competence Based Educational Metadata for Supporting Lifelong Competence Development Programmes. In P. Diaz, Kinshuk, I. Aedo & E. Mora (Eds.), Proceedings of the 8th IEEE International Conference on Advanced Learning Technologies (ICALT 2008), pp. 288-292. July, 1-

  2. Metadata Harvesting in Regional Digital Libraries in the PIONIER Network

    Science.gov (United States)

    Mazurek, Cezary; Stroinski, Maciej; Werla, Marcin; Weglarz, Jan

    2006-01-01

    Purpose: The paper aims to present the concept of the functionality of metadata harvesting for regional digital libraries, based on the OAI-PMH protocol. This functionality is a part of the regional digital libraries platform created in Poland. The platform was required to reach one of the main objectives of the Polish PIONIER Programme--to enrich the…

  3. Home | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available ...of use, or is not downloadable, it may not be fully used, cited or rightly acknowledged by the (research) community. The archive supports further contribution of each research project to life science. All metadata can be exported in CSV and JSON formats. Example entry: Institute of Agrobiological Sciences, Junichi Yonemaru, QTL Rice: the database of rice QTL information extracted from published research.

  4. Automatic Extraction of Metadata from Scientific Publications for CRIS Systems

    Science.gov (United States)

    Kovacevic, Aleksandar; Ivanovic, Dragan; Milosavljevic, Branko; Konjovic, Zora; Surla, Dusan

    2011-01-01

    Purpose: The aim of this paper is to develop a system for automatic extraction of metadata from scientific papers in PDF format for the information system for monitoring the scientific research activity of the University of Novi Sad (CRIS UNS). Design/methodology/approach: The system is based on machine learning and performs automatic extraction…

  5. Web Video Mining: Metadata Predictive Analysis using Classification Techniques

    Directory of Open Access Journals (Sweden)

    Siddu P. Algur

    2016-02-01

    Full Text Available Nowadays, data engineering is becoming an emerging approach to discovering knowledge from web audiovisual data such as YouTube videos, Yahoo Screen, and Facebook videos. Different categories of web video are being shared on such social websites and are being used by billions of users all over the world. Uploaded web videos carry different kinds of metadata as attribute information about the video data. The metadata attributes conceptually define the contents and features/characteristics of the web videos. Hence, accomplishing web video mining by extracting features of web videos in terms of metadata is a challenging task. In this work, effective attempts are made to classify and predict the metadata features of web videos, such as the length of the web videos, the number of comments, ratings information, and view counts, using data mining algorithms such as the decision tree J48 and naive Bayes algorithms as part of web video mining. The results of the decision tree J48 and naive Bayes classification models are analyzed and compared as a step in the process of knowledge discovery from web videos.
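
    A rough feel for the classification step can be given with a scikit-learn sketch; note that J48 is Weka's C4.5 implementation, so the DecisionTreeClassifier and GaussianNB used here are stand-ins rather than the exact algorithms from the study, and the feature and label arrays are invented toy data.

```python
# Toy sketch of classifying web-video metadata with a decision tree and naive Bayes.
# J48 (Weka's C4.5) and the paper's naive Bayesian model are approximated here by
# scikit-learn's DecisionTreeClassifier and GaussianNB; the data below is invented.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

# Features per video: [length_seconds, num_comments, rating, view_count]
X = np.array([[120, 4, 3.5, 1_000],
              [600, 250, 4.8, 500_000],
              [95, 1, 2.0, 300],
              [480, 90, 4.1, 120_000],
              [30, 0, 1.5, 50],
              [900, 400, 4.9, 2_000_000]])
y = np.array([0, 1, 0, 1, 0, 1])   # e.g. 0 = low-popularity class, 1 = high-popularity class

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=0)

for model in (DecisionTreeClassifier(random_state=0), GaussianNB()):
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    print(type(model).__name__, "accuracy:", accuracy_score(y_test, pred))
```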

  6. Metadata Harvesting in Regional Digital Libraries in the PIONIER Network

    Science.gov (United States)

    Mazurek, Cezary; Stroinski, Maciej; Werla, Marcin; Weglarz, Jan

    2006-01-01

    Purpose: The paper aims to present the concept of the functionality of metadata harvesting for regional digital libraries, based on the OAI-PMH protocol. This functionality is a part of the regional digital libraries platform created in Poland. The platform was required to reach one of the main objectives of the Polish PIONIER Programme--to enrich the…

  7. Syndicating Rich Bibliographic Metadata Using MODS and RSS

    Science.gov (United States)

    Ashton, Andrew

    2008-01-01

    Many libraries use RSS to syndicate information about their collections to users. A survey of 65 academic libraries revealed their most common use for RSS is to disseminate information about library holdings, such as lists of new acquisitions. Even though typical RSS feeds are ill suited to the task of carrying rich bibliographic metadata, great…

  8. Automatic Extraction of Metadata from Scientific Publications for CRIS Systems

    Science.gov (United States)

    Kovacevic, Aleksandar; Ivanovic, Dragan; Milosavljevic, Branko; Konjovic, Zora; Surla, Dusan

    2011-01-01

    Purpose: The aim of this paper is to develop a system for automatic extraction of metadata from scientific papers in PDF format for the information system for monitoring the scientific research activity of the University of Novi Sad (CRIS UNS). Design/methodology/approach: The system is based on machine learning and performs automatic extraction…

  9. Genomic standards consortium workshop: metagenomics, metadata and metaanalysis (M3).

    Science.gov (United States)

    Sterk, Peter; Hirschman, Lynette; Field, Dawn; Wooley, John

    2010-01-01

    The M3 workshop has, as its primary focus, the rapidly growing area of metagenomics, including the metadata standards and the meta-analysis approaches needed to organize, process and interpret metagenomics data. The PSB Workshop builds on the first M3 meeting, a Special Interest Group (SIG) meeting at ISMB 2009, organized by the Genomic Standards Consortium.

  10. Metadata Schema Used in OCLC Sampled Web Pages

    Directory of Open Access Journals (Sweden)

    Fei Yu

    2005-12-01

    Full Text Available The tremendous growth of Web resources has made information organization and retrieval more and more difficult. As one approach to this problem, metadata schemas have been developed to characterize Web resources. However, many questions have been raised about the use of metadata schemas, such as: which metadata schemas have been used on the Web? How did they describe Web accessible information? What is the distribution of these metadata schemas among Web pages? Do certain schemas dominate the others? To address these issues, this study analyzed 16,383 Web pages with meta tags extracted from 200,000 OCLC sampled Web pages in 2000. It found that only 8.19% of the Web pages used meta tags; description tags, keyword tags, and Dublin Core tags were the only three schemas used in the Web pages. This article reveals the use of meta tags in terms of their function distribution, syntax characteristics, granularity of the Web pages, and the length distribution and word number distribution of both description and keyword tags.
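
    The kind of meta-tag harvesting behind such a study can be sketched with the Python standard library alone; the HTML below is a made-up page, and only the name/content attribute pairs (including DC.* style names) are collected.

```python
# Sketch: collect <meta name="..." content="..."> pairs (description, keywords, DC.*) from HTML.
from html.parser import HTMLParser

class MetaTagCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.meta = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        name = attrs.get("name")
        if name:
            self.meta[name.lower()] = attrs.get("content", "")

sample_html = """
<html><head>
  <meta name="description" content="A sample page about metadata schemas">
  <meta name="keywords" content="metadata, Dublin Core, schemas">
  <meta name="DC.Title" content="Sample Page">
</head><body></body></html>
"""

collector = MetaTagCollector()
collector.feed(sample_html)
print(collector.meta)   # {'description': ..., 'keywords': ..., 'dc.title': ...}
```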

  11. Big Earth Data Initiative: Metadata Improvement: Case Studies

    Science.gov (United States)

    Kozimor, John; Habermann, Ted; Farley, John

    2016-01-01

    Big Earth Data Initiative (BEDI) The Big Earth Data Initiative (BEDI) invests in standardizing and optimizing the collection, management and delivery of the U.S. Government's civil Earth observation data to improve discovery, access and use, and understanding of Earth observations by the broader user community. Complete and consistent standard metadata helps address all three goals.

  12. Transforming and enhancing metadata for enduser discovery: a case study

    Directory of Open Access Journals (Sweden)

    Edward M. Corrado

    2014-05-01

    The Libraries’ workflow and portions of code will be shared; issues and challenges involved will be discussed. While this case study is specific to Binghamton University Libraries, examples of strategies used at other institutions will also be introduced. This paper should be useful to anyone interested in describing large quantities of photographs or other materials with preexisting embedded metadata.

  13. Training and Best Practice Guidelines: Implications for Metadata Creation

    Science.gov (United States)

    Chuttur, Mohammad Y.

    2012-01-01

    In response to the rapid development of digital libraries over the past decade, researchers have focused on the use of metadata as an effective means to support resource discovery within online repositories. With the increasing involvement of libraries in digitization projects and the growing number of institutional repositories, it is anticipated…

  14. Metadata Standards in Theory and Practice: The Human in the Loop

    Science.gov (United States)

    Yarmey, L.; Starkweather, S.

    2013-12-01

    Metadata standards are meant to enable interoperability through common, well-defined structures and are a foundation for broader cyberinfrastructure efforts. Standards are central to emerging technologies such as metadata brokering tools supporting distributed data search. However, metadata standards in practice are often poor indicators of standardized, readily interoperable metadata. The International Arctic Systems for Observing the Atmosphere (IASOA) data portal provides discovery and access tools for aggregated datasets from ten long-term international Arctic atmospheric observing stations. The Advanced Cooperative Arctic Data and Information Service (ACADIS) Arctic Data Explorer brokers metadata to provide distributed data search across Arctic repositories. Both the IASOA data portal and the Arctic Data Explorer rely on metadata and metadata standards to support value-add services. Challenges have included: translating between different standards despite existing crosswalks, diverging implementation practices of the same standard across communities, changing metadata practices over time and associated backwards compatibility, reconciling metadata created by data providers with standards, lack of community-accepted definitions for key terms (e.g. 'project'), integrating controlled vocabularies, and others. Metadata record 'validity' or compliance with a standard has been insufficient for interoperability. To overcome these challenges, both projects committed significant work to integrate and offer services over already 'standards compliant' metadata. Both efforts have shown that the 'human-in-the-loop' is still required to fulfill the lofty theoretical promises of metadata standards. In this talk, we 1) summarize the real-world experiences of two data discovery portals working with metadata in standard form, and 2) offer lessons learned for others who work with and rely on metadata and metadata standards.

  15. Assessment of urban pluvial flood risk and efficiency of adaptation options through simulations - A new generation of urban planning tools

    Science.gov (United States)

    Löwe, Roland; Urich, Christian; Sto. Domingo, Nina; Mark, Ole; Deletic, Ana; Arnbjerg-Nielsen, Karsten

    2017-07-01

    We present a new framework for flexible testing of flood risk adaptation strategies in a variety of urban development and climate scenarios. This framework couples the 1D-2D hydrodynamic simulation package MIKE FLOOD with the agent-based urban development model DAnCE4Water and provides the possibility to systematically test various flood risk adaptation measures, ranging from large infrastructure changes through decentralised water management to urban planning policies. We have tested the framework in a case study in Melbourne, Australia, considering 9 scenarios for urban development and climate and 32 potential combinations of flood adaptation measures. We found that the performance of adaptation measures strongly depended on the considered climate and urban development scenario and on the other adaptation measures implemented, suggesting that adaptive strategies are preferable over one-off investments. Urban planning policies proved to be an efficient means for the reduction of flood risk, while implementing property buyback and pipe increases in a guideline-oriented manner was too costly. Random variations in location and time point of urban development could have a significant impact on flood risk and would in some cases outweigh the benefits of less efficient adaptation strategies. The results of our setup can serve as an input for robust decision making frameworks and thus support the identification of flood risk adaptation measures that are economically efficient and robust to variations of climate and urban layout.

  16. A Solr Powered Architecture for Scientific Metadata Search Applications

    Science.gov (United States)

    Reed, S. A.; Billingsley, B. W.; Harper, D.; Kovarik, J.; Brandt, M.

    2014-12-01

    Discovering and obtaining resources for scientific research is increasingly difficult, but Open Source tools have been implemented to provide inexpensive solutions for scientific metadata search applications. Common practices used in modern web applications can improve the quality of scientific data as well as increase availability to a wider audience while reducing costs of maintenance. Motivated by the need to improve discovery and access of scientific metadata hosted at NSIDC and to aggregate many areas of Arctic research, the National Snow and Ice Data Center (NSIDC) and the Advanced Cooperative Arctic Data and Information Service (ACADIS) contributed to a shared codebase used by the NSIDC Search and Arctic Data Explorer (ADE) portals. We implemented the NSIDC Search and ADE to improve search and discovery of scientific metadata in many areas of cryospheric research. All parts of the applications are available free and open for reuse in other applications and portals. We have applied common techniques that are widely used by search applications around the web, with the goal of providing quick and easy access to scientific metadata. We adopted keyword search auto-suggest, which provides a dynamic list of terms and phrases that closely match characters as the user types. Facet queries are another technique we have implemented to filter results based on aspects of the data, such as the instrument used or the temporal duration of the data set. Service APIs provide a layer between the interface and the database and are shared between the NSIDC Search and ACADIS ADE interfaces. We also implemented a shared data store between both portals using Apache Solr (an Open Source search engine platform that stores and indexes XML documents) and leverage many powerful features including geospatial search and faceting. This presentation will discuss the application architecture as well as tools and techniques used to enhance search and discovery of scientific metadata.
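
    To make the faceting and keyword-search techniques concrete, here is a hedged sketch of a Solr select query issued over HTTP; the host, core name, and field names are placeholders rather than the actual NSIDC/ACADIS schema.

```python
# Sketch of a faceted Solr query; host, core, and field names are placeholders,
# not the actual NSIDC Search / Arctic Data Explorer schema.
import requests

SOLR_SELECT = "http://localhost:8983/solr/metadata/select"  # placeholder core URL

params = {
    "q": "sea ice extent",                          # free-text keyword query
    "facet": "true",
    "facet.field": ["instrument", "data_center"],   # facet on hypothetical fields
    "rows": 10,
    "wt": "json",
}

resp = requests.get(SOLR_SELECT, params=params)
resp.raise_for_status()
body = resp.json()

for doc in body["response"]["docs"]:
    print(doc.get("title"))
print(body["facet_counts"]["facet_fields"])
```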

  17. Serving Fisheries and Ocean Metadata to Communities Around the World

    Science.gov (United States)

    Meaux, Melanie F.

    2007-01-01

    NASA's Global Change Master Directory (GCMD) assists the oceanographic community in the discovery, access, and sharing of scientific data by serving on-line fisheries and ocean metadata to users around the globe. As of January 2006, the directory holds more than 16,300 Earth Science data descriptions and over 1,300 service descriptions. Of these, nearly 4,000 unique ocean-related metadata records are available to the public, with many having direct links to the data. In 2005, the GCMD averaged over 5 million hits a month, with nearly a half million unique hosts for the year. Through the GCMD portal (http://gcmd.nasa.gov/), users can search vast and growing quantities of data and services using controlled keywords, free-text searches, or a combination of both. Users may now refine a search based on topic, location, instrument, platform, project, data center, spatial and temporal coverage, and data resolution for selected datasets. The directory also offers data holders a means to advertise and search their data through customized portals, which are subset views of the directory. The discovery metadata standard used is the Directory Interchange Format (DIF), adopted in 1988. This format has evolved to accommodate other national and international standards such as FGDC and ISO 19115. Users can submit metadata through easy-to-use online and offline authoring tools. The directory, which also serves as the International Directory Network (IDN), has been providing its services and sharing its experience and knowledge of metadata at the international, national, regional, and local level for many years. Active partners include the Committee on Earth Observation Satellites (CEOS), federal agencies (such as NASA, NOAA, and USGS), international agencies (such as IOC/IODE, UN, and JAXA) and organizations (such as ESIP, IOOS/DMAC, GOSIC, GLOBEC, OBIS, and GoMODP).

  18. RESTful Access to NOAA's Space Weather Data and Metadata

    Science.gov (United States)

    Kihn, E. A.; Elespuru, P. R.; Zhizhin, M.

    2010-12-01

    The Space Physics Interactive Data Resource (SPIDR) (http://spidr.ngdc.noaa.gov) is a web based application for searching, accessing and interacting with NOAA’s space related data holdings. SPIDR serves as one of several interfaces to the National Geophysical Data Center's archived digital holdings. The SPIDR system, while successful in delivering data and visualization to clients, was also found to be limited in its ability to interact with other programs, its ability to integrate with alternate work-flows and its support for multiple user interfaces (UIs). As such, in 2006 the SPIDR development team implemented a SOAP based interface to SPIDR through which outside developers could make use of the resource. It was our finding, however, that despite our best efforts at documentation, the interface remained elusive to many users. That is to say, a few strong programmers were able to format and use the XML messaging, but in general it did not make the data more accessible. In response, SPIDR has been extended to include a REST style web services API for all time series data. This provides direct, synchronous, simple programmatic access to over 200 individual parameters representing space weather data directly from the NGDC archive. In addition to the data service, SPIDR has implemented a metadata service which allows users to get Federal Geographic Data Committee (FGDC) style metadata records describing all available data and stations. This metadata will migrate to the NASA Space Physics Archive Search and Extract (SPASE) style in future versions in order to provide further detail. The combination of data, metadata and visualization tools available through SPIDR makes it a powerful virtual observatory (VO). When this is combined with a content-rich metadata system, we have experienced vastly greater user response and usage. This talk will present details of the development as well as lessons learned from 10 years of SPIDR development.

  19. Serious Games for Health: The Potential of Metadata.

    Science.gov (United States)

    Göbel, Stefan; Maddison, Ralph

    2017-02-01

    Numerous serious games and health games exist, either as commercial products (typically with a focus on entertaining a broad user group) or smaller games and game prototypes, often resulting from research projects (typically tailored to a smaller user group with a specific health characteristic). A major drawback of existing health games is that they are not very well described and attributed with (machine-readable, quantitative, and qualitative) metadata such as the characterizing goal of the game, the target user group, or expected health effects well proven in scientific studies. This makes it difficult or even impossible for end users to find and select the most appropriate game for a specific situation (e.g., health needs). Therefore, the aim of this article was to motivate the need and potential/benefit of metadata for the description and retrieval of health games and to describe a descriptive model for the qualitative description of games for health. It was not the aim of the article to describe a stable, running system (portal) for health games. This will be addressed in future work. Building on previous work toward a metadata format for serious games, a descriptive model for the formal description of games for health is introduced. For the conceptualization of this model, classification schemata of different existing health game repositories are considered. The classification schema consists of three levels: a core set of mandatory descriptive fields relevant for all games for health application areas, a detailed level with more comprehensive, optional information about the games, and so-called extension as level three with specific descriptive elements relevant for dedicated health games application areas, for example, cardio training. A metadata format provides a technical framework to describe, find, and select appropriate health games matching the needs of the end user. Future steps to improve, apply, and promote the metadata format in the health games

  20. BrainBank Metadata Specification for the Human Brain Project and Neuroinformatics

    Directory of Open Access Journals (Sweden)

    Hu Lianglin

    2007-07-01

    Full Text Available Many databases and platforms for human brain data have been established in China over the years, and metadata plays an important role in understanding and using them. The BrainBank Metadata Specification for the Human Brain Project and Neuroinformatics provides a structure for describing the context and content information of BrainBank databases and services. It includes six parts: identification, method, data schema, distribution of the database, metadata extension, and metadata reference. The application of the BrainBank Metadata Specification will promote the conservation and management of BrainBank databases and platforms. It will also greatly facilitate the retrieval, evaluation, acquisition, and application of the data.

  1. The National Digital Information Infrastructure Preservation Program; Metadata Principles and Practicalities; Challenges for Service Providers when Importing Metadata in Digital Libraries; Integrated and Aggregated Reference Services.

    Science.gov (United States)

    Friedlander, Amy; Duval, Erik; Hodgins, Wayne; Sutton, Stuart; Weibel, Stuart L.; McClelland, Marilyn; McArthur, David; Giersch, Sarah; Geisler, Gary; Hodgkin, Adam

    2002-01-01

    Includes 6 articles that discuss the National Digital Information Infrastructure Preservation Program at the Library of Congress; metadata in digital libraries; integrated reference services on the Web. (LRW)

  2. The National Digital Information Infrastructure Preservation Program; Metadata Principles and Practicalities; Challenges for Service Providers when Importing Metadata in Digital Libraries; Integrated and Aggregated Reference Services.

    Science.gov (United States)

    Friedlander, Amy; Duval, Erik; Hodgins, Wayne; Sutton, Stuart; Weibel, Stuart L.; McClelland, Marilyn; McArthur, David; Giersch, Sarah; Geisler, Gary; Hodgkin, Adam

    2002-01-01

    Includes 6 articles that discuss the National Digital Information Infrastructure Preservation Program at the Library of Congress; metadata in digital libraries; integrated reference services on the Web. (LRW)

  3. Tools for proactive collection and use of quality metadata in GEOSS

    Science.gov (United States)

    Bastin, L.; Thum, S.; Maso, J.; Yang, K. X.; Nüst, D.; Van den Broek, M.; Lush, V.; Papeschi, F.; Riverola, A.

    2012-12-01

    from producer metadata, from the data themselves, from validation of in-situ sensor data, from provenance information and from user feedback, and will be aggregated to produce clear and useful summaries of quality, including a GEO Label. GeoViQua's conceptual quality information models for users and producers are specifically described and illustrated in this presentation. These models (which have been encoded as XML schemas and can be accessed at http://schemas.geoviqua.org/) are designed to satisfy the identified user needs while remaining consistent with current standards such as ISO 19115 and advanced drafts such as ISO 19157. The resulting components being developed for the GEO Portal are designed to lower the entry barrier to users who wish to help to generate and explore rich and useful metadata. This metadata will include reviews, comments and ratings, reports of usage in specific domains and specification of datasets used for benchmarking, as well as rich quantitative information encoded in more traditional data quality elements such as thematic correctness and positional accuracy. The value of the enriched metadata will also be enhanced by graphical tools for visualizing spatially distributed uncertainties. We demonstrate practical example applications in selected environmental application domains.

  4. The Ontological Perspectives of the Semantic Web and the Metadata Harvesting Protocol: Applications of Metadata for Improving Web Search.

    Science.gov (United States)

    Fast, Karl V.; Campbell, D. Grant

    2001-01-01

    Compares the implied ontological frameworks of the Open Archives Initiative Protocol for Metadata Harvesting and the World Wide Web Consortium's Semantic Web. Discusses current search engine technology, semantic markup, indexing principles of special libraries and online databases, and componentization and the distinction between data and…

  5. Large-Scale Data Collection Metadata Management at the National Computation Infrastructure

    Science.gov (United States)

    Wang, J.; Evans, B. J. K.; Bastrakova, I.; Ryder, G.; Martin, J.; Duursma, D.; Gohar, K.; Mackey, T.; Paget, M.; Siddeswara, G.

    2014-12-01

    generators and data aggregators are updated. A Digital Object Identifier is assigned using the Australian National Data Service (ANDS). Once the data has been quality assured, a DOI is minted and the metadata record updated. NCI's data citation policy establishes the relationship between research outcomes, data providers, and the data.

  6. Metadata Creation Practices in Digital Repositories and Collections: Schemata, Selection Criteria, and Interoperability

    Directory of Open Access Journals (Sweden)

    Jung-ran Park

    2010-09-01

    Full Text Available This study explores the current state of metadata-creation practices across digital repositories and collections by using data collected from a nationwide survey of mostly cataloging and metadata professionals. Results show that MARC, AACR2, and LCSH are the most widely used metadata schema, content standard, and subject-controlled vocabulary, respectively. Dublin Core (DC) is the second most widely used metadata schema, followed by EAD, MODS, VRA, and TEI. Qualified DC’s wider use vis-à-vis Unqualified DC (40.6 percent versus 25.4 percent) is noteworthy. The leading criteria in selecting metadata and controlled-vocabulary schemata are collection-specific considerations, such as the types of resources, nature of the collection, and needs of primary users and communities. Existing technological infrastructure and staff expertise also are significant factors contributing to the current use of metadata schemata and controlled vocabularies for subject access across distributed digital repositories and collections. Metadata interoperability remains a major challenge. There is a lack of exposure of locally created metadata and metadata guidelines beyond the local environments. Homegrown locally added metadata elements may also hinder metadata interoperability across digital repositories and collections when there is a lack of sharable mechanisms for locally defined extensions and variants.

  7. Treating metadata as annotations: separating the content markup from the content

    Directory of Open Access Journals (Sweden)

    Fredrik Paulsson

    2007-11-01

    Full Text Available The use of digital learning resources creates an increasing need for semantic metadata, describing the whole resource as well as parts of resources. Traditionally, schemas such as the Text Encoding Initiative (TEI) have been used to add semantic markup for parts of resources. This is not sufficient for use in a "metadata ecology", where metadata is distributed, conforms to different Application Profiles, and is added by different actors. A new methodology is proposed in which metadata is "pointed in" as annotations, using XPointers and RDF. A suggestion for how such an infrastructure can be implemented, using existing open standards for metadata and for the web, is presented. We argue that such a methodology and infrastructure are necessary to realize the decentralized metadata infrastructure needed for a "metadata ecology".
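
    A minimal sketch of the "pointed in" annotation idea, assuming rdflib is available: the subject of each metadata statement is a resource URI carrying an XPointer fragment that addresses part of the learning resource. The resource URI, XPointer expression, and property choices below are illustrative, not the paper's actual Application Profile.

```python
# Sketch: attach Dublin Core metadata to a *part* of a resource via an XPointer fragment.
# The resource URI, XPointer expression, and chosen properties are illustrative only.
from rdflib import Graph, URIRef, Literal, Namespace

DC = Namespace("http://purl.org/dc/elements/1.1/")

g = Graph()
g.bind("dc", DC)

# Subject: a fragment of the learning resource addressed with an XPointer expression.
fragment = URIRef("http://example.org/course/unit1.xml#xpointer(//section[2])")

g.add((fragment, DC.title, Literal("Introduction to metadata ecologies")))
g.add((fragment, DC.subject, Literal("semantic annotation")))

print(g.serialize(format="turtle"))
```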

  8. Extending attributes page: a scheme for enhancing the reliability of storage system metadata*

    Institute of Scientific and Technical Information of China (English)

    Juan WANG; Dan FENG; Fang WANG; Cheng-tao LU

    2009-01-01

    In an object-based storage system, a novel scheme named EAP (extending attributes page) is presented to enhance the metadata reliability of the system by adding a file-information attributes page for each user object and storing the file-related attributes of each user object in the object-based storage devices. The EAP scheme requires no additional hardware compared to the general method, which uses backup metadata servers to improve metadata reliability. Leveraging a Markov chain, this paper compares the metadata reliability of a system using the EAP scheme with that of a system using only metadata servers to offer the file metadata service. Our results demonstrate that the EAP scheme can dramatically enhance the reliability of storage system metadata.

  9. Inferring Metadata for a Semantic Web Peer-to-Peer Environment

    Directory of Open Access Journals (Sweden)

    Mark Painter

    2004-04-01

    Full Text Available Learning Object Metadata (LOM) aims at describing educational resources in order to allow better reusability and retrieval. In this article we show how additional inference rules allow us to derive additional metadata from existing metadata. Additionally, using these rules as integrity constraints helps us to define the constraints on LOM elements, thus taking an important step toward a complete axiomatization of LOM metadata (with the goal of transforming the LOM definitions from a simple syntactical description into a complete ontology). We use RDF metadata descriptions and Prolog as an inference language. We show how these rules can be applied for the extension of course metadata using an existing test bed with several courses. Based on the Edutella peer-to-peer architecture, we can easily make RDF metadata accessible to a whole community using Edutella peers that manage RDF metadata. By processing inference rules we can achieve better search results.
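
    The flavour of such inference rules can be shown with a small, hedged Python sketch standing in for the Prolog rules described in the article; the rules and field names below are invented for illustration and are not the paper's actual rules.

```python
# Sketch of rule-based metadata inference: derive new LOM-style fields from existing ones.
# The specific rules and field names are invented for illustration; the article uses Prolog over RDF.

def infer_additional_metadata(record):
    """Apply simple forward-chaining rules and return an enriched copy of the record."""
    enriched = dict(record)

    # Rule 1: if a typical learning time (minutes) is present, derive a coarse duration category.
    minutes = enriched.get("typicalLearningTimeMinutes")
    if minutes is not None and "durationCategory" not in enriched:
        enriched["durationCategory"] = "short" if minutes <= 30 else "long"

    # Rule 2 (integrity-constraint style): flag records whose declared end-user role looks
    # inconsistent with the declared context, instead of silently accepting them.
    if enriched.get("intendedEndUserRole") == "teacher" and enriched.get("context") == "self-study":
        enriched.setdefault("validationWarnings", []).append(
            "teacher role is unusual for a self-study context")
    return enriched

course = {"title": "Metadata basics", "typicalLearningTimeMinutes": 20,
          "intendedEndUserRole": "teacher", "context": "self-study"}
print(infer_additional_metadata(course))
```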

  10. Manipulating premarked rectangular areas in a real-time digiTV stream utilizing MPEG-7 metadata descriptors

    Science.gov (United States)

    Lugmayr, Artur R.; Creutzburg, Reiner; Kalli, Seppo; Tsoumanis, Andreas

    2002-07-01

    One of the major challenges in digital, interactive television is to provide facilities for an intelligent multimedia presentation at the consumer terminal. The end-user shall benefit from a web-page-like structure whose content is browsable, rather than one monolithic broadcast stream without any interaction facilities or content adaptation models. Therefore we introduce a Digital Broadcast Item (DBI) that structures the broadcast content into an interactable, intelligent multimedia presentation: along with the push content (broadcast stream) consisting of a video/audio stream, adaptive content elements are streamed with the help of binarized metadata streaming solutions and synchronized to the audio/video stream. So far, broadcasting has only provided content as a monolithic structure, composed of an image flow, graphics, special effects, sound effects, a single-path story flow, etc. The transport medium utilized is a high-bit-rate MPEG-2 Transport Stream (MPEG-2 TS) carrying audio/video and some low-level metadata, such as an Electronic Programme Guide (EPG). The aim of this research paper is to show and prove the concept of realizing adaptive content customisation for white pre-marked rectangular areas in multiple orientations.

  11. Can multi-generational exposure to ocean warming and acidification lead to the adaptation of life history and physiology in a marine metazoan?

    Science.gov (United States)

    Gibbin, Emma M; Chakravarti, Leela J; Jarrold, Michael D; Christen, Felix; Turpin, Vincent; Massamba N'Siala, Gloria; Blier, Pierre U; Calosi, Piero

    2017-02-15

    Ocean warming and acidification are concomitant global drivers that are currently threatening the survival of marine organisms. How species will respond to these changes depends on their capacity for plastic and adaptive responses. Little is known about the mechanisms that govern plasticity and adaptability or how global changes will influence these relationships across multiple generations. Here, we exposed the emerging model marine polychaete Ophryotrocha labronica to conditions simulating ocean warming and acidification, in isolation and in combination over five generations to identify: (i) how multiple versus single global change drivers alter both juvenile and adult life-history traits; (ii) the mechanistic link between adult physiological and fitness-related life-history traits; and (iii) whether the phenotypic changes observed over multiple generations are of plastic and/or adaptive origin. Two juvenile (developmental rate; survival to sexual maturity) and two adult (average reproductive body size; fecundity) life-history traits were measured in each generation, in addition to three physiological (cellular reactive oxygen species content, mitochondrial density, mitochondrial capacity) traits. We found that multi-generational exposure to warming alone caused an increase in juvenile developmental rate, reactive oxygen species production and mitochondrial density, decreases in average reproductive body size and fecundity, and fluctuations in mitochondrial capacity, relative to control conditions. Exposure to ocean acidification alone had only minor effects on juvenile developmental rate. Remarkably, when both drivers of global change were present, only mitochondrial capacity was significantly affected, suggesting that ocean warming and acidification act as opposing vectors of stress across multiple generations. © 2017. Published by The Company of Biologists Ltd.

  12. Speed of engagement with support generated by a smoking cessation smartphone Just In Time Adaptive Intervention (JITAI)

    Directory of Open Access Journals (Sweden)

    Felix Naughton

    2015-10-01

    Full Text Available Background: An advantage of the high portability and sensing capabilities of smartphones is the potential for health apps to deliver advice and support to individuals close in time to when it is deemed of greatest relevance and impact, often referred to as Just In Time Adaptive Interventions (JITAIs). However, little research has been undertaken to explore the viability of JITAIs in terms of how long it takes users to engage with support triggered by real time data input, compared to scheduled support, and whether context affects response. This paper is focused on Q Sense, a smoking cessation app developed to deliver both Just in Time and scheduled support messages (every morning) during a smoker’s quit attempt. The Just in Time cessation support generated by Q Sense is triggered by and tailored to real time context using location sensing. Objectives: To assess: (1) the time to engage with the app after a Just in Time support notification is delivered and whether this is influenced by the context in which the notification was initially delivered, (2) whether the time to engage with the app differs between Just in Time support notifications and scheduled support message notifications and (3) whether findings from objectives (1) and (2) differ between smokers receiving or not receiving NHS smoking cessation support. Methods: Data are from two studies evaluating the use of Q Sense: a feasibility study using an opportunity sample of smokers initiating a quit attempt with Q Sense without NHS cessation support (N=15) and an ongoing acceptability study of smokers receiving NHS smoking cessation support alongside app use (target N=40; recruitment due to be completed at the end of November 2015). The time elapsed between notification generation and the user opening the app will be calculated and compared between message types (Just in Time vs. scheduled messages), contexts (home, work, socialising, other) and samples (receiving or not receiving NHS cessation support) using t

  13. Coordinate Reference System Metadata in Interdisciplinary Environmental Modeling

    Science.gov (United States)

    Blodgett, D. L.; Arctur, D. K.; Hnilo, J.; Danko, D. M.; Rutledge, G. K.

    2011-12-01

    For global climate modeling based on a unit sphere, the positional accuracy of transformations between "real earth" coordinates and the spherical earth coordinates is practically irrelevant due to the coarse grid and precision of global models. Consequently, many climate models are driven by data using real-earth coordinates without transforming them to the shape of the model grid. Additionally, metadata to describe the earth shape and its relationship to latitude longitude demarcations, or datum, used for model output is often left unspecified or ambiguous. Studies of weather and climate effects on coastal zones, water resources, agriculture, biodiversity, and other critical domains typically require positional accuracy on the order of several meters or less. This precision requires that a precise datum be used and accounted for in metadata. While it may be understood that climate model results using spherical earth coordinates could not possibly approach this level of accuracy, precise coordinate reference system metadata is nevertheless required by users and applications integrating climate and geographic information. For this reason, data publishers should provide guidance regarding the appropriate datum to assume for their data. Without some guidance, analysts must make assumptions they are uncomfortable or unwilling to make and may spend inordinate amounts of time researching the correct assumption to make. A consequence of the (practically justified for global climate modeling) disregard for datums is that datums are also neglected when publishing regional or local scale climate and weather data where datum information may be important. For example, observed data, like precipitation and temperature measurements, used in downscaling climate model results are georeferenced precisely. If coordinate reference system metadata are disregarded in cases like this, systematic biases in geolocation can result. Additionally, if no datum transformation was applied to

  14. Auto-Generated Semantic Processing Services

    Science.gov (United States)

    Davis, Rodney; Hupf, Greg

    2009-01-01

    Auto-Generated Semantic Processing (AGSP) Services is a suite of software tools for automated generation of other computer programs, denoted cross-platform semantic adapters, that support interoperability of computer-based communication systems that utilize a variety of both new and legacy communication software running in a variety of operating-system/computer-hardware combinations. AGSP has numerous potential uses in military, space-exploration, and other government applications as well as in commercial telecommunications. The cross-platform semantic adapters take advantage of common features of computer-based communication systems to enforce semantics, messaging protocols, and standards of processing of streams of binary data to ensure integrity of data and consistency of meaning among interoperating systems. The auto-generation aspect of AGSP Services reduces development time and effort by emphasizing specification and minimizing implementation: In effect, the design, building, and debugging of software for effecting conversions among complex communication protocols, custom device mappings, and unique data-manipulation algorithms is replaced with metadata specifications that map to an abstract platform-independent communications model. AGSP Services is modular and has been shown to be easily integrable into new and legacy NASA flight and ground communication systems.

  15. NeuroTessMesh: A Tool for the Generation and Visualization of Neuron Meshes and Adaptive On-the-Fly Refinement

    Directory of Open Access Journals (Sweden)

    Juan J. Garcia-Cantero

    2017-06-01

    Full Text Available Gaining a better understanding of the human brain continues to be one of the greatest challenges for science, largely because of the overwhelming complexity of the brain and the difficulty of analyzing the features and behavior of dense neural networks. Regarding analysis, 3D visualization has proven to be a useful tool for the evaluation of complex systems. However, the large number of neurons in non-trivial circuits, together with their intricate geometry, makes the visualization of a neuronal scenario an extremely challenging computational problem. Previous work in this area dealt with the generation of 3D polygonal meshes that approximated the cells’ overall anatomy but did not attempt to deal with the extremely high storage and computational cost required to manage a complex scene. This paper presents NeuroTessMesh, a tool specifically designed to cope with many of the problems associated with the visualization of neural circuits that are comprised of large numbers of cells. In addition, this method facilitates the recovery and visualization of the 3D geometry of cells included in databases, such as NeuroMorpho, and provides the tools needed to approximate missing information such as the soma’s morphology. This method takes as its only input the available compact, yet incomplete, morphological tracings of the cells as acquired by neuroscientists. It uses a multiresolution approach that combines an initial, coarse mesh generation with subsequent on-the-fly adaptive mesh refinement stages using tessellation shaders. For the coarse mesh generation, a novel approach, based on the Finite Element Method, allows approximation of the 3D shape of the soma from its incomplete description. Subsequently, the adaptive refinement process performed in the graphic card generates meshes that provide good visual quality geometries at a reasonable computational cost, both in terms of memory and rendering time. All the described techniques have been

  16. New Solutions for Enabling Discovery of User-Centric Virtual Data Products in NASA's Common Metadata Repository

    Science.gov (United States)

    Pilone, D.; Gilman, J.; Baynes, K.; Shum, D.

    2015-12-01

    This talk introduces a new NASA Earth Observing System Data and Information System (EOSDIS) capability to automatically generate and maintain derived Virtual Product information, allowing DAACs and Data Providers to create tailored and more discoverable variations of their products. After this talk the audience will be aware of the new EOSDIS Virtual Product capability, applications of it, and how to take advantage of it. Much of the data made available in the EOSDIS are organized for generation and archival rather than for discovery and use. The EOSDIS Common Metadata Repository (CMR) is launching a new capability providing automated generation and maintenance of user-oriented Virtual Product information. DAACs can easily surface variations on established data products tailored to specific use cases and users, leveraging DAAC-exposed services such as custom ordering or access services like OPeNDAP for on-demand product generation and distribution. Virtual Data Products enjoy support for spatial and temporal information, keyword discovery, association with imagery, and are fully discoverable by tools such as NASA Earthdata Search, Worldview, and Reverb. Virtual Product generation has applicability across many use cases: - Describing derived products such as Surface Kinetic Temperature information (AST_08) from source products (ASTER L1A) - Providing streamlined access to data products (e.g. AIRS) containing many (>800) data variables covering an enormous variety of physical measurements - Attaching additional EOSDIS offerings such as Visual Metadata, external services, and documentation metadata - Publishing alternate formats for a product (e.g. netCDF for HDF products) with the actual conversion happening on request - Publishing granules to be modified by on-the-fly services, like GES-DISC's Data Quality Screening Service - Publishing "bundled" products where granules from one product correspond to granules from one or more other related products

  17. Automatic Metadata Extraction - The High Energy Physics Use Case

    CERN Document Server

    Boyd, Joseph; Rajman, Martin

    Automatic metadata extraction (AME) of scientific papers has been described as one of the hardest problems in document engineering. Heterogeneous content, varying style, and unpredictable placement of article components render the problem inherently indeterministic. Conditional random fields (CRF), a machine learning technique, can be used to classify document metadata amidst this uncertainty, annotating document contents with semantic labels. High energy physics (HEP) papers, such as those written at CERN, have unique content and structural characteristics, with scientific collaborations of thousands of authors altering article layouts dramatically. The distinctive qualities of these papers necessitate the creation of specialised datasets and model features. In this work we build an unprecedented training set of HEP papers and propose and evaluate a set of innovative features for CRF models. We build upon state-of-the-art AME software, GROBID, a tool coordinating a hierarchy of CRF models in a full document ...
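
    One way to picture the CRF labelling step is through the per-token features fed to the model; the sketch below builds a feature dictionary for each token of a header line, in the style commonly used with CRF toolkits, with feature names chosen for illustration rather than taken from GROBID.

```python
# Sketch of per-token feature extraction for CRF-based metadata labelling
# (author/title/affiliation tagging). Feature names are illustrative, not GROBID's.

def token_features(tokens, i):
    tok = tokens[i]
    return {
        "lower": tok.lower(),
        "is_capitalized": tok[:1].isupper(),
        "is_all_caps": tok.isupper(),
        "has_digit": any(c.isdigit() for c in tok),
        "length": len(tok),
        "prev": tokens[i - 1].lower() if i > 0 else "<BOS>",
        "next": tokens[i + 1].lower() if i + 1 < len(tokens) else "<EOS>",
    }

header = "Observation of a new boson J. Smith CERN Geneva Switzerland".split()
X = [token_features(header, i) for i in range(len(header))]
# X would be paired with per-token labels (TITLE, AUTHOR, AFFILIATION, ...) to train a CRF,
# e.g. with sklearn-crfsuite or CRF++ (training call omitted here).
print(X[0])
```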

  18. ATLAS Metadata Infrastructure Evolution for Run 2 and Beyond

    CERN Document Server

    van Gemmeren, Peter; The ATLAS collaboration; Malon, David; Vaniachine, Alexandre

    2015-01-01

    ATLAS developed and employed for Run 1 of the Large Hadron Collider a sophisticated infrastructure for metadata handling in event processing jobs. This infrastructure profits from a rich feature set provided by the ATLAS execution control framework, including standardized interfaces and invocation mechanisms for tools and services, segregation of transient data stores with concomitant object lifetime management, and mechanisms for handling occurrences asynchronous to the control framework’s state machine transitions. This metadata infrastructure is evolving and being extended for Run 2 to allow its use and reuse in downstream physics analyses, analyses that may or may not utilize the ATLAS control framework. At the same time, multiprocessing versions of the control framework and the requirements of future multithreaded frameworks are leading to redesign of components that use an incident-handling approach to asynchrony. The increased use of scatter-gather architectures, both local and distributed, requires ...

  19. Metadata and Data Management for the Keck Observatory Archive

    CERN Document Server

    Tran, H D; Goodrich, R W; Mader, J A; Swain, M; Laity, A C; Kong, M; Gelino, C R; Berriman, G B

    2014-01-01

    A collaboration between the W. M. Keck Observatory (WMKO) in Hawaii and the NASA Exoplanet Science Institute (NExScI) in California, the Keck Observatory Archive (KOA) was commissioned in 2004 to archive observing data from WMKO, which operates two classically scheduled 10 m ground-based telescopes. The observing data from Keck is not suitable for direct ingestion into the archive since the metadata contained in the original FITS headers lack the information necessary for proper archiving. Coupled with different standards among instrument builders and the heterogeneous nature of the data inherent in classical observing, in which observers have complete control of the instruments and their observations, the data pose a number of technical challenges for KOA. We describe the methodologies and tools that we have developed to successfully address these difficulties, adding content to the FITS headers and "retrofitting" the metadata in order to support archiving Keck data, especially those obtained before the arch...
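
    The "retrofitting" of FITS headers described above can be illustrated with a short astropy sketch; the file name and the keywords added below (for example, a program identifier) are examples chosen for illustration, not the actual KOA keyword set.

```python
# Sketch: add archive-oriented keywords to a FITS header before ingestion.
# The file name and the keywords added here are illustrative, not KOA's actual keyword set.
from astropy.io import fits

with fits.open("night1_exposure42.fits", mode="update") as hdul:  # hypothetical file
    hdr = hdul[0].header
    # Fill in metadata the original instrument software did not record.
    hdr.setdefault("TELESCOP", "Keck I")
    hdr["PROGID"] = ("U123", "Observing program identifier (example)")
    hdr["KOAID"] = ("EX.20040101.00042.fits", "Archive identifier (example)")
    # Changes are written back to the file when the context manager exits.
```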

  20. ATLAS Metadata Infrastructure Evolution for Run 2 and Beyond

    CERN Document Server

    van Gemmeren, Peter; The ATLAS collaboration; Cranshaw, Jack; Vaniachine, Alexandre

    2015-01-01

    ATLAS developed and employed for Run 1 of the Large Hadron Collider a sophisticated infrastructure for metadata handling in event processing jobs. This infrastructure profits from a rich feature set provided by the ATLAS execution control framework, including standardized interfaces and invocation mechanisms for tools and services, segregation of transient data stores with concomitant object lifetime management, and mechanisms for handling occurrences asynchronous to the control framework’s state machine transitions. This metadata infrastructure is evolving and being extended for Run 2 to allow its use and reuse in downstream physics analyses, analyses that may or may not utilize the ATLAS control framework. At the same time, multiprocessing versions of the control framework and the requirements of future multithreaded frameworks are leading to redesign of components that use an incident-handling approach to asynchrony. The increased use of scatter-gather architectures, both local and distributed, requires ...

  1. CAMELOT: Cloud Archive for MEtadata, Library and Online Toolkit

    Science.gov (United States)

    Ginsburg, Adam; Kruijssen, J. M. Diederik; Longmore, Steven N.; Koch, Eric; Glover, Simon C. O.; Dale, James E.; Commerçon, Benoît; Giannetti, Andrea; McLeod, Anna F.; Testi, Leonardo; Zahorecz, Sarolta; Rathborne, Jill M.; Zhang, Qizhou; Fontani, Francesco; Beltrán, Maite T.; Rivilla, Victor M.

    2016-05-01

    CAMELOT facilitates the comparison of observational data and simulations of molecular clouds and/or star-forming regions. The central component of CAMELOT is a database summarizing the properties of observational data and simulations in the literature through pertinent metadata. The core functionality allows users to upload metadata, search and visualize the contents of the database to find and match observations/simulations over any range of parameter space. To bridge the fundamental disconnect between inherently 2D observational data and 3D simulations, the code uses key physical properties that, in principle, are straightforward for both observers and simulators to measure — the surface density (Sigma), velocity dispersion (sigma) and radius (R). By determining these in a self-consistent way for all entries in the database, it should be possible to make robust comparisons.

  2. Data Bookkeeping Service 3 - Providing event metadata in CMS

    CERN Document Server

    Giffels, Manuel; Riley, Daniel

    2014-01-01

    The Data Bookkeeping Service 3 provides a catalog of event metadata for Monte Carlo and recorded data of the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) at CERN, Geneva. It comprises all necessary information for tracking datasets, their processing history and associations between runs, files and datasets, on a large scale of about 200,000 datasets and more than 40 million files, which adds up to around 700 GB of metadata. The DBS is an essential part of the CMS Data Management and Workload Management (DMWM) systems; all kinds of data processing, such as Monte Carlo production, processing of recorded event data, and physics analysis done by the users, rely heavily on the information stored in DBS.

  3. The ground truth about metadata and community detection in networks

    CERN Document Server

    Peel, Leto; Clauset, Aaron

    2016-01-01

    Across many scientific domains, there is a common need to automatically extract a simplified view or a coarse-graining of how a complex system's components interact. This general task is called community detection in networks and is analogous to searching for clusters in independent vector data. It is common to evaluate the performance of community detection algorithms by their ability to find so-called "ground truth" communities. This works well in synthetic networks with planted communities because such networks' links are formed explicitly based on the planted communities. However, there are no planted communities in real world networks. Instead, it is standard practice to treat some observed discrete-valued node attributes, or metadata, as ground truth. Here, we show that metadata are not the same as ground truth, and that treating them as such induces severe theoretical and practical problems. We prove that no algorithm can uniquely solve community detection, and we prove a general No Free Lunch the...

  4. Getting Data Should be Easy! Working with NASA to Improve Earth Science Data Accessibility with Metadata

    Science.gov (United States)

    le Roux, J.

    2016-12-01

    One of the key components of Earth Science data stewardship is high-quality metadata. Ideally, all Earth Science/Earth Observation datasets should be accompanied by a comprehensive metadata record including information such as: where to download the data, the data format, the data temporal and spatial resolution, the instruments used, and the purpose of the data collection (to name a few). While there are metadata formats and standards in place for NASA Earth Science data, many records either fail to provide critical information, or the information provided may be inaccurate, inconsistent, or outdated. The ARC Team at Marshall Space Flight Center has been working to improve the quality of records in the Common Metadata Repository (CMR), which serves as the authoritative management system for all NASA EOSDIS metadata. This process requires direct collaboration with personnel at NASA Distributed Active Archive Centers (DAACs) to ensure that their metadata holdings in CMR are optimal for search and discovery. The first DAAC to undergo a metadata review was the Global Hydrology Resource Center (GHRC). In this presentation, we will describe challenges and lessons learned from the metadata review process undertaken with GHRC. These lessons pave the way for a more efficient metadata review process with other DAACs in the future, which will ultimately result in improved data search capabilities for CMR users. A quantitative overview of improvements made to GHRC metadata since the start of its review process will also be provided.

  5. ISO, FGDC, DIF and Dublin Core - Making Sense of Metadata Standards for Earth Science Data

    Science.gov (United States)

    Jones, P. R.; Ritchey, N. A.; Peng, G.; Toner, V. A.; Brown, H.

    2014-12-01

    Metadata standards provide common definitions of metadata fields for information exchange across user communities. Despite the broad adoption of metadata standards for Earth science data, there are still heterogeneous and incompatible representations of information due to differences between the many standards in use and how each standard is applied. Federal agencies are required to manage and publish metadata in different metadata standards and formats for various data catalogs. In 2014, the NOAA National Climatic Data Center (NCDC) managed metadata for its scientific datasets in ISO 19115-2 in XML, GCMD Directory Interchange Format (DIF) in XML, DataCite Schema in XML, Dublin Core in XML, and Data Catalog Vocabulary (DCAT) in JSON, with more standards and profiles of standards planned. Of these standards, the ISO 19115-series metadata is the most complete and feature-rich, and for this reason it is used by NCDC as the source for the other metadata standards. We will discuss the capabilities of metadata standards and how these standards are being implemented to document datasets. Successful implementations include developing translations and displays using XSLTs, creating links to related data and resources, documenting dataset lineage, and establishing best practices. Benefits, gaps, and challenges will be highlighted with suggestions for improved approaches to metadata storage and maintenance.

  6. Automated Atmospheric Composition Dataset Level Metadata Discovery. Difficulties and Surprises

    Science.gov (United States)

    Strub, R. F.; Falke, S. R.; Kempler, S.; Fialkowski, E.; Goussev, O.; Lynnes, C.

    2015-12-01

    The Atmospheric Composition Portal (ACP) is an aggregator and curator of information related to remotely sensed atmospheric composition data and analysis. It uses existing tools and technologies and, where needed, enhances those capabilities to provide interoperable access, tools, and contextual guidance for scientists and value-adding organizations using remotely sensed atmospheric composition data. The initial focus is on Essential Climate Variables identified by the Global Climate Observing System - CH4, CO, CO2, NO2, O3, SO2 and aerosols. This poster addresses our efforts in building the ACP Data Table, an interface to help discover and understand remotely sensed data that are related to atmospheric composition science and applications. We harvested the GCMD, CWIC, and GEOSS metadata catalogs using machine-to-machine technologies (OpenSearch, Web Services). We also manually investigated the plethora of CEOS data provider portals and other catalogs where that data might be aggregated. This poster is our experience of the excellence, variety, and challenges we encountered. Conclusions: 1. The significant benefits that the major catalogs provide are their machine-to-machine tools, like OpenSearch and Web Services, rather than any GUI usability improvements, given the large amount of data in their catalogs. 2. There is a trend at the large catalogs towards simulating small data provider portals through advanced services. 3. Populating metadata catalogs using ISO 19115 is too complex for users to do in a consistent way, difficult to parse visually or with XML libraries, and too complex for Java XML binders like CASTOR. 4. The ability to search for IDs first and then for data (GCMD and ECHO) is better for machine-to-machine operations than the timeouts experienced when returning the entire metadata entry at once. 5. Metadata harvest and export activities between the major catalogs have led to a significant amount of duplication (this is currently being addressed). 6. Most (if not

  7. Metadata Embeddings for User and Item Cold-start Recommendations

    OpenAIRE

    Kula, Maciej

    2015-01-01

    I present a hybrid matrix factorisation model representing users and items as linear combinations of their content features' latent factors. The model outperforms both collaborative and content-based models in cold-start or sparse interaction data scenarios (using both user and item metadata), and performs at least as well as a pure collaborative matrix factorisation model where interaction data is abundant. Additionally, feature embeddings produced by the model encode semantic information in...
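    The scoring rule described above can be sketched in a few lines: every user feature and every item feature owns a latent vector (and a bias), a user or item is the sum of its features' vectors, and the predicted affinity is their dot product. The sketch below is a minimal, hedged illustration of that idea in plain numpy; the dimensions, initialisation and feature indices are assumptions, and the training loop of the actual model is omitted.

        import numpy as np

        n_user_features, n_item_features, k = 1000, 5000, 32
        rng = np.random.default_rng(0)

        # One latent vector (and bias) per *feature*, not per user or item.
        user_feat_emb = rng.normal(scale=0.01, size=(n_user_features, k))
        item_feat_emb = rng.normal(scale=0.01, size=(n_item_features, k))
        user_feat_bias = np.zeros(n_user_features)
        item_feat_bias = np.zeros(n_item_features)

        def score(user_features, item_features):
            """Feature index lists describing a user and an item (metadata
            plus, optionally, an identity indicator feature)."""
            u = user_feat_emb[user_features].sum(axis=0)
            i = item_feat_emb[item_features].sum(axis=0)
            return (u @ i + user_feat_bias[user_features].sum()
                    + item_feat_bias[item_features].sum())

        # A brand-new item with no interactions is still scoreable via its metadata:
        print(score(user_features=[3, 17], item_features=[42, 256, 999]))

    Because an unseen item is just a new combination of known metadata features, the cold-start case falls out of the same scoring rule, which is the point the abstract makes.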

  8. The Use of Metadata Visualisation to Assist Information Retrieval

    Science.gov (United States)

    2007-10-01

    aspect of the popularity scale (Ahlberg & Shneiderman, 1994). The different genres (including drama, mystery, comedy, western, horror, action etc...organised with metadata for each item within the library, providing information describing the author, the genre, the title, the publisher, the year it...album title, the track length and the genre of music. Again, any of these pieces of information can be used to quickly search and locate specific

  9. A 225 kW Direct Driven PM Generator Adapted to a Vertical Axis Wind Turbine

    Directory of Open Access Journals (Sweden)

    S. Eriksson

    2011-01-01

    Full Text Available A unique direct driven permanent magnet synchronous generator has been designed and constructed. Results from simulations as well as from the first experimental tests are presented. The generator has been specifically designed to be directly driven by a vertical axis wind turbine and has an unusually low reactance. Generators for wind turbines with fully variable speed should maintain a high efficiency for the whole operational regime. Furthermore, for this application, requirements are placed on high generator torque capability for the whole operational regime. These issues are elaborated in the paper and studied through simulations. It is shown that the generator fulfils the expectations. An electrical control can effectively substitute for a mechanical pitch control. Furthermore, results from measurements of magnetic flux density in the airgap and no-load voltage coincide with simulations. The electromagnetic simulations of the generator are performed by using an electromagnetic model solved in a finite element environment.

  10. The ATLAS EventIndex: data flow and inclusion of other metadata

    Science.gov (United States)

    Barberis, D.; Cárdenas Zárate, S. E.; Favareto, A.; Fernandez Casani, A.; Gallas, E. J.; Garcia Montoro, C.; Gonzalez de la Hoz, S.; Hrivnac, J.; Malon, D.; Prokoshin, F.; Salt, J.; Sanchez, J.; Toebbicke, R.; Yuan, R.; ATLAS Collaboration

    2016-10-01

    The ATLAS EventIndex is the catalogue of the event-related metadata for the information collected from the ATLAS detector. The basic unit of this information is the event record, containing the event identification parameters, pointers to the files containing this event as well as trigger decision information. The main use case for the EventIndex is event picking, as well as data consistency checks for large production campaigns. The EventIndex employs the Hadoop platform for data storage and handling, as well as a messaging system for the collection of information. The information for the EventIndex is collected both at Tier-0, when the data are first produced, and from the Grid, when various types of derived data are produced. The EventIndex uses various types of auxiliary information from other ATLAS sources for data collection and processing: trigger tables from the condition metadata database (COMA), dataset information from the data catalogue AMI and the Rucio data management system and information on production jobs from the ATLAS production system. The ATLAS production system is also used for the collection of event information from the Grid jobs. EventIndex developments started in 2012 and in the middle of 2015 the system was commissioned and started collecting event metadata, as a part of ATLAS Distributed Computing operations.

  11. Embedding Metadata and Other Semantics in Word Processing Documents

    Directory of Open Access Journals (Sweden)

    Peter Sefton

    2009-10-01

    Full Text Available This paper describes a technique for embedding document metadata, and potentially other semantic references, inline in word processing documents, which the authors have implemented with the help of a software development team. Several assumptions underlie the approach: it must be available across computing platforms and work with both Microsoft Word (because of its user base) and OpenOffice.org (because of its free availability). Further, the application needs to be acceptable to and usable by users, so the initial implementation covers only a small number of features, which will only be extended after user-testing. Within these constraints the system provides a mechanism for encoding not only simple metadata, but for inferring hierarchical relationships between metadata elements from a ‘flat’ word processing file. The paper includes links to open source code implementing the techniques as part of a broader suite of tools for academic writing. This addresses tools and software, semantic web and data curation, and integrating curation into research workflows, and will provide a platform for integrating work on ontologies, vocabularies and folksonomies into word processing tools.

  12. Metadata Wizard: an easy-to-use tool for creating FGDC-CSDGM metadata for geospatial datasets in ESRI ArcGIS Desktop

    Science.gov (United States)

    Ignizio, Drew A.; O'Donnell, Michael S.; Talbert, Colin B.

    2014-01-01

    Creating compliant metadata for scientific data products is mandated for all federal Geographic Information Systems professionals and is a best practice for members of the geospatial data community. However, the complexity of the Federal Geographic Data Committee’s Content Standards for Digital Geospatial Metadata, the limited availability of easy-to-use tools, and recent changes in the ESRI software environment continue to make metadata creation a challenge. Staff at the U.S. Geological Survey Fort Collins Science Center have developed a Python toolbox for ESRI ArcDesktop to facilitate a semi-automated workflow to create and update metadata records in ESRI’s 10.x software. The U.S. Geological Survey Metadata Wizard tool automatically populates several metadata elements: the spatial reference, spatial extent, geospatial presentation format, vector feature count or raster column/row count, native system/processing environment, and the metadata creation date. Once the software auto-populates these elements, users can easily add attribute definitions and other relevant information in a simple Graphical User Interface. The tool, which offers a simple design free of esoteric metadata language, has the potential to save many government and non-government organizations a significant amount of time and costs by facilitating the development of Federal Geographic Data Committee Content Standards for Digital Geospatial Metadata-compliant metadata for ESRI software users. A working version of the tool is now available for ESRI ArcDesktop, versions 10.0, 10.1, and 10.2 (downloadable at http:/www.sciencebase.gov/metadatawizard).

  13. Towards Next Generation BI Systems

    DEFF Research Database (Denmark)

    Varga, Jovan; Romero, Oscar; Pedersen, Torben Bach

    2014-01-01

    Next generation Business Intelligence (BI) systems require integration of heterogeneous data sources and a strong user-centric orientation. Both needs entail machine-processable metadata to enable automation and allow end users to gain access to relevant data for their decision making processes...

  14. Metadata distribution algorithm based on directory hash in mass storage system

    Science.gov (United States)

    Wu, Wei; Luo, Dong-jian; Pei, Can-hao

    2008-12-01

    The distribution of metadata is very important in mass storage systems. Many storage systems use subtree partitioning or hash algorithms to distribute the metadata among a metadata server cluster. Although system access performance is improved, the scalability problem is remarkable in most of these algorithms. This paper proposes a new directory hash (DH) algorithm. It treats the directory as the hash key value, implements a concentrated storage of metadata, and takes a dynamic load-balancing strategy. It improves the efficiency of metadata distribution and access in mass storage systems by hashing on the directory and placing metadata together at directory granularity. The DH algorithm solves the scalability problems of file hash algorithms, such as changing a directory name or permissions, or adding or removing an MDS from the cluster. The DH algorithm reduces the additional request volume and the scale of each data migration in scaling operations. It enhances the scalability of mass storage systems remarkably.
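    A toy sketch of the placement idea (the server names and hash choice are hypothetical, not taken from the paper): metadata is placed by hashing the containing directory rather than the full file path, so all entries of a directory stay together on one metadata server and file-level changes inside a directory do not move metadata.

        import hashlib
        import posixpath

        METADATA_SERVERS = ["mds0", "mds1", "mds2", "mds3"]   # hypothetical cluster

        def mds_for(path):
            """Pick a metadata server by hashing the parent directory, so that
            metadata is stored together at directory granularity."""
            directory = posixpath.dirname(path.rstrip("/")) or "/"
            digest = hashlib.sha1(directory.encode()).digest()
            return METADATA_SERVERS[int.from_bytes(digest[:4], "big") % len(METADATA_SERVERS)]

        print(mds_for("/home/alice/data/file1.dat"))   # lands on the same server...
        print(mds_for("/home/alice/data/file2.dat"))   # ...as its sibling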

  15. FPGA-based rate-adaptive LDPC-coded modulation for the next generation of optical communication systems.

    Science.gov (United States)

    Zou, Ding; Djordjevic, Ivan B

    2016-09-05

    In this paper, we propose a rate-adaptive FEC scheme based on LDPC codes together with its software-reconfigurable unified FPGA architecture. By FPGA emulation, we demonstrate that the proposed class of rate-adaptive LDPC codes based on shortening, with an overhead from 25% to 42.9%, provides a coding gain ranging from 13.08 dB to 14.28 dB at a post-FEC BER of 10^-15 for BPSK transmission. In addition, the proposed rate-adaptive LDPC coding combined with higher-order modulations has been demonstrated, including QPSK, 8-QAM, 16-QAM, 32-QAM, and 64-QAM, which covers a wide range of signal-to-noise ratios. Furthermore, we apply unequal error protection by employing different LDPC codes on different bits in 16-QAM and 64-QAM, which results in an additional 0.5 dB gain compared to conventional LDPC coded modulation with the same code rate of the corresponding LDPC code.
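    For readers unfamiliar with the overhead figures quoted above, the relation between FEC overhead and code rate is simple; the lines below (an illustration, not part of the paper) show that 25% and 42.9% overhead correspond to code rates of 0.8 and roughly 0.7.

        def code_rate_from_overhead(overhead):
            """FEC overhead OH = (n - k) / k, hence rate R = k / n = 1 / (1 + OH)."""
            return 1.0 / (1.0 + overhead)

        for oh in (0.25, 0.429):
            print(f"overhead {oh:.1%} -> code rate {code_rate_from_overhead(oh):.3f}")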

  16. From Finding Aids to Wiki Pages: Remixing Archival Metadata with RAMP

    Directory of Open Access Journals (Sweden)

    David González

    2013-10-01

    Full Text Available The Remixing Archival Metadata Project (RAMP) is a lightweight web-based editing tool that is intended to let users do two things: (1) generate enhanced authority records for creators of archival collections and (2) publish the content of those records as Wikipedia pages. The RAMP editor can extract biographical and historical data from EAD finding aids to create new authority records for persons, corporate bodies, and families associated with archival and special collections (using the EAC-CPF format). It can then let users enhance those records with additional data from sources like VIAF and WorldCat Identities. Finally, it can transform those records into wiki markup so that users can edit them directly, merge them with any existing Wikipedia pages, and publish them to Wikipedia through its API.

  17. A Conceptual Frame for Evaluating Success Factors being Critical for the Adaption of Externally Generated R&D

    Directory of Open Access Journals (Sweden)

    ARTURO RODRÍGUEZ CASTELLANOS

    2007-06-01

    Full Text Available Nowadays, enterprises must continuously develop innovations in their products, processes and organisational structures to gain and maintain competitive advantages. In this sense, the organizational capacity to adopt externally generated knowledge is becoming an increasingly crucial core capacity for businesses. Based on an empirical study, the present work proposes a model that aims at evaluating the key organizational drivers supporting the adoption of externally generated R+D. With this, it provides a management tool that might facilitate the determination of adequate strategies to take advantage of knowledge which is generated outside the organization's own borders.

  18. Design of a phased array for the generation of adaptive radiation force along a path surrounding a breast lesion for dynamic ultrasound elastography imaging.

    Science.gov (United States)

    Ekeom, Didace; Hadj Henni, Anis; Cloutier, Guy

    2013-03-01

    This work demonstrates, with numerical simulations, the potential of an octagonal probe for the generation of radiation forces in a set of points following a path surrounding a breast lesion in the context of dynamic ultrasound elastography imaging. Because of the in-going wave adaptive focusing strategy, the proposed method is adapted to induce shear wave fronts to interact optimally with complex lesions. Transducer elements were based on 1-3 piezocomposite material. Three-dimensional simulations combining the finite element method and boundary element method with periodic boundary conditions in the elevation direction were used to predict acoustic wave radiation in a targeted region of interest. The coupling factor of the piezocomposite material and the radiated power of the transducer were optimized. The transducer's electrical impedance was targeted to 50 Ω. The probe was simulated by assembling the designed transducer elements to build an octagonal phased-array with 256 elements on each edge (for a total of 2048 elements). The central frequency is 4.54 MHz; simulated transducer elements are able to deliver enough power and can generate the radiation force with a relatively low level of voltage excitation. Using dynamic transmitter beamforming techniques, the radiation force along a path and resulting acoustic pattern in the breast were simulated assuming a linear isotropic medium. Magnitude and orientation of the acoustic intensity (radiation force) at any point of a generation path could be controlled for the case of an example representing a heterogeneous medium with an embedded soft mechanical inclusion.
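    The in-going wave adaptive focusing described above relies on standard transmit focusing: each element is delayed so that all wavefronts arrive at the chosen focal point simultaneously, and the focal point is then stepped along the path around the lesion. The sketch below is a generic delay calculation under assumed values (speed of sound, element layout), not the authors' simulation code.

        import numpy as np

        C = 1540.0      # assumed speed of sound in soft tissue, m/s
        F0 = 4.54e6     # array centre frequency quoted in the abstract, Hz

        def focusing_delays(element_xy, focus_xy):
            """Per-element transmit delays (s) so all wavefronts reach the focus together."""
            d = np.linalg.norm(np.asarray(element_xy) - np.asarray(focus_xy), axis=1)
            return (d.max() - d) / C

        # Toy linear sub-aperture; in the paper the 2048 elements form an octagon.
        elements = np.column_stack([np.linspace(-0.02, 0.02, 64), np.zeros(64)])
        print(focusing_delays(elements, focus_xy=[0.005, 0.03])[:5])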

  19. Demo Abstract: Human-in-the-loop BMS Point Matching and Metadata Labeling with Babel

    DEFF Research Database (Denmark)

    Fürst, Jonathan; Chen, Kaifei; Katz, Randy H.

    2015-01-01

    The inconsistent metadata in Building Management Systems (BMS) hinders the deployment of cyber-physical applications in non-residential buildings. In this demonstration we present Babel, a continuous, human-in-the-loop and crowdsourced approach to the creation and maintenance of BMS metadata...... system in a non-residential building over the BACnet protocol. While our approach cannot solve all metadata problems, this demonstration illustrates that it is able to match many relevant points in a fast and precise manner.

  20. Preliminary document analyzing and summarizing metadata standards and issues across Europe

    OpenAIRE

    Anderson, David; Delve, Janet; Pinchbeck, Dan; Alemu, Getaneh

    2009-01-01

    This document is a report on the state-of-the-art in metadata standards and approaches in Europe. Metadata are widely recognized as a critical component of digital preservation and it is typically the case that within individual cultural heritage organizations numerous different metadata schemes are employed, each of which aims to capture particular aspects of digital objects. KEEP is particularly focused on emulation as a digital preservation strategy and addresses directly dynamic digital o...

  1. A Common Data Model for Meta-Data in Interoperable Environments

    OpenAIRE

    Macfarlane, A; McCann, J. A.; Liddell, H

    1996-01-01

    A Common Data Model is a unifying structure used to allow heterogeneous environments to interoperate. An Object Oriented common model is presented in this paper, which provides this unifying structure for a Meta-Data Repository Visualisation Tool. The creation of this common model from the Meta-Data held in component databases is described. The role this common model has in interoperable environments is discussed, and the physical architecture created from the examination of the Meta-Data in ...

  2. Exploring historical trends using taxonomic name metadata

    Directory of Open Access Journals (Sweden)

    Schenk Ryan

    2008-05-01

    Full Text Available Abstract Background: Authority and year information have been attached to taxonomic names since Linnaean times. The systematic structure of taxonomic nomenclature facilitates the ability to develop tools that can be used to explore historical trends that may be associated with taxonomy. Results: From the over 10.7 million taxonomic names that are part of the uBio system 4, approximately 3 million names were identified to have taxonomic authority information from the years 1750 to 2004. A pipe-delimited file was then generated, organized according to a Linnaean hierarchy and by years from 1750 to 2004, and imported into an Excel workbook. A series of macros were developed to create an Excel-based tool and a complementary Web site to explore the taxonomic data. A cursory and speculative analysis of the data reveals observable trends that may be attributable to significant events of both taxonomic (e.g., publishing of key monographs) and societal importance (e.g., world wars). The findings also help quantify the number of taxonomic descriptions that may be made available through digitization initiatives. Conclusion: Temporal organization of taxonomic data can be used to identify interesting biological epochs relative to historically significant events and ongoing efforts. We have developed an Excel workbook and complementary Web site that enables one to explore taxonomic trends for Linnaean taxonomic groupings, from Kingdoms to Families.

  3. Exploring historical trends using taxonomic name metadata.

    Science.gov (United States)

    Sarkar, Indra Neil; Schenk, Ryan; Norton, Catherine N

    2008-05-13

    Authority and year information have been attached to taxonomic names since Linnaean times. The systematic structure of taxonomic nomenclature facilitates the ability to develop tools that can be used to explore historical trends that may be associated with taxonomy. From the over 10.7 million taxonomic names that are part of the uBio system 4, approximately 3 million names were identified to have taxonomic authority information from the years 1750 to 2004. A pipe-delimited file was then generated, organized according to a Linnaean hierarchy and by years from 1750 to 2004, and imported into an Excel workbook. A series of macros were developed to create an Excel-based tool and a complementary Web site to explore the taxonomic data. A cursory and speculative analysis of the data reveals observable trends that may be attributable to significant events that are of both taxonomic (e.g., publishing of key monographs) and societal importance (e.g., world wars). The findings also help quantify the number of taxonomic descriptions that may be made available through digitization initiatives. Temporal organization of taxonomic data can be used to identify interesting biological epochs relative to historically significant events and ongoing efforts. We have developed an Excel workbook and complementary Web site that enables one to explore taxonomic trends for Linnaean taxonomic groupings, from Kingdoms to Families.
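    The temporal analysis described above boils down to counting descriptions per year; a minimal sketch of that step is shown below, assuming a hypothetical pipe-delimited file with a "year" column, since the abstract does not specify the exact file layout.

        import csv
        from collections import Counter

        counts = Counter()
        # Hypothetical layout: one row per taxonomic name, with a "year" column (1750-2004).
        with open("ubio_names.psv", newline="", encoding="utf-8") as fh:
            for row in csv.DictReader(fh, delimiter="|"):
                year = row.get("year", "").strip()
                if year.isdigit():
                    counts[int(year)] += 1

        # Spikes around key monographs, or dips during the world wars, show up here.
        for year in sorted(counts):
            print(year, counts[year])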

  4. The Theory and Implementation for Metadata in Digital Library/Museum

    Directory of Open Access Journals (Sweden)

    Hsueh-hua Chen

    1998-12-01

    Full Text Available Digital Libraries and Museums (DL/M) have become one of the important research issues of Library and Information Science as well as other related fields. This paper describes the basic concepts of DL/M and briefly introduces the development of the Taiwan Digital Museum Project. Based on the features of various collections, we discuss how to maintain, manage and exchange metadata, especially from the viewpoint of users. We propose the draft of a metadata scheme, MICI (Metadata Interchange for Chinese Information), developed by the ROSS (Resources Organization and Searching Specification) team. Finally, current problems and future development of metadata will be touched upon. [Article content in Chinese]

  5. A Shared Infrastructure for Federated Search Across Distributed Scientific Metadata Catalogs

    Science.gov (United States)

    Reed, S. A.; Truslove, I.; Billingsley, B. W.; Grauch, A.; Harper, D.; Kovarik, J.; Lopez, L.; Liu, M.; Brandt, M.

    2013-12-01

    The vast amount of science metadata can be overwhelming and highly complex. Comprehensive analysis and sharing of metadata is difficult since institutions often publish to their own repositories. There are many disjoint standards used for publishing scientific data, making it difficult to discover and share information from different sources. Services that publish metadata catalogs often have different protocols, formats, and semantics. The research community is limited by the exclusivity of separate metadata catalogs, and thus it is desirable to have federated search interfaces capable of unified search queries across multiple sources. Aggregation of metadata catalogs also enables users to critique metadata more rigorously. With these motivations in mind, the National Snow and Ice Data Center (NSIDC) and the Advanced Cooperative Arctic Data and Information Service (ACADIS) implemented two search interfaces for the community. Both the NSIDC Search and the ACADIS Arctic Data Explorer (ADE) use a common infrastructure, which keeps maintenance costs low. The search clients are designed to make OpenSearch requests against Solr, an open source search platform. Solr applies indexes to specific fields of the metadata, which in this instance optimizes queries containing keywords, spatial bounds and temporal ranges. NSIDC metadata is reused by both search interfaces, but the ADE also brokers additional sources. Users can quickly find relevant metadata with minimal effort, which ultimately lowers costs for research. This presentation will highlight the reuse of data and code between NSIDC and ACADIS, discuss challenges and milestones for each project, and identify the creation and use of open source libraries.
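    To make the query model concrete, the sketch below issues the kind of keyword-plus-space-plus-time search described above directly against a Solr select handler; the endpoint URL, field names and bounding-box syntax are assumptions for illustration, not NSIDC's or ACADIS's actual schema.

        import requests

        SOLR_URL = "https://example.org/solr/metadata/select"   # hypothetical endpoint

        params = {
            "q": "sea ice extent",                                # free-text keywords
            "fq": [
                "temporal_start:[2000-01-01T00:00:00Z TO *]",     # assumed field names
                "temporal_end:[* TO 2010-12-31T23:59:59Z]",
                "spatial:[60,-180 TO 90,180]",                    # assumed bbox field/syntax
            ],
            "rows": 25,
            "wt": "json",
        }
        docs = requests.get(SOLR_URL, params=params, timeout=30).json()["response"]["docs"]
        for doc in docs:
            print(doc.get("title"), doc.get("dataset_url"))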

  6. Improving Scientific Metadata Interoperability And Data Discoverability using OAI-PMH

    Science.gov (United States)

    Devarakonda, Ranjeet; Palanisamy, Giri; Green, James M.; Wilson, Bruce E.

    2010-12-01

    While general-purpose search engines (such as Google or Bing) are useful for finding many things on the Internet, they are often of limited usefulness for locating Earth Science data relevant (for example) to a specific spatiotemporal extent. By contrast, tools that search repositories of structured metadata can locate relevant datasets with fairly high precision, but the search is limited to that particular repository. Federated searches (such as Z39.50) have been used, but can be slow, and their comprehensiveness can be limited by downtime in any search partner. An alternative approach to improve comprehensiveness is for a repository to harvest metadata from other repositories, possibly with limits based on subject matter or access permissions. Searches through harvested metadata can be extremely responsive, and the search tool can be customized with semantic augmentation appropriate to the community of practice being served. However, there are a number of different protocols for harvesting metadata, with some challenges for ensuring that updates are propagated and for collaborations with repositories using differing metadata standards. The Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) is a standard that is seeing increased use as a means for exchanging structured metadata. OAI-PMH implementations must support Dublin Core as a metadata standard, with other metadata formats as optional. We have developed tools which enable our structured search tool (Mercury; http://mercury.ornl.gov) to consume metadata from OAI-PMH services in any of the metadata formats we support (Dublin Core, Darwin Core, FGDC CSDGM, GCMD DIF, EML, and ISO 19115/19137). We are also making ORNL DAAC metadata available through OAI-PMH for other metadata tools to utilize, such as the NASA Global Change Master Directory (GCMD). This paper describes Mercury capabilities with multiple metadata formats, in general, and, more specifically, the results of our OAI-PMH implementations and
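    A minimal harvesting client illustrates why OAI-PMH is attractive for this purpose: the ListRecords verb, the mandatory oai_dc metadata format and resumption-token paging are all part of the standard, while the base URL below is a hypothetical placeholder rather than a real ORNL endpoint.

        import requests
        import xml.etree.ElementTree as ET

        OAI = "{http://www.openarchives.org/OAI/2.0/}"
        BASE_URL = "https://example.org/oai/provider"        # hypothetical endpoint

        def harvest(metadata_prefix="oai_dc"):
            params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
            while True:
                response = requests.get(BASE_URL, params=params, timeout=60)
                root = ET.fromstring(response.content)
                for record in root.iter(f"{OAI}record"):
                    yield record
                token = root.find(f".//{OAI}resumptionToken")
                if token is None or not (token.text or "").strip():
                    return
                params = {"verb": "ListRecords", "resumptionToken": token.text.strip()}

        for rec in harvest():
            header = rec.find(f"{OAI}header/{OAI}identifier")
            print(header.text if header is not None else "(no identifier)")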

  7. Metadata: Standards for Retrieving WWW Documents (and Other Digitized and Non-Digitized Resources)

    Science.gov (United States)

    Rusch-Feja, Diann

    The use of metadata for indexing digitized and non-digitized resources for resource discovery in a networked environment is being increasingly implemented all over the world. Greater precision is achieved using metadata than by relying on universal search engines, and furthermore metadata can be used as a filtering mechanism for search results. An overview of various metadata sets is given, followed by a more focussed presentation of Dublin Core Metadata, including examples of sub-elements and qualifiers. In particular, the use of the Dublin Core Relation element provides connections between the metadata of various related electronic resources, as well as the metadata for physical, non-digitized resources. This facilitates more comprehensive search results without losing precision and brings together different genres of information which would otherwise be searchable only in separate databases. Furthermore, the advantages of Dublin Core Metadata in comparison with library cataloging and the use of universal search engines are discussed briefly, followed by a listing of types of implementation of Dublin Core Metadata.

  8. Studies of Big Data metadata segmentation between relational and non-relational databases

    CERN Document Server

    Golosova, M V; Klimentov, A A; Ryabinkin, E A; Dimitrov, G; Potekhin, M

    2015-01-01

    In recent years the concepts of Big Data have become well established in IT. Systems managing large data volumes produce metadata that describe data and workflows. These metadata are used to obtain information about the current system state and for statistical and trend analysis of the processes these systems drive. Over time, the amount of stored metadata can grow dramatically. In this article we present our studies demonstrating how metadata storage scalability and performance can be improved by using a hybrid RDBMS/NoSQL architecture.

  9. Studies of Big Data metadata segmentation between relational and non-relational databases

    Science.gov (United States)

    Golosova, M. V.; Grigorieva, M. A.; Klimentov, A. A.; Ryabinkin, E. A.; Dimitrov, G.; Potekhin, M.

    2015-12-01

    In recent years the concepts of Big Data have become well established in IT. Systems managing large data volumes produce metadata that describe data and workflows. These metadata are used to obtain information about the current system state and for statistical and trend analysis of the processes these systems drive. Over time, the amount of stored metadata can grow dramatically. In this article we present our studies demonstrating how metadata storage scalability and performance can be improved by using a hybrid RDBMS/NoSQL architecture.

  10. Towards an integrated food safety surveillance system: a simulation study to explore the potential of combining genomic and epidemiological metadata

    Science.gov (United States)

    Crotta, M.; Wall, B.; Good, L.; O'Brien, S. J.; Guitian, J.

    2017-01-01

    Foodborne infection is a result of exposure to complex, dynamic food systems. The efficiency of foodborne infection is driven by ongoing shifts in genetic machinery. Next-generation sequencing technologies can provide high-fidelity data about the genetics of a pathogen. However, food safety surveillance systems do not currently provide similar high-fidelity epidemiological metadata to associate with genetic data. As a consequence, it is rarely possible to transform genetic data into actionable knowledge that can be used to genuinely inform risk assessment or prevent outbreaks. Big data approaches are touted as a revolution in decision support, and pose a potentially attractive method for closing the gap between the fidelity of genetic and epidemiological metadata for food safety surveillance. We therefore developed a simple food chain model to investigate the potential benefits of combining ‘big’ data sources, including both genetic and high-fidelity epidemiological metadata. Our results suggest that, as for any surveillance system, the collected data must be relevant and characterize the important dynamics of a system if we are to properly understand risk: this suggests the need to carefully consider data curation, rather than the more ambitious claims of big data proponents that unstructured and unrelated data sources can be combined to generate consistent insight. Of interest is that the biggest influencers of foodborne infection risk were contamination load and processing temperature, not genotype. This suggests that understanding food chain dynamics would probably more effectively generate insight into foodborne risk than prescribing the hazard in ever more detail in terms of genotype.

  11. Online Fault Identification Based on an Adaptive Observer for Modular Multilevel Converters Applied to Wind Power Generation Systems

    DEFF Research Database (Denmark)

    Liu, Hui; Ma, Ke; Loh, Poh Chiang

    2015-01-01

    and post-fault maintenance. Therefore, in this paper, an effective fault diagnosis technique for real-time diagnosis of switching device faults, covering both open-circuit faults and short-circuit faults in MMC sub-modules, is proposed, in which the faulty phase and the fault type are detected...... by analyzing the difference among the three output load currents, while the localization of the faulty switches is achieved by comparing the estimation results of the adaptive observer. In contrast to other methods that use additional sensors or devices, the presented technique uses the measured phase currents...

  12. A 0.76-pJ/Pulse 0.1-1 Gpps Microwatt IR-UWB CMOS Pulse Generator with Adaptive PSD Control Using A Limited Monocycle Precharge Technique

    DEFF Research Database (Denmark)

    Shen, Ming; Yin, Ying-Zheng; Jiang, Hao

    2015-01-01

    This brief presents an ultra-wideband pulse generator topology featuring adaptive control of power spectral density for a broad range of applications with different data rate requirements. The adaptivity is accomplished by employing a limited monocycle precharge approach to control the energy use...

  13. ClipCard: Sharable, Searchable Visual Metadata Summaries on the Cloud to Render Big Data Actionable

    Science.gov (United States)

    Saripalli, P.; Davis, D.; Cunningham, R.

    2013-12-01

    Research firm IDC estimates that approximately 90 percent of Enterprise Big Data goes unanalyzed, as 'dark data' - an enormous corpus of undiscovered, untagged information residing on data warehouses, servers and Storage Area Networks (SAN). In the geosciences, these data range from unpublished model runs to vast survey data assets to raw sensor data. Many of these are now being collected instantaneously, at a greater volume and in new data formats. Not all of these data can be analyzed, nor processed in real time, and their features may not be well described at the time of collection. These dark data are a serious data management problem for science organizations of all types, especially ones with mandated or required data reporting and compliance requirements. Additionally, data curators and scientists are encouraged to quantify the impact of their data holdings as a way to measure research success. Deriving actionable insights is the foremost goal of Big Data Analytics (BDA), which is especially true in the geosciences, given their direct impact on most of the pressing global issues. Clearly, there is a pressing need for innovative approaches to making dark data discoverable, measurable, and actionable. We report on ClipCard, a cloud-based SaaS analytic platform for instant summarization, quick search, visualization and easy sharing of metadata summaries from the dark data at hierarchical levels of detail, thus rendering it 'white', i.e., actionable. We present a use case of the ClipCard platform, a cloud-based application which helps generate (abstracted) visual metadata summaries and meta-analytics for environmental data at hierarchical scales within and across big data containers. These summaries and analyses provide important new tools for managing big data and simplifying collaboration through easy-to-deploy sharing APIs. The ClipCard application solves a growing data management bottleneck by helping enterprises and large organizations to summarize, search
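    As a rough illustration of what a shareable metadata summary of this kind might contain (a guess at the general idea, not ClipCard's implementation; the file and column names are hypothetical), a large data file can be reduced to a compact card holding a record count, a spatial bounding box and a time range that is cheap to index, search and share.

        import json
        import pandas as pd

        def summary_card(csv_path, lat="lat", lon="lon", time="timestamp"):
            """Reduce a large data file to a small, shareable metadata summary."""
            df = pd.read_csv(csv_path, usecols=[lat, lon, time], parse_dates=[time])
            return {
                "source": csv_path,
                "records": int(len(df)),
                "bbox": [float(df[lon].min()), float(df[lat].min()),
                         float(df[lon].max()), float(df[lat].max())],
                "time_range": [df[time].min().isoformat(), df[time].max().isoformat()],
            }

        print(json.dumps(summary_card("sensor_readings.csv"), indent=2))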

  14. Standardized metadata for human pathogen/vector genomic sequences.

    Directory of Open Access Journals (Sweden)

    Vivien G Dugan

    Full Text Available High throughput sequencing has accelerated the determination of genome sequences for thousands of human infectious disease pathogens and dozens of their vectors. The scale and scope of these data are enabling genotype-phenotype association studies to identify genetic determinants of pathogen virulence and drug/insecticide resistance, and phylogenetic studies to track the origin and spread of disease outbreaks. To maximize the utility of genomic sequences for these purposes, it is essential that metadata about the pathogen/vector isolate characteristics be collected and made available in organized, clear, and consistent formats. Here we report the development of the GSCID/BRC Project and Sample Application Standard, developed by representatives of the Genome Sequencing Centers for Infectious Diseases (GSCIDs), the Bioinformatics Resource Centers (BRCs) for Infectious Diseases, and the U.S. National Institute of Allergy and Infectious Diseases (NIAID), part of the National Institutes of Health (NIH), informed by interactions with numerous collaborating scientists. It includes mapping to terms from other data standards initiatives, including the Genomic Standards Consortium's minimal information (MIxS) and NCBI's BioSample/BioProjects checklists and the Ontology for Biomedical Investigations (OBI). The standard includes data fields about characteristics of the organism or environmental source of the specimen, spatial-temporal information about the specimen isolation event, phenotypic characteristics of the pathogen/vector isolated, and project leadership and support. By modeling metadata fields into an ontology-based semantic framework and reusing existing ontologies and minimum information checklists, the application standard can be extended to support additional project-specific data fields and integrated with other data represented with comparable standards. The use of this metadata standard by all ongoing and future GSCID sequencing projects will

  15. A Combination of Central Pattern Generator-based and Reflex-based Neural Networks for Dynamic, Adaptive, Robust Bipedal Locomotion

    DEFF Research Database (Denmark)

    Di Canio, Giuliano; Larsen, Jørgen Christian; Wörgötter, Florentin

    2016-01-01

    Robotic systems inspired by humans have always sparked the curiosity of engineers and scientists. Of the many challenges, human locomotion is a very difficult one, where a number of different systems need to interact in order to generate a correct and balanced pattern. To simulate the in...

  16. Generation of Mammalian Host-adapted Leptospira interrogans by Cultivation in Peritoneal Dialysis Membrane Chamber Implantation in Rats.

    Science.gov (United States)

    Grassmann, André Alex; McBride, Alan John Alexander; Nally, Jarlath E; Caimano, Melissa J

    2015-07-20

    Leptospira interrogans can infect a myriad of mammalian hosts, including humans (Bharti et al., 2003; Ko et al., 2009). Following acquisition by a suitable host, leptospires disseminate via the bloodstream to multiple tissues, including the kidneys, where they adhere to and colonize the proximal convoluted renal tubules (Athanazio et al., 2008). Infected hosts shed large numbers of spirochetes in their urine, and the leptospires can survive in different environmental conditions before transmission to another host. Differential gene expression by Leptospira spp. permits adaptation to these new conditions. Here we describe a protocol for the cultivation of Leptospira interrogans within Dialysis Membrane Chambers (DMCs) implanted into the peritoneal cavities of Sprague-Dawley rats (Caimano et al., 2014). This technique was originally developed to study mammalian adaptation by the Lyme disease spirochete, Borrelia burgdorferi (Akins et al., 1998; Caimano, 2005). The small pore size (8,000 MWCO) of the dialysis membrane tubing used for this procedure permits access to host nutrients but excludes host antibodies and immune effector cells. Given the physiological and environmental similarities between DMCs and the proximal convoluted renal tubule, we reasoned that the DMC model would be suitable for studying in vivo gene expression by L. interrogans. In a 20 to 30 min procedure, DMCs containing virulent leptospires are surgically implanted into the rat peritoneal cavity. Nine to 11 days post-implantation, DMCs are explanted and organisms recovered. Typically, a single DMC yields ~10^9 mammalian host-adapted leptospires (Caimano et al., 2014). In addition to providing a facile system for studying the transcriptional and physiologic changes pathogenic L. interrogans undergo within the mammal, the DMC model also provides a rational basis for selecting new targets for mutagenesis and the identification of novel virulence determinants. Caution: Leptospira interrogans is a BSL-2

  17. openPDS: protecting the privacy of metadata through SafeAnswers.

    Science.gov (United States)

    de Montjoye, Yves-Alexandre; Shmueli, Erez; Wang, Samuel S; Pentland, Alex Sandy

    2014-01-01

    The rise of smartphones and web services made possible the large-scale collection of personal metadata. Information about individuals' location, phone call logs, or web searches is collected and used intensively by organizations and big data researchers. Metadata has, however, yet to realize its full potential. Privacy and legal concerns, as well as the lack of technical solutions for personal metadata management, are preventing metadata from being shared and reconciled under the control of the individual. This lack of access and control is furthermore fueling growing concerns, as it prevents individuals from understanding and managing the risks associated with the collection and use of their data. Our contribution is two-fold: (1) we describe openPDS, a personal metadata management framework that allows individuals to collect, store, and give fine-grained access to their metadata to third parties. It has been implemented in two field studies; (2) we introduce and analyze SafeAnswers, a new and practical way of protecting the privacy of metadata at an individual level. SafeAnswers turns a hard anonymization problem into a more tractable security one. It allows services to ask questions whose answers are calculated against the metadata instead of trying to anonymize individuals' metadata. The dimensionality of the data shared with the services is reduced from high-dimensional metadata to low-dimensional answers that are less likely to be re-identifiable and to contain sensitive information. These answers can then be directly shared individually or in aggregate. openPDS and SafeAnswers provide a new way of dynamically protecting personal metadata, thereby supporting the creation of smart data-driven services and data science research.
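    The SafeAnswers idea can be sketched in a few lines: the service's question is evaluated inside the user's personal data store, and only a low-dimensional answer ever leaves it. The data layout and the example question below are illustrative assumptions, not the openPDS API.

        from datetime import datetime

        # Metadata stays inside the user's personal data store (illustrative layout).
        call_log = [
            {"number": "+15551230001", "start": datetime(2014, 3, 1, 21, 5), "minutes": 12},
            {"number": "+15551230002", "start": datetime(2014, 3, 2, 9, 30), "minutes": 3},
            {"number": "+15551230001", "start": datetime(2014, 3, 5, 22, 45), "minutes": 7},
        ]

        def answer_evening_call_share(log):
            """Runs locally; only this single number is shared, never the raw records."""
            evening = sum(1 for c in log if c["start"].hour >= 20)
            return round(evening / len(log), 2)

        print(answer_evening_call_share(call_log))   # e.g. 0.67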

  18. openPDS: protecting the privacy of metadata through SafeAnswers.

    Directory of Open Access Journals (Sweden)

    Yves-Alexandre de Montjoye

    Full Text Available The rise of smartphones and web services made possible the large-scale collection of personal metadata. Information about individuals' location, phone call logs, or web searches is collected and used intensively by organizations and big data researchers. Metadata has, however, yet to realize its full potential. Privacy and legal concerns, as well as the lack of technical solutions for personal metadata management, are preventing metadata from being shared and reconciled under the control of the individual. This lack of access and control is furthermore fueling growing concerns, as it prevents individuals from understanding and managing the risks associated with the collection and use of their data. Our contribution is two-fold: (1) we describe openPDS, a personal metadata management framework that allows individuals to collect, store, and give fine-grained access to their metadata to third parties. It has been implemented in two field studies; (2) we introduce and analyze SafeAnswers, a new and practical way of protecting the privacy of metadata at an individual level. SafeAnswers turns a hard anonymization problem into a more tractable security one. It allows services to ask questions whose answers are calculated against the metadata instead of trying to anonymize individuals' metadata. The dimensionality of the data shared with the services is reduced from high-dimensional metadata to low-dimensional answers that are less likely to be re-identifiable and to contain sensitive information. These answers can then be directly shared individually or in aggregate. openPDS and SafeAnswers provide a new way of dynamically protecting personal metadata, thereby supporting the creation of smart data-driven services and data science research.

  19. Metadata Design in the New PDS4 Standards - Something for Everybody

    Science.gov (United States)

    Raugh, Anne C.; Hughes, John S.

    2015-11-01

    The Planetary Data System (PDS) archives, supports, and distributes data of diverse targets, from diverse sources, to diverse users. One of the core problems addressed by the PDS4 data standard redesign was that of metadata - how to accommodate the increasingly sophisticated demands of search interfaces, analytical software, and observational documentation into label standards without imposing limits and constraints that would impinge on the quality or quantity of metadata that any particular observer or team could supply. And yet, as an archive, PDS must have detailed documentation for the metadata in the labels it supports, or the institutional knowledge encoded into those attributes will be lost - putting the data at risk. The PDS4 metadata solution is based on a three-step approach. First, it is built on two key ISO standards: ISO 11179 "Information Technology - Metadata Registries", which provides a common framework and vocabulary for defining metadata attributes; and ISO 14721 "Space Data and Information Transfer Systems - Open Archival Information System (OAIS) Reference Model", which provides the framework for the information architecture that enforces the object-oriented paradigm for metadata modeling. Second, PDS has defined a hierarchical system that allows it to divide its metadata universe into namespaces ("data dictionaries", conceptually), and more importantly to delegate stewardship for a single namespace to a local authority. This means that a mission can develop its own data model with a high degree of autonomy and effectively extend the PDS model to accommodate its own metadata needs within the common ISO 11179 framework. Finally, within a single namespace - even the core PDS namespace - existing metadata structures can be extended and new structures added to the model as new needs are identified. This poster illustrates the PDS4 approach to metadata management and highlights the expected return on the development investment for PDS, users and data

  20. Technical Evaluation Report 40: The International Learning Object Metadata Survey

    Directory of Open Access Journals (Sweden)

    Norm Friesen

    2004-11-01

    Full Text Available A wide range of projects and organizations is currently making digital learning resources (learning objects) available to instructors, students, and designers via systematic, standards-based infrastructures. One standard that is central to many of these efforts and infrastructures is known as Learning Object Metadata (IEEE 1484.12.1-2002, or LOM). This report builds on Report #11 in this series, and discusses the findings of the author's recent study of ways in which the LOM standard is being used internationally.