WorldWideScience

Sample records for delaware internet-based metadata

  1. Metadata Effectiveness in Internet Discovery: An Analysis of Digital Collection Metadata Elements and Internet Search Engine Keywords

    Science.gov (United States)

    Yang, Le

    2016-01-01

    This study analyzed digital item metadata and keywords from Internet search engines to learn what metadata elements actually facilitate discovery of digital collections through Internet keyword searching and how significantly each metadata element affects the discovery of items in a digital repository. The study found that keywords from Internet…

  2. Using Metadata to Build Geographic Information Sharing Environment on Internet

    Directory of Open Access Journals (Sweden)

    Chih-hong Sun

    1999-12-01

The Internet provides a convenient environment for sharing geographic information. Web GIS (Geographic Information Systems) even gives users direct access to geographic databases through the Internet. However, the complexity of geographic data makes it difficult for users to understand the real content and the limitations of geographic information. In some cases, users may misuse the geographic data and make wrong decisions. Meanwhile, geographic data are distributed across various government agencies, academic institutes, and private organizations, which makes it even more difficult for users to fully understand the content of these complex data. To overcome these difficulties, this research uses metadata as a guiding mechanism for users to fully understand the content and the limitations of geographic data. We introduce three metadata standards commonly used for geographic data and the metadata authoring tools available in the US. We also review the current development of a geographic metadata standard in Taiwan. Two metadata authoring tools are developed in this research, which will enable users to build their own geographic metadata easily. [Article content in Chinese]

  3. Development of health information search engine based on metadata and ontology.

    Science.gov (United States)

    Song, Tae-Min; Park, Hyeoun-Ae; Jin, Dal-Lae

    2014-04-01

The aim of the study was to develop a metadata- and ontology-based health information search engine ensuring semantic interoperability to collect and provide health information using different application programs. A health information metadata ontology was developed using a distributed semantic Web content publishing model based on vocabularies used to index the contents generated by information producers as well as those used by users to search the contents. The vocabulary for the health information ontology was mapped to the Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT), and a list of about 1,500 terms was proposed. The metadata schema used in this study was developed by adding an element describing the target audience to the Dublin Core Metadata Element Set. A metadata schema and an ontology ensuring interoperability of health information available on the internet were developed. The metadata- and ontology-based health information search engine developed in this study produced better search results than existing search engines. A health information search engine based on metadata and ontology will provide reliable health information to both information producers and consumers.
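The schema extension the abstract describes, adding a target-audience element to Dublin Core, can be sketched as follows. This is an illustrative sketch: the `ext` namespace URI and the element values are invented here, not taken from the paper's actual schema.

```python
import xml.etree.ElementTree as ET

# Dublin Core element namespace; the "audience" extension namespace below
# is a placeholder URI invented for this sketch, not the paper's schema.
DC = "http://purl.org/dc/elements/1.1/"
EXT = "http://example.org/health-metadata/"

ET.register_namespace("dc", DC)
ET.register_namespace("ext", EXT)

def build_record(title, subject, audience):
    """Minimal Dublin-Core-style record extended with a target-audience
    element, mirroring the kind of extension the abstract describes."""
    rec = ET.Element("record")
    ET.SubElement(rec, f"{{{DC}}}title").text = title
    ET.SubElement(rec, f"{{{DC}}}subject").text = subject
    ET.SubElement(rec, f"{{{EXT}}}audience").text = audience
    return ET.tostring(rec, encoding="unicode")

xml_record = build_record("Managing type 2 diabetes", "diabetes mellitus", "patient")
```

Indexing the audience element alongside the standard Dublin Core fields is what would let a search engine filter results separately for patients and clinicians.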

  4. A web-based, dynamic metadata interface to MDSplus

    International Nuclear Information System (INIS)

    Gardner, Henry J.; Karia, Raju; Manduchi, Gabriele

    2008-01-01

We introduce the concept of a Fusion Data Grid and discuss the management of metadata within such a Grid. We describe a prototype application which serves fusion data over the internet together with metadata information which can be flexibly created and modified over time. The application interfaces with the MDSplus data acquisition system and it has been designed to capture metadata which is generated by scientists from the post-processing of experimental data. The implementation of dynamic metadata tables using the Java programming language together with an object-relational mapping system, Hibernate, is described in the Appendix.
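The idea of "dynamic metadata tables" (table columns defined at runtime rather than fixed at design time) can be sketched with stdlib SQLite rather than the paper's Java/Hibernate stack; the field names below are invented for illustration.

```python
import sqlite3

# Sketch of a dynamic metadata table: columns are created at runtime from
# a field list, in the spirit of (not using) the paper's Java/Hibernate
# implementation. Field names are hypothetical post-processing metadata.
fields = [("shot_number", "INTEGER"), ("analysis_code", "TEXT"), ("chi_squared", "REAL")]

conn = sqlite3.connect(":memory:")
cols = ", ".join(f"{name} {sqltype}" for name, sqltype in fields)
conn.execute(f"CREATE TABLE post_processing_metadata ({cols})")
conn.execute(
    "INSERT INTO post_processing_metadata VALUES (?, ?, ?)",
    (71254, "thomson_fit_v2", 1.07),
)
row = conn.execute(
    "SELECT shot_number, analysis_code FROM post_processing_metadata"
).fetchone()
```

An ORM such as Hibernate adds object mapping and schema versioning on top of this basic pattern, which is what makes the tables modifiable over time.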

  5. Study on high-level waste geological disposal metadata model

    International Nuclear Information System (INIS)

    Ding Xiaobin; Wang Changhong; Zhu Hehua; Li Xiaojun

    2008-01-01

This paper explains the concept of metadata and related research in China and abroad, then explains the motivation for studying a metadata model for the high-level nuclear waste deep geological disposal project. With reference to GML, the authors first set up DML under the framework of digital underground space engineering. Based on DML, a standardized metadata schema for the high-level nuclear waste deep geological disposal project is presented. Then, a metadata model for use over the internet is put forward. With the standardized data and CSW services, this model may solve the problems in sharing and exchanging data of different forms. A metadata editor is built to search and maintain metadata based on this model. (authors)

  6. Design and Implementation of an Internet Radio Application Using a Multimedia Database through the Application of Ontology and Metadata

    Directory of Open Access Journals (Sweden)

    M. Rudy Erwansyah

    2012-06-01

The study aims to analyze, design and implement the internet radio application used in managing the audio data at the Heartline FM radio station. In this application, the managed audio data can be used in radio broadcast scheduling. The scheduled radio broadcast is then forwarded to the webcast server to be transmitted over the Internet. This research carries out analysis, design and implementation using the Object Oriented Analysis and Design method and Lean Architecture for Agile Software Development. The program component design consists of: (1) software functional system, (2) user interface, (3) problem domain model, which in the internet radio application is divided into five subcomponents, namely: audio-indexing-retrieval, scheduling, reporting, user and ontology. In the implementation of this internet radio application, the audio data management uses a multimedia database applying metadata and ontology, so that indexing and retrieval can be reused quickly in broadcasts. The application can also be used to carry out the radio broadcast automatically during specified hours. This internet radio application has been able to meet the needs of radio Heartline.

  7. DESIGN AND PRACTICE ON METADATA SERVICE SYSTEM OF SURVEYING AND MAPPING RESULTS BASED ON GEONETWORK

    Directory of Open Access Journals (Sweden)

    Z. Zha

    2012-08-01

Based on analysis and research on current geographic information sharing and metadata services, we design, develop and deploy a distributed metadata service system based on GeoNetwork covering more than 30 provincial-level nodes in China. By identifying the advantages of GeoNetwork, we design a distributed metadata service system for national surveying and mapping results. It consists of 31 network nodes, a central node and a portal. Network nodes are the direct metadata sources of the system and are distributed around the country. Each network node maintains a metadata service system responsible for metadata uploading and management. The central node harvests metadata from the network nodes using the OGC CSW 2.0.2 standard interface. The portal shows all metadata in the central node, provides users with a variety of methods and interfaces for metadata search and querying, and provides management capabilities for connecting the central node and the network nodes together. There are defects in GeoNetwork too. Accordingly, we made improvements and optimizations for large-volume metadata uploading, synchronization and concurrent access. For metadata uploading and synchronization, by carefully analyzing the database and index operation logs, we successfully avoided the performance bottlenecks, and with a batch operation and dynamic memory management solution, data throughput and system performance are significantly improved. For concurrent access, through a request coding and results cache solution, query performance is greatly improved. To smoothly respond to huge concurrent requests, a web cluster solution is deployed. This paper also gives an experimental analysis and compares system performance before and after improvement and optimization. The design and practical results have been applied in the national metadata service system of surveying and mapping results. It proved that the improved GeoNetwork service architecture can effectively adapt to…
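The harvesting step the abstract mentions, a central node pulling records from each network node over CSW 2.0.2, starts with a GetRecords request. A minimal sketch of building such a request body with the standard library (parameter values here are illustrative defaults, not the paper's configuration):

```python
import xml.etree.ElementTree as ET

# OGC CSW 2.0.2 namespace, as used by GeoNetwork's catalogue interface.
CSW = "http://www.opengis.net/cat/csw/2.0.2"
ET.register_namespace("csw", CSW)

def get_records_request(start=1, max_records=10):
    """Build a minimal CSW 2.0.2 GetRecords request body of the kind the
    central node could POST to each provincial node when harvesting."""
    req = ET.Element(f"{{{CSW}}}GetRecords", {
        "service": "CSW",
        "version": "2.0.2",
        "resultType": "results",
        "startPosition": str(start),
        "maxRecords": str(max_records),
        "outputSchema": CSW,
    })
    query = ET.SubElement(req, f"{{{CSW}}}Query", {"typeNames": "csw:Record"})
    ET.SubElement(query, f"{{{CSW}}}ElementSetName").text = "full"
    return ET.tostring(req, encoding="unicode")

body = get_records_request(start=1, max_records=50)
```

Paging with `startPosition`/`maxRecords` is also the natural place to apply the batch-operation optimization the abstract describes for large-volume synchronization.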

  8. Delaware Bay, Delaware Benthic Habitats 2010 Substrate

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Coastal Program of Delaware's Division of Soil and Water Conservation (DNREC), the University of Delaware, Partnership for the Delaware Estuary, and the New...

  9. Delaware Bay, Delaware Benthic Habitats 2010 Biotic

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Coastal Program of Delaware's Division of Soil and Water Conservation (DNREC), the University of Delaware, Partnership for the Delaware Estuary, and the New...

  10. Delaware Bay, Delaware Benthic Habitats 2010 Geoform

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Coastal Program of Delaware's Division of Soil and Water Conservation (DNREC), the University of Delaware, Partnership for the Delaware Estuary, and the New...

  11. Delaware Bay, Delaware Benthic Habitats 2010 Geodatabase

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Coastal Program of Delaware's Division of Soil and Water Conservation (DNREC), the University of Delaware, Partnership for the Delaware Estuary, and the New...

  12. Multi-facetted Metadata - Describing datasets with different metadata schemas at the same time

    Science.gov (United States)

    Ulbricht, Damian; Klump, Jens; Bertelmann, Roland

    2013-04-01

Inspired by the wish to re-use research data, much work is being done to bring the data systems of the earth sciences together. Discovery metadata are disseminated to data portals to allow the building of customized indexes of catalogued dataset items. Data that were once acquired in the context of a scientific project are open for reappraisal and can now be used by scientists who were not part of the original research team. To make data re-use easier, measurement methods and measurement parameters must be documented in an application metadata schema and described in a written publication. Linking datasets to publications - as DataCite [1] does - again requires a specific metadata schema, and every new use context of the measured data may require yet another metadata schema sharing only a subset of information with the meta-information already present. To cope with the problem of metadata schema diversity in our common data repository at GFZ Potsdam, we established a solution to store file-based research data and describe these with an arbitrary number of metadata schemas. The core component of the data repository is an eSciDoc infrastructure that provides versioned container objects, called eSciDoc [2] "items". The eSciDoc content model allows assigning files to "items" and adding any number of metadata records to these "items". The eSciDoc items can be submitted, revised, and finally published, which makes the data and metadata available through the internet worldwide. GFZ Potsdam uses eSciDoc to support its scientific publishing workflow, including mechanisms for data review in peer review processes by providing temporary web links for external reviewers who do not have credentials to access the data. Based on the eSciDoc API, panMetaDocs [3] provides a web portal for data management in research projects. PanMetaDocs, which is based on panMetaWorks [4], is a PHP-based web application that allows data to be described with any XML-based schema. It uses the eSciDoc infrastructures…
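The core idea, one versioned container holding data files plus any number of metadata records in different schemas, can be sketched as a small data model. The class and field names below are illustrative, not the actual eSciDoc content model.

```python
from dataclasses import dataclass, field

@dataclass
class MetadataRecord:
    schema: str   # e.g. "iso19115", "datacite", or a project-specific schema
    xml: str      # the record payload

@dataclass
class Item:
    """Sketch of a versioned container in the spirit of an eSciDoc "item":
    data files plus any number of metadata records in different schemas.
    Names here are invented for illustration."""
    files: list = field(default_factory=list)
    metadata: list = field(default_factory=list)
    state: str = "pending"   # pending -> submitted -> released

    def add_metadata(self, schema, xml):
        self.metadata.append(MetadataRecord(schema, xml))

    def schemas(self):
        return [m.schema for m in self.metadata]

item = Item(files=["seismic_profile.csv"])
item.add_metadata("datacite", "<resource>...</resource>")
item.add_metadata("iso19115", "<MD_Metadata>...</MD_Metadata>")
```

Keeping the metadata records as siblings of the files, rather than baking one schema into the container, is what lets the same dataset serve DataCite registration and a discipline-specific portal at the same time.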

  13. Learning Object Metadata in a Web-Based Learning Environment

    NARCIS (Netherlands)

    Avgeriou, Paris; Koutoumanos, Anastasios; Retalis, Symeon; Papaspyrou, Nikolaos

    2000-01-01

The plethora and variance of learning resources embedded in modern web-based learning environments require a mechanism to enable their structured administration. This goal can be achieved by defining metadata on them and constructing a system that manages the metadata in the context of the learning…

  14. A Novel Architecture of Metadata Management System Based on Intelligent Cache

    Institute of Scientific and Technical Information of China (English)

    SONG Baoyan; ZHAO Hongwei; WANG Yan; GAO Nan; XU Jin

    2006-01-01

This paper introduces a novel architecture of metadata management system based on intelligent cache called Metadata Intelligent Cache Controller (MICC). By using an intelligent cache to control the metadata system, MICC can deal with different scenarios, such as splitting and merging queries into sub-queries for the metadata sets available locally, in order to reduce the access time of remote queries. Applications can find partial results in the local cache and fetch the remaining portion of the metadata from remote locations. Using the existing metadata, MICC can not only effectively enhance the fault tolerance and load balancing of the system, but also improve access efficiency while ensuring access quality.
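The query-splitting behaviour described above can be sketched in a few lines: answer what the local cache can, and forward only the misses. This is a toy illustration of the idea, not the MICC implementation.

```python
def split_query(keys, cache):
    """Split a metadata query into the part answerable from the local
    intelligent cache and the part that must go to remote servers.
    `cache` maps metadata key -> metadata value (a toy stand-in)."""
    local = {k: cache[k] for k in keys if k in cache}
    remote = [k for k in keys if k not in cache]
    return local, remote

cache = {"file_a": {"size": 10}, "file_b": {"size": 20}}
local_hits, remote_keys = split_query(["file_a", "file_c"], cache)
```

Merging the answers fetched for `remote_keys` back into the cache is what raises the local hit rate for subsequent queries, which is where the load-balancing and fault-tolerance benefits come from.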

  15. Dyniqx: a novel meta-search engine for metadata based cross search

    OpenAIRE

    Zhu, Jianhan; Song, Dawei; Eisenstadt, Marc; Barladeanu, Cristi; Rüger, Stefan

    2008-01-01

The effect of metadata in collection fusion has not been sufficiently studied. In response to this, we present a novel meta-search engine called Dyniqx for metadata-based cross search. Dyniqx exploits the availability of metadata in academic search services such as PubMed and Google Scholar for fusing search results from heterogeneous search engines. In addition, metadata from these search engines are used for generating dynamic query controls such as sliders and tick boxes, which are ...

  16. Ontology-based Metadata Portal for Unified Semantics

    Data.gov (United States)

    National Aeronautics and Space Administration — The Ontology-based Metadata Portal for Unified Semantics (OlyMPUS) will extend the prototype Ontology-Driven Interactive Search Environment for Earth Sciences...

  17. A programmatic view of metadata, metadata services, and metadata flow in ATLAS

    International Nuclear Information System (INIS)

    Malon, D; Albrand, S; Gallas, E; Stewart, G

    2012-01-01

    The volume and diversity of metadata in an experiment of the size and scope of ATLAS are considerable. Even the definition of metadata may seem context-dependent: data that are primary for one purpose may be metadata for another. ATLAS metadata services must integrate and federate information from inhomogeneous sources and repositories, map metadata about logical or physics constructs to deployment and production constructs, provide a means to associate metadata at one level of granularity with processing or decision-making at another, offer a coherent and integrated view to physicists, and support both human use and programmatic access. In this paper we consider ATLAS metadata, metadata services, and metadata flow principally from the illustrative perspective of how disparate metadata are made available to executing jobs and, conversely, how metadata generated by such jobs are returned. We describe how metadata are read, how metadata are cached, and how metadata generated by jobs and the tasks of which they are a part are communicated, associated with data products, and preserved. We also discuss the principles that guide decision-making about metadata storage, replication, and access.

  18. An Interactive, Web-Based Approach to Metadata Authoring

    Science.gov (United States)

    Pollack, Janine; Wharton, Stephen W. (Technical Monitor)

    2001-01-01

NASA's Global Change Master Directory (GCMD) serves a growing number of users by assisting the scientific community in the discovery of and linkage to Earth science data sets and related services. The GCMD holds over 8000 data set descriptions in Directory Interchange Format (DIF) and 200 data service descriptions in Service Entry Resource Format (SERF), encompassing the disciplines of geology, hydrology, oceanography, meteorology, and ecology. Data descriptions also contain geographic coverage information, thus allowing researchers to discover data pertaining to a particular geographic location, as well as a subject of interest. The GCMD strives to be the preeminent data locator for world-wide directory-level metadata. In this vein, scientists and data providers must have access to intuitive and efficient metadata authoring tools. Existing GCMD tools are not currently attracting widespread usage. With usage being the prime indicator of utility, it has become apparent that the current tools must be improved. As a result, the GCMD has released a new suite of web-based authoring tools that enable a user to create new data and service entries, as well as modify existing data entries. With these tools, a more interactive approach to metadata authoring is taken, as they feature a visual "checklist" of data/service fields that automatically updates when a field is completed. In this way, the user can quickly gauge which of the required and optional fields have not been populated. With the release of these tools, the Earth science community will be further assisted in efficiently creating quality data and services metadata. Keywords: metadata, Earth science, metadata authoring tools
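The information behind the visual checklist, which required and optional fields remain unpopulated, reduces to a simple set computation. The field names below are invented placeholders, not the actual DIF/SERF field list.

```python
# Hypothetical field names for illustration; the real DIF/SERF fields differ.
REQUIRED = {"Entry_Title", "Parameters", "Summary"}
OPTIONAL = {"Spatial_Coverage", "Temporal_Coverage"}

def checklist(entry):
    """Report which required and optional fields are still unpopulated,
    i.e. the state a checklist UI would render after each edit."""
    filled = {k for k, v in entry.items() if v}  # empty strings don't count
    return {
        "missing_required": sorted(REQUIRED - filled),
        "missing_optional": sorted(OPTIONAL - filled),
        "complete": REQUIRED <= filled,
    }

status = checklist({"Entry_Title": "Sea surface temperature", "Summary": ""})
```

Re-running this check on every field change and re-rendering the result is all the "automatically updating checklist" interaction requires.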

  19. On the Origin of Metadata

    Directory of Open Access Journals (Sweden)

    Sam Coppens

    2012-12-01

Metadata has been around and has evolved for centuries, albeit not recognized as such. Medieval manuscripts typically had illuminations at the start of each chapter, being both a kind of signature for the author writing the script and a pictorial chapter anchor for the illiterate readers of the time. Nowadays, there is so much fragmented information on the Internet that users sometimes fail to distinguish the real facts from some bent truth, let alone being able to interconnect different facts. Here, metadata can act both as a noise reducer enabling detailed recommendations to end users and as a catalyst to interconnect related information. Over time, metadata thus not only has had different modes of information; furthermore, metadata's relation of information to meaning, i.e., "semantics", evolved. Darwin's evolutionary propositions, from "species have an unlimited reproductive capacity", over "natural selection", to "the cooperation of mutations leads to adaptation to the environment", show remarkable parallels to both metadata's different modes of information and to its relation of information to meaning over time. In this paper, we will show that the evolution of the use of (meta)data can be mapped to Darwin's nine evolutionary propositions. As mankind and its behavior are products of an evolutionary process, the evolutionary process of metadata with its different modes of information is on the verge of a new, semantic, era.

  20. Metadata-Driven SOA-Based Application for Facilitation of Real-Time Data Warehousing

    Science.gov (United States)

    Pintar, Damir; Vranić, Mihaela; Skočir, Zoran

Service-oriented architecture (SOA) has already been widely recognized as an effective paradigm for achieving integration of diverse information systems. SOA-based applications can cross boundaries of platforms, operating systems and proprietary data standards, commonly through the use of Web Services technology. On the other hand, metadata is also commonly referred to as a potential integration tool, given that standardized metadata objects can provide useful information about the specifics of unknown information systems with which one has an interest in communicating, an approach commonly called "model-based integration". This paper presents the results of research regarding a possible synergy between those two integration facilitators. This is accomplished with a vertical example of a metadata-driven SOA-based business process that provides ETL (Extraction, Transformation and Loading) and metadata services to a data warehousing system in need of real-time ETL support.
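The "metadata-driven" part of such an ETL service can be illustrated with a toy job whose behaviour is described entirely by a metadata structure rather than hard-coded. Everything below (field names, the cast rule) is invented for this sketch and is not the paper's design.

```python
# A toy metadata-driven ETL step: the job is described by a metadata dict,
# so the same service code can serve different sources and targets.
job_metadata = {
    "source_rows": [{"amt": "3"}, {"amt": "4"}],       # extract: staged rows
    "transform": {"field": "amt", "cast": int},        # transform rule
    "target": [],                                      # load destination
}

def run_etl(meta):
    """Extract rows, apply the cast named in the metadata, load to target."""
    field_name = meta["transform"]["field"]
    cast = meta["transform"]["cast"]
    for row in meta["source_rows"]:
        meta["target"].append({field_name: cast(row[field_name])})
    return meta["target"]

loaded = run_etl(job_metadata)
```

In a real SOA deployment the metadata dict would come from a metadata service and the extract/load ends would be Web Service calls, but the control flow, behaviour parameterized by a metadata object, is the same.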

  1. Delaware Technical & Community College's response to the critical shortage of Delaware secondary science teachers

    Science.gov (United States)

    Campbell, Nancy S.

This executive position paper examines the critical shortage of Delaware high school science teachers and Delaware Technical & Community College's possible role in addressing this shortage. A concise analysis of the economic and political implications of the science teacher shortage is presented. The following topics were researched and evaluated: the specific science teacher needs of Delaware school districts; the science teacher education program offerings at Delaware universities and colleges; the Alternative Route to Teacher Certification (ARTC); and the state of Delaware's scholarship response to the need. Recommendations for Delaware Tech's role include the development and implementation of two new Associate of Arts in Teaching programs, in physics secondary science education and chemistry secondary science education.

  2. Inheritance rules for Hierarchical Metadata Based on ISO 19115

    Science.gov (United States)

    Zabala, A.; Masó, J.; Pons, X.

    2012-04-01

Mainly, ISO 19115 has been used to describe metadata for datasets and services. Furthermore, the ISO 19115 standard (as well as the new draft ISO 19115-1) includes a conceptual model that allows metadata to be described at different levels of granularity, structured in hierarchical levels, both for aggregated resources such as series and datasets, and for more disaggregated resources such as types of entities (feature type), types of attributes (attribute type), entities (feature instances) and attributes (attribute instances). In theory it is possible to apply a complete metadata structure to all hierarchical levels of metadata, from the whole series down to an individual feature attribute, but storing all metadata at all levels is completely impractical. An inheritance mechanism is needed to store each piece of metadata and quality information at the optimum hierarchical level and to allow easy and efficient documentation of metadata, both in an Earth observation scenario such as multi-satellite mission multiband imagery, and in a complex vector topographical map that includes several feature types separated into layers (e.g. administrative limits, contour lines, building polygons, road lines, etc.). Moreover, due to the traditional splitting of maps into tiles for handling at detailed scales, or due to satellite characteristics, each of the previous thematic layers (e.g. 1:5000 roads for a country) or bands (a Landsat-5 TM cover of the Earth) is tiled into several parts (sheets or scenes, respectively). According to the hierarchy in ISO 19115, the definition of general metadata can be supplemented by spatially specific metadata that, when required, either inherits or overrides the general case (G.1.3). Annex H of the standard states that only metadata exceptions are defined at lower levels, so it is not necessary to generate the full registry of metadata for each level but to link particular values to the general values that they inherit. Conceptually the metadata…
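The inheritance mechanism described, store only exceptions at lower levels and fall back to the parent for everything else, can be sketched as a parent-linked lookup. The levels and field values below are illustrative.

```python
class MetadataNode:
    """One hierarchical metadata level (series, dataset, feature type, ...).
    Only exceptions are stored locally; everything else is inherited from
    the parent, in the spirit of ISO 19115 Annex H (a sketch, not the
    standard's data model)."""
    def __init__(self, parent=None, **local):
        self.parent = parent
        self.local = local        # the exceptions defined at this level

    def get(self, key):
        node = self
        while node is not None:   # walk up until some level defines the key
            if key in node.local:
                return node.local[key]
            node = node.parent
        raise KeyError(key)

series = MetadataNode(license="CC-BY", lineage="Landsat-5 TM mission")
scene = MetadataNode(parent=series, lineage="Scene 198-31, 2011-07-02")

inherited = scene.get("license")   # falls back to the series-level value
overridden = scene.get("lineage")  # the scene-level exception wins
```

This is why the full metadata registry never needs to exist at the scene or feature level: a reader resolves any field by walking up until the nearest level that defines it.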

  3. Metadata

    CERN Document Server

    Zeng, Marcia Lei

    2016-01-01

    Metadata remains the solution for describing the explosively growing, complex world of digital information, and continues to be of paramount importance for information professionals. Providing a solid grounding in the variety and interrelationships among different metadata types, Zeng and Qin's thorough revision of their benchmark text offers a comprehensive look at the metadata schemas that exist in the world of library and information science and beyond, as well as the contexts in which they operate. Cementing its value as both an LIS text and a handy reference for professionals already in the field, this book: * Lays out the fundamentals of metadata, including principles of metadata, structures of metadata vocabularies, and metadata descriptions * Surveys metadata standards and their applications in distinct domains and for various communities of metadata practice * Examines metadata building blocks, from modelling to defining properties, and from designing application profiles to implementing value vocabu...

  4. Metabolonote: A wiki-based database for managing hierarchical metadata of metabolome analyses

    Directory of Open Access Journals (Sweden)

    Takeshi eAra

    2015-04-01

Metabolomics - technology for comprehensive detection of small molecules in an organism - lags behind the other omics in terms of publication and dissemination of experimental data. Among the reasons for this are the difficulty of precisely recording information about complicated analytical experiments (metadata), the existence of various databases with their own metadata descriptions, and the low reusability of published data, with the result that submitters (the researchers who generate the data) are insufficiently motivated. To tackle these issues, we developed Metabolonote, a Semantic MediaWiki-based database designed specifically for managing metabolomic metadata. We also defined a metadata and data description format, called TogoMD, with an ID system that is required for unique access to each level of the tree-structured metadata, such as study purpose, sample, analytical method, and data analysis. Separation of the management of metadata from that of data, and permission to attach related information to the metadata, provide advantages for submitters, readers, and database developers. The metadata are enriched with information such as links to comparable data, thereby functioning as a hub of related data resources. They also enhance not only readers' understanding and use of data, but also submitters' motivation to publish the data. The metadata are computationally shared among other systems via APIs, which facilitates the construction of novel databases by database developers. A permission system that allows publication of immature metadata and feedback from readers also helps submitters to improve their metadata. Hence, this aspect of Metabolonote, as a metadata preparation tool, is complementary to high-quality and persistent data repositories such as MetaboLights. A total of 808 metadata for analyzed data obtained from 35 biological species are published currently.
Metabolonote and related tools are available free of cost at http://metabolonote.kazusa.or.jp/.
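The tree-structured ID scheme, unique access to each level from study down to data analysis, can be illustrated as a dotted-path resolver over nested metadata. The ID syntax and field names below are invented for illustration, not the actual TogoMD scheme.

```python
# Toy nested metadata in the spirit of TogoMD's study/sample/method/data
# levels; keys and values are hypothetical.
metadata = {
    "SE1": {
        "title": "Tomato fruit metabolome",
        "S1": {
            "organism": "Solanum lycopersicum",
            "M1": {"instrument": "LC-MS", "D1": {"format": "mzML"}},
        },
    },
}

def resolve(tree, dotted_id):
    """Walk the metadata tree by a dotted ID such as 'SE1.S1.M1',
    returning the subtree at that level."""
    node = tree
    for part in dotted_id.split("."):
        node = node[part]
    return node

method = resolve(metadata, "SE1.S1.M1")
```

Because every level has its own ID, a reader (or an API client) can cite exactly the sample or analytical method in question without pulling the whole study record.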

  5. Creating preservation metadata from XML-metadata profiles

    Science.gov (United States)

    Ulbricht, Damian; Bertelmann, Roland; Gebauer, Petra; Hasler, Tim; Klump, Jens; Kirchner, Ingo; Peters-Kottig, Wolfgang; Mettig, Nora; Rusch, Beate

    2014-05-01

Registration of dataset DOIs at DataCite makes research data citable and comes with the obligation to keep the data accessible in the future. In addition, many universities and research institutions measure data that are unique and not repeatable, such as the data produced by an observational network, and they want to keep these data for future generations. In consequence, such data should be ingested into preservation systems that automatically take care of file format changes. Open source preservation software developed along the definitions of the ISO OAIS reference model is available, but during ingest of data and metadata there are still problems to be solved. File format validation is difficult: format validators are not only remarkably slow; due to the variety in file formats, different validators return conflicting identification profiles for identical data. These conflicts are hard to resolve. Preservation systems have a deficit in their support of custom metadata. Furthermore, data producers are sometimes not aware that quality metadata are a key issue for the re-use of data. In the project EWIG, a university institute and a research institute work together with Zuse-Institute Berlin, which acts as an infrastructure facility, to develop exemplary workflows for moving research data into OAIS-compliant archives, with emphasis on the geosciences. The Institute for Meteorology provides time-series data from an urban monitoring network, whereas GFZ Potsdam delivers file-based data from research projects. To identify problems in existing preservation workflows, the technical work is complemented by interviews with data practitioners. Policies for handling data and metadata are developed. Furthermore, university teaching material is created to raise future scientists' awareness of research data management. As a testbed for ingest workflows, the digital preservation system Archivematica [1] is used. During the ingest process metadata is generated that is compliant to the…

  6. THE NEW ONLINE METADATA EDITOR FOR GENERATING STRUCTURED METADATA

    Energy Technology Data Exchange (ETDEWEB)

    Devarakonda, Ranjeet [ORNL; Shrestha, Biva [ORNL; Palanisamy, Giri [ORNL; Hook, Leslie A [ORNL; Killeffer, Terri S [ORNL; Boden, Thomas A [ORNL; Cook, Robert B [ORNL; Zolly, Lisa [United States Geological Service (USGS); Hutchison, Viv [United States Geological Service (USGS); Frame, Mike [United States Geological Service (USGS); Cialella, Alice [Brookhaven National Laboratory (BNL); Lazer, Kathy [Brookhaven National Laboratory (BNL)

    2014-01-01

Nobody is better suited to describe data than the scientist who created it. This description of a dataset is called metadata. In general terms, metadata represents the who, what, when, where, why and how of the dataset [1]. eXtensible Markup Language (XML) is the preferred output format for metadata, as it makes the metadata portable and, more importantly, suitable for system discoverability. The newly developed ORNL Metadata Editor (OME) is a Web-based tool that allows users to create and maintain XML files containing key information, or metadata, about their research. Metadata include information about the specific projects, parameters, time periods, and locations associated with the data. Such information helps put the research findings in context. In addition, the metadata produced using OME will allow other researchers to find these data via metadata clearinghouses like Mercury [2][4]. OME is part of ORNL's Mercury software fleet [2][3]. It was jointly developed to support projects funded by the United States Geological Survey (USGS), U.S. Department of Energy (DOE), National Aeronautics and Space Administration (NASA) and National Oceanic and Atmospheric Administration (NOAA). OME's architecture provides a customizable interface to support project-specific requirements. Using this new architecture, the ORNL team developed OME instances for USGS's Core Science Analytics, Synthesis, and Libraries (CSAS&L), DOE's Next Generation Ecosystem Experiments (NGEE) and Atmospheric Radiation Measurement (ARM) Program, and the international Surface Ocean Carbon Dioxide ATlas (SOCAT). Researchers simply use the ORNL Metadata Editor to enter relevant metadata into a Web-based form. From the information on the form, the Metadata Editor can create an XML file on the server where the editor is installed or on the user's personal computer. Researchers can also use the ORNL Metadata Editor to modify existing XML metadata files. As an example, an NGEE Arctic scientist uses OME to register…
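The "modify existing XML metadata files" path amounts to loading a record, overwriting one field from the form, and re-serializing. A minimal stdlib sketch, with element names invented for illustration (not the actual OME output schema):

```python
import xml.etree.ElementTree as ET

# A stand-in for an existing metadata file; element names are hypothetical,
# not the actual OME/FGDC output.
existing = "<metadata><project>NGEE Arctic</project><contact>tbd</contact></metadata>"

def update_field(xml_text, tag, value):
    """Load an existing XML metadata record, overwrite (or add) one
    field, and return the new serialization."""
    root = ET.fromstring(xml_text)
    elem = root.find(tag)
    if elem is None:               # add the field if it doesn't exist yet
        elem = ET.SubElement(root, tag)
    elem.text = value
    return ET.tostring(root, encoding="unicode")

updated = update_field(existing, "contact", "data-team@example.org")
```

Round-tripping through a parser like this, rather than string-patching, is what keeps the edited file well-formed and therefore harvestable by clearinghouses such as Mercury.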

  7. A Programmatic View of Metadata, Metadata Services, and Metadata Flow in ATLAS

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    The volume and diversity of metadata in an experiment of the size and scope of ATLAS is considerable. Even the definition of metadata may seem context-dependent: data that are primary for one purpose may be metadata for another. Trigger information and data from the Large Hadron Collider itself provide cases in point, but examples abound. Metadata about logical or physics constructs, such as data-taking periods and runs and luminosity blocks and events and algorithms, often need to be mapped to deployment and production constructs, such as datasets and jobs and files and software versions, and vice versa. Metadata at one level of granularity may have implications at another. ATLAS metadata services must integrate and federate information from inhomogeneous sources and repositories, map metadata about logical or physics constructs to deployment and production constructs, provide a means to associate metadata at one level of granularity with processing or decision-making at another, offer a coherent and ...

  8. Improving Scientific Metadata Interoperability And Data Discoverability using OAI-PMH

    Science.gov (United States)

    Devarakonda, Ranjeet; Palanisamy, Giri; Green, James M.; Wilson, Bruce E.

    2010-12-01

    While general-purpose search engines (such as Google or Bing) are useful for finding many things on the Internet, they are often of limited usefulness for locating Earth Science data relevant (for example) to a specific spatiotemporal extent. By contrast, tools that search repositories of structured metadata can locate relevant datasets with fairly high precision, but the search is limited to that particular repository. Federated searches (such as Z39.50) have been used, but can be slow and the comprehensiveness can be limited by downtime in any search partner. An alternative approach to improve comprehensiveness is for a repository to harvest metadata from other repositories, possibly with limits based on subject matter or access permissions. Searches through harvested metadata can be extremely responsive, and the search tool can be customized with semantic augmentation appropriate to the community of practice being served. However, there are a number of different protocols for harvesting metadata, with some challenges for ensuring that updates are propagated and for collaborations with repositories using differing metadata standards. The Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) is a standard that is seeing increased use as a means of exchanging structured metadata. OAI-PMH implementations must support Dublin Core as a metadata standard, with other metadata formats as optional. We have developed tools which enable our structured search tool (Mercury; http://mercury.ornl.gov) to consume metadata from OAI-PMH services in any of the metadata formats we support (Dublin Core, Darwin Core, FGDC CSDGM, GCMD DIF, EML, and ISO 19115/19137). We are also making ORNL DAAC metadata available through OAI-PMH for other metadata tools to utilize, such as the NASA Global Change Master Directory (GCMD).
This paper describes Mercury capabilities with multiple metadata formats, in general, and, more specifically, the results of our OAI-PMH implementations and
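
Harvesting over OAI-PMH ultimately means fetching and parsing XML responses such as ListRecords. A minimal sketch of the parsing side, run here on a canned, trimmed-down oai_dc response (the identifier and title are invented, and a real harvester would also follow resumption tokens to page through large result sets):

```python
import xml.etree.ElementTree as ET

NS = {
    "oai": "http://www.openarchives.org/OAI/2.0/",
    "dc": "http://purl.org/dc/elements/1.1/",
}


def parse_list_records(xml_text):
    """Extract (identifier, title) pairs from an OAI-PMH ListRecords response."""
    root = ET.fromstring(xml_text)
    results = []
    for rec in root.iterfind(".//oai:record", NS):
        ident = rec.findtext(".//oai:identifier", namespaces=NS)
        title = rec.findtext(".//dc:title", namespaces=NS)
        results.append((ident, title))
    return results


# Trimmed-down stand-in for a response a harvester might receive; real
# responses carry many more Dublin Core fields per record.
sample = """<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record>
      <header><identifier>oai:daac.ornl.gov:1234</identifier></header>
      <metadata>
        <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                   xmlns:dc="http://purl.org/dc/elements/1.1/">
          <dc:title>Example land-cover dataset</dc:title>
        </oai_dc:dc>
      </metadata>
    </record>
  </ListRecords>
</OAI-PMH>"""

print(parse_list_records(sample))
# → [('oai:daac.ornl.gov:1234', 'Example land-cover dataset')]
```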

  9. Extraction of CT dose information from DICOM metadata: automated Matlab-based approach.

    Science.gov (United States)

    Dave, Jaydev K; Gingold, Eric L

    2013-01-01

    The purpose of this study was to extract exposure parameters and dose-relevant indexes of CT examinations from information embedded in DICOM metadata. DICOM dose report files were identified and retrieved from a PACS. An automated software program was used to extract from these files information from the structured elements in the DICOM metadata relevant to exposure. Extracting information from DICOM metadata eliminated potential errors inherent in techniques based on optical character recognition, yielding 100% accuracy.
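
A DICOM dose report stores its dose-relevant indexes (CTDIvol, DLP, and so on) in a nested content tree, so extraction reduces to walking that tree for named numeric items. A stdlib-only sketch of that traversal over a simplified stand-in structure (real code would read the DICOM file with a library such as pydicom; the names and values below are invented):

```python
def find_values(content, concept_name):
    """Recursively collect numeric values whose concept name matches."""
    results = []
    for item in content:
        if item.get("name") == concept_name and "value" in item:
            results.append(item["value"])
        results.extend(find_values(item.get("children", []), concept_name))
    return results


# Simplified stand-in for a parsed dose-report content tree with two
# acquisitions; values are invented for illustration.
dose_report = [
    {"name": "CT Acquisition", "children": [
        {"name": "Mean CTDIvol", "value": 12.4},
        {"name": "DLP", "value": 410.0},
    ]},
    {"name": "CT Acquisition", "children": [
        {"name": "Mean CTDIvol", "value": 8.7},
        {"name": "DLP", "value": 212.5},
    ]},
]

print(find_values(dose_report, "Mean CTDIvol"))  # → [12.4, 8.7]
```

Because the values come from structured elements rather than from optical character recognition of a dose screen image, there is no recognition step to introduce errors, which is the source of the 100% accuracy the study reports.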

  10. A Metadata Standard for Hydroinformatic Data Conforming to International Standards

    Science.gov (United States)

    Notay, Vikram; Carstens, Georg; Lehfeldt, Rainer

    2017-04-01

    The affordable availability of computing power and digital storage has been a boon for the scientific community. The hydroinformatics community has also benefitted from the so-called digital revolution, which has enabled the tackling of more and more complex physical phenomena using hydroinformatic models, instruments, sensors, etc. With models getting more and more complex, computational domains getting larger and the resolution of computational grids and measurement data getting finer, a large amount of data is generated and consumed in any hydroinformatics-related project. The ubiquitous availability of the internet also contributes to this phenomenon, with data being collected through sensor networks connected to telecommunications networks and the internet long before the term Internet of Things existed. Although generally beneficial, this exponential increase in the number of available datasets creates the need to describe the data in a standardised way, not only to provide a quick overview of the data but also to facilitate the interoperability of data from different sources. The Federal Waterways Engineering and Research Institute (BAW) is a federal authority of the German Federal Ministry of Transport and Digital Infrastructure. BAW acts as a consultant for the safe and efficient operation of the German waterways. As part of its consultation role, BAW operates a number of physical and numerical models for sections of inland and marine waterways. In order to uniformly describe the data produced and consumed by these models throughout BAW and to ensure interoperability with other federal and state institutes on the one hand and with EU countries on the other, a metadata profile for hydroinformatic data has been developed at BAW. The metadata profile is composed in its entirety using the ISO 19115 international standard for metadata related to geographic information.
Due to the widespread use of the ISO 19115 standard in the existing geodata infrastructure

  11. 78 FR 14060 - Television Broadcasting Services; Seaford, Delaware and Dover, Delaware

    Science.gov (United States)

    2013-03-04

    ...] Television Broadcasting Services; Seaford, Delaware and Dover, Delaware AGENCY: Federal Communications... waiver of the Commission's freeze on the filing of petitions for rulemaking by television stations... first local television service, and that Seaford will remain well-served after the reallotment because...

  12. OlyMPUS - The Ontology-based Metadata Portal for Unified Semantics

    Science.gov (United States)

    Huffer, E.; Gleason, J. L.

    2015-12-01

    The Ontology-based Metadata Portal for Unified Semantics (OlyMPUS), funded by the NASA Earth Science Technology Office Advanced Information Systems Technology program, is an end-to-end system designed to support data consumers and data providers, enabling the latter to register their data sets and provision them with the semantically rich metadata that drives the Ontology-Driven Interactive Search Environment for Earth Sciences (ODISEES). OlyMPUS leverages the semantics and reasoning capabilities of ODISEES to provide data producers with a semi-automated interface for producing the semantically rich metadata needed to support ODISEES' data discovery and access services. It integrates the ODISEES metadata search system with multiple NASA data delivery tools to enable data consumers to create customized data sets for download to their computers or, for registered users of the NASA Advanced Supercomputing (NAS) facility, directly to NAS storage resources for access by applications running on NAS supercomputers. A core function of NASA's Earth Science Division is research and analysis that uses the full spectrum of data products available in NASA archives. Scientists need to perform complex analyses that identify correlations and non-obvious relationships across all types of Earth System phenomena. Comprehensive analytics are hindered, however, by the fact that many Earth science data products are disparate and hard to synthesize. Variations in how data are collected, processed, gridded, and stored create challenges for data interoperability and synthesis, which are exacerbated by the sheer volume of available data. Robust, semantically rich metadata can support tools for data discovery and facilitate machine-to-machine transactions with services such as data subsetting, regridding, and reformatting. Such capabilities are critical to enabling the research activities integral to NASA's strategic plans.
However, as metadata requirements increase and competing standards emerge

  13. 78 FR 36658 - Safety Zone; Delaware River Waterfront Corp. Fireworks Display, Delaware River; Camden, NJ

    Science.gov (United States)

    2013-06-19

    ... portion of the Delaware River from operating while a fireworks event is taking place. This temporary...-AA00 Safety Zone; Delaware River Waterfront Corp. Fireworks Display, Delaware River; Camden, NJ AGENCY: Coast Guard, DHS. ACTION: Temporary final rule. SUMMARY: The Coast Guard is establishing a temporary...

  14. Metadata

    CERN Document Server

    Pomerantz, Jeffrey

    2015-01-01

    When "metadata" became breaking news, appearing in stories about surveillance by the National Security Agency, many members of the public encountered this once-obscure term from information science for the first time. Should people be reassured that the NSA was "only" collecting metadata about phone calls -- information about the caller, the recipient, the time, the duration, the location -- and not recordings of the conversations themselves? Or does phone call metadata reveal more than it seems? In this book, Jeffrey Pomerantz offers an accessible and concise introduction to metadata. In the era of ubiquitous computing, metadata has become infrastructural, like the electrical grid or the highway system. We interact with it or generate it every day. It is not, Pomerantz tells us, just "data about data." It is a means by which the complexity of an object is represented in a simpler form. For example, the title, the author, and the cover art are metadata about a book. When metadata does its job well, it fades i...

  15. The XML Metadata Editor of GFZ Data Services

    Science.gov (United States)

    Ulbricht, Damian; Elger, Kirsten; Tesei, Telemaco; Trippanera, Daniele

    2017-04-01

    Following the FAIR data principles, research data should be Findable, Accessible, Interoperable and Reusable. Publishing data under these principles requires assigning persistent identifiers to the data and generating rich machine-actionable metadata. To increase interoperability, metadata should include shared vocabularies and crosslink the newly published (meta)data and related material. However, structured metadata formats tend to be complex and are not intended to be generated by individual scientists. Software solutions are needed that support scientists in providing metadata describing their data. To facilitate the data publication activities of 'GFZ Data Services', we programmed an XML metadata editor that assists scientists in creating metadata in different schemata popular in the earth sciences (ISO19115, DIF, DataCite), while being at the same time usable by and understandable to scientists. Emphasis is placed on removing barriers: in particular, the editor is publicly available on the internet without registration [1], and scientists are not asked to provide information that can be generated automatically (e.g. the URL of a specific licence or the contact information of the metadata distributor). Metadata are stored in browser cookies, and a copy can be saved to the local hard disk. To improve usability, form fields are translated into the scientific language, e.g. 'creators' of the DataCite schema are called 'authors'. To assist in filling in the form, we make use of drop-down menus for small vocabulary lists and offer a search facility for large thesauri. Explanations of form fields and definitions of vocabulary terms are provided in pop-up windows, and full documentation is available for download via the help menu. In addition, multiple geospatial references can be entered via an interactive mapping tool, which helps to minimize problems with different conventions for providing latitudes and longitudes.
Currently, we are extending the metadata editor
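
The label translation the editor performs ('creators' becomes 'authors', and so on) can be modeled as a renaming step between user-facing form fields and schema element names. A sketch with an invented, deliberately small mapping (the editor's real configuration is richer and schema-specific):

```python
# Map scientist-facing labels to DataCite element names.
# The mapping below is illustrative, not the editor's actual table.
LABELS = {
    "authors": "creators",
    "summary": "descriptions",
    "keywords": "subjects",
}


def to_datacite(form):
    """Rename user-facing form fields to their DataCite element names,
    passing through fields that need no translation."""
    return {LABELS.get(key, key): value for key, value in form.items()}


form = {"authors": ["Ulbricht, D."], "summary": "Example abstract", "title": "Test"}
print(to_datacite(form))
# → {'creators': ['Ulbricht, D.'], 'descriptions': 'Example abstract', 'title': 'Test'}
```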

  16. Title, Description, and Subject are the Most Important Metadata Fields for Keyword Discoverability

    Directory of Open Access Journals (Sweden)

    Laura Costello

    2016-09-01

    Full Text Available A Review of: Yang, L. (2016). Metadata effectiveness in internet discovery: An analysis of digital collection metadata elements and internet search engine keywords. College & Research Libraries, 77(1), 7-19. http://doi.org/10.5860/crl.77.1.7 Objective – To determine which metadata elements best facilitate discovery of digital collections. Design – Case study. Setting – A public research university serving over 32,000 graduate and undergraduate students in the Southwestern United States of America. Subjects – A sample of 22,559 keyword searches leading to the institution’s digital repository between August 1, 2013, and July 31, 2014. Methods – The author used Google Analytics to analyze 73,341 visits to the institution’s digital repository. He determined that 22,559 of these visits were due to keyword searches. Using Random Integer Generator, the author identified a random sample of 378 keyword searches. The author then matched the keywords with the Dublin Core and VRA Core metadata elements on the landing page in the digital repository to determine which metadata field had drawn the keyword searcher to that particular page. Many of these keywords matched to more than one metadata field, so the author also analyzed the metadata elements that generated unique keyword hits and those fields that were frequently matched together. Main Results – Title was the most matched metadata field with 279 matched keywords from searches. Description and Subject were also significant fields with 208 and 79 matches respectively. Slightly more than half of the results, 195 keywords, matched the institutional repository in one field only. Both Title and Description had significant match rates both independently and in conjunction with other elements, but Subject keywords were the sole match in only three of the sampled cases. Conclusion – The Dublin Core elements of Title, Description, and Subject were the most frequently matched fields in keyword
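
The matching step in the study, deciding which metadata field drew a keyword search to a landing page, can be sketched as a substring match across fields, with a counter tallying matches over the sampled searches (the record and keywords below are invented):

```python
from collections import Counter


def match_fields(keyword, record):
    """Return the metadata fields whose text contains the search keyword."""
    kw = keyword.lower()
    return [field for field, text in record.items() if kw in text.lower()]


# Invented landing-page metadata and sampled keyword searches.
record = {
    "Title": "Dust Bowl photographs of the Texas Panhandle",
    "Description": "Farm scenes during the 1930s drought",
    "Subject": "Droughts; Agriculture",
}
searches = ["dust bowl", "drought", "farm"]

# Tally which fields each sampled search matched (a keyword may hit
# several fields at once, as the study observed).
counts = Counter(f for kw in searches for f in match_fields(kw, record))
print(counts)  # → Counter({'Description': 2, 'Title': 1, 'Subject': 1})
```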

  17. Log-Less Metadata Management on Metadata Server for Parallel File Systems

    Directory of Open Access Journals (Sweden)

    Jianwei Liao

    2014-01-01

    Full Text Available This paper presents a novel metadata management mechanism on the metadata server (MDS) for parallel and distributed file systems. In this technique, the client file system backs up the sent metadata requests that have been handled by the metadata server, so the MDS does not need to log metadata changes to nonvolatile storage to achieve a highly available metadata service, and it gains a performance improvement in metadata processing. Because the client file system backs up certain sent metadata requests in its memory, the overhead of handling these backup requests is much smaller than that incurred by a metadata server that adopts logging or journaling to provide a highly available metadata service. The experimental results show that this newly proposed mechanism can significantly improve the speed of metadata processing and deliver better I/O throughput than conventional metadata management schemes, that is, logging or journaling on the MDS. Moreover, a complete metadata recovery can be achieved by replaying the backup logs cached by all involved clients when the metadata server has crashed or unexpectedly entered a non-operational state.
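
The mechanism can be sketched as a toy client/server pair: each client keeps an in-memory backup of every request it sends, and after a crash the server rebuilds its volatile state by replaying those backups instead of reading an on-disk log. This is only a model of the idea; a real MDS would additionally handle request ordering across clients, acknowledgements, and garbage collection of old backups.

```python
class MetadataServer:
    def __init__(self):
        self.metadata = {}  # volatile state only; no on-disk log or journal

    def apply(self, request):
        op, path, value = request
        if op == "set":
            self.metadata[path] = value

    def recover(self, clients):
        """Rebuild state by replaying the requests backed up by all clients."""
        self.metadata = {}
        for client in clients:
            for request in client.backup:
                self.apply(request)


class Client:
    def __init__(self, mds):
        self.mds = mds
        self.backup = []  # sent requests, retained in client memory

    def send(self, request):
        self.backup.append(request)  # back up the request at send time
        self.mds.apply(request)


mds = MetadataServer()
client = Client(mds)
client.send(("set", "/a/file.txt", {"size": 42}))

mds.metadata = {}        # simulate a crash losing the MDS's volatile state
mds.recover([client])    # replay client-side backups instead of a log
print(mds.metadata)      # → {'/a/file.txt': {'size': 42}}
```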

  18. An Intelligent Web Digital Image Metadata Service Platform for Social Curation Commerce Environment

    Directory of Open Access Journals (Sweden)

    Seong-Yong Hong

    2015-01-01

    Full Text Available Information management includes multimedia data management, knowledge management, collaboration, and agents, all of which are supporting technologies for XML. XML technologies have an impact on multimedia databases as well as on collaborative technologies and knowledge management. That is, e-commerce documents are encoded in XML and are gaining much popularity for business-to-business and business-to-consumer transactions. Recently, internet sites, such as e-commerce and shopping mall sites, have come to handle a great deal of image and multimedia information. This paper proposes an intelligent web digital image information retrieval platform, which adopts XML technology, for the social curation commerce environment. To support object-based content retrieval of product catalog images containing multiple objects, we describe multilevel metadata structures representing the local features, global features, and semantics of image data. To enable semantic-based and content-based retrieval of such image data, we design an XML Schema for the proposed metadata. We also describe how to automatically transform the retrieval results into forms suitable for various user environments, such as a web browser or mobile device, using XSLT. The proposed scheme can be utilized to enable efficient e-catalog metadata sharing between systems, and it will contribute to improving retrieval correctness and users’ satisfaction with semantic-based web digital image information retrieval.

  19. Environmental Assessment: Eagle Heights Housing Area Revitalization Dover Air Force Base, Delaware

    Science.gov (United States)

    2004-07-01

    tidal species. Butterflies were the only insects surveyed, and nine were found on base. Approximately 51 species of birds were recorded on base...Jones River adjacent to the northern border of the housing area, the frog-fruit (Phyla lanceolata) and the hyssop-leaf hedge-nettle (Stachys...other sites in Delaware that this species is found. The hyssop-leaf hedge-nettle thrives in moist sandy soil along the coast and shoreline and occurs

  20. Metadata Dictionary Database: A Proposed Tool for Academic Library Metadata Management

    Science.gov (United States)

    Southwick, Silvia B.; Lampert, Cory

    2011-01-01

    This article proposes a metadata dictionary (MDD) be used as a tool for metadata management. The MDD is a repository of critical data necessary for managing metadata to create "shareable" digital collections. An operational definition of metadata management is provided. The authors explore activities involved in metadata management in…

  1. Improvements to the Ontology-based Metadata Portal for Unified Semantics (OlyMPUS)

    Science.gov (United States)

    Linsinbigler, M. A.; Gleason, J. L.; Huffer, E.

    2016-12-01

    The Ontology-based Metadata Portal for Unified Semantics (OlyMPUS), funded by the NASA Earth Science Technology Office Advanced Information Systems Technology program, is an end-to-end system designed to support Earth Science data consumers and data providers, enabling the latter to register data sets and provision them with the semantically rich metadata that drives the Ontology-Driven Interactive Search Environment for Earth Sciences (ODISEES). OlyMPUS complements the ODISEES data discovery system with an intelligent tool to enable data producers to auto-generate semantically enhanced metadata and upload it to the metadata repository that drives ODISEES. Like ODISEES, the OlyMPUS metadata provisioning tool leverages robust semantics, a NoSQL database and query engine, an automated reasoning engine that performs first- and second-order deductive inferencing, and uses a controlled vocabulary to support data interoperability and automated analytics. The ODISEES data discovery portal leverages this metadata to provide a seamless data discovery and access experience for data consumers who are interested in comparing and contrasting the multiple Earth science data products available across NASA data centers. OlyMPUS will support services and tools that help scientists perform complex analyses and identify correlations and non-obvious relationships across all types of Earth System phenomena using the full spectrum of NASA Earth Science data available. By providing an intelligent discovery portal that supplies users - both human users and machines - with detailed information about data products, their contents and their structure, ODISEES will reduce the level of effort required to identify and prepare large volumes of data for analysis. This poster will explain how OlyMPUS leverages deductive reasoning and other technologies to create an integrated environment for generating and exploiting semantically rich metadata.

  2. Handbook of metadata, semantics and ontologies

    CERN Document Server

    Sicilia, Miguel-Angel

    2013-01-01

    Metadata research has emerged as a discipline cross-cutting many domains, focused on the provision of distributed descriptions (often called annotations) to Web resources or applications. Such associated descriptions are supposed to serve as a foundation for advanced services in many application areas, including search and location, personalization, federation of repositories and automated delivery of information. Indeed, the Semantic Web is in itself a concrete technological framework for ontology-based metadata. For example, Web-based social networking requires metadata describing people and

  3. VT Wireless Internet Service Providers 2006

    Data.gov (United States)

    Vermont Center for Geographic Information — (Link to Metadata) The VT Wireless Internet Service Provider (ISP) dataset (WISP2006) includes polygons depicting the extent of Vermont's WISP broadband system as of...

  4. VT Wireless Internet Service Providers 2007

    Data.gov (United States)

    Vermont Center for Geographic Information — (Link to Metadata) The VT Wireless Internet Service Provider (ISP) dataset (WISP2007) includes polygons depicting the extent of Vermont's WISP broadband system as of...

  5. How libraries use publisher metadata

    Directory of Open Access Journals (Sweden)

    Steve Shadle

    2013-11-01

    Full Text Available With the proliferation of electronic publishing, libraries are increasingly relying on publisher-supplied metadata to meet user needs for discovery in library systems. However, many publisher/content provider staff creating metadata are unaware of the end-user environment and how libraries use their metadata. This article provides an overview of the three primary discovery systems that are used by academic libraries, with examples illustrating how publisher-supplied metadata directly feeds into these systems and is used to support end-user discovery and access. Commonly seen metadata problems are discussed, with recommendations suggested. Based on a series of presentations given in Autumn 2012 to the staff of a large publisher, this article uses the University of Washington Libraries systems and services as illustrative examples. Judging by the feedback received from these presentations, publishers (specifically, staff not familiar with the big picture of metadata standards work) would benefit from a better understanding of the systems and services libraries provide using the data that is created and managed by publishers.

  6. Delaware Bay Upper Shelf Bottom Sediments 2008-2010

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Coastal Program of Delaware's Division of Soil and Water Conservation (DNREC), the University of Delaware, Partnership for the Delaware Estuary, and the New...

  7. Competence Based Educational Metadata for Supporting Lifelong Competence Development Programmes

    NARCIS (Netherlands)

    Sampson, Demetrios; Fytros, Demetrios

    2008-01-01

    Sampson, D., & Fytros, D. (2008). Competence Based Educational Metadata for Supporting Lifelong Competence Development Programmes. In P. Diaz, Kinshuk, I. Aedo & E. Mora (Eds.), Proceedings of the 8th IEEE International Conference on Advanced Learning Technologies (ICALT 2008), pp. 288-292. July,

  8. The RBV metadata catalog

    Science.gov (United States)

    Andre, Francois; Fleury, Laurence; Gaillardet, Jerome; Nord, Guillaume

    2015-04-01

    RBV (Réseau des Bassins Versants) is a French initiative to consolidate the national efforts made by more than 15 elementary observatories funded by various research institutions (CNRS, INRA, IRD, IRSTEA, universities) that study river and drainage basins. The RBV Metadata Catalogue aims at giving a unified vision of the work produced by every observatory to both the members of the RBV network and any external person interested in this domain of research. Another goal is to share this information with other existing metadata portals. Metadata management is heterogeneous among observatories, ranging from absence to mature harvestable catalogues. Here, we explain the strategy used to design a state-of-the-art catalogue in the face of this situation. The main features are as follows: - Multiple input methods: metadata records in the catalogue can either be entered with the graphical user interface, harvested from an existing catalogue or imported from an information system through simplified web services. - Hierarchical levels: metadata records may describe an observatory, one of its experimental sites or a single dataset produced by one instrument. - Multilingualism: metadata can be easily entered in several configurable languages. - Compliance with standards: the back-office part of the catalogue is based on a CSW metadata server (Geosource), which ensures ISO19115 compatibility and the ability to be harvested (globally or partially). Ongoing tasks focus on the use of SKOS thesauri and SensorML descriptions of the sensors. - Ergonomics: the user interface is built with the GWT framework to offer a rich client application with fully Ajax-based navigation. - Source code sharing: the work has led to the development of reusable components which can be used to quickly create new metadata forms in other GWT applications. You can visit the catalogue (http://portailrbv.sedoo.fr/) or contact us by email at rbv@sedoo.fr.

  9. Creating context for the experiment record. User-defined metadata: investigations into metadata usage in the LabTrove ELN.

    Science.gov (United States)

    Willoughby, Cerys; Bird, Colin L; Coles, Simon J; Frey, Jeremy G

    2014-12-22

    The drive toward more transparency in research, the growing willingness to make data openly available, and the reuse of data to maximize the return on research investment all increase the importance of being able to find information and make links to the underlying data. The use of metadata in Electronic Laboratory Notebooks (ELNs) to curate experiment data is an essential ingredient for facilitating discovery. The University of Southampton has developed a Web browser-based ELN that enables users to add their own metadata to notebook entries. A survey of these notebooks was completed to assess user behavior and patterns of metadata usage within ELNs, while user perceptions and expectations were gathered through interviews and user-testing activities within the community. The findings indicate that while some groups are comfortable with metadata and are able to design a metadata structure that works effectively, many users make little attempt to use it, thereby endangering their ability to recover data in the future. A survey of patterns of metadata use in these notebooks, together with feedback from the user community, indicated that while a few groups are comfortable with metadata and are able to design a metadata structure that works effectively, many users adopt a "minimum required" approach to metadata. To investigate whether the patterns of metadata use in LabTrove were unusual, a series of surveys was undertaken to investigate metadata usage in a variety of platforms supporting user-defined metadata. These surveys also provided the opportunity to investigate whether interface designs in these other environments might inform strategies for encouraging metadata creation and more effective use of metadata in LabTrove.
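
The kind of usage survey described, measuring how often users actually attach metadata keys to notebook entries, can be sketched as a frequency count over entries (the entries and key names below are invented):

```python
from collections import Counter

# Toy notebook entries carrying user-defined metadata keys (values are
# shown but irrelevant to the count); all names are invented.
entries = [
    {"sample": "A1", "instrument": "NMR"},
    {"sample": "A2"},
    {},  # an entry with no metadata at all -- the "minimum required" case
    {"sample": "B1", "instrument": "NMR", "solvent": "CDCl3"},
]

# How many entries use each key, and what fraction of all entries that is.
key_usage = Counter(key for entry in entries for key in entry)
coverage = {key: count / len(entries) for key, count in key_usage.items()}
print(coverage)  # → {'sample': 0.75, 'instrument': 0.5, 'solvent': 0.25}
```

Low coverage figures like these are exactly what would flag a notebook whose entries may be hard to rediscover later.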

  10. Cost-Effectiveness Analysis of the Residential Provisions of the 2015 IECC for Delaware

    Energy Technology Data Exchange (ETDEWEB)

    Mendon, Vrushali V. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Zhao, Mingjie [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Taylor, Zachary T. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Poehlman, Eric A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2016-02-15

    The 2015 IECC provides cost-effective savings for residential buildings in Delaware. Moving to the 2015 IECC from the 2012 IECC base code is cost-effective for residential buildings in all climate zones in Delaware.

  11. Metadata Aided Run Selection at ATLAS

    CERN Document Server

    Buckingham, RM; The ATLAS collaboration; Tseng, JC-L; Viegas, F; Vinek, E

    2010-01-01

    Management of the large volume of data collected by any large scale scientific experiment requires the collection of coherent metadata quantities, which can be used by reconstruction or analysis programs and/or user interfaces, to pinpoint collections of data needed for specific purposes. In the ATLAS experiment at the LHC, we have collected metadata from systems storing non-event-wise data (Conditions) into a relational database. The Conditions metadata (COMA) database tables not only contain conditions known at the time of event recording, but also allow for the addition of conditions data collected as a result of later analysis of the data (such as improved measurements of beam conditions or assessments of data quality). A new web based interface called “runBrowser” makes these Conditions Metadata available as a Run based selection service. runBrowser, based on php and javascript, uses jQuery to present selection criteria and report results. It not only facilitates data selection by conditions at...

  12. Metadata aided run selection at ATLAS

    CERN Document Server

    Buckingham, RM; The ATLAS collaboration; Tseng, JC-L; Viegas, F; Vinek, E

    2011-01-01

    Management of the large volume of data collected by any large scale scientific experiment requires the collection of coherent metadata quantities, which can be used by reconstruction or analysis programs and/or user interfaces, to pinpoint collections of data needed for specific purposes. In the ATLAS experiment at the LHC, we have collected metadata from systems storing non-event-wise data (Conditions) into a relational database. The Conditions metadata (COMA) database tables not only contain conditions known at the time of event recording, but also allow for the addition of conditions data collected as a result of later analysis of the data (such as improved measurements of beam conditions or assessments of data quality). A new web based interface called “runBrowser” makes these Conditions Metadata available as a Run based selection service. runBrowser, based on php and javascript, uses jQuery to present selection criteria and report results. It not only facilitates data selection by conditions attrib...

  13. Critical Metadata for Spectroscopy Field Campaigns

    Directory of Open Access Journals (Sweden)

    Barbara A. Rasaiah

    2014-04-01

    Full Text Available A field spectroscopy metadata standard is defined as those data elements that explicitly document the spectroscopy dataset and field protocols, sampling strategies, instrument properties, and environmental and logistical variables. Standards for field spectroscopy metadata affect the quality, completeness, reliability, and usability of datasets created in situ. Currently there is no standardized methodology for documentation of in situ spectroscopy data or metadata. This paper presents results of an international experiment comprising a web-based survey and expert panel evaluation that investigated critical metadata in field spectroscopy. The survey participants were a diverse group of scientists experienced in gathering spectroscopy data across a wide range of disciplines. Overall, respondents were in agreement about a core metadata set for generic campaign metadata, allowing a prioritized list of critical metadata elements to be proposed, including those relating to viewing geometry, location, general target and sampling properties, illumination, instrument properties, reference standards, calibration, hyperspectral signal properties, atmospheric conditions, and general project details. Consensus was greatest among individual expert groups in specific application domains. The results allow the identification of a core set of metadata fields that support long-term data storage and serve as a foundation for a metadata standard. This paper is part one in a series about the core elements of a robust and flexible field spectroscopy metadata standard.

  14. Delaware Estuary situation reports. Emergency response: How do emergency management officials address disasters in the Delaware Estuary

    International Nuclear Information System (INIS)

    Sylves, R.T.

    1991-01-01

    From hurricanes and other natural threats to oil spills and other manmade emergencies, the Delaware Estuary has experienced a variety of disasters over the years. The toll that these events take on the estuary and those who live on its shores depends largely upon the degree of emergency preparedness, speed of response, and effectiveness of recovery operations. In Emergency Response: How Do Emergency Management Officials Address Disasters in the Delaware Estuary, the latest addition to its Delaware Estuary Situation Report series, the University of Delaware Sea Grant College Program defines emergency management; examines the roles that the Coast Guard, Army Corps of Engineers, and Environmental Protection Agency play in an emergency; and reviews how each of these federal agencies operated during an actual disaster--the 1985 Grand Eagle oil spill. The report was written by Dr. Richard T. Sylves, a professor of political science at the University of Delaware. Sylves has been studying emergency management for the past 15 years, with special emphasis on oil spill preparedness and response in the Mid-Atlantic Region. The Delaware Estuary Situation Report is 12 pages long and contains maps and photographs, as well as a detailed account of response and recovery operations undertaken during the Grand Eagle oil spill. A comparison of the 1985 Grand Eagle spill and the 1989 Presidente Rivera spill also is included.

  15. Building a semantic web-based metadata repository for facilitating detailed clinical modeling in cancer genome studies.

    Science.gov (United States)

    Sharma, Deepak K; Solbrig, Harold R; Tao, Cui; Weng, Chunhua; Chute, Christopher G; Jiang, Guoqian

    2017-06-05

    Detailed Clinical Models (DCMs) have been regarded as the basis for retaining computable meaning when data are exchanged between heterogeneous computer systems. To better support clinical cancer data capturing and reporting, there is an emerging need to develop informatics solutions for standards-based clinical models in cancer study domains. The objective of the study is to develop and evaluate a cancer genome study metadata management system that serves as a key infrastructure in supporting clinical information modeling in cancer genome study domains. We leveraged a Semantic Web-based metadata repository enhanced with both the ISO 11179 metadata standard and the Clinical Information Modeling Initiative (CIMI) Reference Model. We used the common data elements (CDEs) defined in The Cancer Genome Atlas (TCGA) data dictionary, and extracted the metadata of the CDEs using the NCI Cancer Data Standards Repository (caDSR) CDE dataset rendered in the Resource Description Framework (RDF). The ITEM/ITEM_GROUP pattern defined in the latest CIMI Reference Model is used to represent reusable model elements (mini-Archetypes). We produced a metadata repository with 38 clinical cancer genome study domains, comprising a rich collection of mini-Archetype pattern instances. We performed a case study of the domain "clinical pharmaceutical" in the TCGA data dictionary and demonstrated that the enriched data elements in the metadata repository are very useful in support of building detailed clinical models. Our informatics approach leveraging Semantic Web technologies provides an effective way to build a CIMI-compliant metadata repository that would facilitate the detailed clinical modeling to support use cases beyond TCGA in clinical cancer study domains.

  16. Comparative Study of Metadata Elements Used in the Website of Central Library of Universities Subordinate to the Ministry of Science, Research and Technology with the Dublin Core Metadata Elements

    Directory of Open Access Journals (Sweden)

    Kobra Babaei

    2012-03-01

    Full Text Available This research was carried out to study the use of metadata elements in the websites of the central libraries of universities subordinate to the Ministry of Science, Research and Technology, and to compare them with the Dublin Core standard elements. The study was a comparative survey in which 40 academic library websites were examined using the Internet Explorer browser. The HTML pages of these websites were inspected through the View Source menu, and the metadata elements of each website were extracted and entered into a checklist. The data were then analyzed using descriptive statistics (frequency, percentage, and mean). The findings showed that the reviewed websites did not use any Dublin Core metadata elements, although general metadata markup was used in the design of all websites. In terms of the number of metadata elements used, the central libraries of Ferdowsi University of Mashhad and the Iran University of Science and Technology ranked first with 57%, Shahid Beheshti University ranked second with 49%, and the International University of Imam Khomeini ranked third with 40%. The priorities of the web designers were also determined: the content of the source ranked first, attention to the physical appearance of the source second, and the ownership of the source third.
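
The survey's method amounts to scanning each page's HTML source for `<meta>` elements and checking them against the Dublin Core element names. A minimal stdlib sketch of that check is below; the sample HTML and the (abbreviated) Dublin Core list are illustrative, not taken from the surveyed library sites.

```python
# Extract <meta name=...> elements from HTML and flag Dublin Core usage.
from html.parser import HTMLParser

# Abbreviated set of Dublin Core element names as they appear in HTML meta tags.
DUBLIN_CORE = {"dc.title", "dc.creator", "dc.subject", "dc.description",
               "dc.publisher", "dc.date", "dc.type", "dc.identifier"}

class MetaExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.meta = {}  # meta name (lowercased) -> content

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            d = dict(attrs)
            if "name" in d:
                self.meta[d["name"].lower()] = d.get("content", "")

def extract_meta(html):
    parser = MetaExtractor()
    parser.feed(html)
    return parser.meta

sample = """<html><head>
<meta name="keywords" content="library, metadata">
<meta name="DC.Title" content="Central Library">
</head><body></body></html>"""

meta = extract_meta(sample)
dc_used = {name for name in meta if name in DUBLIN_CORE}
```

Running this over each site and counting `meta` entries against a checklist reproduces the survey's tally.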

  17. Efficient processing of MPEG-21 metadata in the binary domain

    Science.gov (United States)

    Timmerer, Christian; Frank, Thomas; Hellwagner, Hermann; Heuer, Jörg; Hutter, Andreas

    2005-10-01

    XML-based metadata is widely adopted across the different communities, and plenty of commercial and open source tools for processing and transforming it are available on the market. However, all of these tools have one thing in common: they operate on plain text encoded metadata, which may become a burden in constrained and streaming environments, i.e., when metadata needs to be processed together with multimedia content on the fly. In this paper we present an efficient approach for transforming such metadata encoded using MPEG's Binary Format for Metadata (BiM) without additional en-/decoding overhead, i.e., within the binary domain. To this end, we have developed an event-based push parser for BiM encoded metadata which transforms the metadata by a limited set of processing instructions - based on traditional XML transformation techniques - operating on bit patterns instead of cost-intensive string comparisons.
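
The cited parser is BiM-specific and works on bit patterns, but the event-based push style itself can be illustrated in plain text with Python's `xml.sax`: a single streaming pass that renames one element and drops another without ever building a DOM. The element names here are invented.

```python
# Event-based (push) transformation of streamed XML: rename and drop elements.
import io
import xml.sax

class RenameDropHandler(xml.sax.ContentHandler):
    def __init__(self, rename, drop):
        super().__init__()
        self.rename, self.drop = rename, drop
        self.out = []       # transformed output, built as events arrive
        self.skipping = 0   # depth counter while inside a dropped element

    def startElement(self, name, attrs):
        if name == self.drop:
            self.skipping += 1
            return
        if not self.skipping:
            self.out.append("<%s>" % self.rename.get(name, name))

    def endElement(self, name):
        if name == self.drop:
            self.skipping -= 1
            return
        if not self.skipping:
            self.out.append("</%s>" % self.rename.get(name, name))

    def characters(self, content):
        if not self.skipping:
            self.out.append(content)

def transform(xml_text, rename, drop):
    handler = RenameDropHandler(rename, drop)
    xml.sax.parse(io.StringIO(xml_text), handler)
    return "".join(handler.out)

result = transform("<Video><Title>Clip</Title><Secret>x</Secret></Video>",
                   rename={"Title": "Name"}, drop="Secret")
```

The BiM version applies the same limited instruction set while comparing encoded bit patterns rather than strings.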

  18. Making Interoperability Easier with the NASA Metadata Management Tool

    Science.gov (United States)

    Shum, D.; Reese, M.; Pilone, D.; Mitchell, A. E.

    2016-12-01

    ISO 19115 has enabled interoperability amongst tools, yet many users find it hard to build ISO metadata for their collections because the standard can be large and overly flexible for their needs. The Metadata Management Tool (MMT), part of NASA's Earth Observing System Data and Information System (EOSDIS), offers users a modern, easy-to-use, browser-based tool to develop ISO compliant metadata. Through a simplified UI experience, metadata curators can create and edit collections without any understanding of the complex ISO 19115 format, while still generating compliant metadata. The MMT is also able to assess the completeness of collection-level metadata by evaluating it against a variety of metadata standards. The tool provides users with clear guidance on how to change their metadata in order to improve its quality and compliance. It is based on NASA's Unified Metadata Model for Collections (UMM-C), a simpler metadata model that can be cleanly mapped to ISO 19115. This allows metadata authors and curators to meet ISO compliance requirements faster and more accurately. The MMT and UMM-C have been developed in an agile fashion, with recurring end user tests and reviews to continually refine the tool, the model and the ISO mappings. This process is allowing for continual improvement and evolution to meet the community's needs.
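
A completeness assessment of the kind the MMT performs can be sketched as scoring a collection record against required and recommended fields. The field names below stand in for UMM-C concepts and the scoring weights are assumptions, not the MMT's actual rules.

```python
# Score a collection metadata record against required/recommended field lists.
REQUIRED = ["ShortName", "Version", "Abstract", "TemporalExtents"]
RECOMMENDED = ["DOI", "SpatialExtent", "Platforms"]

def assess(record):
    missing_req = [f for f in REQUIRED if not record.get(f)]
    missing_rec = [f for f in RECOMMENDED if not record.get(f)]
    # Compliant only when every required field is present; missing
    # recommended fields merely lower the score.
    score = 1.0 - (len(missing_req) * 0.2 + len(missing_rec) * 0.05)
    return {"compliant": not missing_req,
            "missing_required": missing_req,
            "missing_recommended": missing_rec,
            "score": round(max(score, 0.0), 2)}

report = assess({"ShortName": "MOD09", "Version": "6",
                 "Abstract": "Surface reflectance.",
                 "TemporalExtents": "2000-02-24/ongoing",
                 "DOI": "10.5067/EXAMPLE"})
```

The "clear guidance" the abstract mentions corresponds to reporting the missing-field lists back to the curator rather than just a score.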

  19. Delaware Bay, Delaware Sediment Distribution 2003 to 2004

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The area of coverage consists of 38 square miles of benthic habitat mapped from 2003 to 2004 along the middle to lower Delaware Bay Coast. The bottom sediment map...

  20. CMO: Cruise Metadata Organizer for JAMSTEC Research Cruises

    Science.gov (United States)

    Fukuda, K.; Saito, H.; Hanafusa, Y.; Vanroosebeke, A.; Kitayama, T.

    2011-12-01

    JAMSTEC's Data Research Center for Marine-Earth Sciences manages and distributes a wide variety of observational data and samples obtained from JAMSTEC research vessels and deep sea submersibles. Generally, metadata are essential to identify how data and samples were obtained. In JAMSTEC, cruise metadata include cruise information such as cruise ID, name of vessel and research theme, and diving information such as dive number, name of submersible and position of diving point. They are submitted by chief scientists of research cruises in the Microsoft Excel® spreadsheet format, and registered into a data management database to confirm receipt of observational data files, cruise summaries, and cruise reports. The cruise metadata are also published via "JAMSTEC Data Site for Research Cruises" within two months after the end of a cruise. Furthermore, these metadata are distributed with observational data, images and samples via several data and sample distribution websites after a publication moratorium period. However, there are two operational issues in the metadata publishing process. One is duplicated effort and asynchronous metadata across multiple distribution websites, caused by manual metadata entry into individual websites by administrators. The other is that the data types and representations of metadata differ between websites. To solve these problems, we have developed a cruise metadata organizer (CMO) which allows cruise metadata to be connected from the data management database to several distribution websites. CMO is comprised of three components: an Extensible Markup Language (XML) database, an Enterprise Application Integration (EAI) software, and a web-based interface. The XML database is used because of its flexibility for any change of metadata. Daily differential uptake of metadata from the data management database to the XML database is automatically processed via the EAI software. Some metadata are entered into the XML database using the web-based
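
The daily differential uptake step reduces to comparing yesterday's registered metadata against today's export and propagating only what changed. A minimal sketch, with invented cruise IDs and field names (the vessel names Yokosuka and Kairei are real JAMSTEC vessels):

```python
# Compute the additions, updates and removals between two metadata snapshots.
def diff_metadata(previous, current):
    """Both arguments map cruise_id -> metadata dict."""
    added = {k: v for k, v in current.items() if k not in previous}
    updated = {k: v for k, v in current.items()
               if k in previous and previous[k] != v}
    removed = [k for k in previous if k not in current]
    return added, updated, removed

prev = {"YK11-01": {"vessel": "Yokosuka", "theme": "geology"}}
curr = {"YK11-01": {"vessel": "Yokosuka", "theme": "geochemistry"},
        "KR11-02": {"vessel": "Kairei", "theme": "seismology"}}
added, updated, removed = diff_metadata(prev, curr)
```

In the CMO architecture this comparison runs inside the EAI layer, with the XML database as the synchronization target for every distribution website.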

  1. Science friction: data, metadata, and collaboration.

    Science.gov (United States)

    Edwards, Paul N; Mayernik, Matthew S; Batcheller, Archer L; Bowker, Geoffrey C; Borgman, Christine L

    2011-10-01

    When scientists from two or more disciplines work together on related problems, they often face what we call 'science friction'. As science becomes more data-driven, collaborative, and interdisciplinary, demand increases for interoperability among data, tools, and services. Metadata--usually viewed simply as 'data about data', describing objects such as books, journal articles, or datasets--serve key roles in interoperability. Yet we find that metadata may be a source of friction between scientific collaborators, impeding data sharing. We propose an alternative view of metadata, focusing on its role in an ephemeral process of scientific communication, rather than as an enduring outcome or product. We report examples of highly useful, yet ad hoc, incomplete, loosely structured, and mutable, descriptions of data found in our ethnographic studies of several large projects in the environmental sciences. Based on this evidence, we argue that while metadata products can be powerful resources, usually they must be supplemented with metadata processes. Metadata-as-process suggests the very large role of the ad hoc, the incomplete, and the unfinished in everyday scientific work.

  2. Evolution in Metadata Quality: Common Metadata Repository's Role in NASA Curation Efforts

    Science.gov (United States)

    Gilman, Jason; Shum, Dana; Baynes, Katie

    2016-01-01

    Metadata Quality is one of the chief drivers of discovery and use of NASA EOSDIS (Earth Observing System Data and Information System) data. Issues with metadata such as lack of completeness, inconsistency, and use of legacy terms directly hinder data use. As the central metadata repository for NASA Earth Science data, the Common Metadata Repository (CMR) has a responsibility to its users to ensure the quality of CMR search results. This poster covers how we use humanizers, a technique for dealing with the symptoms of metadata issues, as well as our plans for future metadata validation enhancements. The CMR currently indexes 35K collections and 300M granules.
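
Humanizers treat the symptoms of metadata issues by rewriting problematic values at index time rather than editing providers' source records. The sketch below applies a small substitution table before indexing; the table entries are made-up examples of legacy-term and whitespace cleanup, not the CMR's actual humanizer rules.

```python
# Apply per-field "humanizer" substitutions to a metadata record.
HUMANIZERS = {
    "platform": {"AQUA": "Aqua", "TERRA": "Terra"},   # legacy casing
    "instrument": {"MODIS ": "MODIS"},                # trailing-space cleanup
}

def humanize(record):
    fixed = dict(record)
    for field, table in HUMANIZERS.items():
        if field in fixed:
            fixed[field] = table.get(fixed[field], fixed[field])
    return fixed

rec = humanize({"platform": "AQUA", "instrument": "MODIS "})
```

Because the substitution happens at indexing, search results improve immediately while the underlying provider metadata is fixed on its own schedule.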

  3. ATLAS Metadata Interface (AMI), a generic metadata framework

    CERN Document Server

    Fulachier, Jerome; The ATLAS collaboration

    2016-01-01

    The ATLAS Metadata Interface (AMI) is a mature application of more than 15 years of existence. Mainly used by the ATLAS experiment at CERN, it consists of a very generic tool ecosystem for metadata aggregation and cataloguing. We briefly describe the architecture, the main services and the benefits of using AMI in big collaborations, especially for high energy physics. We focus on the recent improvements, for instance: the lightweight clients (Python, Javascript, C++), the new smart task server system and the Web 2.0 AMI framework for simplifying the development of metadata-oriented web interfaces.

  4. Metadata and Service at the GFZ ISDC Portal

    Science.gov (United States)

    Ritschel, B.

    2008-05-01

    The online service portal of the GFZ Potsdam Information System and Data Center (ISDC) is an access point for all manner of geoscientific geodata, its corresponding metadata, scientific documentation and software tools. At present almost 2000 national and international users and user groups have the opportunity to request Earth science data from a portfolio of 275 different product types and more than 20 million single data files with a total volume of approximately 12 TByte. The majority of the data and information the portal currently offers to the public are global geomonitoring products such as satellite orbit and Earth gravity field data as well as geomagnetic and atmospheric data. These products for Earth's changing system are provided via state-of-the-art retrieval techniques. The data product catalog system behind these techniques is based on the extensive use of standardized metadata, which describe the different geoscientific product types and data products in a uniform way. Whereas all ISDC product types are specified by NASA's Directory Interchange Format (DIF), Version 9.0 parent XML DIF metadata files, the individual data files are described by extended DIF metadata documents. Depending on the beginning of the scientific project, one part of the data files is described by extended DIF, Version 6 metadata documents and the other part by data child XML DIF metadata documents. Both the product-type-dependent parent DIF metadata documents and the data-file-dependent child DIF metadata documents are derived from a base-DIF.xsd XML schema file. The ISDC metadata philosophy defines a geoscientific product as a package consisting of mostly one, or sometimes more than one, data file plus one extended DIF metadata file. Because NASA's DIF metadata standard has been developed in order to specify a collection of data only, the extension of the DIF standard consists of new and specific attributes, which are necessary for
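
Reading such DIF metadata programmatically is straightforward with the standard library. `Entry_ID` and `Entry_Title` are genuine DIF fields; the sample document below is invented (CHAMP is a real GFZ satellite mission, but the product name is hypothetical) and, for brevity, omits the namespace a real DIF file carries.

```python
# Parse a simplified DIF metadata document into a flat dict.
import xml.etree.ElementTree as ET

SAMPLE_DIF = """<DIF>
  <Entry_ID>CHAMP_ORBIT_EXAMPLE</Entry_ID>
  <Entry_Title>CHAMP rapid science orbit product (illustrative)</Entry_Title>
</DIF>"""

def read_dif(text):
    root = ET.fromstring(text)
    # Top-level DIF fields are simple tag/text pairs.
    return {child.tag: (child.text or "").strip() for child in root}

record = read_dif(SAMPLE_DIF)
```

A catalog built on parent/child DIF documents would apply the same parsing to both levels, with the child documents adding the data-file-specific attributes.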

  5. Distributed metadata servers for cluster file systems using shared low latency persistent key-value metadata store

    Science.gov (United States)

    Bent, John M.; Faibish, Sorin; Pedone, Jr., James M.; Tzelnic, Percy; Ting, Dennis P. J.; Ionkov, Latchesar A.; Grider, Gary

    2017-12-26

    A cluster file system is provided having a plurality of distributed metadata servers with shared access to one or more shared low latency persistent key-value metadata stores. A metadata server comprises an abstract storage interface comprising a software interface module that communicates with at least one shared persistent key-value metadata store providing a key-value interface for persistent storage of key-value metadata. The software interface module provides the key-value metadata to the at least one shared persistent key-value metadata store in a key-value format. The shared persistent key-value metadata store is accessed by a plurality of metadata servers. A metadata request can be processed by a given metadata server independently of other metadata servers in the cluster file system. A distributed metadata storage environment is also disclosed that comprises a plurality of metadata servers having an abstract storage interface to at least one shared persistent key-value metadata store.
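
The abstract storage interface described above decouples the metadata servers from any particular store: anything honouring a key-value get/put contract can sit underneath, and every server sees the same shared state. A minimal sketch (class names are illustrative, not from the patent text):

```python
# Multiple metadata servers sharing one key-value store through a thin interface.
class KeyValueStore:
    """Stand-in for a shared low-latency persistent key-value store."""
    def __init__(self):
        self._d = {}

    def put(self, key, value):
        self._d[key] = value

    def get(self, key):
        return self._d.get(key)

class MetadataServer:
    def __init__(self, store):
        self.store = store  # the same store object is shared across servers

    def set_attr(self, path, attr, value):
        # Key-value format: the (path, attribute) pair is the key.
        self.store.put((path, attr), value)

    def get_attr(self, path, attr):
        return self.store.get((path, attr))

shared = KeyValueStore()
server_a, server_b = MetadataServer(shared), MetadataServer(shared)
server_a.set_attr("/f/x", "size", 4096)
size_seen_by_b = server_b.get_attr("/f/x", "size")
```

Because each request touches only the shared store, any server can process a metadata request independently of its peers, which is the scaling property the patent claims.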

  6. Harvesting NASA's Common Metadata Repository

    Science.gov (United States)

    Shum, D.; Mitchell, A. E.; Durbin, C.; Norton, J.

    2017-12-01

    As part of NASA's Earth Observing System Data and Information System (EOSDIS), the Common Metadata Repository (CMR) stores metadata for over 30,000 datasets from both NASA and international providers along with over 300M granules. This metadata enables sub-second discovery and facilitates data access. While the CMR offers a robust temporal, spatial and keyword search functionality to the general public and international community, it is sometimes more desirable for international partners to harvest the CMR metadata and merge the CMR metadata into a partner's existing metadata repository. This poster will focus on best practices to follow when harvesting CMR metadata to ensure that any changes made to the CMR can also be updated in a partner's own repository. Additionally, since each partner has distinct metadata formats they are able to consume, the best practices will also include guidance on retrieving the metadata in the desired metadata format using CMR's Unified Metadata Model translation software.
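
A harvest of this kind is essentially a paged loop over a search endpoint that stops when a page comes back empty. The sketch below injects the fetch function so it runs against a stub; a real client would call the public CMR search API and should follow its documented paging mechanism rather than the simple page counter assumed here.

```python
# Page through a search endpoint, yielding one metadata record at a time.
def harvest(fetch, page_size=2):
    page = 1
    while True:
        items = fetch(page=page, page_size=page_size)
        if not items:
            return
        for item in items:
            yield item
        page += 1

def stub_fetch(page, page_size):
    # Stand-in for an HTTP call; pretend collection ids.
    data = ["c1", "c2", "c3"]
    start = (page - 1) * page_size
    return data[start:start + page_size]

harvested = list(harvest(stub_fetch))
```

Recording a per-record revision or update timestamp alongside each harvested item is what lets a partner apply the "pick up CMR changes later" best practice the poster describes.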

  7. The Courts, the Legislature, and Delaware's Resegregation: A Report on School Segregation in Delaware, 1989-­2010

    Science.gov (United States)

    Niemeyer, Arielle

    2014-01-01

    Delaware's history with school desegregation is complicated and contradictory. The state both advanced and impeded the goals of "Brown v. Board of Education." After implementing desegregation plans that were ineffective by design, Delaware was ultimately placed under the first metropolitan, multi-district desegregation court order in the…

  8. ATLAS Metadata Interface (AMI), a generic metadata framework

    Science.gov (United States)

    Fulachier, J.; Odier, J.; Lambert, F.; ATLAS Collaboration

    2017-10-01

    The ATLAS Metadata Interface (AMI) is a mature application of more than 15 years of existence. Mainly used by the ATLAS experiment at CERN, it consists of a very generic tool ecosystem for metadata aggregation and cataloguing. We briefly describe the architecture, the main services and the benefits of using AMI in big collaborations, especially for high energy physics. We focus on the recent improvements, for instance: the lightweight clients (Python, JavaScript, C++), the new smart task server system and the Web 2.0 AMI framework for simplifying the development of metadata-oriented web interfaces.

  9. ATLAS Metadata Interface (AMI), a generic metadata framework

    CERN Document Server

    AUTHOR|(SzGeCERN)573735; The ATLAS collaboration; Odier, Jerome; Lambert, Fabian

    2017-01-01

    The ATLAS Metadata Interface (AMI) is a mature application of more than 15 years of existence. Mainly used by the ATLAS experiment at CERN, it consists of a very generic tool ecosystem for metadata aggregation and cataloguing. We briefly describe the architecture, the main services and the benefits of using AMI in big collaborations, especially for high energy physics. We focus on the recent improvements, for instance: the lightweight clients (Python, JavaScript, C++), the new smart task server system and the Web 2.0 AMI framework for simplifying the development of metadata-oriented web interfaces.

  10. Metadata aided run selection at ATLAS

    International Nuclear Information System (INIS)

    Buckingham, R M; Gallas, E J; Tseng, J C-L; Viegas, F; Vinek, E

    2011-01-01

    Management of the large volume of data collected by any large scale scientific experiment requires the collection of coherent metadata quantities, which can be used by reconstruction or analysis programs and/or user interfaces, to pinpoint collections of data needed for specific purposes. In the ATLAS experiment at the LHC, we have collected metadata from systems storing non-event-wise data (Conditions) into a relational database. The Conditions metadata (COMA) database tables not only contain conditions known at the time of event recording, but also allow for the addition of conditions data collected as a result of later analysis of the data (such as improved measurements of beam conditions or assessments of data quality). A new web based interface called 'runBrowser' makes these Conditions Metadata available as a Run based selection service. runBrowser, based on PHP and JavaScript, uses jQuery to present selection criteria and report results. It not only facilitates data selection by conditions attributes, but also gives the user information at each stage about the relationship between the conditions chosen and the remaining conditions criteria available. When a set of COMA selections are complete, runBrowser produces a human readable report as well as an XML file in a standardized ATLAS format. This XML can be saved for later use or refinement in a future runBrowser session, shared with physics/detector groups, or used as input to ELSSI (event level Metadata browser) or other ATLAS run or event processing services.

  11. Data Mining the Internet Archive Collection

    Directory of Open Access Journals (Sweden)

    Caleb McDaniel

    2014-03-01

    Full Text Available The collections of the Internet Archive (IA include many digitized sources of interest to historians, including early JSTOR journal content, John Adams’s personal library, and the Haiti collection at the John Carter Brown Library. In short, to quote Programming Historian Ian Milligan, “The Internet Archive rocks.” In this lesson, you’ll learn how to download files from such collections using a Python module specifically designed for the Internet Archive. You will also learn how to use another Python module designed for parsing MARC XML records, a widely used standard for formatting bibliographic metadata.
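
The lesson relies on the dedicated `internetarchive` and MARC-parsing modules; as a dependency-free illustration of the second half, this pulls the title (MARC field 245, subfield a) out of one MARC XML record with the standard library. The sample record is invented, but the namespace and the tag/code layout are standard MARC XML.

```python
# Extract the 245$a title from a MARC XML record.
import xml.etree.ElementTree as ET

NS = {"m": "http://www.loc.gov/MARC21/slim"}

SAMPLE = """<record xmlns="http://www.loc.gov/MARC21/slim">
  <datafield tag="245" ind1="1" ind2="0">
    <subfield code="a">Letters from Haiti</subfield>
  </datafield>
</record>"""

def marc_title(xml_text):
    root = ET.fromstring(xml_text)
    sub = root.find('m:datafield[@tag="245"]/m:subfield[@code="a"]', NS)
    return sub.text if sub is not None else None

title = marc_title(SAMPLE)
```

Running this over a directory of downloaded MARC XML files yields the bibliographic inventory the lesson builds up to.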

  12. USGIN ISO metadata profile

    Science.gov (United States)

    Richard, S. M.

    2011-12-01

    The USGIN project has drafted and is using a specification for the use of ISO 19115/19/39 metadata, recommendations for simple metadata content, and a proposal for a URI scheme to identify resources using resolvable HTTP URIs (see http://lab.usgin.org/usgin-profiles). The principal target use case is a catalog in which resources can be registered and described by data providers for discovery by users. We are currently using the ESRI Geoportal (Open Source), with configuration files for the USGIN profile. The metadata offered by the catalog must provide sufficient content to guide search engines to locate requested resources, to describe the resource content, provenance, and quality so users can determine if the resource will serve for intended usage, and finally to enable human users and software clients to obtain or access the resource. In order to achieve an operational federated catalog system, provisions in the ISO specification must be restricted and usage clarified to reduce the heterogeneity of 'standard' metadata and service implementations such that a single client can search against different catalogs, and the metadata returned by catalogs can be parsed reliably to locate required information. Usage of the complex ISO 19139 XML schema allows for a great deal of structured metadata content, but the heterogeneity in approaches to content encoding has hampered development of sophisticated client software that can take advantage of the rich metadata; the lack of such clients in turn reduces the motivation for metadata producers to produce content-rich metadata. If the only significant use of the detailed, structured metadata is to format it into text for people to read, then the detailed information could be put in free text elements and be just as useful. In order for complex metadata encoding and content to be useful, there must be clear and unambiguous conventions on the encoding that are utilized by the community that wishes to take advantage of advanced metadata
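
The "parsed reliably" requirement can be made concrete by extracting one required field from an ISO 19139 record: the file identifier a catalog client needs to reference the resource. The `gmd`/`gco` namespaces below are the standard ISO 19139 ones; the sample document is a minimal invented fragment, not a full USGIN record.

```python
# Pull gmd:fileIdentifier out of a (minimal) ISO 19139 metadata record.
import xml.etree.ElementTree as ET

NS = {"gmd": "http://www.isotc211.org/2005/gmd",
      "gco": "http://www.isotc211.org/2005/gco"}

SAMPLE = """<gmd:MD_Metadata
    xmlns:gmd="http://www.isotc211.org/2005/gmd"
    xmlns:gco="http://www.isotc211.org/2005/gco">
  <gmd:fileIdentifier>
    <gco:CharacterString>example-2011-000123</gco:CharacterString>
  </gmd:fileIdentifier>
</gmd:MD_Metadata>"""

def file_identifier(xml_text):
    root = ET.fromstring(xml_text)
    el = root.find("gmd:fileIdentifier/gco:CharacterString", NS)
    return el.text if el is not None else None

fid = file_identifier(SAMPLE)
```

A profile like USGIN's pins down exactly which such paths must be populated and how, so every catalog in the federation yields to the same simple queries.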

  13. Delaware River and Upper Bay Sediment Data

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The area of coverage consists of 192 square miles of benthic habitat mapped from 2005 to 2007 in the Delaware River and Upper Delaware Bay. The bottom sediment map...

  14. Asymmetric Programming: A Highly Reliable Metadata Allocation Strategy for MLC NAND Flash Memory-Based Sensor Systems

    Science.gov (United States)

    Huang, Min; Liu, Zhaoqing; Qiao, Liyan

    2014-01-01

    While the NAND flash memory is widely used as the storage medium in modern sensor systems, the aggressive shrinking of process geometry and an increase in the number of bits stored in each memory cell will inevitably degrade the reliability of NAND flash memory. In particular, it's critical to enhance metadata reliability, which occupies only a small portion of the storage space, but maintains the critical information of the file system and the address translations of the storage system. Metadata damage will cause the system to crash or a large amount of data to be lost. This paper presents Asymmetric Programming, a highly reliable metadata allocation strategy for MLC NAND flash memory storage systems. Our technique exploits for the first time the property of the multi-page architecture of MLC NAND flash memory to improve the reliability of metadata. The basic idea is to keep metadata in most significant bit (MSB) pages which are more reliable than least significant bit (LSB) pages. Thus, we can achieve relatively low bit error rates for metadata. Based on this idea, we propose two strategies to optimize address mapping and garbage collection. We have implemented Asymmetric Programming on a real hardware platform. The experimental results show that Asymmetric Programming can achieve a reduction in the number of page errors of up to 99.05% with the baseline error correction scheme. PMID:25310473
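
The allocation strategy reduces to routing metadata writes to the more reliable MSB pages and bulk data to LSB pages. Real LSB/MSB pairings are vendor-specific lookup tables; purely for illustration, this toy model treats even page indices as LSB and odd indices as MSB.

```python
# Toy MSB-first allocator: metadata goes to MSB pages, data to LSB pages.
def page_kind(page_index):
    # Assumed pairing for illustration only; real MLC parts use
    # vendor-specific page maps.
    return "MSB" if page_index % 2 else "LSB"

class BlockAllocator:
    def __init__(self, pages_per_block=8):
        self.free = list(range(pages_per_block))

    def allocate(self, is_metadata):
        want = "MSB" if is_metadata else "LSB"
        for i, p in enumerate(self.free):
            if page_kind(p) == want:
                return self.free.pop(i)
        # Fall back to any free page if the preferred kind is exhausted.
        return self.free.pop(0) if self.free else None

alloc = BlockAllocator()
meta_page = alloc.allocate(is_metadata=True)
data_page = alloc.allocate(is_metadata=False)
```

The paper's address-mapping and garbage-collection strategies extend this idea so the MSB preference survives page migration, not just initial placement.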

  15. Asymmetric Programming: A Highly Reliable Metadata Allocation Strategy for MLC NAND Flash Memory-Based Sensor Systems

    Directory of Open Access Journals (Sweden)

    Min Huang

    2014-10-01

    Full Text Available While the NAND flash memory is widely used as the storage medium in modern sensor systems, the aggressive shrinking of process geometry and an increase in the number of bits stored in each memory cell will inevitably degrade the reliability of NAND flash memory. In particular, it’s critical to enhance metadata reliability, which occupies only a small portion of the storage space, but maintains the critical information of the file system and the address translations of the storage system. Metadata damage will cause the system to crash or a large amount of data to be lost. This paper presents Asymmetric Programming, a highly reliable metadata allocation strategy for MLC NAND flash memory storage systems. Our technique exploits for the first time the property of the multi-page architecture of MLC NAND flash memory to improve the reliability of metadata. The basic idea is to keep metadata in most significant bit (MSB) pages which are more reliable than least significant bit (LSB) pages. Thus, we can achieve relatively low bit error rates for metadata. Based on this idea, we propose two strategies to optimize address mapping and garbage collection. We have implemented Asymmetric Programming on a real hardware platform. The experimental results show that Asymmetric Programming can achieve a reduction in the number of page errors of up to 99.05% with the baseline error correction scheme.

  16. Cognitive Maps, AI Agents and Personalized Virtual Environments in Internet Learning Experiences.

    Science.gov (United States)

    Maule, R. William

    1998-01-01

    Develops frameworks to help Internet media designers address end-user information presentation preferences by advancing structures for assessing metadata design variables which are then linked to user cognitive styles. An underlying theme is that artificial intelligence (AI) methodologies may be used to help automate the Internet media design…

  17. EU Law and Mass Internet Metadata Surveillance in the Post-Snowden Era

    Directory of Open Access Journals (Sweden)

    Nora Ni Loideain

    2015-09-01

    Full Text Available Legal frameworks exist within democracies to prevent the misuse and abuse of personal data that law enforcement authorities obtain from private communication service providers. The fundamental rights to respect for private life and the protection of personal data underpin this framework within the European Union. Accordingly, the protection of the principles and safeguards required by these rights is key to ensuring that the oversight of State surveillance powers is robust and transparent. Furthermore, without the robust scrutiny of independent judicial review, the principles and safeguards guaranteed by these rights may become more illusory than real. Following the Edward Snowden revelations, major concerns have been raised worldwide regarding the legality, necessity and proportionality standards governing these laws. In 2014, the highest court in the EU struck down the legal framework that imposed a mandatory duty on communication service providers to undertake the mass retention of metadata for secret intelligence and law enforcement authorities across the EU. This article considers the influence of the Snowden revelations on this landmark judgment. Subsequently, the analysis explores the significance of this ruling for the future reform of EU law governing metadata surveillance and its contribution to the worldwide debate on indiscriminate and covert monitoring in the post-Snowden era.

  18. CCR+: Metadata Based Extended Personal Health Record Data Model Interoperable with the ASTM CCR Standard.

    Science.gov (United States)

    Park, Yu Rang; Yoon, Young Jo; Jang, Tae Hun; Seo, Hwa Jeong; Kim, Ju Han

    2014-01-01

Extension of the standard model while retaining compliance with it is a challenging issue because there is currently no method for semantically or syntactically verifying an extended data model. A metadata-based extended model, named CCR+, was designed and implemented to achieve interoperability between standard and extended models. Furthermore, a multilayered validation method was devised to validate the standard and extended models. The American Society for Testing and Materials (ASTM) Continuity of Care Record (CCR) standard was selected to evaluate the CCR+ model; two CCR and one CCR+ XML files were evaluated. In total, 188 metadata elements were extracted from the ASTM CCR standard; these metadata are semantically interconnected and registered in the metadata registry. An extended-data-model-specific validation file was generated from these metadata. This file can be used in a smartphone application (Health Avatar CCR+) as part of a multilayered validation. The new CCR+ model was successfully evaluated via a patient-centric exchange scenario involving multiple hospitals, with the results supporting both syntactic and semantic interoperability between the standard CCR and extended CCR+ models. A feasible method for delivering an extended model that complies with the standard model is presented herein. There is a great need to extend static standard models such as the ASTM CCR in various domains; the methods presented here represent an important reference for achieving interoperability between standard and extended models.
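The multilayered idea described above can be sketched in a few lines: first validate a record against the standard model's required fields, then validate any extra fields against a metadata registry of known extensions. The field names and registry entries below are invented for illustration; they are not the actual ASTM CCR or CCR+ definitions.

```python
# Toy two-layer validation: standard-model conformance, then
# registry-checked extensions. All names here are hypothetical.

STANDARD_FIELDS = {"patient_name", "date_of_birth"}          # "standard" layer
EXTENSION_REGISTRY = {"wearable_heart_rate": {"type": int}}  # "extended" layer

def validate(record):
    errors = []
    # Layer 1: the standard model's required fields must be present.
    for field in STANDARD_FIELDS:
        if field not in record:
            errors.append(f"missing standard field: {field}")
    # Layer 2: extra fields must be registered extensions of the right type.
    for field in set(record) - STANDARD_FIELDS:
        spec = EXTENSION_REGISTRY.get(field)
        if spec is None:
            errors.append(f"unregistered extension: {field}")
        elif not isinstance(record[field], spec["type"]):
            errors.append(f"bad type for extension: {field}")
    return errors

ok = {"patient_name": "A. B.", "date_of_birth": "1980-01-01",
      "wearable_heart_rate": 72}
bad = {"patient_name": "A. B.", "mystery_field": 1}
print(validate(ok))   # []
print(validate(bad))
```

A record that passes layer 1 but carries an unknown field fails layer 2, which mirrors how an extended model can stay compliant with the standard while still being checkable.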

  19. Metadata Realities for Cyberinfrastructure: Data Authors as Metadata Creators

    Science.gov (United States)

    Mayernik, Matthew Stephen

    2011-01-01

    As digital data creation technologies become more prevalent, data and metadata management are necessary to make data available, usable, sharable, and storable. Researchers in many scientific settings, however, have little experience or expertise in data and metadata management. In this dissertation, I explore the everyday data and metadata…

  20. An Assistant for Loading Learning Object Metadata: An Ontology Based Approach

    Science.gov (United States)

    Casali, Ana; Deco, Claudia; Romano, Agustín; Tomé, Guillermo

    2013-01-01

    In the last years, the development of different Repositories of Learning Objects has been increased. Users can retrieve these resources for reuse and personalization through searches in web repositories. The importance of high quality metadata is key for a successful retrieval. Learning Objects are described with metadata usually in the standard…

  1. The Cost of Clean Water in the Delaware River Basin (USA

    Directory of Open Access Journals (Sweden)

    Gerald J. Kauffman

    2018-01-01

Full Text Available The Delaware River has made a marked recovery in the half-century since the adoption of the Delaware River Basin Commission (DRBC) Compact in 1961 and passage of the Federal Clean Water Act amendments during the 1970s. During the 1960s, the DRBC set a 3.5 mg/L dissolved oxygen criterion for the river based on an economic analysis that concluded that a waste load abatement program designed to meet fishable water quality goals would generate significant recreational and environmental benefits. Scientists with the Delaware Estuary Program have recently called for raising the 1960s dissolved oxygen criterion along the Delaware River from 3.5 mg/L to 5.0 mg/L to protect anadromous American shad and Atlantic sturgeon, and address the prospect of rising temperatures, sea levels, and salinity in the estuary. This research concludes, through a nitrogen marginal abatement cost (MAC) analysis, that it would be cost-effective to raise dissolved oxygen levels to meet a more stringent standard by prioritizing agricultural conservation and some wastewater treatment investments in the Delaware River watershed to remove 90% of the nitrogen load by 13.6 million kg N/year (30 million lb N/year) for just 35% ($160 million) of the $449 million total cost. The annual least cost to reduce nitrogen loads and raise dissolved oxygen levels to meet more stringent water quality standards in the Delaware River totals $45 million for atmospheric NOX reduction, $130 million for wastewater treatment, $132 million for agriculture conservation, and $141 million for urban stormwater retrofitting. This 21st century least cost analysis estimates that an annual investment of $50 million is needed to reduce pollutant loads in the Delaware River to raise dissolved oxygen levels to 4.0 mg/L, $150 million is needed for dissolved oxygen levels to reach 4.5 mg/L, and $449 million is needed for dissolved oxygen levels to reach 5.0 mg/L.
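The least-cost logic of a MAC analysis like the one above can be sketched as a greedy ranking of measures by dollars per kilogram of nitrogen removed. The per-sector annual costs below are taken from the abstract; the nitrogen-removal figures are hypothetical placeholders, so the resulting plan is purely illustrative of the method, not a reproduction of the study's results.

```python
# Greedy marginal-abatement-cost ranking sketch. Costs ($M/yr) come from
# the abstract; removal amounts (million kg N/yr) are invented.

measures = [
    ("agricultural conservation", 132.0, 6.0),
    ("wastewater treatment",      130.0, 4.5),
    ("atmospheric NOx reduction",  45.0, 1.6),
    ("urban stormwater retrofit", 141.0, 1.5),
]

def least_cost_plan(measures, target_removal):
    """Select measures in order of increasing marginal cost
    ($ per kg N removed) until the removal target is met."""
    ranked = sorted(measures, key=lambda m: m[1] / m[2])
    plan, removed, cost = [], 0.0, 0.0
    for name, c, r in ranked:
        if removed >= target_removal:
            break
        plan.append(name)
        removed += r
        cost += c
    return plan, removed, cost

plan, removed, cost = least_cost_plan(measures, target_removal=12.0)
print(plan, removed, cost)
```

With these placeholder numbers, agricultural conservation has the lowest cost per kilogram removed and is selected first, which matches the abstract's conclusion that prioritizing agricultural conservation is cost-effective.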

  2. The metadata manual a practical workbook

    CERN Document Server

    Lubas, Rebecca; Schneider, Ingrid

    2013-01-01

    Cultural heritage professionals have high levels of training in metadata. However, the institutions in which they practice often depend on support staff, volunteers, and students in order to function. With limited time and funding for training in metadata creation for digital collections, there are often many questions about metadata without a reliable, direct source for answers. The Metadata Manual provides such a resource, answering basic metadata questions that may appear, and exploring metadata from a beginner's perspective. This title covers metadata basics, XML basics, Dublin Core, VRA C

  3. Predicting age groups of Twitter users based on language and metadata features.

    Directory of Open Access Journals (Sweden)

    Antonio A Morgan-Lopez

Full Text Available Health organizations are increasingly using social media, such as Twitter, to disseminate health messages to target audiences. Determining the extent to which the target audience (e.g., age groups) was reached is critical to evaluating the impact of social media education campaigns. The main objective of this study was to examine the separate and joint predictive validity of linguistic and metadata features in predicting the age of Twitter users. We created a labeled dataset of Twitter users across different age groups (youth, young adults, adults) by collecting publicly available birthday announcement tweets using the Twitter Search application programming interface. We manually reviewed results and, for each age-labeled handle, collected the 200 most recent publicly available tweets and user handles' metadata. The labeled data were split into training and test datasets. We created separate models to examine the predictive validity of language features only, metadata features only, language and metadata features, and words/phrases from another age-validated dataset. We estimated accuracy, precision, recall, and F1 metrics for each model. An L1-regularized logistic regression model was conducted for each age group, and predicted probabilities between the training and test sets were compared for each age group. Cohen's d effect sizes were calculated to examine the relative importance of significant features. Models containing both Tweet language features and metadata features performed the best (74% precision, 74% recall, 74% F1) while the model containing only Twitter metadata features was least accurate (58% precision, 60% recall, and 57% F1 score). Top predictive features included use of terms such as "school" for youth and "college" for young adults. Overall, it was more challenging to predict older adults accurately. These results suggest that examining linguistic and Twitter metadata features to predict youth and young adult Twitter users may
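The modeling approach above can be sketched with an L1-regularized logistic regression over combined language and metadata features. The tiny synthetic dataset below (token counts plus one metadata column) is invented for illustration and bears no relation to the study's actual tweets or account metadata.

```python
# Sketch of L1-regularized logistic regression on language + metadata
# features, in the spirit of the study above. Data are synthetic.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

tweets = [
    "back to school tomorrow", "school was fun today",
    "college finals week", "my college roommate",
]
labels = ["youth", "youth", "young_adult", "young_adult"]

# Language features: bag-of-words counts.
X_text = CountVectorizer().fit_transform(tweets).toarray()
# One metadata feature (hypothetical): account age in years.
X_meta = np.array([[1.0], [2.0], [5.0], [6.0]])
X = np.hstack([X_text, X_meta])

clf = LogisticRegression(penalty="l1", solver="liblinear", C=10.0)
clf.fit(X, labels)
print(clf.predict(X))
```

The L1 penalty drives uninformative feature weights to zero, which is one way to surface top predictive terms such as "school" versus "college".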

  4. Metadata Authoring with Versatility and Extensibility

    Science.gov (United States)

    Pollack, Janine; Olsen, Lola

    2004-01-01

NASA's Global Change Master Directory (GCMD) assists the scientific community in the discovery of and linkage to Earth science data sets and related services. The GCMD holds over 13,800 data set descriptions in Directory Interchange Format (DIF) and 700 data service descriptions in Service Entry Resource Format (SERF), encompassing the disciplines of geology, hydrology, oceanography, meteorology, and ecology. Data descriptions also contain geographic coverage information and direct links to the data, thus allowing researchers to discover data pertaining to a geographic location of interest, then quickly acquire those data. The GCMD strives to be the preferred data locator for world-wide directory-level metadata. In this vein, scientists and data providers must have access to intuitive and efficient metadata authoring tools. Existing GCMD tools are attracting widespread usage; however, a need for tools that are portable, customizable and versatile still exists. With tool usage directly influencing metadata population, it has become apparent that new tools are needed to fill these voids. As a result, the GCMD has released a new authoring tool allowing for both web-based and stand-alone authoring of descriptions. Furthermore, this tool incorporates the ability to plug-and-play the metadata format of choice, offering users options of DIF, SERF, FGDC, ISO or any other defined standard. Allowing data holders to work with their preferred format, as well as an option of a stand-alone application or web-based environment, docBUILDER will assist the scientific community in efficiently creating quality data and services metadata.

  5. Mining Building Metadata by Data Stream Comparison

    DEFF Research Database (Denmark)

    Holmegaard, Emil; Kjærgaard, Mikkel Baun

    2016-01-01

...... ways to annotate sensor and actuation points. This makes it difficult to create intuitive queries for retrieving data streams from points. Another problem is the amount of insufficient or missing metadata. We introduce Metafier, a tool for extracting metadata from comparing data streams. Metafier enables a semi-automatic labeling of metadata to building instrumentation. Metafier annotates points with metadata by comparing the data from a set of validated points with unvalidated points. Metafier has three different algorithms to compare points with based on their data. The three algorithms...... to handle data streams with only slightly similar patterns. We have evaluated Metafier with points and data from one building located in Denmark. We have evaluated Metafier with 903 points, and the overall accuracy, with only 3 known examples, was 94.71%. Furthermore we found that using DTW for mining......
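The abstract mentions dynamic time warping (DTW) as one way to compare point data streams that have similar shapes but are misaligned in time. Below is a plain-Python DTW distance in the standard textbook formulation (not Metafier's actual code), applied to two sensor-like series that differ only by a lag.

```python
# Classic O(n*m) dynamic-programming DTW distance.

def dtw_distance(a, b):
    n, m = len(a), len(b)
    inf = float("inf")
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

warm = [20, 21, 23, 26, 26, 24, 21]          # e.g. a temperature-like trace
warm_shifted = [20, 20, 21, 23, 26, 26, 24]  # same shape, lagged one step
noise = [20, 26, 20, 26, 20, 26, 20]         # dissimilar pattern

print(dtw_distance(warm, warm_shifted))  # small despite the lag
print(dtw_distance(warm, noise))         # much larger
```

Because the warping path can absorb the one-step lag, the distance between the two similar traces stays small while the dissimilar trace scores much higher, which is what makes DTW useful for matching only slightly similar streams.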

  6. The Global Streamflow Indices and Metadata Archive (GSIM – Part 1: The production of a daily streamflow archive and metadata

    Directory of Open Access Journals (Sweden)

    H. X. Do

    2018-04-01

Full Text Available This is the first part of a two-paper series presenting the Global Streamflow Indices and Metadata archive (GSIM), a worldwide collection of metadata and indices derived from more than 35 000 daily streamflow time series. This paper focuses on the compilation of the daily streamflow time series based on 12 free-to-access streamflow databases (seven national databases and five international collections). It also describes the development of three metadata products (freely available at https://doi.pangaea.de/10.1594/PANGAEA.887477): (1) a GSIM catalogue collating basic metadata associated with each time series, (2) catchment boundaries for the contributing area of each gauge, and (3) catchment metadata extracted from 12 gridded global data products representing essential properties such as land cover type, soil type, and climate and topographic characteristics. The quality of the delineated catchment boundary is also made available and should be consulted in GSIM application. The second paper in the series then explores production and analysis of streamflow indices. Having collated an unprecedented number of stations and associated metadata, GSIM can be used to advance large-scale hydrological research and improve understanding of the global water cycle.

  7. The Global Streamflow Indices and Metadata Archive (GSIM) - Part 1: The production of a daily streamflow archive and metadata

    Science.gov (United States)

    Do, Hong Xuan; Gudmundsson, Lukas; Leonard, Michael; Westra, Seth

    2018-04-01

This is the first part of a two-paper series presenting the Global Streamflow Indices and Metadata archive (GSIM), a worldwide collection of metadata and indices derived from more than 35 000 daily streamflow time series. This paper focuses on the compilation of the daily streamflow time series based on 12 free-to-access streamflow databases (seven national databases and five international collections). It also describes the development of three metadata products (freely available at https://doi.pangaea.de/10.1594/PANGAEA.887477): (1) a GSIM catalogue collating basic metadata associated with each time series, (2) catchment boundaries for the contributing area of each gauge, and (3) catchment metadata extracted from 12 gridded global data products representing essential properties such as land cover type, soil type, and climate and topographic characteristics. The quality of the delineated catchment boundary is also made available and should be consulted in GSIM application. The second paper in the series then explores production and analysis of streamflow indices. Having collated an unprecedented number of stations and associated metadata, GSIM can be used to advance large-scale hydrological research and improve understanding of the global water cycle.

  8. A Metadata-Rich File System

    Energy Technology Data Exchange (ETDEWEB)

    Ames, S; Gokhale, M B; Maltzahn, C

    2009-01-07

Despite continual improvements in the performance and reliability of large scale file systems, the management of file system metadata has changed little in the past decade. The mismatch between the size and complexity of large scale data stores and their ability to organize and query their metadata has led to a de facto standard in which raw data is stored in traditional file systems, while related, application-specific metadata is stored in relational databases. This separation of data and metadata requires considerable effort to maintain consistency and can result in complex, slow, and inflexible system operation. To address these problems, we have developed the Quasar File System (QFS), a metadata-rich file system in which files, metadata, and file relationships are all first-class objects. In contrast to hierarchical file systems and relational databases, QFS defines a graph data model composed of files and their relationships. QFS includes Quasar, an XPATH-extended query language for searching the file system. Results from our QFS prototype show the effectiveness of this approach. Compared to the de facto standard, the QFS prototype shows superior ingest performance and comparable query performance on user metadata-intensive operations and superior performance on normal file metadata operations.
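QFS's key idea is modeling files, metadata, and relationships as a graph rather than a tree. The toy structure below illustrates that data model with a plain dict graph and a one-hop "related files" lookup; it is not QFS's Quasar query language, and all file names and relationship labels are invented.

```python
# Toy graph data model: files with metadata, plus labeled relationship
# edges between files as first-class objects. All names are hypothetical.

files = {
    "run42.dat":  {"type": "raw_data", "instrument": "detector_a"},
    "calib.cfg":  {"type": "config"},
    "plot42.png": {"type": "derived"},
}

relations = [
    ("plot42.png", "derived_from", "run42.dat"),
    ("run42.dat", "calibrated_by", "calib.cfg"),
]

def related(name, label=None):
    """Files reachable from `name` by one relationship edge,
    optionally filtered by the edge label."""
    return [dst for src, rel, dst in relations
            if src == name and (label is None or rel == label)]

print(related("plot42.png"))                  # provenance of the plot
print(related("run42.dat", "calibrated_by"))  # its calibration config
```

A hierarchical file system could not answer "what was this plot derived from?" without an external database; in a graph model the relationship is queryable directly.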

  9. 76 FR 60850 - Delaware; Emergency and Related Determinations

    Science.gov (United States)

    2011-09-30

    ... determined that the emergency conditions in the State of Delaware resulting from Hurricane Irene beginning on... have been designated as adversely affected by this declared emergency: The entire State of Delaware for... Management Assistance Grant; 97.048, Disaster Housing Assistance to Individuals and Households in...

  10. Use of Light Detection and Ranging (LiDAR) to Obtain High-Resolution Elevation Data for Sussex County, Delaware

    Science.gov (United States)

    Barlow, Roger A.; Nardi, Mark R.; Reyes, Betzaida

    2008-01-01

    Sussex County, Delaware, occupies a 938-square-mile area of low relief near sea level in the Atlantic Coastal Plain. The county is bounded on the east by the Delaware Bay and the Atlantic Ocean, including a barrier-island system, and inland bays that provide habitat for valuable living resources. Eastern Sussex County is an area of rapid population growth with a long-established beach-resort community, where land elevation is a key factor in determining areas that are appropriate for development. Of concern to State and local planners are evacuation routes inland to escape flooding from severe coastal storms, as most major transportation routes traverse areas of low elevation that are subject to inundation. The western half of the county is typically rural in character, and land use is largely agricultural with some scattered forest land cover. Western Sussex County has several low-relief river flood-prone areas, where accurate high-resolution elevation data are needed for Federal Emergency Management Agency (FEMA) Digital Flood Insurance Rate Map (DFIRM) studies. This fact sheet describes the methods and techniques used to collect and process LiDAR elevation data, the generation of the digital elevation model (DEM) and the 2-foot contours, and the quality-assurance procedures and results. It indicates where to view metadata on the data sets and where to acquire bare-earth mass points, DEM data, and contour data.

11. METADATA, ITS DESCRIPTION AND ACCESS POINTS, AND INDOMARC

    Directory of Open Access Journals (Sweden)

    Sulistiyo Basuki

    2012-07-01

Full Text Available The term metadata began to appear frequently in the literature on database management systems (DBMS) in the 1980s. The term was used to describe the information needed to record the characteristics of the information held in a database. Many sources define the term metadata. Metadata can be understood as identifying a resource, indicating the location of a document, and providing the summary needed to make use of it. In general, three activities are involved in making metadata work as an information package: creating a description of the information package, encoding that description, and providing access to it. This paper discusses the concept of metadata in relation to libraries. The discussion covers the definition of metadata; the functions of metadata; encoding standards; bibliographic records, surrogates, and metadata; the creation of surrogate record content; approaches to metadata formats; and metadata and metadata standards. [Article content in Indonesian; translated.]

  12. Hydrogeologic framework, hydrology, and refined conceptual model of groundwater flow for Coastal Plain aquifers at the Standard Chlorine of Delaware, Inc. Superfund Site, New Castle County, Delaware, 2005-12

    Science.gov (United States)

    Brayton, Michael J.; Cruz, Roberto M.; Myers, Luke; Degnan, James R.; Raffensperger, Jeff P.

    2015-01-01

    From 1966 to 2002, activities at the Standard Chlorine of Delaware chemical facility in New Castle County, Delaware resulted in the contamination of groundwater, soils, and wetland sediment. In 2005, the U.S. Geological Survey (USGS), in partnership with the U.S. Environmental Protection Agency, Region 3, and the Delaware Department of Natural Resources and Environmental Control began a multi-year investigation of the hydrogeologic framework and hydrology of the confined aquifer system. The goals of the ongoing study at the site (the Potomac Aquifer Study) are to determine the hydraulic connection between the Columbia and Potomac aquifers, determine the direction of groundwater flow in the Potomac aquifer, and identify factors affecting the fate of contaminated groundwater. This report describes progress made towards these goals based on available data collected through September 2012.

  13. 77 FR 69490 - Delaware; Emergency and Related Determinations

    Science.gov (United States)

    2012-11-19

    ... determined that the emergency conditions in the State of Delaware resulting from Hurricane Sandy beginning on... areas of the State of Delaware have been designated as adversely affected by this declared emergency... Unemployment Assistance (DUA); 97.046, Fire Management Assistance Grant; 97.048, Disaster Housing Assistance to...

  14. Mercury Toolset for Spatiotemporal Metadata

    Science.gov (United States)

    Devarakonda, Ranjeet; Palanisamy, Giri; Green, James; Wilson, Bruce; Rhyne, B. Timothy; Lindsley, Chris

    2010-06-01

Mercury (http://mercury.ornl.gov) is a set of tools for federated harvesting, searching, and retrieving metadata, particularly spatiotemporal metadata. Version 3.0 of the Mercury toolset provides orders of magnitude improvements in search speed, support for additional metadata formats, integration with Google Maps for spatial queries, facetted type search, support for RSS (Really Simple Syndication) delivery of search results, and enhanced customization to meet the needs of the multiple projects that use Mercury. It provides a single portal to very quickly search for data and information contained in disparate data management systems, each of which may use different metadata formats. Mercury harvests metadata and key data from contributing project servers distributed around the world and builds a centralized index. The search interfaces then allow the users to perform a variety of fielded, spatial, and temporal searches across these metadata sources. This centralized repository of metadata with distributed data sources provides extremely fast search results to the user, while allowing data providers to advertise the availability of their data and maintain complete control and ownership of that data. Mercury periodically (typically daily) harvests metadata sources through a collection of interfaces and re-indexes these metadata to provide extremely rapid search capabilities, even over collections with tens of millions of metadata records. A number of both graphical and application interfaces have been constructed within Mercury, to enable both human users and other computer programs to perform queries. Mercury was also designed to support multiple different projects, so that the particular fields that can be queried and used with search filters are easy to configure for each different project.
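The harvest-then-index pattern described above can be sketched in a few lines: pull metadata records from each provider and fold them into one inverted index so searches never touch the remote sources. The provider names and record shapes below are invented placeholders, not Mercury's actual interfaces.

```python
# Sketch of federated harvesting into a centralized inverted index.
# Providers and records are hypothetical stand-ins for remote servers.
from collections import defaultdict

providers = {
    "provider_a": [{"id": "d1", "title": "soil moisture observations"}],
    "provider_b": [{"id": "d2", "title": "global soil carbon map"}],
}

def harvest_and_index(providers):
    """Build an inverted index: keyword -> set of record ids."""
    index = defaultdict(set)
    for records in providers.values():   # one "harvest" per provider
        for rec in records:
            for word in rec["title"].lower().split():
                index[word].add(rec["id"])
    return index

index = harvest_and_index(providers)
print(sorted(index["soil"]))  # records from both providers match
```

Re-running the harvest periodically (as Mercury does daily) simply rebuilds or refreshes the index, which is why searches stay fast even across tens of millions of records.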

  15. Mercury Toolset for Spatiotemporal Metadata

    Science.gov (United States)

    Wilson, Bruce E.; Palanisamy, Giri; Devarakonda, Ranjeet; Rhyne, B. Timothy; Lindsley, Chris; Green, James

    2010-01-01

    Mercury (http://mercury.ornl.gov) is a set of tools for federated harvesting, searching, and retrieving metadata, particularly spatiotemporal metadata. Version 3.0 of the Mercury toolset provides orders of magnitude improvements in search speed, support for additional metadata formats, integration with Google Maps for spatial queries, facetted type search, support for RSS (Really Simple Syndication) delivery of search results, and enhanced customization to meet the needs of the multiple projects that use Mercury. It provides a single portal to very quickly search for data and information contained in disparate data management systems, each of which may use different metadata formats. Mercury harvests metadata and key data from contributing project servers distributed around the world and builds a centralized index. The search interfaces then allow the users to perform a variety of fielded, spatial, and temporal searches across these metadata sources. This centralized repository of metadata with distributed data sources provides extremely fast search results to the user, while allowing data providers to advertise the availability of their data and maintain complete control and ownership of that data. Mercury periodically (typically daily) harvests metadata sources through a collection of interfaces and re-indexes these metadata to provide extremely rapid search capabilities, even over collections with tens of millions of metadata records. A number of both graphical and application interfaces have been constructed within Mercury, to enable both human users and other computer programs to perform queries. Mercury was also designed to support multiple different projects, so that the particular fields that can be queried and used with search filters are easy to configure for each different project.

  16. Social Vulnerability Index (SoVI) for Delaware based on 2000 Census Block Groups

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data depicts the social vulnerability of Delaware census block groups to environmental hazards. Data were culled primarily from the 2000 Decennial Census.

  17. Extended Aquifer Air Sparging/Soil Vapor Extraction Treatability Study for Site SS59 (WP-21) Dover Air Force Base, Dover, Delaware

    National Research Council Canada - National Science Library

    1995-01-01

Site SS59 at Dover Air Force Base, Delaware is under investigation for the remediation of ground water which was contaminated by a system of industrial waste basins operated to the north of this area...

  18. Phonion: Practical Protection of Metadata in Telephony Networks

    Directory of Open Access Journals (Sweden)

    Heuser Stephan

    2017-01-01

Full Text Available The majority of people across the globe rely on telephony networks as their primary means of communication. As such, many of the most sensitive personal, corporate and government related communications pass through these systems every day. Unsurprisingly, such connections are subject to a wide range of attacks. Of increasing concern is the use of metadata contained in Call Detail Records (CDRs), which contain source, destination, start time and duration of a call. This information is potentially dangerous as the very act of two parties communicating can reveal significant details about their relationship and put them in the focus of targeted observation or surveillance, which is highly critical especially for journalists and activists. To address this problem, we develop the Phonion architecture to frustrate such attacks by separating call setup functions from call delivery. Specifically, Phonion allows users to preemptively establish call circuits across multiple providers and technologies before dialing into the circuit and does not require constant Internet connectivity. Since no single carrier can determine the ultimate destination of the call, it provides unlinkability for its users and helps them to avoid passive surveillance. We define and discuss a range of adversary classes and analyze why current obfuscation technologies fail to protect users against such metadata attacks. In our extensive evaluation we further analyze advanced anonymity technologies (e.g., VoIP over Tor), which do not preserve our functional requirements for high voice quality in the absence of constant broadband Internet connectivity and compatibility with landline and feature phones. Phonion is the first practical system to provide guarantees of unlinkable communication against a range of practical adversaries in telephony systems.

  19. Streamlining geospatial metadata in the Semantic Web

    Science.gov (United States)

    Fugazza, Cristiano; Pepe, Monica; Oggioni, Alessandro; Tagliolato, Paolo; Carrara, Paola

    2016-04-01

In the geospatial realm, data annotation and discovery rely on a number of ad hoc formats and protocols. These have been created to enable domain-specific use cases for which generalized search is not feasible. Metadata are at the heart of the discovery process; nevertheless, they are often neglected or encoded in formats that either are not aimed at efficient retrieval of resources or are plainly outdated. In particular, the quantum leap represented by the Linked Open Data (LOD) movement has not so far induced a consistent, interlinked baseline in the geospatial domain. In a nutshell, datasets, the scientific literature related to them, and ultimately the researchers behind these products are only loosely connected; the corresponding metadata are intelligible only to humans, duplicated across different systems, and seldom consistent. Instead, our workflow for metadata management envisages i) editing via customizable web-based forms, ii) encoding of records in any XML application profile, iii) translation into RDF (involving the semantic lift of metadata records), and finally iv) storage of the metadata as RDF and back-translation into the original XML format with added semantics-aware features. Phase iii) hinges on relating resource metadata to RDF data structures that represent keywords from code lists and controlled vocabularies, toponyms, researchers, institutes, and virtually any description one can retrieve (or directly publish) in the LOD Cloud. In the context of a distributed Spatial Data Infrastructure (SDI) built on free and open-source software, we detail phases iii) and iv) of our workflow for the semantics-aware management of geospatial metadata.
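The "semantic lift" of phase iii) can be sketched as replacing free-text metadata values with links to LOD resources wherever a match is known. The triples below are plain tuples rather than a real RDF store, and every URI, vocabulary term, and lookup table is an invented placeholder, not the authors' actual mapping.

```python
# Sketch of a semantic lift: literals in a metadata record are replaced
# by linked-data URIs where a lookup exists. All URIs are hypothetical.

record = {
    "title": "Lake temperature series",
    "keyword": "limnology",
    "creator": "Jane Doe",
}

# Hypothetical lookups from free-text values to LOD URIs.
keyword_uris = {"limnology": "http://example.org/vocab/limnology"}
person_uris = {"Jane Doe": "http://example.org/people/jdoe"}

def lift(record, subject="http://example.org/dataset/1"):
    triples = [(subject, "dct:title", record["title"])]
    # Replace literals with linked resources where known; keep the
    # literal as a fallback otherwise.
    kw = keyword_uris.get(record["keyword"], record["keyword"])
    triples.append((subject, "dcat:keyword", kw))
    creator = person_uris.get(record["creator"], record["creator"])
    triples.append((subject, "dct:creator", creator))
    return triples

for t in lift(record):
    print(t)
```

Once keywords and people resolve to shared URIs, the dataset becomes linkable to literature and researchers elsewhere in the LOD Cloud instead of carrying duplicated, human-only strings.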

  20. Definition of a CDI metadata profile and its ISO 19139 based encoding

    Science.gov (United States)

    Boldrini, Enrico; de Korte, Arjen; Santoro, Mattia; Schaap, Dick M. A.; Nativi, Stefano; Manzella, Giuseppe

    2010-05-01

The Common Data Index (CDI) is the middleware service adopted by SeaDataNet for discovery and query. The primary goal of the EU-funded project SeaDataNet is to develop a system which provides transparent access to marine data sets and data products from 36 countries in and around Europe. The European context of SeaDataNet requires that the developed system comply with the European INSPIRE Directive. In order to assure the required conformity, a GI-cat based solution is proposed. GI-cat is a broker service able to mediate between different metadata sources and publish them through a consistent and unified interface. In this case GI-cat is used as a front end to the SeaDataNet portal, publishing the original data, based on the CDI v.1 XML schema, through an ISO 19139 application profile catalog interface (OGC CSW AP ISO). The choice of ISO 19139 is supported and driven by the INSPIRE Implementing Rules, which have been used as a reference throughout the development process. A mapping from the CDI data model to ISO 19139 hence had to be implemented in GI-cat, and a first draft was quickly developed, as both CDI v.1 and ISO 19139 happen to be XML implementations based on the same abstract data model (standard ISO 19115 - metadata about geographic information). This first draft mapping pointed out the differences between the CDI metadata model and ISO 19115, as it was not possible to accommodate all the information contained in CDI v.1 in ISO 19139. Moreover, some modifications were needed in order to reach INSPIRE compliance. The consequent work consisted in the definition of the CDI metadata model as a profile of ISO 19115. This included checking all the metadata elements present in CDI and their cardinality. A comparison was made with respect to ISO 19115 and possible extensions were identified. ISO 19139 was then chosen as a natural XML implementation of this new CDI metadata profile.
The mapping and the profile definition processes were iteratively refined leading up to a
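The element-by-element mapping work described above can be sketched as a lookup table from source fields to target paths, where fields with no target are exactly the candidates for a profile extension. The element names on both sides below are simplified placeholders, not the real CDI v.1 elements or ISO 19139 XPaths.

```python
# Schematic CDI-to-ISO mapping. Field names and paths are illustrative.

CDI_TO_ISO = {
    "dataset_name": "gmd:MD_Metadata/gmd:identificationInfo/gmd:title",
    "originator":   "gmd:MD_Metadata/gmd:contact/gmd:organisationName",
}

def map_record(cdi_record):
    """Map CDI fields to ISO paths; report fields with no ISO home
    (candidates for a profile extension, as in the text)."""
    mapped, unmapped = {}, []
    for field, value in cdi_record.items():
        iso_path = CDI_TO_ISO.get(field)
        if iso_path is None:
            unmapped.append(field)
        else:
            mapped[iso_path] = value
    return mapped, unmapped

mapped, unmapped = map_record({"dataset_name": "CTD casts 2009",
                               "originator": "NIOZ",
                               "cruise_id": "64PE311"})
print(unmapped)  # fields needing an ISO 19115 profile extension
```

Running the mapping over real records is how a first draft surfaces the model differences the abstract mentions: whatever lands in the unmapped list has to be handled by extending the profile.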

  1. Department of the Interior metadata implementation guide—Framework for developing the metadata component for data resource management

    Science.gov (United States)

    Obuch, Raymond C.; Carlino, Jennifer; Zhang, Lin; Blythe, Jonathan; Dietrich, Christopher; Hawkinson, Christine

    2018-04-12

The Department of the Interior (DOI) is a Federal agency with over 90,000 employees across 10 bureaus and 8 agency offices. Its primary mission is to protect and manage the Nation’s natural resources and cultural heritage; provide scientific and other information about those resources; and honor its trust responsibilities or special commitments to American Indians, Alaska Natives, and affiliated island communities. Data and information are critical in day-to-day operational decision making and scientific research. DOI is committed to creating, documenting, managing, and sharing high-quality data and metadata in and across its various programs that support its mission. Documenting data through metadata is essential in realizing the value of data as an enterprise asset. The completeness, consistency, and timeliness of metadata affect users’ ability to search for and discover the most relevant data for the intended purpose; and facilitates the interoperability and usability of these data among DOI bureaus and offices. Fully documented metadata describe data usability, quality, accuracy, provenance, and meaning.Across DOI, there are different maturity levels and phases of information and metadata management implementations. The Department has organized a committee consisting of bureau-level points of contact to collaborate on the development of more consistent, standardized, and more effective metadata management practices and guidance to support this shared mission and the information needs of the Department. DOI’s metadata implementation plans establish key roles and responsibilities associated with metadata management processes, procedures, and a series of actions defined in three major metadata implementation phases including: (1) Getting started—Planning Phase, (2) Implementing and Maintaining Operational Metadata Management Phase, and (3) the Next Steps towards Improving Metadata Management Phase. DOI’s phased approach for metadata management addresses

  2. Quantification and probabilistic modeling of CRT obsolescence for the State of Delaware

    International Nuclear Information System (INIS)

    Schumacher, Kelsea A.; Schumacher, Thomas; Agbemabiese, Lawrence

    2014-01-01

    Highlights: • We modeled the obsolescence of cathode ray tube devices in the State of Delaware. • 411,654 CRT units or ∼16,500 metric tons have been recycled in Delaware since 2002. • The peak of CRT obsolescence in Delaware passed by 2012. • The average Delaware CRT recycling rate between 2002 and 2013 was approximately 27.5%. • CRTs will likely continue to enter the system until 2033. - Abstract: The cessation of production and the replacement of cathode ray tube (CRT) displays with flat screen displays have resulted in the proliferation of CRTs in the electronic waste (e-waste) recycle stream. However, due to the nature of the technology and the presence of hazardous components such as lead, CRTs are the most challenging of electronic components to recycle. In the State of Delaware, this challenge and the resulting expense, combined with the large quantities of CRTs in the recycle stream, have led electronic recyclers to charge to accept Delaware’s e-waste. It is therefore imperative that the Delaware Solid Waste Authority (DSWA) understand future quantities of CRTs entering the waste stream. This study presents the results of an assessment of CRT obsolescence in the State of Delaware. A prediction model was created utilizing published sales data, a variety of lifespan data, and historic Delaware CRT collection rates. Both a deterministic approach and a probabilistic approach using Monte Carlo Simulation (MCS) were employed to forecast the rates of CRT obsolescence to be anticipated in the State of Delaware. Results indicate that the peak of CRT obsolescence in Delaware has already passed, although CRTs are anticipated to enter the waste stream likely until 2033.
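
The deterministic-plus-Monte-Carlo forecast described above can be sketched in a few lines. The sales figures and lifespan parameters below are hypothetical placeholders, not the paper's data, and a truncated normal lifespan distribution stands in for whatever distribution the study fitted:

```python
import random

def simulate_obsolescence(sales_by_year, mean_life=8.0, sd_life=3.0,
                          n_trials=1000, seed=42):
    """Probabilistic (Monte Carlo) forecast of units becoming obsolete per year.

    sales_by_year: {year: units_sold}. Lifespans are drawn from a normal
    distribution truncated at 1 year, a stand-in for the paper's lifespan data.
    Returns {year: mean units obsolete across trials}.
    """
    rng = random.Random(seed)
    totals = {}
    for _ in range(n_trials):
        for year, units in sales_by_year.items():
            # Sample one lifespan per sales cohort rather than per unit, to
            # keep the sketch fast; a full model would sample every unit.
            life = max(1, round(rng.gauss(mean_life, sd_life)))
            end_year = year + life
            totals[end_year] = totals.get(end_year, 0) + units
    return {y: n / n_trials for y, n in sorted(totals.items())}

# Hypothetical Delaware-scale CRT sales (units per year), for illustration only.
sales = {1995: 40000, 2000: 60000, 2005: 30000}
forecast = simulate_obsolescence(sales)
peak_year = max(forecast, key=forecast.get)
```

Summing the forecast recovers the total units sold, a quick sanity check that the simulation conserves the sales cohorts.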

  3. Describing Geospatial Assets in the Web of Data: A Metadata Management Scenario

    Directory of Open Access Journals (Sweden)

    Cristiano Fugazza

    2016-12-01

    Full Text Available Metadata management is an essential enabling factor for geospatial assets because discovery, retrieval, and actual usage of the latter are tightly bound to the quality of these descriptions. Unfortunately, the multi-faceted landscape of metadata formats, requirements, and conventions makes it difficult to identify editing tools that can be easily tailored to the specificities of a given project, workgroup, and Community of Practice. Our solution is a template-driven metadata editing tool that can be customised to any XML-based schema. Its output consists of standards-compliant metadata records that also have a semantics-aware counterpart enabling novel exploitation techniques. Moreover, external data sources can easily be plugged in to provide autocompletion functionalities on the basis of the data structures made available on the Web of Data. Besides presenting the essentials of customisation of the editor by means of two use cases, we extend the methodology to the whole life cycle of geospatial metadata. We demonstrate the novel capabilities enabled by RDF-based metadata representation with respect to traditional metadata management in the geospatial domain.

  4. ONEMercury: Towards Automatic Annotation of Earth Science Metadata

    Science.gov (United States)

    Tuarob, S.; Pouchard, L. C.; Noy, N.; Horsburgh, J. S.; Palanisamy, G.

    2012-12-01

    Earth sciences have become more data-intensive, requiring access to heterogeneous data collected from multiple places, times, and thematic scales. For example, research on climate change may involve exploring and analyzing observational data such as the migration of animals and temperature shifts across the earth, as well as various model-observation inter-comparison studies. Recently, DataONE, a federated data network, was established to facilitate access to and preservation of environmental and ecological data. ONEMercury has recently been implemented as part of the DataONE project to serve as a portal for discovering and accessing environmental and observational data across the globe. ONEMercury harvests metadata from the data hosted by multiple data repositories and makes it searchable via a common search interface built upon cutting-edge search engine technology, allowing users to interact with the system, intelligently filter the search results on the fly, and fetch the data from distributed data sources. Linking data from heterogeneous sources always has a cost. A problem that ONEMercury faces is the varying level of annotation in the harvested metadata records. Poorly annotated records tend to be missed during the search process as they lack meaningful keywords. Furthermore, such records are not compatible with the advanced search functionality offered by ONEMercury, as the interface requires a metadata record to be semantically annotated. The explosion of the number of metadata records harvested from an increasing number of data repositories makes it impossible to annotate the harvested records manually, underscoring the need for a tool capable of automatically annotating poorly curated metadata records. In this paper, we propose a topic-model (TM) based approach for automatic metadata annotation. Our approach mines topics in the set of well annotated records and suggests keywords for poorly annotated records based on topic similarity. 
We utilize the
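
The annotation idea above (suggest keywords for a sparse record from its most similar well-annotated neighbors) can be illustrated with a plain bag-of-words cosine similarity in place of the paper's topic model; the record texts and keywords below are invented:

```python
from collections import Counter
from math import sqrt

def bow(text):
    """Bag-of-words term counts, lowercased, trailing punctuation stripped."""
    return Counter(w.lower().strip(".,") for w in text.split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def suggest_keywords(poor_text, annotated, top_n=2):
    """Rank well-annotated records by similarity to the poorly annotated text
    and pool their keywords; a simplified stand-in for topic similarity."""
    target = bow(poor_text)
    scored = [(cosine(target, bow(r["text"])), i) for i, r in enumerate(annotated)]
    scored.sort(reverse=True)
    suggestions = []
    for score, i in scored[:top_n]:
        if score <= 0:
            continue  # ignore records with no textual overlap at all
        for kw in annotated[i]["keywords"]:
            if kw not in suggestions:
                suggestions.append(kw)
    return suggestions

annotated = [
    {"text": "soil temperature and moisture observations", "keywords": ["soil", "temperature"]},
    {"text": "ocean salinity profiles from moored buoys", "keywords": ["ocean", "salinity"]},
]
print(suggest_keywords("hourly soil temperature records", annotated))
# → ['soil', 'temperature']
```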

  5. Viewing and Editing Earth Science Metadata MOBE: Metadata Object Browser and Editor in Java

    Science.gov (United States)

    Chase, A.; Helly, J.

    2002-12-01

    Metadata is an important, yet often neglected, aspect of successful archival efforts. However, generating robust, useful metadata is often a time-consuming and tedious task. We have been approaching this problem from two directions: first, by automating metadata creation, pulling from known sources of data; and second, as detailed here, by developing user-friendly software for human interaction with the metadata. MOBE and COBE (Metadata Object Browser and Editor, and Canonical Object Browser and Editor, respectively) are Java applications for editing and viewing metadata and digital objects. MOBE has already been designed and deployed, and is currently being integrated into other areas of the SIOExplorer project. COBE is in the design and development stage, being created with the same considerations in mind as those for MOBE. Metadata creation, viewing, data object creation, and data object viewing, when taken on a small scale, are all relatively simple tasks. Computer science, however, has an infamous reputation for transforming the simple into the complex. As a system scales upward to become more robust, new features arise and additional functionality is added to the software being written to manage the system. The software that emerges from such an evolution, though powerful, is often complex and difficult to use. With MOBE the focus is on a tool that does a small number of tasks very well. The result has been an application that enables users to manipulate metadata in an intuitive and effective way. This allows for a tool that serves its purpose without introducing additional cognitive load onto the user, an end goal we continue to pursue.

  6. Normalized Metadata Generation for Human Retrieval Using Multiple Video Surveillance Cameras

    Directory of Open Access Journals (Sweden)

    Jaehoon Jung

    2016-06-01

    Full Text Available Since it is impossible for surveillance personnel to keep monitoring videos from a multiple camera-based surveillance system, an efficient technique is needed to help recognize important situations by retrieving the metadata of an object-of-interest. In a multiple camera-based surveillance system, an object detected in a camera has a different shape in another camera, which is a critical issue of wide-range, real-time surveillance systems. In order to address the problem, this paper presents an object retrieval method by extracting the normalized metadata of an object-of-interest from multiple, heterogeneous cameras. The proposed metadata generation algorithm consists of three steps: (i) generation of a three-dimensional (3D) human model; (ii) human object-based automatic scene calibration; and (iii) metadata generation. More specifically, an appropriately-generated 3D human model provides the foot-to-head direction information that is used as the input of the automatic calibration of each camera. The normalized object information is used to retrieve an object-of-interest in a wide-range, multiple-camera surveillance system in the form of metadata. Experimental results show that the 3D human model matches the ground truth, and automatic calibration-based normalization of metadata enables a successful retrieval and tracking of a human object in the multiple-camera video surveillance system.

  7. Managing ebook metadata in academic libraries taming the tiger

    CERN Document Server

    Frederick, Donna E

    2016-01-01

    Managing ebook Metadata in Academic Libraries: Taming the Tiger tackles the topic of ebooks in academic libraries, a trend that has been welcomed by students, faculty, researchers, and library staff. However, at the same time, the reality of acquiring ebooks, making them discoverable, and managing them presents library staff with many new challenges. Traditional methods of cataloging and managing library resources are no longer relevant where the purchasing of ebooks in packages and demand driven acquisitions are the predominant models for acquiring new content. Most academic libraries have a complex metadata environment wherein multiple systems draw upon the same metadata for different purposes. This complexity makes the need for standards-based interoperable metadata more important than ever. In addition to complexity, the nature of the metadata environment itself typically varies slightly from library to library making it difficult to recommend a single set of practices and procedures which would be releva...

  8. Web Approach for Ontology-Based Classification, Integration, and Interdisciplinary Usage of Geoscience Metadata

    Directory of Open Access Journals (Sweden)

    B Ritschel

    2012-10-01

    Full Text Available The Semantic Web is a W3C approach that integrates the different sources of semantics within documents and services using ontology-based techniques. The main objective of this approach in the geoscience domain is the improvement of understanding, integration, and usage of Earth and space science related web content in terms of data, information, and knowledge for machines and people. The modeling and representation of semantic attributes and relations within and among documents can be realized by human readable concept maps and machine readable OWL documents. The objectives for the usage of the Semantic Web approach in the GFZ data center ISDC project are the design of an extended classification of metadata documents for product types related to instruments, platforms, and projects as well as the integration of different types of metadata related to data product providers, users, and data centers. Sources of content and semantics for the description of Earth and space science product types and related classes are standardized metadata documents (e.g., DIF documents), publications, grey literature, and Web pages. Other sources are information provided by users, such as tagging data and social navigation information. The integration of controlled vocabularies as well as folksonomies plays an important role in the design of well-formed ontologies.

  9. The Development of a Competency Based Food Preparations Curriculum for High School Special Needs Students in New Castle County, Delaware.

    Science.gov (United States)

    Stewart, Richard Lee

    A competency-based culinary arts food preparation curriculum for Delaware high school students with special needs was developed during a project that included the following activities: review of the state's existing culinary arts curriculum for regular education students; incumbent worker survey administered to 24 restaurant…

  10. Tethys Acoustic Metadata Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Tethys database houses the metadata associated with the acoustic data collection efforts by the Passive Acoustic Group. These metadata include dates, locations...

  11. An integrated overview of metadata in ATLAS

    International Nuclear Information System (INIS)

    Gallas, E J; Malon, D; Hawkings, R J; Albrand, S; Torrence, E

    2010-01-01

    Metadata (data about data) arise in many contexts, from many diverse sources, and at many levels in ATLAS. Familiar examples include run-level, luminosity-block-level, and event-level metadata, and, related to processing and organization, dataset-level and file-level metadata, but these categories are neither exhaustive nor orthogonal. Some metadata are known a priori, in advance of data taking or simulation; other metadata are known only after processing, and occasionally, quite late (e.g., detector status or quality updates that may appear after initial reconstruction is complete). Metadata that may seem relevant only internally to the distributed computing infrastructure under ordinary conditions may become relevant to physics analysis under error conditions ('What can I discover about data I failed to process?'). This talk provides an overview of metadata and metadata handling in ATLAS, and describes ongoing work to deliver integrated metadata services in support of physics analysis.

  12. Handling Metadata in a Neurophysiology Laboratory

    Directory of Open Access Journals (Sweden)

    Lyuba Zehl

    2016-07-01

    Full Text Available To date, non-reproducibility of neurophysiological research is a matter of intense discussion in the scientific community. A crucial component to enhance reproducibility is to comprehensively collect and store metadata, that is, all information about the experiment, the data, and the applied preprocessing steps on the data, such that they can be accessed and shared in a consistent and simple manner. However, the complexity of experiments, the highly specialized analysis workflows and a lack of knowledge on how to make use of supporting software tools often overburden researchers to perform such a detailed documentation. For this reason, the collected metadata are often incomplete, incomprehensible for outsiders or ambiguous. Based on our research experience in dealing with diverse datasets, we here provide conceptual and technical guidance to overcome the challenges associated with the collection, organization, and storage of metadata in a neurophysiology laboratory. Through the concrete example of managing the metadata of a complex experiment that yields multi-channel recordings from monkeys performing a behavioral motor task, we practically demonstrate the implementation of these approaches and solutions with the intention that they may be generalized to a specific project at hand. Moreover, we detail five use cases that demonstrate the resulting benefits of constructing a well-organized metadata collection when processing or analyzing the recorded data, in particular when these are shared between laboratories in a modern scientific collaboration. Finally, we suggest an adaptable workflow to accumulate, structure and store metadata from different sources using, by way of example, the odML metadata framework.
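
The hierarchical organization the authors implement with odML can be pictured with a minimal section/property tree. This sketch only imitates the shape of such a hierarchy; it is not the actual odML API, and the experiment values are invented:

```python
class Section:
    """Minimal odML-like container: a named section holds key-value
    properties and nested subsections."""
    def __init__(self, name):
        self.name = name
        self.properties = {}
        self.sections = {}

    def add_section(self, name):
        self.sections[name] = Section(name)
        return self.sections[name]

    def find(self, path):
        """Resolve 'Subject/Species'-style paths to a property value."""
        *section_names, prop = path.split("/")
        node = self
        for s in section_names:
            node = node.sections[s]
        return node.properties[prop]

# Invented metadata for a hypothetical recording session.
exp = Section("Experiment")
subject = exp.add_section("Subject")
subject.properties["Species"] = "Macaca mulatta"
recording = exp.add_section("Recording")
recording.properties["Channels"] = 96

print(exp.find("Subject/Species"))
```

Keeping every detail addressable by path is what makes such a collection shareable between laboratories: a collaborator can ask for "Recording/Channels" without knowing how the files are laid out.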

  13. Enriching The Metadata On CDS

    CERN Document Server

    Chhibber, Nalin

    2014-01-01

    The project report revolves around the open-source software package called Invenio, which provides the tools for management of digital assets in a repository and drives the CERN Document Server. The primary objective is to enhance the existing metadata in CDS with data from other libraries. An implicit part of this task is to manage disambiguation (within incoming data), remove duplicate entries, and handle replications between new and existing records. All such elements and their corresponding changes are integrated within Invenio to make the upgraded metadata available on the CDS. The latter part of the report discusses some changes related to the Invenio code-base itself.

  14. Metadata and Ontologies in Learning Resources Design

    Science.gov (United States)

    Vidal C., Christian; Segura Navarrete, Alejandra; Menéndez D., Víctor; Zapata Gonzalez, Alfredo; Prieto M., Manuel

    Resource design and development requires knowledge about educational goals, instructional context, and learners' characteristics, among others. Metadata are an important source of such knowledge. However, metadata by themselves do not provide all the information needed for resource design. Here we argue the need to use different data and knowledge models to improve understanding of the complex processes related to e-learning resources and their management. This paper presents the use of semantic web technologies, such as ontologies, to support the search and selection of resources used in design. Classification is done based on instructional criteria derived from a knowledge acquisition process, using information provided by the IEEE-LOM metadata standard. The knowledge obtained is represented in an ontology using OWL and SWRL. In this work we give evidence of the implementation of an ontology-based Learning Object Classifier. We demonstrate that the use of ontologies can support design activities in e-learning.

  15. Metadata Life Cycles, Use Cases and Hierarchies

    Directory of Open Access Journals (Sweden)

    Ted Habermann

    2018-05-01

    Full Text Available The historic view of metadata as “data about data” is expanding to include data about other items that must be created, used, and understood throughout the data and project life cycles. In this context, metadata might better be defined as the structured and standard part of documentation, and the metadata life cycle can be described as the metadata content that is required for documentation in each phase of the project and data life cycles. This incremental approach to metadata creation is similar to the spiral model used in software development. Each phase also has distinct users and specific questions to which they need answers. In many cases, the metadata life cycle involves hierarchies where latter phases have increased numbers of items. The relationships between metadata in different phases can be captured through structure in the metadata standard, or through conventions for identifiers. Metadata creation and management can be streamlined and simplified by re-using metadata across many records. Many of these ideas have been developed to various degrees in several Geoscience disciplines and are being used in metadata for documenting the integrated life cycle of environmental research in the Arctic, including projects, collection sites, and datasets.

  16. Active Marine Station Metadata

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Active Marine Station Metadata is a daily metadata report for active marine bouy and C-MAN (Coastal Marine Automated Network) platforms from the National Data...

  17. The Theory and Implementation for Metadata in Digital Library/Museum

    Directory of Open Access Journals (Sweden)

    Hsueh-hua Chen

    1998-12-01

    Full Text Available Digital Libraries and Museums (DL/M) have become one of the important research issues of Library and Information Science as well as other related fields. This paper describes the basic concepts of DL/M and briefly introduces the development of the Taiwan Digital Museum Project. Based on the features of various collections, we discuss how to maintain, manage, and exchange metadata, especially from the viewpoint of users. We propose a draft metadata scheme, MICI (Metadata Interchange for Chinese Information), developed by the ROSS (Resources Organization and Searching Specification) team. Finally, current problems and future development of metadata are discussed. [Article content in Chinese]

  18. A Metadata based Knowledge Discovery Methodology for Seeding Translational Research.

    Science.gov (United States)

    Kothari, Cartik R; Payne, Philip R O

    2015-01-01

    In this paper, we present a semantic, metadata based knowledge discovery methodology for identifying teams of researchers from diverse backgrounds who can collaborate on interdisciplinary research projects: projects in areas that have been identified as high-impact areas at The Ohio State University. This methodology involves the semantic annotation of keywords and the postulation of semantic metrics to improve the efficiency of the path exploration algorithm as well as to rank the results. Results indicate that our methodology can discover groups of experts from diverse areas who can collaborate on translational research projects.

  19. Delaware Bay Database; Delaware Sea Grant College Program, 28 June 1988 (NODC Accession 8900151)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Delaware Bay database contains records of discrete quality observations, collected on 40 oceanographic cruises between May 1978 and October 1985. Each record...

  20. Delaware Basin Monitoring Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Washington Regulatory and Environmental Services; Washington TRU Solutions LLC

    2001-09-28

    The Delaware Basin Drilling Surveillance Program (DBDSP) is designed to monitor drilling activities in the vicinity of the Waste Isolation Pilot Plant (WIPP). This program is based on Environmental Protection Agency (EPA) requirements. EPA requires the Department of Energy (DOE) to demonstrate the expected performance of the disposal system using a probabilistic risk assessment or performance assessment (PA). This PA must show that the expected repository performance will not release radioactive material above limits set by the EPA's standard and must consider inadvertent drilling into the repository at some future time.

  1. Delaware Basin Monitoring Annual Report

    International Nuclear Information System (INIS)

    2001-01-01

    The Delaware Basin Drilling Surveillance Program (DBDSP) is designed to monitor drilling activities in the vicinity of the Waste Isolation Pilot Plant (WIPP). This program is based on Environmental Protection Agency (EPA) requirements. EPA requires the Department of Energy (DOE) to demonstrate the expected performance of the disposal system using a probabilistic risk assessment or performance assessment (PA). This PA must show that the expected repository performance will not release radioactive material above limits set by the EPA's standard and must consider inadvertent drilling into the repository at some future time.

  2. Statistical Data Processing with R – Metadata Driven Approach

    Directory of Open Access Journals (Sweden)

    Rudi SELJAK

    2016-06-01

    Full Text Available In recent years the Statistical Office of the Republic of Slovenia has put a lot of effort into re-designing its statistical process. We replaced the classical stove-pipe oriented production system with general software solutions, based on the metadata-driven approach. This means that one general program code, which is parametrized with process metadata, is used for data processing for a particular survey. Currently, the general program code is entirely based on SAS macros, but in the future we would like to explore how successfully the statistical software R can be used for this approach. The paper describes the metadata-driven principle for data validation, the generic software solution, and the main issues connected with the use of the statistical software R for this approach.
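
The metadata-driven principle (one generic routine parametrized by process metadata instead of per-survey code) can be sketched as follows. The office's real implementation uses SAS macros, and the field names and rules here are invented for illustration:

```python
def validate(record, rules):
    """Generic, survey-agnostic validation: the checks applied are not coded
    per survey but read from the `rules` process metadata."""
    errors = []
    for rule in rules:
        field = rule["field"]
        value = record.get(field)
        if value is None:
            if rule.get("required"):
                errors.append(f"{field}: missing")
            continue
        if "min" in rule and value < rule["min"]:
            errors.append(f"{field}: {value} below {rule['min']}")
        if "max" in rule and value > rule["max"]:
            errors.append(f"{field}: {value} above {rule['max']}")
    return errors

# Process metadata for one hypothetical survey; a second survey would reuse
# validate() unchanged with a different rules list.
rules = [
    {"field": "employees", "required": True, "min": 0},
    {"field": "turnover", "min": 0, "max": 1e9},
]
print(validate({"employees": -3, "turnover": 5000}, rules))
# → ['employees: -3 below 0']
```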

  3. PERANCANGAN SISTEM METADATA UNTUK DATA WAREHOUSE DENGAN STUDI KASUS REVENUE TRACKING PADA PT. TELKOM DIVRE V JAWA TIMUR

    Directory of Open Access Journals (Sweden)

    Yudhi Purwananto

    2004-07-01

    Full Text Available A data warehouse is an enterprise data store that draws on data from multiple systems and can be used for purposes such as analysis and reporting. At PT Telkom Divre V East Java, a data warehouse called the Regional Database has been built. The Regional Database requires an essential data warehouse component: metadata. Simply defined, metadata is "data about data". In this study, a metadata system was designed, with Revenue Tracking as a case study, to serve as the analysis and reporting component of the Regional Database. Metadata is essential for managing and providing information about the data warehouse. The processes within the data warehouse and the components related to it must be integrated with one another to realize the data warehouse characteristics of being subject-oriented, integrated, time-variant, and non-volatile. Metadata must therefore also be able to exchange information among the components of the data warehouse. Web services are used as this exchange mechanism; they communicate using XML technology and the HTTP protocol. With web services, each component
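
The exchange mechanism described in this record, metadata passed between warehouse components as XML payloads over HTTP web services, can be sketched with the standard library. The transport layer is omitted and the record fields are hypothetical:

```python
import xml.etree.ElementTree as ET

def to_xml(record):
    """Serialize a flat metadata record to XML, the payload format a
    web-service exchange would transmit over HTTP (transport omitted)."""
    root = ET.Element("metadata")
    for key, value in record.items():
        ET.SubElement(root, key).text = str(value)
    return ET.tostring(root, encoding="unicode")

def from_xml(payload):
    """Parse the XML payload back into a dict on the receiving component."""
    root = ET.fromstring(payload)
    return {child.tag: child.text for child in root}

record = {"subject": "Revenue Tracking", "source": "Regional Database"}
payload = to_xml(record)
assert from_xml(payload) == record  # lossless round trip between components
```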

  4. A Generic Metadata Editor Supporting System Using Drupal CMS

    Science.gov (United States)

    Pan, J.; Banks, N. G.; Leggott, M.

    2011-12-01

    Metadata handling is a key factor in preserving and reusing scientific data. In recent years, standardized structural metadata has become widely used in Geoscience communities. However, there exist many different standards in Geosciences, such as the current version of the Federal Geographic Data Committee's Content Standard for Digital Geospatial Metadata (FGDC CSDGM), the Ecological Markup Language (EML), the Geography Markup Language (GML), and the emerging ISO 19115 and related standards. In addition, there are many different subsets within the Geoscience subdomain, such as the Biological Profile of the FGDC CSDGM, or for geopolitical regions, such as the European Profile or the North American Profile in the ISO standards. It is therefore desirable to have a software foundation that supports metadata creation and editing for multiple standards and profiles without reinventing the wheel. We have developed a generic, flexible software system to do just that: facilitate support for multiple metadata standards and profiles. The software consists of a set of modules for the Drupal Content Management System (CMS), with minimal inter-dependencies on other Drupal modules. There are two steps in using the system's metadata functions. First, an administrator can use the system to design a user form, based on an XML schema and its instances. The form definition is named and stored in the Drupal database as an XML blob content. Second, users in an editor role can then use the persisted XML definition to render an actual metadata entry form, for creating or editing a metadata record. Behind the scenes, the form definition XML is transformed into a PHP array, which is then rendered via the Drupal Form API. When the form is submitted, the posted values are used to modify a metadata record. Drupal hooks can be used to perform custom processing on the metadata record before and after submission. 
It is trivial to store the metadata record as an actual XML file
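
The two-step flow in this record (persist a form definition, then render and process a metadata entry form from it) is implemented as PHP modules for Drupal; here is a language-neutral sketch of the same idea, with invented field names and plain dicts standing in for the persisted XML blob:

```python
def render_form(definition):
    """Build a blank metadata record from persisted field definitions
    (the Drupal module stores these as an XML blob; dicts stand in here)."""
    return {f["name"]: f.get("default", "") for f in definition["fields"]}

def submit(definition, record, values):
    """Apply posted values to a record, keeping only fields the form
    defines; loosely analogous to handling a form submission."""
    allowed = {f["name"] for f in definition["fields"]}
    record.update({k: v for k, v in values.items() if k in allowed})
    return record

# Hypothetical form definition an administrator might have designed.
definition = {"fields": [{"name": "title"}, {"name": "abstract", "default": "N/A"}]}
record = render_form(definition)          # → {'title': '', 'abstract': 'N/A'}
submit(definition, record, {"title": "Soil survey", "bogus": "x"})
```

Because the form is data rather than code, supporting a new metadata standard means persisting a new definition, not writing a new editor.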

  5. Derivation of Delaware Bay tidal parameters from space shuttle photography

    International Nuclear Information System (INIS)

    Zheng, Quanan; Yan, Xiaohai; Klemas, V.

    1993-01-01

    The tide-related parameters of the Delaware Bay are derived from space shuttle time-series photographs. The water areas in the bay are measured from interpretation maps of the photographs with a CALCOMP 9100 digitizer and the ERDAS Image Processing System. The corresponding tidal levels are calculated using the exposure times annotated on the photographs. From these data, an approximate function relating the water area to the tidal level at a reference point is determined. Based on the function, the water areas of the Delaware Bay at mean high water (MHW) and mean low water (MLW), below 0 m, and for the tidal zone are inferred. With the MHW and MLW areas and the mean tidal range, the authors calculate the tidal influx of the Delaware Bay, which is 2.76 × 10⁹ m³. Furthermore, the velocity of the flood tide at the bay mouth is determined using the tidal flux and an integral of the velocity distribution function at the cross section between Cape Henlopen and Cape May. The result is 132 cm/s, which compares well with the data on tidal current charts.
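
The tidal influx figure follows from a first-order tidal-prism estimate: mean water surface area times mean tidal range. The areas and range below are assumed round numbers chosen only to land near the published order of magnitude; they are not the paper's measured values:

```python
# Hypothetical magnitudes, for illustration only; the paper derives its
# actual areas from the shuttle photographs.
area_mhw = 2.05e9   # m^2, surface area at mean high water (assumed)
area_mlw = 1.75e9   # m^2, surface area at mean low water (assumed)
tidal_range = 1.45  # m, mean tidal range (assumed)

# First-order tidal-prism estimate: mean surface area times tidal range.
mean_area = 0.5 * (area_mhw + area_mlw)
influx = mean_area * tidal_range
print(f"{influx:.2e} m^3")
```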

  6. Why internet-based education?

    Science.gov (United States)

    Gernsbacher, Morton Ann

    2014-01-01

    This essay illustrates five ways that Internet-based higher education can capitalize on fundamental principles of learning. Internet-based education can enable better mastery through distributed (shorter, more frequent) practice rather than massed (longer, less frequent) practice; it can optimize performance because it allows students to learn at their peak time of their day; it can deepen memory because it requires cheat-proof assignments and tests; it can promote critical thinking because it necessitates intellectual winnowing and sifting; and it can enhance writing skills by requiring students to write frequently and for a broad audience.

  7. The impact of the 2002 Delaware smoking ordinance on heart attack and asthma.

    Science.gov (United States)

    Moraros, John; Bird, Yelena; Chen, Shande; Buckingham, Robert; Meltzer, Richard S; Prapasiri, Surasri; Solis, Luis H

    2010-12-01

    In the United States, smoking is the leading cause of death, accounting for approximately 435,000 deaths in 2000, or 8.1% of all US deaths recorded that year. We analyzed the Delaware Hospital Discharge Database and identified state and non-state residents discharged with acute myocardial infarction (AMI) or asthma for the years 1999 to 2004. Statistical data analysis compared the incidence of AMI or asthma for each group before (1999-2002) and after (2003-2004) the amendment. We found that pre-ordinance and post-ordinance quarterly rates of AMI for Delaware residents were 451 (se = 21) and 430 (se = 21), respectively, representing a 4.7% reduction. Over the same time period, there was negligible change in the incidence of AMI for non-Delaware residents. After adjusting for population growth, the risk ratio (RR) for asthma in Delaware residents post-ordinance was 0.95 (95% CI, 0.90 to 0.999), which represented a significant reduction (P = 0.046). By comparison, non-Delaware residents had an increased RR for asthma post-ordinance of 1.62 (95% CI, 1.46 to 1.86). Overall, the ordinance was associated with reduced AMI and asthma in Delaware residents when compared to non-Delaware residents.
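
The risk ratios reported above can be reproduced mechanically from discharge counts with the standard log-RR normal approximation; the counts in this example are invented, not Delaware data:

```python
from math import exp, log, sqrt

def risk_ratio(events_a, total_a, events_b, total_b):
    """Risk ratio of group A vs. group B with a 95% CI computed via the
    usual normal approximation on the log scale."""
    rr = (events_a / total_a) / (events_b / total_b)
    se = sqrt(1/events_a - 1/total_a + 1/events_b - 1/total_b)
    lo, hi = exp(log(rr) - 1.96 * se), exp(log(rr) + 1.96 * se)
    return rr, (lo, hi)

# Hypothetical post- vs. pre-ordinance asthma discharge counts.
rr, ci = risk_ratio(380, 400000, 400, 400000)
```

A CI that excludes 1.0 is what marks a change like the reported 0.95 (0.90 to 0.999) as statistically significant.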

  8. Internet-based instruction in college teaching

    Science.gov (United States)

    Flickinger, Kathleen Anne

    Distance education and Internet instruction are increasingly being used in college science teaching. In an effort to reach more students, Iowa State University's Human Anatomy and Physiology course was offered via Internet as well as via traditional lecture format. To assess the educational ramifications of this offering, three studies were conducted. In the first study, a collective case study approach was utilized to describe the learning environment created by an Internet-based college science course. In this study, three students were followed as they worked their way through the course. Collective case study methodologies were used to provide a rich description of the learning environment experienced by these students. Motivation, computer savvy, and academic and personal self-confidence appeared to impact the satisfaction level of the students enrolled in the class. To evaluate the effectiveness of the learning environment offered through the Internet-based science course, a quantitative comparison study was undertaken. In this study a comparison of achievement scores and study habits between students enrolled in the Internet-based class and those enrolled in the traditional section was made. Results from this study indicated that content understanding and retention did not appear to be affected by the type of instruction. Desirable study habits were reportedly used more frequently in the Internet section of the class than in the traditional class. To complete the description of the Internet course experience, a qualitative examination of Internet instructors' time commitment and level of teaching satisfaction was conducted. Data for this study consisted of interviews and researcher observations. Instructor time-on-task was initially quite high, and remained above the time spent on comparable face-to-face instruction in subsequent semesters. Additionally, the role of the faculty member changed dramatically, causing some lessening of job satisfaction. Taken as

  9. The effect of Delaware law on firm value: Evidence from poison pill adoptions

    Directory of Open Access Journals (Sweden)

    Terry L. Campbell II

    2010-07-01

    Full Text Available As the leading location for firm incorporations and corporate law, Delaware occupies a unique place in corporate governance and control. In this paper, we provide fresh evidence on whether Delaware’s dominance arises from its takeover laws being in the best interest of shareholders versus managers by investigating the role of the state in which a firm is incorporated on the firm’s adoption of a poison pill. Our results indicate that announcements of adoptions of poison pills by Delaware firms are associated with returns not significantly different from those for non-Delaware firms. Moreover, Delaware firms that adopt poison pills are no more likely to receive a takeover bid, be successfully acquired, or receive better merger terms than non-Delaware firms. Overall, it appears that Delaware law, with regard to takeovers, promotes an environment consistent with a “race to the middle” philosophy, neutral to management and shareholders.

  10. Mercury- Distributed Metadata Management, Data Discovery and Access System

    Science.gov (United States)

    Palanisamy, Giri; Wilson, Bruce E.; Devarakonda, Ranjeet; Green, James M.

    2007-12-01

    Mercury is a federated metadata harvesting, search and retrieval tool based on both open source and ORNL-developed software. It was originally developed for NASA, and the Mercury development consortium now includes funding from NASA, USGS, and DOE. Mercury supports various metadata standards including XML, Z39.50, FGDC, Dublin-Core, Darwin-Core, EML, and ISO-19115 (under development). Mercury provides a single portal to information contained in disparate data management systems. It collects metadata and key data from contributing project servers distributed around the world and builds a centralized index. The Mercury search interfaces then allow the users to perform simple, fielded, spatial and temporal searches across these metadata sources. This centralized repository of metadata with distributed data sources provides extremely fast search results to the user, while allowing data providers to advertise the availability of their data and maintain complete control and ownership of that data. Mercury supports various projects including: ORNL DAAC, NBII, DADDI, LBA, NARSTO, CDIAC, OCEAN, I3N, IAI, ESIP and ARM. The new Mercury system is based on a Service Oriented Architecture and supports various services such as Thesaurus Service, Gazetteer Web Service and UDDI Directory Services. This system also provides various search services including: RSS, Geo-RSS, OpenSearch, Web Services and Portlets. Other features include: Filtering and dynamic sorting of search results, book-markable search results, save, retrieve, and modify search criteria.
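
    The harvest-then-index pattern this abstract describes can be sketched in a few lines. Everything below is illustrative: the provider names are contributing projects mentioned above, but the record fields and function names are hypothetical, not Mercury's actual schema or API.

```python
# Illustrative federated-harvest pattern: providers keep their data; only
# metadata is copied into one central index, tagged with its provider so
# ownership stays visible. Fields and function names are hypothetical.

def harvest(providers):
    """Pull metadata records from every provider into a central index."""
    index = []
    for name, records in providers.items():
        for rec in records:
            entry = dict(rec)           # copy the metadata only
            entry["provider"] = name    # remember where the data lives
            index.append(entry)
    return index

def fielded_search(index, **criteria):
    """Fielded search: every given field must match, case-insensitively."""
    return [rec for rec in index
            if all(str(rec.get(k, "")).lower() == str(v).lower()
                   for k, v in criteria.items())]

# Two of the contributing projects named above, with made-up records.
providers = {
    "ORNL-DAAC": [{"title": "Leaf area index", "keyword": "vegetation"}],
    "NARSTO":    [{"title": "Ozone profiles", "keyword": "atmosphere"}],
}
index = harvest(providers)
hits = fielded_search(index, keyword="vegetation")
```

    A production system would add the spatial and temporal filters and expose the index through OpenSearch or RSS, as the abstract notes.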

  11. Delaware State Briefing Book on low-level radioactive-waste management

    International Nuclear Information System (INIS)

    1981-07-01

    The Delaware State Briefing Book is one of a series of state briefing books on low-level radioactive waste management practices. It has been prepared to assist state and federal agency officials in planning for safe low-level radioactive waste disposal. The report contains a profile of low-level radioactive waste generators in Delaware. The profile is the result of a survey of NRC licensees in Delaware. The briefing book also contains a comprehensive assessment of low-level radioactive waste management issues and concerns as defined by all major interested parties including industry, government, the media, and interest groups. The assessment was developed through personal communications with representatives of interested parties and through a review of media sources. Lastly, the briefing book provides demographic and socioeconomic data and a discussion of relevant government agencies and activities, all of which may impact waste management practices in Delaware

  12. XML for catalogers and metadata librarians

    CERN Document Server

    Cole, Timothy W

    2013-01-01

    How are today's librarians to manage and describe the ever-expanding volumes of resources, in both digital and print formats? The use of XML in cataloging and metadata workflows can improve metadata quality, the consistency of cataloging workflows, and adherence to standards. This book is intended to enable current and future catalogers and metadata librarians to progress beyond a bare surface-level acquaintance with XML, thereby enabling them to integrate XML technologies more fully into their cataloging workflows. Building on the wealth of work on library descriptive practices, cataloging, and metadata, XML for Catalogers and Metadata Librarians explores the use of XML to serialize, process, share, and manage library catalog and metadata records. The authors' expert treatment of the topic is written to be accessible to those with little or no prior practical knowledge of or experience with how XML is used. Readers will gain an educated appreciation of the nuances of XML and grasp the benefit of more advanced ...
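
    As a concrete taste of the serialization the book covers, a bibliographic record can be emitted as MARCXML-style XML with nothing but the standard library. This is a minimal sketch: the 245/100 field tags follow MARC 21 convention, but the record content is illustrative and omits the leader and control fields a complete MARCXML record requires.

```python
# Minimal MARCXML-flavored serialization using only the standard library.
# Field tags follow MARC 21 (245 = title statement, 100 = main author);
# the leader and control fields of a full record are omitted here.
import xml.etree.ElementTree as ET

MARC_NS = "http://www.loc.gov/MARC21/slim"

def make_record(title, author):
    ET.register_namespace("", MARC_NS)  # serialize with a default namespace
    record = ET.Element(f"{{{MARC_NS}}}record")
    for tag, code, value in [("245", "a", title), ("100", "a", author)]:
        field = ET.SubElement(record, f"{{{MARC_NS}}}datafield",
                              {"tag": tag, "ind1": " ", "ind2": " "})
        subfield = ET.SubElement(field, f"{{{MARC_NS}}}subfield",
                                 {"code": code})
        subfield.text = value
    return ET.tostring(record, encoding="unicode")

xml_out = make_record("XML for Catalogers and Metadata Librarians",
                      "Cole, Timothy W.")
```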

  13. Security in a Replicated Metadata Catalogue

    CERN Document Server

    Koblitz, B

    2007-01-01

    The gLite-AMGA metadata catalogue has been developed by NA4 to provide simple relational metadata access for the EGEE user community. As advanced features, which will be the focus of this presentation, AMGA provides very fine-grained security also in connection with the built-in support for replication and federation of metadata. AMGA is extensively used by the biomedical community to store medical image metadata, by digital libraries, in HEP for logging and bookkeeping data, and in the climate community. The biomedical community intends to deploy a distributed metadata system for medical images consisting of various sites, which range from hospitals to computing centres. Only safe sharing of the highly sensitive metadata as provided in AMGA makes such a scenario possible. Other scenarios are digital libraries, which federate copyright protected (meta-) data into a common catalogue. The biomedical and digital libraries have been deployed using a centralized structure already for some time. They now intend to decentralize ...

  14. Why Internet-based Education?

    Directory of Open Access Journals (Sweden)

    Morton Ann Gernsbacher

    2015-01-01

    Full Text Available This essay illustrates five ways that Internet-based higher education can capitalize on fundamental principles of learning. Internet-based education can enable better mastery through distributed (shorter, more frequent) practice rather than massed (longer, less frequent) practice; it can optimize performance because it allows students to learn at their peak time of their day; it can deepen memory because it requires cheat-proof assignments and tests; it can promote critical thinking because it necessitates intellectual winnowing and sifting; and it can enhance writing skills by requiring students to write frequently and for a broad audience.

  15. A Collaborative Study of Disproportionate Chemical Risks in Seven Delaware Communities

    Science.gov (United States)

    Dryden, O.; Goldman, G. T.; White, R.; Moore, D.; Roberts, M.; Thomas, J.; Johnson, C.

    2017-12-01

    Studies have found that, compared to national averages, a significantly greater percentage of Blacks (African-Americans), Latinos (Hispanics), and people at or near poverty levels tend to live near industrial facilities that use large quantities of toxic chemicals and present a risk of major chemical disasters with potentially severe consequences for nearby communities. The Union of Concerned Scientists, the Environmental Justice Health Alliance for Chemical Policy Reform, and Delaware Concerned Residents for Environmental Justice collaborated on a study to examine the potential for cumulative impacts from health and safety risks for seven Delaware communities with a percentage of people of color and/or poverty levels greater than the Delaware average located along an industrial corridor in the northern portion of Delaware's New Castle County. These risks include close proximity to major industrial sources, as well as facilities that use large quantities of toxic, flammable or explosive chemicals and pose a high risk of a major chemical release or catastrophic incident. Additionally, proximity to contaminated waste sites was assessed, as well as the risk of cancer and potential for respiratory disease impacts from exposure to toxic air pollution. We found that people in these seven communities face a substantial cumulative health risk from exposure to toxic air pollution, proximity to polluting industrial facilities and hazardous chemical facilities, as well as contaminated waste sites. These health risks are substantially greater when compared to a wealthier and predominantly White Delaware community and for Delaware as a whole. Significant and expedited improvements in regulatory and public policy are needed at the national, state, and municipal levels to address the health and well-being of at-risk communities in Delaware and elsewhere.

  16. Mdmap: A Tool for Metadata Collection and Matching

    Directory of Open Access Journals (Sweden)

    Rico Simke

    2014-10-01

    Full Text Available This paper describes a front-end for the semi-automatic collection, matching, and generation of bibliographic metadata obtained from different sources for use within a digitization architecture. The Library of a Billion Words project is building an infrastructure for digitizing text that requires high-quality bibliographic metadata, but currently only sparse metadata from digitized editions is available. The project’s approach is to collect metadata for each digitized item from as many sources as possible. An expert user can then use an intuitive front-end tool to choose matching metadata. The collected metadata are centrally displayed in an interactive grid view. The user can choose which metadata they want to assign to a certain edition, and export these data as MARCXML. This paper presents a new approach to bibliographic work and metadata correction. We try to achieve a high quality of the metadata by generating a large amount of metadata to choose from, as well as by giving librarians an intuitive tool to manage their data.
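
    The "choose matching metadata" step can be approximated with simple string similarity. The sketch below ranks candidate records from several hypothetical catalogs against an item's known title using difflib; it is a stand-in for whatever matching logic Mdmap actually uses, and all catalog names and titles are invented.

```python
# Rank candidate records from several sources against an item's known
# title. difflib's SequenceMatcher is a simple stand-in for a real
# matcher; the catalog names and titles below are invented.
from difflib import SequenceMatcher

def rank_candidates(item_title, candidates):
    """Return candidates sorted by title similarity, best match first."""
    def score(rec):
        return SequenceMatcher(None, item_title.lower(),
                               rec["title"].lower()).ratio()
    return sorted(candidates, key=score, reverse=True)

candidates = [
    {"source": "catalog-A", "title": "Faust. Eine Tragoedie"},
    {"source": "catalog-B", "title": "Faust: eine Tragoedie / Goethe"},
    {"source": "catalog-C", "title": "Werther"},
]
ranked = rank_candidates("Faust. Eine Tragoedie", candidates)
```

    In the workflow the abstract describes, an expert user would confirm the top-ranked record in the grid view before exporting it as MARCXML.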

  17. Provenance metadata gathering and cataloguing of EFIT++ code execution

    Energy Technology Data Exchange (ETDEWEB)

    Lupelli, I., E-mail: ivan.lupelli@ccfe.ac.uk [CCFE, Culham Science Centre, Abingdon, Oxon OX14 3DB (United Kingdom); Muir, D.G.; Appel, L.; Akers, R.; Carr, M. [CCFE, Culham Science Centre, Abingdon, Oxon OX14 3DB (United Kingdom); Abreu, P. [Instituto de Plasmas e Fusão Nuclear, Instituto Superior Técnico, Universidade de Lisboa, 1049-001 Lisboa (Portugal)

    2015-10-15

    Highlights: • An approach for automatic gathering of provenance metadata has been presented. • A provenance metadata catalogue has been created. • The overhead in the code runtime is less than 10%. • The metadata/data size ratio is about ∼20%. • A visualization interface based on Gephi has been presented. - Abstract: Journal publications, as the final product of research activity, are the result of an extensive complex modeling and data analysis effort. It is of paramount importance, therefore, to capture the origins and derivation of the published data in order to achieve high levels of scientific reproducibility, transparency, internal and external data reuse and dissemination. The consequence of the modern research paradigm is that high performance computing and data management systems, together with metadata cataloguing, have become crucial elements within the nuclear fusion scientific data lifecycle. This paper describes an approach to the task of automatically gathering and cataloguing provenance metadata, currently under development and testing at Culham Centre for Fusion Energy. The approach is being applied to a machine-agnostic code that calculates the axisymmetric equilibrium force balance in tokamaks, EFIT++, as a proof of principle test. The proposed approach avoids any code instrumentation or modification. It is based on the observation and monitoring of input preparation, workflow and code execution, system calls, log file data collection and interaction with the version control system. Pre-processing, post-processing, and data export and storage are monitored during the code runtime. Input data signals are captured using a data distribution platform called IDAM. The final objective of the catalogue is to create a complete description of the modeling activity, including user comments, and the relationship between data output, the main experimental database and the execution environment. For an intershot or post-pulse analysis (∼1000

  18. Provenance metadata gathering and cataloguing of EFIT++ code execution

    International Nuclear Information System (INIS)

    Lupelli, I.; Muir, D.G.; Appel, L.; Akers, R.; Carr, M.; Abreu, P.

    2015-01-01

    Highlights: • An approach for automatic gathering of provenance metadata has been presented. • A provenance metadata catalogue has been created. • The overhead in the code runtime is less than 10%. • The metadata/data size ratio is about ∼20%. • A visualization interface based on Gephi has been presented. - Abstract: Journal publications, as the final product of research activity, are the result of an extensive complex modeling and data analysis effort. It is of paramount importance, therefore, to capture the origins and derivation of the published data in order to achieve high levels of scientific reproducibility, transparency, internal and external data reuse and dissemination. The consequence of the modern research paradigm is that high performance computing and data management systems, together with metadata cataloguing, have become crucial elements within the nuclear fusion scientific data lifecycle. This paper describes an approach to the task of automatically gathering and cataloguing provenance metadata, currently under development and testing at Culham Centre for Fusion Energy. The approach is being applied to a machine-agnostic code that calculates the axisymmetric equilibrium force balance in tokamaks, EFIT++, as a proof of principle test. The proposed approach avoids any code instrumentation or modification. It is based on the observation and monitoring of input preparation, workflow and code execution, system calls, log file data collection and interaction with the version control system. Pre-processing, post-processing, and data export and storage are monitored during the code runtime. Input data signals are captured using a data distribution platform called IDAM. The final objective of the catalogue is to create a complete description of the modeling activity, including user comments, and the relationship between data output, the main experimental database and the execution environment. For an intershot or post-pulse analysis (∼1000
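
    The observation-based approach described in this record (no code instrumentation, just monitoring of inputs, execution, and outcome) can be sketched in miniature. The snippet below is a hypothetical illustration, not the CCFE tooling: it hashes declared input files, runs a command unmodified, and records timing and exit status as a provenance dictionary; the real system additionally monitors system calls, log files, and the IDAM data platform.

```python
# Hypothetical sketch of observation-based provenance capture: hash the
# declared inputs, run the code unmodified, and record timing and outcome.
# The input file and command here are invented for the example.
import hashlib, json, os, subprocess, sys, time

def run_with_provenance(cmd, input_files):
    """Run cmd and return a provenance record; the code itself is untouched."""
    record = {"command": list(cmd), "cwd": os.getcwd(),
              "inputs": {}, "started": time.time()}
    for path in input_files:
        with open(path, "rb") as f:
            record["inputs"][path] = hashlib.sha256(f.read()).hexdigest()
    result = subprocess.run(cmd, capture_output=True, text=True)
    record["finished"] = time.time()
    record["exit_code"] = result.returncode
    return record

with open("input.dat", "w") as f:          # a stand-in input signal
    f.write("plasma current = 1.2 MA\n")
prov = run_with_provenance([sys.executable, "-c", "print('equilibrium ok')"],
                           ["input.dat"])
prov_json = json.dumps(prov, indent=2)     # ready for a metadata catalogue
```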

  19. The essential guide to metadata for books

    CERN Document Server

    Register, Renee

    2013-01-01

    In The Essential Guide to Metadata for Books, you will learn exactly what you need to know to effectively generate, handle and disseminate metadata for books and ebooks. This comprehensive but digestible document will explain the life-cycle of book metadata, industry standards, XML, ONIX and the essential elements of metadata. It will also show you how effective, well-organized metadata can improve your efforts to sell a book, especially when it comes to marketing, discoverability and converting at the point of sale. This information-packed document also includes a glossary of terms

  20. Characteristics of Wind Generated Waves in the Delaware Estuary

    Science.gov (United States)

    Chen, J. L.; Ralston, D. K.; Geyer, W. R.; Chant, R. J.; Sommerfield, C. K.

    2016-02-01

    Coastal marshes provide important services for human uses such as fishing, recreation, ports and marine operations. Bombay Hook Wildlife Refuge, located along the western shore of the Delaware Estuary, has experienced substantial loss of salt marsh in recent decades. To evaluate the importance of different mechanisms which cause observed shoreline retreat, wave gauges were deployed along the dredged navigation channel and shoreline in the Delaware Estuary. A coupled wave and circulation modeling system (SWAN/ROMS) based on the most recent bathymetry (last updated 2013) is validated with waves observed during both calm and energetic conditions in November 2015. Simulation results based on different model parameterizations of whitecapping, bottom friction and the wind input source are compared. The tendency of observed wave steepness is more similar to a revised whitecapping source term [Westhuysen, 2007] than the default in the SWAN model. Both model results and field data show that the generation/dissipation of waves in the Delaware Estuary is determined by the local wind speed and channel depth. Whitecapping-induced energy dissipation is dominant in the channel, while dissipation due to bottom friction and depth-induced breaking becomes important on lateral shoals. To characterize the effects of wind fetch on waves in estuaries more generally, simulations with an idealized domain and varying wind conditions are compared and the results are expressed in terms of non-dimensional parameters. The simulations based on a 10-m-deep uniform idealized channel show that the dissipation of waves is mainly controlled by whitecapping in all wind conditions. Under strong wind conditions (wind speed >10 m/s) the effect of bottom friction becomes important, so the simulated wave heights are no longer linearly correlated with wind speed.

  1. Water quality in the surficial aquifer near agricultural areas in the Delaware Coastal Plain, 2014

    Science.gov (United States)

    Fleming, Brandon J.; Mensch, Laura L.; Denver, Judith M.; Cruz, Roberto M.; Nardi, Mark R.

    2017-07-27

    The U.S. Geological Survey, in cooperation with the Delaware Department of Agriculture, developed a network of wells to monitor groundwater quality in the surficial aquifer of the Delaware Coastal Plain. Well-drained soils, a flat landscape, and accessible water in the Delaware Coastal Plain make for a productive agricultural setting. As such, agriculture is one of the largest industries in the State of Delaware. This setting enables the transport of chemicals from agriculture and other land uses to shallow groundwater. Efforts to mitigate nutrient transport to groundwater by the implementation of agricultural best management practices (BMPs) have been ongoing for several decades. To measure the effectiveness of BMPs on a regional scale, a network of 48 wells was designed to measure shallow groundwater quality (particularly nitrate) over time near agricultural land in the Delaware Coastal Plain. Water characteristics, major ions, nutrients, and dissolved gases were measured in groundwater samples collected from network wells during fall 2014. Wells were organized into three groups based on their geochemical similarity and these groups were used to describe nitrate and chloride concentrations and factors that affect the variability among the groups. The results from this study are intended to establish water-quality conditions in 2014 to enable comparison of future conditions and evaluate the effectiveness of agricultural BMPs on a regional scale.

  2. Delaware's Forests 2008

    Science.gov (United States)

    Tonya W. Lister; Glenn Gladders; Charles J. Barnett; Gary J. Brand; Brett J. Butler; Susan J. Crocker; Grant M. Domke; Douglas M. Griffith; Mark A. Hatfield; Cassandra M. Kurtz; Andrew J. Lister; Randall S. Morin; W. Keith Moser; Mark D. Nelson; Charles H. Perry; Ronald J. Piva; Rachel Riemann; Christopher W. Woodall

    2012-01-01

    The fifth full inventory of Delaware's forests reports an 8 percent decrease in the area of forest land to 352,000 acres, which covers 28 percent of the State's land area and has a volume of approximately 2,352 cubic feet per acre. Twenty-one percent of the growing-stock volume is red maple, followed by sweetgum (13 percent), and loblolly pine (12 percent)....

  3. National Status and Trends: Bioeffects Assessment Program, Delaware Bay Summary Database (1997)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This study was based on the sediment quality triad (SQT) approach. A stratified probabilistic sampling design was utilized to characterize the Delaware Bay system in...

  4. Internet-Based Communication

    Science.gov (United States)

    Gernsbacher, Morton Ann

    2014-01-01

    Google the question, "How is the Internet changing the way we communicate?," and you will find no shortage of opinions, or fears, about the Internet altering the way we communicate. Although the Internet is not necessarily making communication briefer (neither is the Internet making communication less formal), the Internet is manifesting…

  5. A snapshot of tobacco-related messages relayed in pediatric offices in Delaware.

    Science.gov (United States)

    Feinson, Judith; Raughley, Erin; Chang, Christine D; Chidekel, Aaron

    2003-10-01

    Much research exists demonstrating that pediatricians should counsel patients and families about tobacco. However, few data are available about tobacco-related messages relayed in pediatric offices. Since an anti-tobacco office environment can be a strong component of an active tobacco prevention program, we evaluated pediatric offices in Delaware to characterize tobacco-related messages. A convenience sample of 32 of 63 (51%) pediatric offices in Delaware was directly evaluated for the presence of tobacco-related messages. Fifty-five of 63 (87%) pediatric practices in Delaware were contacted by telephone to inquire about the presence of a tobacco coordinator. The 32 practices represented 71 physicians, were located in all three counties throughout the state, and were urban and non-urban in setting. The same investigator evaluated practices in a single site visit. All were located in smoke-free buildings. At one office, people were seen smoking outside; however, the presence of discarded cigarettes was much more common. Thirteen practices (41%) employed smokers, most of whom smoked outside during work hours. Twenty-one of 28 practices (75%) had waiting room magazines containing tobacco advertisements. Fifteen practices (47%) offered anti-tobacco literature while six practices (19%) displayed visual media, none exclusively addressing tobacco. Nine practices (28%) use chart flags to identify smokers. None of 55 pediatric practices in Delaware contacted by telephone identified an office tobacco prevention coordinator. Our data indicate that, in Delaware, the pediatric offices we visited overall convey a limited message about tobacco and could strengthen tobacco prevention strategies. Research measuring the impact of office-based anti-tobacco messages is needed. If these messages are effective in preventing tobacco use, practitioners can supplement active counseling with indirect interventions that require minimal maintenance once established and that place no

  6. Metadata Design in the New PDS4 Standards - Something for Everybody

    Science.gov (United States)

    Raugh, Anne C.; Hughes, John S.

    2015-11-01

    The Planetary Data System (PDS) archives, supports, and distributes data of diverse targets, from diverse sources, to diverse users. One of the core problems addressed by the PDS4 data standard redesign was that of metadata - how to accommodate the increasingly sophisticated demands of search interfaces, analytical software, and observational documentation into label standards without imposing limits and constraints that would impinge on the quality or quantity of metadata that any particular observer or team could supply. And yet, as an archive, PDS must have detailed documentation for the metadata in the labels it supports, or the institutional knowledge encoded into those attributes will be lost - putting the data at risk. The PDS4 metadata solution is based on a three-step approach. First, it is built on two key ISO standards: ISO 11179 "Information Technology - Metadata Registries", which provides a common framework and vocabulary for defining metadata attributes; and ISO 14721 "Space Data and Information Transfer Systems - Open Archival Information System (OAIS) Reference Model", which provides the framework for the information architecture that enforces the object-oriented paradigm for metadata modeling. Second, PDS has defined a hierarchical system that allows it to divide its metadata universe into namespaces ("data dictionaries", conceptually), and more importantly to delegate stewardship for a single namespace to a local authority. This means that a mission can develop its own data model with a high degree of autonomy and effectively extend the PDS model to accommodate its own metadata needs within the common ISO 11179 framework. Finally, within a single namespace - even the core PDS namespace - existing metadata structures can be extended and new structures added to the model as new needs are identified. This poster illustrates the PDS4 approach to metadata management and highlights the expected return on the development investment for PDS, users and data
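
    The namespace-delegation idea in this abstract can be illustrated with a toy registry: namespaces map to stewards, and attributes may only be defined within a registered namespace. The class, namespace prefixes, and attribute names below are invented for illustration; they are not actual PDS4 dictionaries.

```python
# Toy metadata registry: namespaces ("data dictionaries") are delegated to
# stewards, and attributes can only be defined inside a registered
# namespace. Prefixes and attribute names are invented for illustration.
class MetadataRegistry:
    def __init__(self):
        self.namespaces = {}   # prefix -> steward authority
        self.attributes = {}   # (prefix, attribute name) -> definition

    def register_namespace(self, prefix, steward):
        self.namespaces[prefix] = steward

    def define_attribute(self, prefix, name, definition):
        if prefix not in self.namespaces:
            raise KeyError(f"unknown namespace: {prefix}")
        self.attributes[(prefix, name)] = definition

reg = MetadataRegistry()
reg.register_namespace("pds", "PDS core")            # the common model
reg.register_namespace("insight", "InSight mission") # delegated steward
reg.define_attribute("pds", "start_date_time", "Observation start (UTC)")
reg.define_attribute("insight", "sol_number", "Martian day of the mission")
```

    The point of the design is that the mission defines `sol_number` in its own namespace, extending the model without touching the core dictionary.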

  7. 40 CFR 81.55 - Northeast Pennsylvania-Upper Delaware Valley Interstate Air Quality Control Region.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 17 2010-07-01 2010-07-01 false Northeast Pennsylvania-Upper Delaware... Designation of Air Quality Control Regions § 81.55 Northeast Pennsylvania-Upper Delaware Valley Interstate Air Quality Control Region. The Northeast Pennsylvania-Upper Delaware Valley Interstate Air Quality Control...

  8. Flood-inundation maps for the West Branch Delaware River, Delhi, New York, 2012

    Science.gov (United States)

    Coon, William F.; Breaker, Brian K.

    2012-01-01

    Digital flood-inundation maps for a 5-mile reach of the West Branch Delaware River through the Village and part of the Town of Delhi, New York, were created by the U.S. Geological Survey (USGS) in cooperation with the Village of Delhi, the Delaware County Soil and Water Conservation District, and the Delaware County Planning Department. The inundation maps, which can be accessed through the USGS Flood Inundation Mapping Science Web site at http://water.usgs.gov/osw/flood_inundation/ and the Federal Flood Inundation Mapper Web site at http://wim.usgs.gov/FIMI/FloodInundationMapper.html, depict estimates of the areal extent and depth of flooding corresponding to selected water levels (stages) referenced to the USGS streamgage at West Branch Delaware River upstream from Delhi, N.Y. (station number 01421900). In this study, flood profiles were computed for the stream reach by means of a one-dimensional step-backwater model that had been used to produce the flood insurance rate maps for the most recent flood insurance study for the Town and Village of Delhi. This hydraulic model was used to compute 10 water-surface profiles for flood stages at 1-foot (ft) intervals referenced to the streamgage datum and ranging from 7 ft or near bankfull to 16 ft, which exceeds the stages that correspond to both the estimated 0.2-percent annual-exceedance-probability flood (500-year recurrence interval flood) and the maximum recorded peak flow. The simulated water-surface profiles were then combined with a geographic information system (GIS) digital elevation model, which was derived from Light Detection and Ranging (LiDAR) data with a 1.2-ft (0.61-ft root mean squared error) vertical accuracy and 3.3-ft (1-meter) horizontal resolution, to delineate the area flooded at each water level. A map that was produced using this method to delineate the inundated area for the flood that occurred on August 28, 2011, agreed well with high-water marks that had been located in the field using a

  9. Residential Energy Efficiency Potential: Delaware

    Energy Technology Data Exchange (ETDEWEB)

    Wilson, Eric J [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-11-16

    Energy used by Delaware single-family homes that can be saved through cost-effective improvements. Prepared by Eric Wilson and Noel Merket, NREL, and Erin Boyd, U.S. Department of Energy Office of Energy Policy and Systems Analysis.

  10. 78 FR 39601 - Safety Zone, Sugar House Casino Fireworks Display, Delaware River; Philadelphia, PA

    Science.gov (United States)

    2013-07-02

    ...-AA00 Safety Zone, Sugar House Casino Fireworks Display, Delaware River; Philadelphia, PA AGENCY: Coast... the Delaware River. Sugar House Casino has contracted with Pyrotecnico Fireworks to arrange for this display. The Captain of the Port, Sector Delaware Bay, has determined that the Sugar House Casino...

  11. The Impact of the 2002 Delaware Smoking Ordinance on Heart Attack and Asthma

    Directory of Open Access Journals (Sweden)

    Luis H. Solis

    2010-12-01

    Full Text Available In the United States, smoking is the leading cause of death, causing approximately 435,000 deaths in 2000 and accounting for 8.1% of all US deaths recorded that year. Consequently, we analyzed the Delaware Hospital Discharge Database, and identified state and non-state residents discharged with AMI or asthma for the years 1999 to 2004. Statistical data analysis compared the incidence of AMI or asthma for each group before (1999–2002) and after (2003–2004) the amendment. As a result, we found that pre-ordinance and post-ordinance quarterly rates of AMI for Delaware residents were 451 (se = 21) and 430 (se = 21), respectively, representing a 4.7% reduction. Over the same time period, there was negligible change in the incidence of AMI for non-Delaware residents. After adjusting for population growth, the Risk Ratio (RR) for asthma in Delaware residents post-ordinance was 0.95 (95% CI, 0.90 to 0.999), which represented a significant reduction (P = 0.046). By comparison, non-Delaware residents had an increased RR for asthma post-ordinance of 1.62 (95% CI, 1.46 to 1.86; P < 0.0001). The results suggest that Delaware’s comprehensive non-smoking ordinance was associated with a statistically significant decrease in the incidence of AMI and asthma in Delaware residents when compared to non-Delaware residents.

  12. Optimising metadata workflows in a distributed information environment

    OpenAIRE

    Robertson, R. John; Barton, Jane

    2005-01-01

    The different purposes present within a distributed information environment create the potential for repositories to enhance their metadata by capitalising on the diversity of metadata available for any given object. This paper presents three conceptual reference models required to achieve this optimisation of metadata workflow: the ecology of repositories, the object lifecycle model, and the metadata lifecycle model. It suggests a methodology for developing the metadata lifecycle model, and ...

  13. Internet based benchmarking

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Nielsen, Kurt

    2005-01-01

    We discuss the design of interactive, internet based benchmarking using parametric (statistical) as well as nonparametric (DEA) models. The user receives benchmarks and improvement potentials. The user is also given the possibility to search different efficiency frontiers and hereby to explore...

  14. The open research system: a web-based metadata and data repository for collaborative research

    Science.gov (United States)

    Charles M. Schweik; Alexander Stepanov; J. Morgan Grove

    2005-01-01

    Beginning in 1999, a web-based metadata and data repository we call the "open research system" (ORS) was designed and built to assist geographically distributed scientific research teams. The purpose of this innovation was to promote the open sharing of data within and across organizational lines and across geographic distances. As the use of the system...

  15. [Differences in access to Internet and Internet-based information seeking according to the type of psychiatric disorder].

    Science.gov (United States)

    Brunault, P; Bray, A; Rerolle, C; Cognet, S; Gaillard, P; El-Hage, W

    2017-04-01

The Internet has become a major tool for patients to search for health-related information and to communicate about health. We currently lack data on how patients with psychiatric disorders access and use the Internet to search for information on their mental health. This study aimed to assess, in patients followed for a psychiatric disorder (schizophrenia, bipolar disorder, mood and anxiety disorders, substance-related and addictive disorders, and eating disorders), the prevalence of Internet access and use, and patient expectations and needs regarding the use of the Internet to search for mental-health information, depending on the psychiatric disorder. We conducted this cross-sectional study between May 2013 and July 2013 in 648 patients receiving psychiatric care in 8 hospitals from the Region Centre, France. We used multivariate logistic regression adjusted for age, gender, socio-educational level and professional status to compare use, expectations and needs regarding Internet-based information about the patient's psychiatric disorder (65-item self-administered questionnaires) as a function of the psychiatric disorder. We identified patient clusters with multiple correspondence analysis and ascending hierarchical classification. Although 65.6% of our population accessed the Internet at home, the prevalence of Internet access varied depending on the type of psychiatric disorder and was much more related to limited access to a computer and low income than to a lack of interest in the Internet. Most of the patients who used the Internet were interested in having access to reliable Internet-based information on their health (76.8%), and most used the Internet to search for Internet-based health information about their psychiatric disorder (58.8%). We found important differences in terms of expectations and needs depending on the patient's psychiatric disorder (e.g., higher interest in Internet-based information among patients with bipolar disorder, substance-related and addictive disorders

  16. Building an Internet of Samples: The Australian Contribution

    Science.gov (United States)

    Wyborn, Lesley; Klump, Jens; Bastrakova, Irina; Devaraju, Anusuriya; McInnes, Brent; Cox, Simon; Karssies, Linda; Martin, Julia; Ross, Shawn; Morrissey, John; Fraser, Ryan

    2017-04-01

Physical samples are often the ground truth for research reported in the scientific literature across multiple domains. They are collected by many different entities (individual researchers, laboratories, government agencies, mining companies, citizens, museums, etc.). Samples must be curated over the long term both to ensure that their existence is known and to allow any data derived from them through laboratory and field tests to be linked to the physical samples. For example, having unique identifiers that link ground-truth data back to the original sample helps calibrate large volumes of remotely sensed data. Access to catalogues of reliably identified samples from several collections promotes collaboration across all Earth Science disciplines. It also increases the cost effectiveness of research by reducing the need to re-collect samples in the field. The assignment of web identifiers to the digital representations of these physical objects allows us to link to data, literature, investigators and institutions, thus creating an "Internet of Samples". An Australian implementation of the "Internet of Samples" is using the IGSN (International Geo Sample Number, http://igsn.github.io) to identify samples in a globally unique and persistent way. IGSN was developed in the solid earth science community and is recommended for sample identification by the Coalition for Publishing Data in the Earth and Space Sciences (COPDESS). IGSN is interoperable with other persistent identifier systems such as DataCite. Furthermore, the basic IGSN description metadata schema is compatible with existing schemas such as OGC Observations and Measurements (O&M) and the DataCite Metadata Schema, which makes crosswalks to other metadata schemas easy. IGSN metadata is disseminated through the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH), allowing it to be aggregated in other applications such as portals (e.g. the Australian IGSN catalogue http://igsn2.csiro.au). The
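The abstract notes that IGSN metadata is disseminated through OAI-PMH. As a sketch of what harvesting that feed involves, the snippet below parses a minimal, hand-made OAI-PMH response; the identifiers and structure are invented for illustration, and a real catalogue response carries full metadata payloads:

```python
# Sketch: extracting record identifiers from an OAI-PMH ListRecords
# response, as an IGSN catalogue harvester might. The XML below is a
# hand-made illustration, not an actual response from igsn2.csiro.au.
import xml.etree.ElementTree as ET

OAI = "{http://www.openarchives.org/OAI/2.0/}"

response = """<?xml version="1.0"?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record><header><identifier>oai:example:igsn/CS001</identifier></header></record>
    <record><header><identifier>oai:example:igsn/CS002</identifier></header></record>
  </ListRecords>
</OAI-PMH>"""

root = ET.fromstring(response)
ids = [h.text for h in root.iter(f"{OAI}identifier")]
print(ids)  # ['oai:example:igsn/CS001', 'oai:example:igsn/CS002']
```

A real harvester would fetch this XML over HTTP and follow resumption tokens across pages.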

  17. A Geospatial Data Recommender System based on Metadata and User Behaviour

    Science.gov (United States)

    Li, Y.; Jiang, Y.; Yang, C. P.; Armstrong, E. M.; Huang, T.; Moroni, D. F.; Finch, C. J.; McGibbney, L. J.

    2017-12-01

Earth observations are produced at high velocity by real-time sensors, reaching tera- to peta-bytes of geospatial data daily. Discovering and accessing the right data within this massive volume is like finding a needle in a haystack. To help researchers find the right data for study and decision support, a great deal of research aimed at improving search performance has been proposed, including recommendation algorithms. However, few papers have discussed how to implement a recommendation algorithm in a geospatial data retrieval system. To address this problem, we propose a recommendation engine that improves discovery of relevant geospatial data by mining and utilizing metadata and user behavior data: 1) metadata-based recommendation considers the correlation of each attribute (i.e., spatiotemporal, categorical, and ordinal) with the data to be found; in particular, a phrase extraction method is used to improve the accuracy of the description similarity; 2) user behavior data are utilized to predict a user's interest through collaborative filtering; 3) an integration method combines the results of the two methods above to achieve better recommendations. Experiments show that in the hybrid recommendation list, all precisions from position 1 to 10 exceed 0.8.
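A minimal sketch of the hybrid scheme the abstract describes: a metadata-similarity score and a collaborative-filtering score combined by a weighted sum. The scores, dataset names and the 0.6/0.4 weights are all invented for illustration; the paper's actual integration method may differ:

```python
# Toy hybrid recommender: combine a metadata-based similarity score and a
# collaborative-filtering score into a single ranking key.
def hybrid_score(metadata_sim, cf_score, w_meta=0.6, w_cf=0.4):
    """Weighted sum of the two component scores (weights are illustrative)."""
    return w_meta * metadata_sim + w_cf * cf_score

candidates = {
    "dataset_A": hybrid_score(0.9, 0.3),   # strong metadata match
    "dataset_B": hybrid_score(0.5, 0.8),   # strong user-behaviour signal
    "dataset_C": hybrid_score(0.1, 0.1),   # weak on both
}
ranked = sorted(candidates, key=candidates.get, reverse=True)
print(ranked)  # ['dataset_A', 'dataset_B', 'dataset_C']
```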

  18. Integrated Array/Metadata Analytics

    Science.gov (United States)

    Misev, Dimitar; Baumann, Peter

    2015-04-01

Data comes in various forms and types, and integration usually presents a problem that is often simply ignored and solved with ad-hoc solutions. Multidimensional arrays are a ubiquitous data type that we find at the core of virtually all science and engineering domains, as sensor, model, image, and statistics data. Naturally, arrays are richly described by and intertwined with additional metadata (alphanumeric relational data, XML, JSON, etc.). Database systems, however, a fundamental building block of what we call "Big Data", lack adequate support for modelling and expressing these array data/metadata relationships. Array analytics is hence quite primitive, or absent altogether, in modern relational DBMSs. Recognizing this, we extended SQL with a new SQL/MDA part, seamlessly integrating multidimensional array analytics into the standard database query language. We demonstrate the benefits of SQL/MDA with real-world examples executed in ASQLDB, an open-source mediator system based on HSQLDB and rasdaman that already implements SQL/MDA.
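SQL/MDA itself extends SQL, but the core idea, one query that filters on relational metadata and then computes over the associated arrays, can be sketched in plain Python. The datasets, sensor names and values below are invented:

```python
# Emulating a combined metadata/array query such as
# "SELECT name, avg(array) FROM datasets WHERE sensor = 'MODIS'".
# Contents are illustrative; SQL/MDA expresses this directly in SQL.
datasets = [
    {"name": "temp_2014", "sensor": "MODIS", "array": [[1.0, 2.0], [3.0, 4.0]]},
    {"name": "temp_2015", "sensor": "AVHRR", "array": [[10.0, 10.0], [10.0, 10.0]]},
]

result = {}
for d in datasets:
    if d["sensor"] == "MODIS":                      # relational predicate
        cells = [v for row in d["array"] for v in row]
        result[d["name"]] = sum(cells) / len(cells)  # array aggregate
print(result)  # {'temp_2014': 2.5}
```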

  19. Improving Earth Science Metadata: Modernizing ncISO

    Science.gov (United States)

    O'Brien, K.; Schweitzer, R.; Neufeld, D.; Burger, E. F.; Signell, R. P.; Arms, S. C.; Wilcox, K.

    2016-12-01

ncISO is a package of tools developed at NOAA's National Center for Environmental Information (NCEI) that facilitates the generation of ISO 19115-2 metadata from NetCDF data sources. The tool currently exists in two iterations: a command line utility and a web-accessible service within the THREDDS Data Server (TDS). Several projects, including NOAA's Unified Access Framework (UAF), depend upon ncISO to generate ISO-compliant metadata from their data holdings and use the resulting information to populate discovery tools such as NCEI's ESRI Geoportal and NOAA's data.noaa.gov CKAN system. In addition to generating ISO 19115-2 metadata, the tool calculates a rubric score based on how well the dataset follows the Attribute Conventions for Dataset Discovery (ACDD). The result of this rubric calculation, along with information about what has been included and what is missing, is displayed in an HTML document generated by the ncISO software package. Recently, ncISO has fallen behind in terms of supporting updates to conventions such as the ACDD. With the blessing of the original programmer, NOAA's UAF has been working to modernize the ncISO software base. In addition to upgrading ncISO to utilize version 1.3 of the ACDD, we have been working with partners at Unidata and IOOS to unify the tool's code base. In essence, we are merging the command line capabilities into the same software that will now be used by the TDS service, allowing easier updates when conventions such as ACDD are updated in the future. In this presentation, we will discuss the work the UAF project has done to support updated conventions within ncISO, as well as describe how the updated tool is helping to improve metadata throughout the earth and ocean sciences.
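The rubric idea can be illustrated with a toy scorer that checks a dataset's global attributes against a short list of ACDD attribute names. The list here is deliberately abbreviated; the real ncISO rubric checks far more than presence/absence:

```python
# Toy version of an ACDD rubric: fraction of expected global attributes
# present, plus the list of what is missing (as ncISO reports in HTML).
ACDD_ATTRS = ["title", "summary", "keywords", "license", "creator_name"]

def rubric_score(global_attrs):
    present = [a for a in ACDD_ATTRS if a in global_attrs]
    missing = [a for a in ACDD_ATTRS if a not in global_attrs]
    return len(present) / len(ACDD_ATTRS), missing

score, missing = rubric_score(
    {"title": "SST", "summary": "Sea surface temperature", "keywords": "ocean"}
)
print(score, missing)  # 0.6 ['license', 'creator_name']
```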

  20. Metadata in Scientific Dialects

    Science.gov (United States)

    Habermann, T.

    2011-12-01

Discussions of standards in the scientific community have been compared to religious wars for many years. The only things scientists agree on in these battles are either "standards are not useful" or "everyone can benefit from using my standard". Instead of achieving the goal of facilitating interoperable communities, in many cases the standards have served to build yet another barrier between communities. Some important progress towards diminishing these obstacles has been made in the data layer with the merger of the NetCDF and HDF scientific data formats. The universal adoption of XML as the standard for representing metadata and the recent adoption of ISO metadata standards by many groups around the world suggest that similar convergence is underway in the metadata layer. At the same time, scientists and tools will likely need support for native tongues for some time. I will describe an approach that combines re-usable metadata "components" and RESTful web services that provide those components in many dialects. This approach uses advanced XML concepts of referencing and linking to construct complete records that include reusable components, and builds on the ISO standards as the "unabridged dictionary" that encompasses the content of many other dialects.
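The component-plus-dialect approach can be sketched as one reusable component rendered into more than one metadata dialect on request. The component fields and the output fragments below are illustrative only, not the author's actual service:

```python
# One reusable metadata "component" served in two dialects. A real
# service would resolve such components via RESTful URLs and XML linking.
contact = {"name": "Data Center Help Desk", "email": "help@example.org"}

def render(component, dialect):
    if dialect == "iso19115":
        return (f"<CI_ResponsibleParty><individualName>{component['name']}"
                f"</individualName></CI_ResponsibleParty>")
    if dialect == "dublin_core":
        return f"<dc:creator>{component['name']}</dc:creator>"
    raise ValueError(f"unknown dialect: {dialect}")

print(render(contact, "dublin_core"))  # <dc:creator>Data Center Help Desk</dc:creator>
```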

  1. Metadata Wizard: an easy-to-use tool for creating FGDC-CSDGM metadata for geospatial datasets in ESRI ArcGIS Desktop

    Science.gov (United States)

    Ignizio, Drew A.; O'Donnell, Michael S.; Talbert, Colin B.

    2014-01-01

Creating compliant metadata for scientific data products is mandated for all federal Geographic Information Systems professionals and is a best practice for members of the geospatial data community. However, the complexity of the Federal Geographic Data Committee's Content Standards for Digital Geospatial Metadata (FGDC-CSDGM), the limited availability of easy-to-use tools, and recent changes in the ESRI software environment continue to make metadata creation a challenge. Staff at the U.S. Geological Survey Fort Collins Science Center have developed a Python toolbox for ESRI ArcDesktop to facilitate a semi-automated workflow to create and update metadata records in ESRI's 10.x software. The U.S. Geological Survey Metadata Wizard tool automatically populates several metadata elements: the spatial reference, spatial extent, geospatial presentation format, vector feature count or raster column/row count, native system/processing environment, and the metadata creation date. Once the software auto-populates these elements, users can easily add attribute definitions and other relevant information in a simple Graphical User Interface. The tool, which offers a simple design free of esoteric metadata language, has the potential to save many government and non-government organizations significant time and cost by facilitating the development of FGDC-CSDGM-compliant metadata for ESRI software users. A working version of the tool is now available for ESRI ArcDesktop, versions 10.0, 10.1, and 10.2 (downloadable at http://www.sciencebase.gov/metadatawizard).
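The auto-population workflow reads naturally as a function from dataset properties to metadata elements, leaving descriptive fields for the user. The sketch below is a loose illustration, with hypothetical field names rather than the actual FGDC-CSDGM element names the tool emits:

```python
# Sketch of metadata auto-population: mechanical elements are derived
# from the dataset; descriptive text is left blank for the user.
import datetime

def autopopulate(dataset):
    return {
        "spatial_extent": dataset["bounds"],        # read from the data source
        "feature_count": len(dataset["features"]),  # vector feature count
        "metadata_date": datetime.date.today().isoformat(),
        "abstract": "",                             # user fills this in via the GUI
    }

record = autopopulate({"bounds": (-109.05, 37.0, -102.05, 41.0),
                       "features": ["road", "river", "trail"]})
print(record["feature_count"])  # 3
```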

  2. The Machinic Temporality of Metadata

    Directory of Open Access Journals (Sweden)

    Claudio Celis

    2015-03-01

Full Text Available In 1990 Deleuze introduced the hypothesis that disciplinary societies are gradually being replaced by a new logic of power: control. Accordingly, Matteo Pasquinelli has recently argued that we are moving towards societies of metadata, which correspond to a new stage of what Deleuze called control societies. Societies of metadata are characterised by the central role that meta-information acquires both as a source of surplus value and as an apparatus of social control. The aim of this article is to develop Pasquinelli’s thesis by examining the temporal scope of these emerging societies of metadata. In particular, this article employs Guattari’s distinction between human and machinic times. Through these two concepts, this article attempts to show how societies of metadata combine the two poles of capitalist power formations as identified by Deleuze and Guattari, i.e. social subjection and machinic enslavement. It begins by presenting the notion of metadata in order to identify some of the defining traits of contemporary capitalism. It then examines Berardi’s account of the temporality of the attention economy from the perspective of the asymmetric relation between cyber-time and human time. The third section challenges Berardi’s definition of the temporality of the attention economy by using Guattari’s notions of human and machinic times. Parts four and five fall back upon Deleuze and Guattari’s notions of machinic surplus labour and machinic enslavement, respectively. The concluding section tries to show that machinic and human times constitute two poles of contemporary power formations that articulate the temporal dimension of societies of metadata.

  3. Incorporating ISO Metadata Using HDF Product Designer

    Science.gov (United States)

    Jelenak, Aleksandar; Kozimor, John; Habermann, Ted

    2016-01-01

The need to store increasing amounts of metadata of varying complexity in HDF5 files is rapidly outgrowing the capabilities of the Earth science metadata conventions currently in use. Until now, data producers have not had much choice but to come up with ad hoc solutions to this challenge. Such solutions, in turn, pose a wide range of issues for data managers, distributors, and, ultimately, data users. The HDF Group is experimenting with a novel approach of using ISO 19115 metadata objects as a catch-all container for all the metadata that cannot be fitted into the current Earth science data conventions. This presentation will showcase how the HDF Product Designer software can be utilized to help data producers include various ISO metadata objects in their products.

  4. An Assessment of the Evolving Common Metadata Repository Standards for Airborne Field Campaigns

    Science.gov (United States)

    Northup, E. A.; Chen, G.; Early, A. B.; Beach, A. L., III; Walter, J.; Conover, H.

    2016-12-01

The NASA Earth Venture Program has led to a dramatic increase in airborne observations, requiring updated data management practices with clearly defined data standards and protocols for metadata. While current data management practices demonstrate some success in serving airborne science team data user needs, existing metadata models and standards such as NASA's Unified Metadata Model (UMM) for Collections (UMM-C) present challenges with respect to accommodating certain features of airborne science metadata. UMM is the model implemented in the Common Metadata Repository (CMR), which catalogs all metadata records for NASA's Earth Observing System Data and Information System (EOSDIS). One example of these challenges is the representation of spatial and temporal metadata. In addition, many airborne missions target a particular geophysical event, such as a developing hurricane. In such cases, metadata about the event is also important for understanding the data. While the coverage of satellite missions is highly predictable based on orbit characteristics, airborne missions feature complicated flight patterns where measurements can be spatially and temporally discontinuous. Therefore, existing metadata models will need to be expanded for airborne measurements and sampling strategies. An Airborne Metadata Working Group was established under the auspices of NASA's Earth Science Data Systems Working Group (ESDSWG) to identify specific features of airborne metadata that cannot currently be represented in the UMM and to develop new recommendations. The group includes representation from airborne data users and providers. This presentation will discuss the challenges and recommendations in an effort to demonstrate how airborne metadata curation/management can be improved to streamline data ingest and discoverability for a broader user community.
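The spatial/temporal challenge is easy to make concrete: a satellite collection can summarise its coverage as a single time range, while an airborne campaign consists of disjoint sorties. The dates below are invented for illustration:

```python
# A single-range temporal model vs. the discontinuous coverage of a
# flight campaign. Storing only the envelope loses the gaps between sorties.
satellite_extent = [("2015-01-01", "2015-12-31")]   # one continuous range

flight_extents = [                                   # disjoint sortie windows
    ("2015-08-01T14:00", "2015-08-01T19:30"),
    ("2015-08-04T13:10", "2015-08-04T18:45"),
    ("2015-08-09T15:00", "2015-08-09T20:15"),
]

# Forcing the campaign into one range yields a lossy envelope:
envelope = (flight_extents[0][0], flight_extents[-1][1])
print(envelope)             # ('2015-08-01T14:00', '2015-08-09T20:15')
print(len(flight_extents))  # 3 ranges collapsed into 1 if only the envelope is kept
```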

  5. Dynamic Management of Releases for the Delaware River Basin using NYC's Operations Support Tool

    Science.gov (United States)

    Weiss, W.; Wang, L.; Murphy, T.; Muralidhar, D.; Tarrier, B.

    2011-12-01

    The New York City Department of Environmental Protection (DEP) has initiated design of an Operations Support Tool (OST), a state-of-the-art decision support system to provide computational and predictive support for water supply operations and planning. Using an interim version of OST, DEP and the New York State Department of Environmental Conservation (DEC) have developed a provisional, one-year Delaware River Basin reservoir release program to succeed the existing Flexible Flow Management Program (FFMP) which expired on May 31, 2011. The FFMP grew out of the Good Faith Agreement of 1983 among the four Basin states (New York, New Jersey, Pennsylvania, and Delaware) that established modified diversions and flow targets during drought conditions. It provided a set of release schedules as a framework for managing diversions and releases from New York City's Delaware Basin reservoirs in order to support multiple objectives, including water supply, drought mitigation, flood mitigation, tailwaters fisheries, main stem habitat, recreation, and salinity repulsion. The provisional program (OST-FFMP) defines available water based on current Upper Delaware reservoir conditions and probabilistic forecasts of reservoir inflow. Releases are then set based on a set of release schedules keyed to the water availability. Additionally, OST-FFMP attempts to provide enhanced downstream flood protection by making spill mitigation releases to keep the Delaware System reservoirs at a seasonally varying conditional storage objective. The OST-FFMP approach represents a more robust way of managing downstream releases, accounting for predicted future hydrologic conditions by making more water available for release when conditions are forecasted to be wet and protecting water supply reliability when conditions are forecasted to be dry. Further, the dynamic nature of the program allows the release decision to be adjusted as hydrologic conditions change. OST simulations predict that this
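The release logic described above can be caricatured as a lookup from "available water", current storage blended with forecasted inflow, to a release schedule. The index, thresholds and schedule labels below are invented for illustration and do not reproduce the actual FFMP tables:

```python
# Toy version of forecast-informed release selection: available water is
# a blend of current storage and probabilistic inflow forecast, and the
# result keys into a release schedule. All numbers are hypothetical.
def release_schedule(storage_pct, forecast_inflow_pct):
    available = 0.5 * storage_pct + 0.5 * forecast_inflow_pct
    if available >= 80:
        return "wet"     # larger releases, spill-mitigation posture
    if available >= 50:
        return "normal"  # standard release table
    return "dry"         # protect water-supply reliability

print(release_schedule(90, 85))  # wet
print(release_schedule(40, 30))  # dry
```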

  6. Internet-Based Education for Prostate Cancer Screening

    National Research Council Canada - National Science Library

    Taylor, Kathryn L

    2008-01-01

    .... Abundant evidence documents the expanding role of the Internet in increasing access to and understanding of health information and the need for systematic evaluations of Internet-based interventions. The print- and web-based interventions have been completed and we have accrued 618 participants to the randomized trial.

  7. Evaluating the privacy properties of telephone metadata

    Science.gov (United States)

    Mayer, Jonathan; Mutchler, Patrick; Mitchell, John C.

    2016-01-01

    Since 2013, a stream of disclosures has prompted reconsideration of surveillance law and policy. One of the most controversial principles, both in the United States and abroad, is that communications metadata receives substantially less protection than communications content. Several nations currently collect telephone metadata in bulk, including on their own citizens. In this paper, we attempt to shed light on the privacy properties of telephone metadata. Using a crowdsourcing methodology, we demonstrate that telephone metadata is densely interconnected, can trivially be reidentified, and can be used to draw sensitive inferences. PMID:27185922
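The kind of inference the paper measures can be illustrated with a toy join of "anonymous" call records against a public directory. All numbers and names below are fabricated:

```python
# Toy reidentification/inference: metadata alone (who called whom) joined
# with public directory listings supports sensitive inferences about a
# subscriber, without any call content.
directory = {
    "555-0100": "Oncology Clinic",
    "555-0101": "Gun Shop",
    "555-0102": "Pizza Place",
}

call_log = ["555-0100", "555-0100", "555-0102"]  # one subscriber's callees

inferences = sorted({directory[n] for n in call_log if n in directory})
print(inferences)  # ['Oncology Clinic', 'Pizza Place']
```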

  8. U.S. EPA Metadata Editor (EME)

    Data.gov (United States)

    U.S. Environmental Protection Agency — The EPA Metadata Editor (EME) allows users to create geospatial metadata that meets EPA's requirements. The tool has been developed as a desktop application that...

  9. Geospatial metadata retrieval from web services

    Directory of Open Access Journals (Sweden)

    Ivanildo Barbosa

Full Text Available Nowadays, producers of geospatial data in either raster or vector formats are able to make them available on the World Wide Web by deploying web services that enable users to access and query those contents even without specific geoprocessing software. Several providers around the world have deployed instances of WMS (Web Map Service), WFS (Web Feature Service) and WCS (Web Coverage Service), all of them specified by the Open Geospatial Consortium (OGC). In consequence, metadata about the available contents can be retrieved and compared with similar offline datasets from other sources. This paper presents a brief summary and describes the matching process between the specifications for OGC web services (WMS, WFS and WCS) and the metadata specifications required by ISO 19115, adopted as the reference for several national metadata profiles, including the Brazilian one. This process focuses on retrieving metadata about the identification and data quality packages, and indicates directions for retrieving metadata related to other packages. Users are thereby able to assess whether the provided contents fit their purposes.
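The matching process can be sketched as a mapping from service-capabilities fields to ISO 19115 element paths. The XML fragment and the mapping below are a minimal illustration, not the paper's full crosswalk:

```python
# Sketch: pull fields from a WMS GetCapabilities layer and map them onto
# ISO 19115 identification elements. The XML is a hand-made fragment.
import xml.etree.ElementTree as ET

capabilities = """<Layer>
  <Title>Road network</Title>
  <Abstract>National road axes, 1:50000</Abstract>
</Layer>"""

layer = ET.fromstring(capabilities)
iso_identification = {
    "CI_Citation.title": layer.findtext("Title"),
    "MD_DataIdentification.abstract": layer.findtext("Abstract"),
}
print(iso_identification["CI_Citation.title"])  # Road network
```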

  10. 77 FR 1039 - Internet-Based Telecommunications Relay Service Numbering

    Science.gov (United States)

    2012-01-09

    ... FEDERAL COMMUNICATIONS COMMISSION 47 CFR Part 64 [WC Docket No. 10-191; Report No. 2939] Internet... toll-free numbers by users of Internet- based Telecommunications Relay Services (iTRS). DATES... any rules of particular applicability. Subject: Internet-Based Telecommunications Relay Service...

  11. The PDS4 Metadata Management System

    Science.gov (United States)

    Raugh, A. C.; Hughes, J. S.

    2018-04-01

    We present the key features of the Planetary Data System (PDS) PDS4 Information Model as an extendable metadata management system for planetary metadata related to data structure, analysis/interpretation, and provenance.

  12. Internet-based interventions for smoking cessation.

    Science.gov (United States)

    Civljak, Marta; Sheikh, Aziz; Stead, Lindsay F; Car, Josip

    2010-09-08

The Internet has become a regular part of daily life for the majority of people in many parts of the world. It now offers an additional means of effecting changes to behaviour such as smoking. To determine the effectiveness of Internet-based interventions for smoking cessation. We searched the Cochrane Tobacco Addiction Group Specialized Register, with additional searches of MEDLINE, EMBASE, CINAHL, PsycINFO, and Google Scholar. There were no restrictions placed on language of publication or publication date. The most recent search was in June 2010. We included randomized and quasi-randomized trials. Participants were people who smoked, with no exclusions based on age, gender, ethnicity, language or health status. Any type of Internet-based intervention was eligible. The comparison condition could be a no-intervention control or a different Internet site or programme. Methodological and study quality details were extracted using a standardised form. We selected smoking cessation outcomes at short-term (one to three months) and long-term (6 months or more) follow-up, and reported study effects as a risk ratio with 95% confidence intervals. Only limited meta-analysis was performed, as the heterogeneity of the data for populations, interventions and outcomes allowed for very little pooling. Twenty trials met the inclusion criteria. There were more female than male participants. Some Internet programmes were intensive and included multiple outreach contacts with participants, whilst others relied on participants to initiate and maintain use. Ten trials compared an Internet intervention to a non-Internet-based smoking cessation intervention or to a no-intervention control. Six of these recruited adults, one recruited young adult university students and three recruited adolescents. Two trials of the same intensive automated intervention in populations of adults who smoked showed significantly increased cessation compared to printed self-help materials at 12 months. 
In one

  13. A case for user-generated sensor metadata

    Science.gov (United States)

    Nüst, Daniel

    2015-04-01

Cheap, easy-to-use sensing technology and new developments in ICT towards a global network of sensors and actuators promise previously unthought-of changes for our understanding of the environment. Large professional as well as amateur sensor networks exist, and they are used for specific yet diverse applications across domains such as hydrology, meteorology or early warning systems. However, the impact this "abundance of sensors" has had so far is somewhat disappointing. There is a gap between (community-driven) sensor networks that could provide very useful data and the users of the data. In our presentation, we argue this is due to a lack of metadata that allows determining the fitness for use of a dataset. Syntactic and semantic interoperability for sensor webs have made great progress and continue to be an active field of research, yet the resulting approaches are often quite complex, which is of course due to the complexity of the problem at hand. Still, we see the most generic information for determining fitness for use as a dataset's provenance, because it allows users to make up their own minds independently of existing classification schemes for data quality. In this work we make the case that curated, user-contributed metadata has the potential to improve this situation. This especially applies to scenarios in which an observed property is applicable in different domains, and to set-ups where the understanding of metadata concepts and (meta-)data quality differs between data provider and user. On the one hand, a citizen does not understand ISO provenance metadata. On the other hand, a researcher might find issues in publicly accessible time series published by citizens, which the latter might not be aware of or care about. Because users will have to determine fitness for use for each application on their own anyway, we suggest an online collaboration platform for user-generated metadata based on an extremely simplified data model. In the most basic fashion
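One possible reading of the "extremely simplified data model" is a free-text provenance note attached to a dataset, and nothing more. That reading is an assumption of this sketch, not a detail given in the abstract:

```python
# Hypothetical minimal data model for user-contributed sensor metadata:
# a dataset identifier plus free-text provenance notes from any user.
from dataclasses import dataclass, field

@dataclass
class MetadataNote:
    dataset_id: str
    author: str
    comment: str

@dataclass
class Dataset:
    dataset_id: str
    notes: list = field(default_factory=list)

ts = Dataset("river_gauge_42")  # invented identifier
ts.notes.append(MetadataNote("river_gauge_42", "citizen_a",
                             "Sensor was moved 2 m upstream in May."))
print(len(ts.notes))  # 1
```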

  14. Acceptability of internet-based interventions for depression in Indonesia

    NARCIS (Netherlands)

    Arjadi, Retha; Nauta, Maaike H.; Bockting, Claudi L.H.

    2018-01-01

    AbstractBackground In Indonesia, internet-based interventions may represent a promising strategy to reduce the mental health gap given that the level of internet usage in the country continues to increase. To check the acceptability of internet-based interventions, this study investigates factors

  15. Data, Metadata, and Ted

    OpenAIRE

    Borgman, Christine L.

    2014-01-01

    Ted Nelson coined the term “hypertext” and developed Xanadu in a universe parallel to the one in which librarians, archivists, and documentalists were creating metadata to establish cross-connections among the myriad topics of this world. When these universes collided, comets exploded as ontologies proliferated. Black holes were formed as data disappeared through lack of description. Today these universes coexist, each informing the other, if not always happily: the formal rules of metadata, ...

  16. Delaware's first serial killer.

    Science.gov (United States)

    Inguito, G B; Sekula-Perlman, A; Lynch, M J; Callery, R T

    2000-11-01

The violent murder of Shirley Ellis on November 29, 1987, marked the beginning of the strange and terrible tale of Steven Bryan Pennell's reign as the state of Delaware's first convicted serial killer. Three more bodies followed the first victim, and all had been brutally beaten and sadistically tortured. The body of a fifth woman has never been found. State and county police collaborated with the FBI to identify and hunt down their suspect, forming a task force of over 100 officers and spending about one million dollars. Through their knowledge and experience with other serial killers, the FBI was able to make an amazingly accurate psychological profile of Delaware's serial killer. After months of around-the-clock surveillance, Steven Pennell was arrested on November 29, 1988, one year to the day after the first victim was found. Pennell was found guilty in the deaths of the first two victims on November 29, 1989, and pleaded no contest to the murders of two others on October 30, 1991. Still maintaining his innocence, he asked for the death penalty so that he could spare his family further agony. Steven Pennell was executed by lethal injection on March 15, 1992.

  17. Geo-Enrichment and Semantic Enhancement of Metadata Sets to Augment Discovery in Geoportals

    Directory of Open Access Journals (Sweden)

    Bernhard Vockner

    2014-03-01

Full Text Available Geoportals are established to function as main gateways to find, evaluate, and start "using" geographic information. Still, current geoportal implementations face problems in optimizing the discovery process due to semantic heterogeneity issues, which lead to low recall and low precision in text-based searches. Therefore, we propose an enhanced semantic discovery approach that supports multilingualism and information domain context. We present a workflow that enriches existing structured metadata with synonyms, toponyms, and translated terms derived from user-defined keywords, based on multilingual thesauri and ontologies. To make the results easier to understand, we also provide automated translation capabilities for the resource metadata, supporting the user in conceiving the thematic content of the descriptive metadata even if it has been documented in a language the user is not familiar with. In addition, to enable text-based spatial filtering, we add location-name keywords to metadata sets. These are derived from the existing bounding box and adjust discovery scores when performing single text line queries. To improve the user's search experience, we tailor faceted search strategies, presenting an enhanced query interface for geo-metadata discovery that transparently leverages the underlying thesauri and ontologies.
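The enrichment step can be sketched as expanding a user keyword with synonyms, translations and bounding-box-derived toponyms before matching against metadata. The thesaurus and gazetteer entries below are invented:

```python
# Toy keyword enrichment: a user term is expanded with synonyms, a
# translation, and place names looked up from the record's bounding box.
thesaurus = {
    "river": {"synonyms": ["stream", "watercourse"], "de": "Fluss"},
}
gazetteer = {"bbox:(9.5,46.3,17.2,49.0)": ["Austria", "Österreich"]}

def enrich(keyword, bbox_key):
    terms = [keyword]
    entry = thesaurus.get(keyword, {})
    terms += entry.get("synonyms", [])
    terms += [v for k, v in entry.items() if k != "synonyms"]  # translations
    terms += gazetteer.get(bbox_key, [])                       # toponyms
    return terms

print(enrich("river", "bbox:(9.5,46.3,17.2,49.0)"))
# ['river', 'stream', 'watercourse', 'Fluss', 'Austria', 'Österreich']
```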

  18. Pembuatan Aplikasi Metadata Generator untuk Koleksi Peninggalan Warisan Budaya

    Directory of Open Access Journals (Sweden)

    Wimba Agra Wicesa

    2017-03-01

    Full Text Available Cultural heritage is an important asset used as a source of information in the study of history. Managing cultural heritage data must therefore be given attention in order to preserve the integrity of that data for the future. Creating cultural heritage metadata is one step that can be taken to safeguard the value of an artifact. With the metadata concept, information about each cultural heritage object becomes easy to read, manage, and retrieve even after long storage. Moreover, with the metadata concept, information about cultural heritage can be used by many systems. Cultural heritage metadata is fairly large, so building it takes considerable time, and human error can further hamper the construction process. Generating cultural heritage metadata through the Metadata Generator application is faster and easier because it is performed automatically by the system. The application also suppresses human error, making the generation process more efficient. [Article content in Indonesian; translated here.]
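    The core of such a generator is a mapping from artifact fields to a standard metadata schema. A minimal sketch, assuming Dublin Core elements and invented field names (the paper does not specify its schema):

```python
# Hypothetical sketch: generate a Dublin Core record for a heritage object.
import xml.etree.ElementTree as ET

DC_NS = "http://purl.org/dc/elements/1.1/"

def make_record(obj):
    """Build a metadata record with Dublin Core elements from an artifact dict."""
    ET.register_namespace("dc", DC_NS)
    root = ET.Element("record")
    for field in ("title", "creator", "date", "description"):
        el = ET.SubElement(root, f"{{{DC_NS}}}{field}")
        el.text = obj.get(field, "unknown")  # default avoids empty elements
    return ET.tostring(root, encoding="unicode")

xml = make_record({"title": "Bronze kris", "creator": "unknown", "date": "c. 1600"})
print(xml)
```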

  19. Internet-based interventions for smoking cessation.

    Science.gov (United States)

    Civljak, Marta; Stead, Lindsay F; Hartmann-Boyce, Jamie; Sheikh, Aziz; Car, Josip

    2013-07-10

    The Internet is now an indispensable part of daily life for the majority of people in many parts of the world. It offers an additional means of effecting changes to behaviour such as smoking. To determine the effectiveness of Internet-based interventions for smoking cessation. We searched the Cochrane Tobacco Addiction Group Specialized Register. There were no restrictions placed on language of publication or publication date. The most recent search was conducted in April 2013. We included randomized and quasi-randomized trials. Participants were people who smoked, with no exclusions based on age, gender, ethnicity, language or health status. Any type of Internet intervention was eligible. The comparison condition could be a no-intervention control, a different Internet intervention, or a non-Internet intervention. Two authors independently assessed and extracted data. Methodological and study quality details were extracted using a standardized form. We extracted smoking cessation outcomes of six months follow-up or more, reporting short-term outcomes where longer-term outcomes were not available. We reported study effects as a risk ratio (RR) with a 95% confidence interval (CI). Clinical and statistical heterogeneity limited our ability to pool studies. This updated review includes a total of 28 studies with over 45,000 participants. Some Internet programmes were intensive and included multiple outreach contacts with participants, whilst others relied on participants to initiate and maintain use. Fifteen trials compared an Internet intervention to a non-Internet-based smoking cessation intervention or to a no-intervention control. Ten of these recruited adults, one recruited young adult university students and two recruited adolescents. Seven of the trials in adults had follow-up at six months or longer and compared an Internet intervention to usual care or printed self-help. In a post hoc subgroup analysis, pooled results from three trials that compared
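    The risk ratio and 95% CI reported in such reviews follow a standard computation (log-normal approximation). A sketch with made-up trial numbers:

```python
# Risk ratio with 95% CI for a two-arm trial; the counts below are invented.
import math

def risk_ratio(events_t, n_t, events_c, n_c):
    """Return RR and its 95% confidence interval (log-normal approximation)."""
    rr = (events_t / n_t) / (events_c / n_c)
    # standard error of log(RR)
    se = math.sqrt(1 / events_t - 1 / n_t + 1 / events_c - 1 / n_c)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

rr, lo, hi = risk_ratio(30, 200, 20, 200)  # hypothetical quit counts per arm
print(f"RR = {rr:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```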

  20. Evolving Metadata in NASA Earth Science Data Systems

    Science.gov (United States)

    Mitchell, A.; Cechini, M. F.; Walter, J.

    2011-12-01

    NASA's Earth Observing System (EOS) is a coordinated series of satellites for long-term global observations. NASA's Earth Observing System Data and Information System (EOSDIS) is a petabyte-scale archive of environmental data that supports global climate change research by providing end-to-end services, from EOS instrument data collection to science data processing to full access to EOS and other earth science data. On a daily basis, EOSDIS ingests, processes, archives and distributes over 3 terabytes of data from NASA's Earth Science missions, representing over 3500 data products across a range of science disciplines. EOSDIS currently comprises 12 discipline-specific data centers that are collocated with centers of science discipline expertise. Metadata is used in all aspects of NASA's Earth Science data lifecycle, from the initial measurement gathering to the accessing of data products. Missions use metadata in their science data products to describe information such as the instrument/sensor, operational plan, and geographic region. Acting as the curators of the data products, data centers employ metadata for preservation, access and manipulation of data. EOSDIS provides a centralized metadata repository called the Earth Observing System (EOS) ClearingHouse (ECHO) for data discovery and access via a service-oriented architecture (SOA) between data centers and science data users. ECHO receives inventory metadata from data centers, which generate metadata files that comply with the ECHO Metadata Model. NASA's Earth Science Data and Information System (ESDIS) Project established a Tiger Team to study and make recommendations regarding the adoption of the international metadata standard ISO 19115 in EOSDIS. The result was a technical report recommending an evolution of NASA data systems towards a consistent application of ISO 19115 and related standards, including the creation of a NASA-specific convention for core ISO 19115 elements. 
Part of

  1. Testing Metadata Existence of Web Map Services

    Directory of Open Access Journals (Sweden)

    Jan Růžička

    2011-05-01

    Full Text Available For a general user it is quite common to use data sources available on the WWW. Almost all GIS software allows the use of data sources available via the Web Map Service (an ISO/OGC standard interface). The opportunity to use different sources and combine them brings a lot of problems that have been discussed many times at conferences and in journal papers. One of the problems is the non-existence of metadata for published sources. The question was: were the discussions effective? The article is partly based on a comparison of the metadata situation between the years 2007 and 2010. The second part of the article focuses only on the 2010 situation. The paper was created in the context of research into intelligent map systems that can be used for automatic or semi-automatic map creation or map evaluation.
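    A metadata-existence test of the kind the article describes can be automated by probing a WMS GetCapabilities response for MetadataURL elements. The XML below is a hand-made fragment, not a real service response:

```python
# Check which WMS layers advertise no MetadataURL in their capabilities.
import xml.etree.ElementTree as ET

CAPS = """<WMS_Capabilities xmlns="http://www.opengis.net/wms">
  <Capability>
    <Layer><Name>rivers</Name>
      <MetadataURL type="ISO19115:2003"><OnlineResource/></MetadataURL>
    </Layer>
    <Layer><Name>roads</Name></Layer>
  </Capability>
</WMS_Capabilities>"""

def layers_missing_metadata(caps_xml):
    """Return names of layers that declare no MetadataURL element."""
    ns = {"wms": "http://www.opengis.net/wms"}
    root = ET.fromstring(caps_xml)
    missing = []
    for layer in root.iter(f"{{{ns['wms']}}}Layer"):
        name = layer.find("wms:Name", ns)
        if name is not None and layer.find("wms:MetadataURL", ns) is None:
            missing.append(name.text)
    return missing

print(layers_missing_metadata(CAPS))
```

In practice the capabilities document would be fetched from each service endpoint and the results aggregated across years, as the article's 2007/2010 comparison does.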

  2. Internet-based tools for behaviour change

    Energy Technology Data Exchange (ETDEWEB)

    Bottrill, Catherine [Environmental Change Inst., Oxford Unversity Centre for the Environment (United Kingdom)

    2007-07-01

    Internet-based carbon calculators have the potential to be powerful tools for helping people to understand their personal energy use derived from fossil fuels and to take action to reduce the related carbon emissions. This paper reviews twenty-three calculators, concluding that in most cases this environmental learning tool falls short of giving people the ability to accurately monitor their energy use; to receive meaningful feedback and guidance for altering their energy use; or to connect with others going through the same learning process of saving energy and conserving carbon. This paper presents the findings of research into the accuracy and effectiveness of carbon calculators. Based on the assessment of the calculators, the paper discusses the opportunities Internet technology could offer for engagement, communication, encouragement and guidance on low-carbon lifestyle choices. Finally, recommendations are made for the development of accurate, informative and social Internet-based carbon calculators.
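    At their core, such calculators multiply activity quantities by emission factors. A toy sketch; the factors below are illustrative placeholders, not authoritative values:

```python
# Toy personal carbon calculator with assumed emission factors.
FACTORS = {               # kg CO2 per unit (illustrative values only)
    "electricity_kwh": 0.5,
    "gas_kwh": 0.2,
    "car_km": 0.15,
}

def annual_emissions(usage):
    """Sum kg CO2 for a dict of activity -> annual quantity."""
    return sum(FACTORS[k] * v for k, v in usage.items())

total = annual_emissions({"electricity_kwh": 3000, "gas_kwh": 12000, "car_km": 8000})
print(f"{total:.0f} kg CO2/year")
```

The paper's accuracy critique centers on exactly these factors: calculators that use coarse or outdated factors produce misleading feedback.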

  3. Improving Metadata Compliance for Earth Science Data Records

    Science.gov (United States)

    Armstrong, E. M.; Chang, O.; Foster, D.

    2014-12-01

    One of the recurring challenges of creating earth science data records is to ensure a consistent level of metadata compliance at the granule level, where important details of contents, provenance, producer, and data references are necessary to obtain a sufficient level of understanding. These details are important not just for individual data consumers but also for autonomous software systems. Two of the most popular metadata standards at the granule level are the Climate and Forecast (CF) Metadata Conventions and the Attribute Conventions for Dataset Discovery (ACDD). Many data producers have implemented one or both of these models, including the Group for High Resolution Sea Surface Temperature (GHRSST) for their global SST products and the Ocean Biology Processing Group for NASA ocean color and SST products. While both the CF and ACDD models contain various levels of metadata richness, the actual "required" attributes are quite small in number. Metadata at the granule level becomes much more useful when recommended or optional attributes are implemented that document spatial and temporal ranges, lineage and provenance, sources, keywords, and references. In this presentation we report on a new open source tool to check the compliance of netCDF and HDF5 granules with the CF and ACDD metadata models. The tool, written in Python, was originally implemented to support metadata compliance for netCDF records as part of NOAA's Integrated Ocean Observing System. It outputs standardized scoring for metadata compliance for both CF and ACDD, produces an objective summary weight, and can be applied to remote records via OPeNDAP calls. Originally a command-line tool, we have extended it to provide a user-friendly web interface. Reports on metadata testing are grouped in hierarchies that make it easier to track flaws and inconsistencies in the record. 
We have also extended it to support explicit metadata structures and semantic syntax for the GHRSST project that can be
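    The scoring idea can be sketched as a weighted check of global attributes against required and recommended lists. The attribute lists are abbreviated and the weights invented; they are not the actual tool's rules:

```python
# Minimal ACDD-style attribute scorer for a granule's global attributes.
REQUIRED = ["title", "summary", "keywords"]              # ACDD highly recommended
RECOMMENDED = ["time_coverage_start", "time_coverage_end",
               "geospatial_lat_min", "geospatial_lat_max"]

def score(attrs):
    """Return (points, max_points): 2 per required attr, 1 per recommended."""
    pts = sum(2 for a in REQUIRED if a in attrs)
    pts += sum(1 for a in RECOMMENDED if a in attrs)
    return pts, 2 * len(REQUIRED) + len(RECOMMENDED)

granule = {"title": "SST", "summary": "...", "time_coverage_start": "2014-01-01"}
print(score(granule))
```

A real checker would read these attributes from a netCDF or HDF5 file (or an OPeNDAP response) rather than a dict.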

  4. Handling multiple metadata streams regarding digital learning material

    NARCIS (Netherlands)

    Roes, J.B.M.; Vuuren, J. van; Verbeij, N.; Nijstad, H.

    2010-01-01

    This paper presents the outcome of a study performed in the Netherlands on handling multiple metadata streams regarding digital learning material. The paper describes the present metadata architecture in the Netherlands, the present suppliers and users of metadata and digital learning materials. It

  5. Research of MOOC Platform BasedInternet +”

    OpenAIRE

    Wang Guang Xing; Chen Yan

    2016-01-01

    This paper is devoted to examining the construction of a MOOC platform based on “Internet +”. It first introduces the “Internet +” model, the influence of “Internet + education”, and its significance for education. It then analyzes the existing problems in the development of current MOOCs, and puts forward the model of “Internet + MOOC” through the method of software engineering. It explores the advantages of the “Internet + MOOC” mode and its function in promoting education and teaching. This paper put...

  6. ncISO Facilitating Metadata and Scientific Data Discovery

    Science.gov (United States)

    Neufeld, D.; Habermann, T.

    2011-12-01

    Increasing the usability and availability of climate and oceanographic datasets for environmental research requires improved metadata and tools to rapidly locate and access relevant information for an area of interest. Because of the distributed nature of most environmental geospatial data, a common approach is to use catalog services that support queries on metadata harvested from remote map and data services. A key component to effectively using these catalog services is the availability of high-quality metadata associated with the underlying data sets. In this presentation, we examine the use of ncISO and Geoportal as open source tools that can be used to document and facilitate access to ocean and climate data available from Thematic Realtime Environmental Distributed Data Services (THREDDS) data services. Many atmospheric and oceanographic spatial data sets are stored in the Network Common Data Format (netCDF) and served through the Unidata THREDDS Data Server (TDS). NetCDF and THREDDS are becoming increasingly accepted in both the scientific and geographic research communities, as demonstrated by the recent adoption of netCDF as an Open Geospatial Consortium (OGC) standard. One important source of ocean and atmospheric data sets is NOAA's Unified Access Framework (UAF), which serves over 3000 gridded data sets from across NOAA and NOAA-affiliated partners. Due to the large number of datasets, browsing the data holdings to locate data is impractical. Working with Unidata, we have created a new service for the TDS called "ncISO", which allows automatic generation of ISO 19115-2 metadata from attributes and variables in TDS datasets. The ncISO metadata records can be harvested by catalog services such as the ESSI-labs GI-Cat catalog service and ESRI's Geoportal, which supports query through a number of services, including OpenSearch and Catalog Services for the Web (CSW). 
ESRI's Geoportal Server provides a number of user friendly search capabilities for end users

  7. 77 FR 60089 - Approval and Promulgation of Air Quality Implementation Plans; Delaware, New Jersey, and...

    Science.gov (United States)

    2012-10-02

    ... Producers Council, et al. v. EPA, 559 F.3d 512 (DC Cir. 2009). As a result of this challenge, the U.S. Court... 2010 value status \\3\\ Delaware New Castle........ 10-003-1003 * 31.6 23.2 24.3 26 Max quarter. Delaware New Castle........ 10-003-1007 28.1 * 20.6 27.5 25 Max quarter. Delaware New Castle........ 10-003...

  8. Developing Cyberinfrastructure Tools and Services for Metadata Quality Evaluation

    Science.gov (United States)

    Mecum, B.; Gordon, S.; Habermann, T.; Jones, M. B.; Leinfelder, B.; Powers, L. A.; Slaughter, P.

    2016-12-01

    Metadata and data quality are at the core of reusable and reproducible science. While great progress has been made over the years, much of the metadata collected only addresses data discovery, covering concepts such as titles and keywords. Improving metadata beyond the discoverability plateau means documenting detailed concepts within the data such as sampling protocols, instrumentation used, and variables measured. Given that metadata commonly do not describe their data at this level, how might we improve the state of things? Giving scientists and data managers easy to use tools to evaluate metadata quality that utilize community-driven recommendations is the key to producing high-quality metadata. To achieve this goal, we created a set of cyberinfrastructure tools and services that integrate with existing metadata and data curation workflows which can be used to improve metadata and data quality across the sciences. These tools work across metadata dialects (e.g., ISO19115, FGDC, EML, etc.) and can be used to assess aspects of quality beyond what is internal to the metadata such as the congruence between the metadata and the data it describes. The system makes use of a user-friendly mechanism for expressing a suite of checks as code in popular data science programming languages such as Python and R. This reduces the burden on scientists and data managers to learn yet another language. We demonstrated these services and tools in three ways. First, we evaluated a large corpus of datasets in the DataONE federation of data repositories against a metadata recommendation modeled after existing recommendations such as the LTER best practices and the Attribute Convention for Dataset Discovery (ACDD). Second, we showed how this service can be used to display metadata and data quality information to data producers during the data submission and metadata creation process, and to data consumers through data catalog search and access tools. 
Third, we showed how the centrally
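    Expressing checks "as code", as the abstract describes, might look like the following sketch. The check functions, record format, and pass criteria are invented for illustration:

```python
# A tiny metadata-quality suite: each check returns (description, passed).
def check_title_length(doc):
    ok = len(doc.get("title", "")) >= 20
    return ("title is descriptive (>= 20 chars)", ok)

def check_has_units(doc):
    ok = all("unit" in v for v in doc.get("variables", []))
    return ("every variable declares a unit", ok)

def run_suite(doc, checks):
    """Run each check against a dialect-neutral metadata dict."""
    return [check(doc) for check in checks]

doc = {"title": "Lake temperature, 2001-2010, hourly",
       "variables": [{"name": "temp", "unit": "degC"}, {"name": "depth"}]}
results = run_suite(doc, [check_title_length, check_has_units])
print(results)
```

The appeal of this style is that a community recommendation becomes a reviewable, versionable list of functions rather than prose.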

  9. Radiation monitoring system based on Internet

    International Nuclear Information System (INIS)

    Drndarevic, V.R.; Popovic, A.T; Bolic, M.D.; Pavlovic, R.S.

    2001-01-01

    This paper presents the concept and realization of a modern distributed radiation monitoring system. The system uses an existing conventional computer network and is based on standard Internet technology. One personal computer (PC) serves as host and system server, while a number of client computers, linked to the server via a standard local area network (LAN), are used as distributed measurement nodes. The interconnection between the server and clients is based on the Transmission Control Protocol/Internet Protocol (TCP/IP). The system software is based on the server-client model. Based on this concept, a distributed system for gamma-ray monitoring in the region of the Institute of Nuclear Sciences Vinca has been implemented. (author)
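    The server-client pattern over TCP/IP can be sketched in a few lines: a measurement node reports a reading to the central server. The port assignment and message format are invented for illustration:

```python
# Minimal TCP/IP sketch: one server thread receives a node's count-rate report.
import socket
import threading

def server(sock, readings):
    conn, _ = sock.accept()
    with conn:
        readings.append(conn.recv(1024).decode())

srv = socket.socket()
srv.bind(("127.0.0.1", 0))            # OS-assigned free port
srv.listen(1)
port = srv.getsockname()[1]

readings = []
t = threading.Thread(target=server, args=(srv, readings))
t.start()

# the "client" measurement node reports a gamma count rate
with socket.create_connection(("127.0.0.1", port)) as c:
    c.sendall(b"node-3 gamma 12.7 cps")

t.join()
srv.close()
print(readings)
```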

  10. From CLARIN Component Metadata to Linked Open Data

    NARCIS (Netherlands)

    Durco, M.; Windhouwer, Menzo

    2014-01-01

    In the European CLARIN infrastructure a growing number of resources are described with Component Metadata. In this paper we describe a transformation to make this metadata available as linked data. After this first step it becomes possible to connect the CLARIN Component Metadata with other valuable

  11. Cytometry metadata in XML

    Science.gov (United States)

    Leif, Robert C.; Leif, Stephanie H.

    2016-04-01

    Introduction: The International Society for Advancement of Cytometry (ISAC) has created a standard for the Minimum Information about a Flow Cytometry Experiment (MIFlowCyt 1.0). CytometryML will serve as a common metadata standard for flow and image cytometry (digital microscopy). Methods: The MIFlowCyt data-types were created, as is the rest of CytometryML, in the XML Schema Definition Language (XSD 1.1). The datatypes are primarily based on the Flow Cytometry and the Digital Imaging and Communications in Medicine (DICOM) standards. A small section of the code was formatted with standard HTML formatting elements (p, h1, h2, etc.). Results: 1) The part of MIFlowCyt that describes the Experimental Overview, including the specimen and substantial parts of several other major elements, has been implemented as CytometryML XML schemas (www.cytometryml.org). 2) The feasibility of using MIFlowCyt to provide the combination of an overview, table of contents, and/or an index of a scientific paper or report has been demonstrated. Previously, a sample electronic publication, EPUB, was created that could contain both MIFlowCyt metadata and the binary data. Conclusions: The use of CytometryML technology together with XHTML5 and CSS permits the metadata to be directly formatted and, together with the binary data, to be stored in an EPUB container. This will facilitate formatting, data-mining, presentation, data verification, and inclusion in structured research, clinical, and regulatory documents, as well as demonstrate a publication's adherence to the MIFlowCyt standard and promote interoperability; it should also result in the textual and numeric data being published using web technology without any change in composition.

  12. Internet-based interventions for smoking cessation.

    Science.gov (United States)

    Taylor, Gemma M J; Dalili, Michael N; Semwal, Monika; Civljak, Marta; Sheikh, Aziz; Car, Josip

    2017-09-04

    Tobacco use is estimated to kill 7 million people a year. Nicotine is highly addictive, but surveys indicate that almost 70% of US and UK smokers would like to stop smoking. Although many smokers attempt to give up on their own, advice from a health professional increases the chances of quitting. As of 2016 there were 3.5 billion Internet users worldwide, making the Internet a potential platform to help people quit smoking. To determine the effectiveness of Internet-based interventions for smoking cessation, whether intervention effectiveness is altered by tailoring or interactive features, and if there is a difference in effectiveness between adolescents, young adults, and adults. We searched the Cochrane Tobacco Addiction Group Specialised Register, which included searches of MEDLINE, Embase and PsycINFO (through OVID). There were no restrictions placed on language, publication status or publication date. The most recent search was conducted in August 2016. We included randomised controlled trials (RCTs). Participants were people who smoked, with no exclusions based on age, gender, ethnicity, language or health status. Any type of Internet intervention was eligible. The comparison condition could be a no-intervention control, a different Internet intervention, or a non-Internet intervention. To be included, studies must have measured smoking cessation at four weeks or longer. Two review authors independently assessed and extracted data. We extracted and, where appropriate, pooled smoking cessation outcomes of six-month follow-up or more, reporting short-term outcomes narratively where longer-term outcomes were not available. We reported study effects as a risk ratio (RR) with a 95% confidence interval (CI). We grouped studies according to whether they (1) compared an Internet intervention with a non-active control arm (e.g. printed self-help guides), (2) compared an Internet intervention with an active control arm (e.g. 
face-to-face counselling), (3) evaluated the

  13. Collection Metadata Solutions for Digital Library Applications

    Science.gov (United States)

    Hill, Linda L.; Janee, Greg; Dolin, Ron; Frew, James; Larsgaard, Mary

    1999-01-01

    Within a digital library, collections may range from an ad hoc set of objects that serve a temporary purpose to established library collections intended to persist through time. The objects in these collections vary widely, from library and data center holdings to pointers to real-world objects, such as geographic places, and the various metadata schemas that describe them. The key to integrated use of such a variety of collections in a digital library is collection metadata that represents the inherent and contextual characteristics of a collection. The Alexandria Digital Library (ADL) Project has designed and implemented collection metadata for several purposes: in XML form, the collection metadata "registers" the collection with the user interface client; in HTML form, it is used for user documentation; eventually, it will be used to describe the collection to network search agents; and it is used for internal collection management, including mapping the object metadata attributes to the common search parameters of the system.

  14. Internet-based interface for STRMDEPL08

    Science.gov (United States)

    Reeves, Howard W.; Asher, A. Jeremiah

    2010-01-01

    The core of the computer program STRMDEPL08 that estimates streamflow depletion by a pumping well with one of four analytical solutions was re-written in the Javascript software language and made available through an internet-based interface (web page). In the internet-based interface, the user enters data for one of the four analytical solutions, Glover and Balmer (1954), Hantush (1965), Hunt (1999), and Hunt (2003), and the solution is run for constant pumping for a desired number of simulation days. Results are returned in tabular form to the user. For intermittent pumping, the interface allows the user to request that the header information for an input file for the stand-alone executable STRMDEPL08 be created. The user would add the pumping information to this header information and run the STRMDEPL08 executable that is available for download through the U.S. Geological Survey. Results for the internet-based and stand-alone versions of STRMDEPL08 are shown to match.
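    The Glover and Balmer (1954) solution implemented in STRMDEPL08 gives the fraction of the pumping rate derived from the stream as q/Q = erfc(d / sqrt(4Tt/S)). A sketch with illustrative parameter values (not from the report):

```python
# Glover-Balmer (1954) stream depletion fraction for constant pumping.
import math

def glover_balmer(d, T, S, t):
    """Fraction of pumping rate derived from the stream at time t.

    d: distance from well to stream (ft), T: transmissivity (ft^2/d),
    S: storativity (-), t: time since pumping began (d).
    """
    if t <= 0:
        return 0.0
    return math.erfc(d / math.sqrt(4.0 * T * t / S))

# depletion grows toward 1 as pumping continues
early = glover_balmer(d=500, T=1000, S=0.1, t=10)
late = glover_balmer(d=500, T=1000, S=0.1, t=1000)
print(round(early, 3), round(late, 3))
```

The other three solutions (Hantush 1965; Hunt 1999, 2003) add streambed resistance and partial penetration terms to this basic form.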

  15. Computer-based multi-channel analyzer based on internet

    International Nuclear Information System (INIS)

    Zhou Xinzhi; Ning Jiaoxian

    2001-01-01

    Combining Internet technology with the computer-based multi-channel analyzer, a new kind of browser-based multi-channel analyzer system is presented. Its framework and principle, as well as its implementation, are discussed.

  16. Archaeology in Delaware. Pupil's Guide.

    Science.gov (United States)

    Delaware State Dept. of Public Instruction, Dover.

    The archeology of Delaware, for all practical purposes meaning Indian prehistory, is the focus of this set consisting of teacher's and pupil's guides. Intended primarily for use at the fourth grade level, the material can successfully be adapted for use in grades 5 through 8. The teacher's guide is flexible and non-structured, allowing for…

  17. Towards Precise Metadata-set for Discovering 3D Geospatial Models in Geo-portals

    Science.gov (United States)

    Zamyadi, A.; Pouliot, J.; Bédard, Y.

    2013-09-01

    Accessing 3D geospatial models, eventually at no cost and for unrestricted use, is certainly an important issue as they become popular among participatory communities, consultants, and officials. Various geo-portals, mainly established for 2D resources, have tried to provide access to existing 3D resources such as digital elevation models, LIDAR, or classic topographic data. Describing the content of data, metadata is a key component of data discovery in geo-portals. An inventory of seven online geo-portals and commercial catalogues shows that the metadata referring to 3D information is very different from one geo-portal to another, as well as for similar 3D resources in the same geo-portal. The inventory considered 971 data resources affiliated with elevation. 51% of them were from three geo-portals running at Canadian federal and municipal levels whose metadata resources did not consider 3D models by any definition. Regarding the remaining 49%, which refer to 3D models, different definitions of terms and metadata were found, resulting in confusion and misinterpretation. The overall assessment of these geo-portals clearly shows that the provided metadata do not integrate specific and common information about 3D geospatial models. Accordingly, the main objective of this research is to improve 3D geospatial model discovery in geo-portals by adding a specific metadata-set. Based on the knowledge and current practices of 3D modeling and 3D data acquisition and management, a set of metadata is proposed to increase its suitability for 3D geospatial models. This metadata-set enables the definition of genuine classes, fields, and code-lists for a 3D metadata profile. The main structure of the proposal contains 21 metadata classes. These classes are classified in three packages: General and Complementary, on contextual and structural information, and Availability, on the transition from storage to delivery format. 
The proposed metadata set is compared with Canadian Geospatial

  18. A document centric metadata registration tool constructing earth environmental data infrastructure

    Science.gov (United States)

    Ichino, M.; Kinutani, H.; Ono, M.; Shimizu, T.; Yoshikawa, M.; Masuda, K.; Fukuda, K.; Kawamoto, H.

    2009-12-01

    DIAS (Data Integration and Analysis System) is one of the GEOSS activities in Japan. It is also a leading part of the GEOSS task with the same name defined in the GEOSS Ten Year Implementation Plan. The main mission of DIAS is to construct a data infrastructure that can effectively integrate earth environmental data such as observation data, numerical model outputs, and socio-economic data provided from the fields of climate, water cycle, ecosystem, ocean, biodiversity and agriculture. Some of DIAS's data products are available at the following web site: http://www.jamstec.go.jp/e/medid/dias. Most earth environmental data commonly have spatial and temporal attributes such as the covering geographic scope or the creation date. The metadata standards including these common attributes are published by the geographic information technical committee (TC211) of ISO (the International Organization for Standardization) as specifications ISO 19115:2003 and 19139:2007. Accordingly, DIAS metadata is developed based on the ISO/TC211 metadata standards. From the viewpoint of data users, metadata is useful not only for data retrieval and analysis but also for interoperability and information sharing among experts, beginners and nonprofessionals. On the other hand, from the viewpoint of data providers, two problems were pointed out after discussions. One is that data providers prefer to minimize the extra tasks and time spent creating metadata. The other is that data providers want to manage and publish documents to explain their data sets more comprehensively. To solve these problems, we have been developing a document-centric metadata registration tool. The features of our tool are that the generated documents are available instantly and there is no extra cost for data providers to generate metadata. Also, this tool is developed as a Web application, so it does not demand any software from data providers as long as they have a web browser. 
The interface of the tool

  19. Metadata to Support Data Warehouse Evolution

    Science.gov (United States)

    Solodovnikova, Darja

    The focus of this chapter is metadata necessary to support data warehouse evolution. We present the data warehouse framework that is able to track evolution process and adapt data warehouse schemata and data extraction, transformation, and loading (ETL) processes. We discuss the significant part of the framework, the metadata repository that stores information about the data warehouse, logical and physical schemata and their versions. We propose the physical implementation of multiversion data warehouse in a relational DBMS. For each modification of a data warehouse schema, we outline the changes that need to be made to the repository metadata and in the database.
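    The repository idea above amounts to storing each schema modification as a new version record so that ETL processes can consult the version valid for a given load. A minimal sketch; class and table names are invented, and a real implementation would live in a relational DBMS as the chapter proposes:

```python
# Toy multiversion schema repository for a data warehouse.
import datetime

class SchemaRepository:
    def __init__(self):
        self.versions = []          # list of (version_no, valid_from, schema)

    def register(self, schema):
        """Store a new schema version and return its version number."""
        v = len(self.versions) + 1
        self.versions.append((v, datetime.date.today(), schema))
        return v

    def current(self):
        return self.versions[-1][2]

repo = SchemaRepository()
repo.register({"fact_sales": ["date_key", "store_key", "amount"]})
# an evolution step: a new column is added, yielding version 2
repo.register({"fact_sales": ["date_key", "store_key", "amount", "currency"]})
print(len(repo.versions), repo.current()["fact_sales"][-1])
```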

  20. The Geoscience Internet of Things

    Science.gov (United States)

    Lehnert, K.; Klump, J.

    2012-04-01

    Internet of Things is a term that refers to "uniquely identifiable objects (things) and their virtual representations in an Internet-like structure" (Wikipedia). We here use the term to describe new and innovative ways to integrate physical samples in the Earth Sciences into the emerging digital infrastructures that are developed to support research and education in the Geosciences. Many Earth Science data are acquired on solid earth samples through observations and experiments conducted in the field or in the lab. The application and long-term utility of sample-based data for science is critically dependent on (a) the availability of information (metadata) about the samples, such as the geographic location where the sample was collected, time of sampling, sampling method, etc., (b) links between the different data types available for individual samples that are dispersed in the literature and in digital data repositories, and (c) access to the samples themselves. None of these requirements could be achieved in the past due to incomplete documentation of samples in publications, use of ambiguous sample names, and the lack of a central catalog that allows researchers to find a sample's archiving location. New internet-based capabilities have been developed over the past few years for the registration and unique identification of samples that make it possible to overcome these problems. Services for the registration and unique identification of samples are provided by the System for Earth Sample Registration SESAR (www.geosamples.org). SESAR developed the International Geo Sample Number, or IGSN, as a unique identifier for samples and specimens collected from our natural environment. Since December 2011, the IGSN is governed by an international organization, the IGSN e.V. (www.igsn.org), which endorses and promotes an internationally unified approach for registration and discovery of physical specimens in the Geoscience community and is establishing a new modular and
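    The registration idea can be sketched as a catalog that mints a unique identifier per sample and files its metadata. The namespace and numbering scheme below are invented; real IGSNs follow SESAR's own allocation rules:

```python
# Toy sample registry in the spirit of IGSN-style unique identification.
class SampleRegistry:
    def __init__(self, namespace):
        self.namespace = namespace
        self.catalog = {}
        self._next = 0

    def register(self, metadata):
        """Assign the next identifier in the namespace and file the metadata."""
        self._next += 1
        igsn = f"{self.namespace}{self._next:06d}"
        self.catalog[igsn] = metadata
        return igsn

    def lookup(self, igsn):
        return self.catalog[igsn]

reg = SampleRegistry("XYZ")
igsn = reg.register({"material": "basalt", "lat": -21.1, "lon": -175.2,
                     "collected": "2011-08-04"})
print(igsn, reg.lookup(igsn)["material"])
```

The unique identifier is what lets dispersed publications and repositories link their data back to the same physical specimen.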

  1. A Laboratory Safety Program at Delaware.

    Science.gov (United States)

    Whitmyre, George; Sandler, Stanley I.

    1986-01-01

    Describes a laboratory safety program at the University of Delaware. Includes a history of the program's development, along with standard safety training and inspections now being implemented. Outlines a two-day laboratory safety course given to all graduate students and staff in chemical engineering. (TW)

  2. SnoVault and encodeD: A novel object-based storage system and applications to ENCODE metadata.

    Directory of Open Access Journals (Sweden)

    Benjamin C Hitz

    Full Text Available The Encyclopedia of DNA Elements (ENCODE) project is an ongoing collaborative effort to create a comprehensive catalog of functional elements, initiated shortly after the completion of the Human Genome Project. The current database exceeds 6500 experiments across more than 450 cell lines and tissues, using a wide array of experimental techniques to study the chromatin structure and the regulatory and transcriptional landscape of the H. sapiens and M. musculus genomes. All ENCODE experimental data, metadata, and associated computational analyses are submitted to the ENCODE Data Coordination Center (DCC) for validation, tracking, storage, unified processing, and distribution to community resources and the scientific community. As the volume of data increases, the identification and organization of experimental details becomes increasingly intricate and demands careful curation. The ENCODE DCC has created a general-purpose software system, known as SnoVault, that supports metadata and file submission, a database used for metadata storage, web pages for displaying the metadata, and a robust API for querying the metadata. The software is fully open source; code and installation instructions can be found at http://github.com/ENCODE-DCC/snovault/ (for the generic database) and http://github.com/ENCODE-DCC/encoded/ (to store genomic data in the manner of ENCODE). The core database engine, SnoVault (which is completely independent of ENCODE, genomic data, or bioinformatic data), has been released as a separate Python package.
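    The pattern the abstract describes — typed metadata objects that are validated on submission, stored under stable identifiers, and queryable by property — can be sketched in a few lines. This is a minimal illustration of that general pattern, not the SnoVault API; the class, schema, and field names below are invented for the example.

    ```python
    import uuid

    # Illustrative schemas: each object type declares its required fields.
    SCHEMAS = {
        "experiment": {"required": {"assay", "cell_line"}},
    }

    class MetadataStore:
        """Toy object-based metadata store: validate, assign UUID, query."""

        def __init__(self):
            self._objects = {}

        def submit(self, obj_type, props):
            """Validate against the type's schema and store under a fresh UUID."""
            missing = SCHEMAS[obj_type]["required"] - props.keys()
            if missing:
                raise ValueError(f"missing required fields: {sorted(missing)}")
            oid = str(uuid.uuid4())
            self._objects[oid] = {"type": obj_type, **props}
            return oid

        def search(self, **filters):
            """Return objects matching every key=value filter."""
            return [o for o in self._objects.values()
                    if all(o.get(k) == v for k, v in filters.items())]

    store = MetadataStore()
    store.submit("experiment", {"assay": "ChIP-seq", "cell_line": "K562"})
    store.submit("experiment", {"assay": "RNA-seq", "cell_line": "K562"})
    hits = store.search(assay="ChIP-seq")
    print(len(hits))  # 1
    ```

    A production system like the one described would add versioning, an inverted index for full-text search, and a REST layer on top of this core.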

  3. Internet-based surveillance systems for monitoring emerging infectious diseases.

    Science.gov (United States)

    Milinovich, Gabriel J; Williams, Gail M; Clements, Archie C A; Hu, Wenbiao

    2014-02-01

    Emerging infectious diseases present a complex challenge to public health officials and governments; these challenges have been compounded by rapidly shifting patterns of human behaviour and globalisation. The increase in emerging infectious diseases has led to calls for new technologies and approaches for detection, tracking, reporting, and response. Internet-based surveillance systems offer a novel and developing means of monitoring conditions of public health concern, including emerging infectious diseases. We review studies that have exploited internet use and search trends to monitor two such diseases: influenza and dengue. Internet-based surveillance systems have good congruence with traditional surveillance approaches. Additionally, internet-based approaches are logistically and economically appealing. However, they do not have the capacity to replace traditional surveillance systems; they should not be viewed as an alternative, but rather an extension. Future research should focus on using data generated through internet-based surveillance and response systems to bolster the capacity of traditional surveillance systems for emerging infectious diseases. Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. Water quality trends in the Delaware River Basin (USA) from 1980 to 2005.

    Science.gov (United States)

    Kauffman, Gerald J; Homsey, Andrew R; Belden, Andrew C; Sanchez, Jessica Rittler

    2011-06-01

    In 1940, the tidal Delaware River was "one of the most grossly polluted areas in the United States." During the 1950s, water quality was so poor along the river at Philadelphia that zero oxygen levels prevented migration of American shad leading to near extirpation of the species. Since then, water quality in the Delaware Basin has improved with implementation of the 1961 Delaware River Basin Compact and 1970s Federal Clean Water Act Amendments. At 15 gages along the Delaware River and major tributaries between 1980 and 2005, water quality for dissolved oxygen, phosphorus, nitrogen, and sediment improved at 39%, remained constant at 51%, and degraded at 10% of the stations. Since 1980, improved water-quality stations outnumbered degraded stations by a 4 to 1 margin. Water quality remains good in the nontidal river above Trenton and, while improved, remains fair to poor for phosphorus and nitrogen in the tidal estuary near Philadelphia and in the Lehigh and Schuylkill tributaries. Water quality is good in heavily forested watersheds (>50%) and poor in highly cultivated watersheds. Water quality recovery in the Delaware Basin is coincident with implementation of environmental laws enacted in the 1960s and 1970s and is congruent with return of striped bass, shad, blue crab, and bald eagle populations.

  5. Prediction of Solar Eruptions Using Filament Metadata

    Science.gov (United States)

    Aggarwal, Ashna; Schanche, Nicole; Reeves, Katharine K.; Kempton, Dustin; Angryk, Rafal

    2018-05-01

    We perform a statistical analysis of erupting and non-erupting solar filaments to determine the properties related to the eruption potential. In order to perform this study, we correlate filament eruptions documented in the Heliophysics Event Knowledgebase (HEK) with HEK filaments that have been grouped together using a spatiotemporal tracking algorithm. The HEK provides metadata about each filament instance, including values for length, area, tilt, and chirality. We add additional metadata properties such as the distance from the nearest active region and the magnetic field decay index. We compare trends in the metadata from erupting and non-erupting filament tracks to discover which properties present signs of an eruption. We find that a change in filament length over time is the most important factor in discriminating between erupting and non-erupting filament tracks, with erupting tracks being more likely to have decreasing length. We attempt to find an ensemble of predictive filament metadata using a Random Forest Classifier approach, but find the probability of correctly predicting an eruption with the current metadata is only slightly better than chance.

  6. Problematic Internet Use among Turkish University Students: A Multidimensional Investigation Based on Demographics and Internet Activities

    Science.gov (United States)

    Tekinarslan, Erkan; Gurer, Melih Derya

    2011-01-01

    This study investigated the Turkish undergraduate university students' problematic Internet use (PIU) levels on different dimensions based on demographics (e.g., gender, Internet use by time of day), and Internet activities (e.g., chat, entertainment, social networking, information searching, etc.). Moreover, the study explored some predictors of…

  7. Creating metadata that work for digital libraries and Google

    OpenAIRE

    Dawson, Alan

    2004-01-01

    For many years metadata has been recognised as a significant component of the digital information environment. Substantial work has gone into creating complex metadata schemes for describing digital content. Yet increasingly Web search engines, and Google in particular, are the primary means of discovering and selecting digital resources, although they make little use of metadata. This article considers how digital libraries can gain more value from their metadata by adapting it for Google us...

  8. Technologies for metadata management in scientific articles

    OpenAIRE

    Castro-Romero, Alexander; González-Sanabria, Juan S.; Ballesteros-Ricaurte, Javier A.

    2015-01-01

    The use of Semantic Web technologies has been increasing, and they are now applied in many different ways. This article evaluates how these technologies can improve the indexing of articles in scientific journals. It begins with a conceptual review of metadata, then surveys the most important technologies for using metadata on the Web, selecting one of them to apply in a case study of scientific article indexing, in order to determine the metadata ...

  9. The timber resources of Delaware

    Science.gov (United States)

    Roland H. Ferguson; Carl E. Mayer

    1974-01-01

    Under the authority of the McSweeney-McNary Forest Research Act of May 22, 1928, and subsequent amendments, the Forest Service, U.S. Department of Agriculture, conducts a series of continuing forest surveys of all states to provide up-to-date information about the forest resources of the Nation. The first forest survey of Delaware was made in 1956 by the Northeastern...

  10. Metadata Quality in Institutional Repositories May be Improved by Addressing Staffing Issues

    Directory of Open Access Journals (Sweden)

    Elizabeth Stovold

    2016-09-01

    Full Text Available A Review of: Moulaison, S. H., & Dykas, F. (2016). High-quality metadata and repository staffing: Perceptions of United States–based OpenDOAR participants. Cataloging & Classification Quarterly, 54(2), 101-116. http://dx.doi.org/10.1080/01639374.2015.1116480 Objective – To investigate the quality of institutional repository metadata and metadata practices, and to identify barriers to quality. Design – Survey questionnaire. Setting – The OpenDOAR online registry of worldwide repositories. Subjects – A random sample of 50 of the 358 administrators of institutional repositories in the United States of America listed in the OpenDOAR registry. Methods – The authors surveyed a random sample of administrators of American institutional repositories included in the OpenDOAR registry. The survey was distributed electronically. Recipients were asked to forward the email if they felt someone else was better suited to respond. There were questions about the demographics of the repository, the metadata creation environment, metadata quality, standards and practices, and obstacles to quality. Results were analyzed in Excel, and qualitative responses were coded jointly by two researchers. Main results – There was a 42% (n=21) response rate to the section on metadata quality, a 40% (n=20) response rate to the metadata creation section, and 40% (n=20) to the section on obstacles to quality. The majority of respondents rated their metadata quality as average (65%, n=13) or above average (30%, n=5). No one rated the quality as high or poor, while 10% (n=2) rated the quality as below average. The survey found that the majority of descriptive metadata was created by professional (84%, n=16) or paraprofessional (53%, n=10) library staff. Professional staff were commonly involved in creating administrative metadata, reviewing the metadata, and selecting standards and documentation. Department heads and advisory committees were also involved in standards and documentation

  11. The role of metadata in managing large environmental science datasets. Proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Melton, R.B.; DeVaney, D.M. [eds.] [Pacific Northwest Lab., Richland, WA (United States); French, J. C. [Univ. of Virginia, (United States)

    1995-06-01

    The purpose of this workshop was to bring together computer science researchers and environmental sciences data management practitioners to consider the role of metadata in managing large environmental sciences datasets. The objectives included: establishing a common definition of metadata; identifying categories of metadata; defining problems in managing metadata; and defining problems related to linking metadata with primary data.

  12. Meta-Data Objects as the Basis for System Evolution

    CERN Document Server

    Estrella, Florida; Tóth, N; Kovács, Z; Le Goff, J M; Clatchey, Richard Mc; Toth, Norbert; Kovacs, Zsolt; Goff, Jean-Marie Le

    2001-01-01

    One of the main factors driving object-oriented software development in the Web-age is the need for systems to evolve as user requirements change. A crucial factor in the creation of adaptable systems dealing with changing requirements is the suitability of the underlying technology in allowing the evolution of the system. A reflective system utilizes an open architecture where implicit system aspects are reified to become explicit first-class (meta-data) objects. These implicit system aspects are often fundamental structures which are inaccessible and immutable, and their reification as meta-data objects can serve as the basis for changes and extensions to the system, making it self-describing. To address the evolvability issue, this paper proposes a reflective architecture based on two orthogonal abstractions - model abstraction and information abstraction. In this architecture the modeling abstractions allow for the separation of the description meta-data from the system aspects they represent so that th...

  13. Research on key technologies for data-interoperability-based metadata, data compression and encryption, and their application

    Science.gov (United States)

    Yu, Xu; Shao, Quanqin; Zhu, Yunhai; Deng, Yuejin; Yang, Haijun

    2006-10-01

    With the development of informatization and the separation between data management departments and application departments, spatial data sharing has become one of the most important objectives of spatial information infrastructure construction, and spatial metadata management, data transmission security, and data compression are the key technologies for realizing it. This paper discusses key technologies for metadata based on data interoperability; examines data compression algorithms such as adaptive Huffman, LZ77, and LZ78; and studies the application of digital signature techniques to spatial data, which can not only identify the transmitter of the data but also promptly detect whether the data were tampered with during network transmission. Based on an analysis of the symmetric encryption algorithms 3DES and AES and the asymmetric encryption algorithm RSA, combined with a hash algorithm, it presents an improved hybrid encryption method for spatial data. Digital signature technology and digital watermarking technology are also discussed. A new solution for spatial data network distribution is then put forward, which adopts a three-layer architecture. Based on this framework, we present a spatial data network distribution system that is efficient and safe, demonstrating the feasibility and validity of the proposed solution.
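    The compress-then-verify step of the pipeline described above can be sketched with the Python standard library alone: zlib's DEFLATE (an LZ77-family algorithm) stands in for the compression stage, and a SHA-256 digest stands in for the hash step. This is only an illustration of the pattern; a real system, as the paper notes, would additionally sign the digest with an asymmetric key (e.g. RSA), which is omitted here.

    ```python
    import hashlib
    import zlib

    # Repetitive spatial data (WKT-style points) compresses well under LZ77/DEFLATE.
    payload = b"POINT(39.74 -75.55) " * 200

    # Sender side: compress, then publish a digest alongside the data.
    compressed = zlib.compress(payload, level=9)
    digest = hashlib.sha256(compressed).hexdigest()

    # Receiver side: verify integrity before decompressing.
    assert hashlib.sha256(compressed).hexdigest() == digest
    restored = zlib.decompress(compressed)
    assert restored == payload
    print(len(payload), len(compressed))  # compressed size is far smaller
    ```

    The digest alone only detects accidental corruption; signing it is what lets the receiver also authenticate the transmitter, which is the role the paper assigns to RSA.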

  14. Delaware Basin Monitoring Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Washington Regulatory and Environmental Services; Washington TRU Solutions LLC

    2002-09-21

    The Delaware Basin Drilling Surveillance Program (DBDSP) is designed to monitor drilling activities in the vicinity of the Waste Isolation Pilot Plant (WIPP). This program is based on Environmental Protection Agency (EPA) requirements. The EPA environmental standards for the management and disposal of transuranic (TRU) radioactive waste are codified in 40 CFR Part 191 (EPA 1993). Subparts B and C of the standard address the disposal of radioactive waste. The standard requires the Department of Energy (DOE) to demonstrate the expected performance of the disposal system using a probabilistic risk assessment or performance assessment (PA). This PA must show that the expected repository performance will not release radioactive material above limits set by the EPA's standard. This assessment must include the consideration of inadvertent drilling into the repository at some future time.

  15. Delaware Basin Monitoring Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Washington Regulatory and Environmental Services; Washington TRU Solutions LLC

    2003-09-30

    The Delaware Basin Drilling Surveillance Program (DBDSP) is designed to monitor drilling activities in the vicinity of the Waste Isolation Pilot Plant (WIPP). This program is based on Environmental Protection Agency (EPA) requirements. The EPA environmental standards for the management and disposal of transuranic (TRU) radioactive waste are codified in 40 CFR Part 191 (EPA 1993). Subparts B and C of the standard address the disposal of radioactive waste. The standard requires the Department of Energy (DOE) to demonstrate the expected performance of the disposal system using a probabilistic risk assessment or performance assessment (PA). This PA must show that the expected repository performance will not release radioactive material above limits set by the EPA's standard. This assessment must include the consideration of inadvertent drilling into the repository at some future time.

  16. Delaware Basin Monitoring Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Washington Regulatory and Environmental Services; Washington TRU Solutions LLC

    2005-09-30

    The Delaware Basin Drilling Surveillance Program (DBDSP) is designed to monitor drilling activities in the vicinity of the Waste Isolation Pilot Plant (WIPP). This program is based on Environmental Protection Agency (EPA) requirements. The EPA environmental standards for the management and disposal of transuranic (TRU) radioactive waste are codified in 40 CFR Part 191 (EPA 1993). Subparts B and C of the standard address the disposal of radioactive waste. The standard requires the Department of Energy (DOE) to demonstrate the expected performance of the disposal system using a probabilistic risk assessment or performance assessment (PA). This PA must show that the expected repository performance will not release radioactive material above limits set by the EPA's standard. This assessment must include the consideration of inadvertent drilling into the repository at some future time.

  17. Delaware Basin Monitoring Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Washington Regulatory and Environmental Services; Washington TRU Solutions LLC

    2004-09-30

    The Delaware Basin Drilling Surveillance Program (DBDSP) is designed to monitor drilling activities in the vicinity of the Waste Isolation Pilot Plant (WIPP). This program is based on Environmental Protection Agency (EPA) requirements. The EPA environmental standards for the management and disposal of transuranic (TRU) radioactive waste are codified in 40 CFR Part 191 (EPA 1993). Subparts B and C of the standard address the disposal of radioactive waste. The standard requires the Department of Energy (DOE) to demonstrate the expected performance of the disposal system using a probabilistic risk assessment or performance assessment (PA). This PA must show that the expected repository performance will not release radioactive material above limits set by the EPA's standard. This assessment must include the consideration of inadvertent drilling into the repository at some future time.

  18. Delaware Basin Monitoring Annual Report

    International Nuclear Information System (INIS)

    2005-01-01

    The Delaware Basin Drilling Surveillance Program (DBDSP) is designed to monitor drilling activities in the vicinity of the Waste Isolation Pilot Plant (WIPP). This program is based on Environmental Protection Agency (EPA) requirements. The EPA environmental standards for the management and disposal of transuranic (TRU) radioactive waste are codified in 40 CFR Part 191 (EPA 1993). Subparts B and C of the standard address the disposal of radioactive waste. The standard requires the Department of Energy (DOE) to demonstrate the expected performance of the disposal system using a probabilistic risk assessment or performance assessment (PA). This PA must show that the expected repository performance will not release radioactive material above limits set by the EPA's standard. This assessment must include the consideration of inadvertent drilling into the repository at some future time.

  19. Delaware Basin Monitoring Annual Report

    International Nuclear Information System (INIS)

    2002-01-01

    The Delaware Basin Drilling Surveillance Program (DBDSP) is designed to monitor drilling activities in the vicinity of the Waste Isolation Pilot Plant (WIPP). This program is based on Environmental Protection Agency (EPA) requirements. The EPA environmental standards for the management and disposal of transuranic (TRU) radioactive waste are codified in 40 CFR Part 191 (EPA 1993). Subparts B and C of the standard address the disposal of radioactive waste. The standard requires the Department of Energy (DOE) to demonstrate the expected performance of the disposal system using a probabilistic risk assessment or performance assessment (PA). This PA must show that the expected repository performance will not release radioactive material above limits set by the EPA's standard. This assessment must include the consideration of inadvertent drilling into the repository at some future time.

  20. Delaware Basin Monitoring Annual Report

    International Nuclear Information System (INIS)

    2004-01-01

    The Delaware Basin Drilling Surveillance Program (DBDSP) is designed to monitor drilling activities in the vicinity of the Waste Isolation Pilot Plant (WIPP). This program is based on Environmental Protection Agency (EPA) requirements. The EPA environmental standards for the management and disposal of transuranic (TRU) radioactive waste are codified in 40 CFR Part 191 (EPA 1993). Subparts B and C of the standard address the disposal of radioactive waste. The standard requires the Department of Energy (DOE) to demonstrate the expected performance of the disposal system using a probabilistic risk assessment or performance assessment (PA). This PA must show that the expected repository performance will not release radioactive material above limits set by the EPA's standard. This assessment must include the consideration of inadvertent drilling into the repository at some future time.

  1. Delaware Basin Monitoring Annual Report

    International Nuclear Information System (INIS)

    2003-01-01

    The Delaware Basin Drilling Surveillance Program (DBDSP) is designed to monitor drilling activities in the vicinity of the Waste Isolation Pilot Plant (WIPP). This program is based on Environmental Protection Agency (EPA) requirements. The EPA environmental standards for the management and disposal of transuranic (TRU) radioactive waste are codified in 40 CFR Part 191 (EPA 1993). Subparts B and C of the standard address the disposal of radioactive waste. The standard requires the Department of Energy (DOE) to demonstrate the expected performance of the disposal system using a probabilistic risk assessment or performance assessment (PA). This PA must show that the expected repository performance will not release radioactive material above limits set by the EPA's standard. This assessment must include the consideration of inadvertent drilling into the repository at some future time.

  2. Scalable video on demand adaptive Internet-based distribution

    CERN Document Server

    Zink, Michael

    2013-01-01

    In recent years, the proliferation of available video content and the popularity of the Internet have encouraged service providers to develop new ways of distributing content to clients. Increasing video scaling ratios and advanced digital signal processing techniques have led to Internet Video-on-Demand applications, but these currently lack efficiency and quality. Scalable Video on Demand: Adaptive Internet-based Distribution examines how current video compression and streaming can be used to deliver high-quality applications over the Internet. In addition to analysing the problems

  3. Internet-enabled collaborative agent-based supply chains

    Science.gov (United States)

    Shen, Weiming; Kremer, Rob; Norrie, Douglas H.

    2000-12-01

    This paper presents some results of our recent research work related to the development of a new Collaborative Agent System Architecture (CASA) and an Infrastructure for Collaborative Agent Systems (ICAS). Initially being proposed as a general architecture for Internet based collaborative agent systems (particularly complex industrial collaborative agent systems), the proposed architecture is very suitable for managing the Internet enabled complex supply chain for a large manufacturing enterprise. The general collaborative agent system architecture with the basic communication and cooperation services, domain independent components, prototypes and mechanisms are described. Benefits of implementing Internet enabled supply chains with the proposed infrastructure are discussed. A case study on Internet enabled supply chain management is presented.

  4. Feasibility Study of Economics and Performance of Solar Photovoltaics at the Standard Chlorine of Delaware Superfund Site in Delaware City, Delaware. A Study Prepared in Partnership with the Environmental Protection Agency for the RE-Powering America's Land Initiative: Siting Renewable Energy on Potentially Contaminated Land and Mine Sites

    Energy Technology Data Exchange (ETDEWEB)

    Salasovich, J.; Geiger, J.; Mosey, G.; Healey, V.

    2013-06-01

    The U.S. Environmental Protection Agency (EPA), in accordance with the RE-Powering America's Land initiative, selected the Standard Chlorine of Delaware site in Delaware City, Delaware, for a feasibility study of renewable energy production. The National Renewable Energy Laboratory (NREL) provided technical assistance for this project. The purpose of this report is to assess the site for a possible photovoltaic (PV) system installation and estimate the cost, performance, and site impacts of different PV options. In addition, the report recommends financing options that could assist in the implementation of a PV system at the site.

  5. 77 FR 58953 - Approval and Promulgation of Air Quality Implementation Plans; Delaware; Control Technique...

    Science.gov (United States)

    2012-09-25

    ... Environmental Control (DNREC). The revisions amend Delaware's regulation for the Control of Volatile Organic... approval of the Delaware SIP revision that amends Regulation No. 1124, Control of Volatile Organic..., specifies standards and exemptions, and specifies control devices, test methods, compliance certification...

  6. openPDS: protecting the privacy of metadata through SafeAnswers.

    Directory of Open Access Journals (Sweden)

    Yves-Alexandre de Montjoye

    Full Text Available The rise of smartphones and web services made possible the large-scale collection of personal metadata. Information about individuals' location, phone call logs, or web searches is collected and used intensively by organizations and big data researchers. Metadata has, however, yet to realize its full potential. Privacy and legal concerns, as well as the lack of technical solutions for personal metadata management, are preventing metadata from being shared and reconciled under the control of the individual. This lack of access and control is furthermore fueling growing concerns, as it prevents individuals from understanding and managing the risks associated with the collection and use of their data. Our contribution is two-fold: (1) we describe openPDS, a personal metadata management framework that allows individuals to collect, store, and give fine-grained access to their metadata to third parties. It has been implemented in two field studies; (2) we introduce and analyze SafeAnswers, a new and practical way of protecting the privacy of metadata at an individual level. SafeAnswers turns a hard anonymization problem into a more tractable security one. It allows services to ask questions whose answers are calculated against the metadata instead of trying to anonymize individuals' metadata. The dimensionality of the data shared with the services is reduced from high-dimensional metadata to low-dimensional answers that are less likely to be re-identifiable and to contain sensitive information. These answers can then be directly shared individually or in aggregate. openPDS and SafeAnswers provide a new way of dynamically protecting personal metadata, thereby supporting the creation of smart data-driven services and data science research.
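    The SafeAnswers idea — compute the answer inside the user's personal data store and release only the low-dimensional result, never the raw metadata — can be illustrated with a toy sketch. The class name, question format, and data fields below are invented for the example and are not the openPDS API.

    ```python
    class PersonalDataStore:
        """Toy personal data store: raw metadata never leaves this object."""

        def __init__(self, call_log):
            self._call_log = call_log  # raw, high-dimensional metadata stays private

        def safe_answer(self, question):
            """Run an approved question against the metadata; return only the result."""
            if question == "calls_after_midnight":
                # Low-dimensional answer: a single count, not the call records.
                return sum(1 for c in self._call_log if c["hour"] < 6)
            raise PermissionError("question not approved")

    pds = PersonalDataStore([
        {"hour": 2, "number": "(private)"},
        {"hour": 14, "number": "(private)"},
        {"hour": 23, "number": "(private)"},
    ])
    print(pds.safe_answer("calls_after_midnight"))  # 1
    ```

    The point of the design is that a service sees only the integer `1`, which is far less re-identifiable than the call log it was computed from.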

  7. iLOG: A Framework for Automatic Annotation of Learning Objects with Empirical Usage Metadata

    Science.gov (United States)

    Miller, L. D.; Soh, Leen-Kiat; Samal, Ashok; Nugent, Gwen

    2012-01-01

    Learning objects (LOs) are digital or non-digital entities used for learning, education or training commonly stored in repositories searchable by their associated metadata. Unfortunately, based on the current standards, such metadata is often missing or incorrectly entered making search difficult or impossible. In this paper, we investigate…

  8. An internet-based teaching file on clinical nuclear medicine

    International Nuclear Information System (INIS)

    Jiang Zhong; Wu Jinchang

    2001-01-01

    Objective: The goal of this project was to develop an internet-based, interactive digital teaching file on nuclide imaging in clinical nuclear medicine. Methods: Based on the academic content of the nuclear medicine textbook for undergraduates majoring in nuclear medicine, the teaching file was built with FrontPage 2000 and HTML, with JavaScript used in some sections. Results: A practical and comprehensive teaching file was completed and can be accessed over the internet at acceptable speed. In addition to the basic teaching content on nuclide imaging, the file includes a large number of typical and rare clinical cases, questionnaires with answers, and up-to-date material from the field of nuclear medicine. Conclusion: The teaching file meets its goal of providing an easy-to-use, internet-based digital teaching resource with current, enriched content presented in varied forms.

  9. Deploying the ATLAS Metadata Interface (AMI) on the cloud with Jenkins

    Science.gov (United States)

    Lambert, F.; Odier, J.; Fulachier, J.; ATLAS Collaboration

    2017-10-01

    The ATLAS Metadata Interface (AMI) is a mature application of more than 15 years of existence. Mainly used by the ATLAS experiment at CERN, it consists of a very generic tool ecosystem for metadata aggregation and cataloguing. AMI is used by the ATLAS production system, therefore the service must guarantee a high level of availability. We describe our monitoring and administration systems, and the Jenkins-based strategy used to dynamically test and deploy cloud OpenStack nodes on demand.

  10. Deploying the ATLAS Metadata Interface (AMI) on the cloud with Jenkins.

    CERN Document Server

    AUTHOR|(SzGeCERN)637120; The ATLAS collaboration; Odier, Jerome; Fulachier, Jerome

    2017-01-01

    The ATLAS Metadata Interface (AMI) is a mature application of more than 15 years of existence. Mainly used by the ATLAS experiment at CERN, it consists of a very generic tool ecosystem for metadata aggregation and cataloguing. AMI is used by the ATLAS production system, therefore the service must guarantee a high level of availability. We describe our monitoring and administration systems, and the Jenkins-based strategy used to dynamically test and deploy cloud OpenStack nodes on demand.

  11. NAIP National Metadata

    Data.gov (United States)

    Farm Service Agency, Department of Agriculture — The NAIP National Metadata Map contains USGS Quarter Quad and NAIP Seamline boundaries for every year NAIP imagery has been collected. Clicking on the map also makes...

  12. Web and Internet-based Capabilities (IbC) Policies - U.S. Department of Defense

    Science.gov (United States)


  13. Leucine incorporation by aerobic anoxygenic phototrophic bacteria in the Delaware estuary

    OpenAIRE

    Stegman, Monica R; Cottrell, Matthew T; Kirchman, David L

    2014-01-01

    Aerobic anoxygenic phototrophic (AAP) bacteria are well known to be abundant in estuaries, coastal regions and in the open ocean, but little is known about their activity in any aquatic ecosystem. To explore the activity of AAP bacteria in the Delaware estuary and coastal waters, single-cell 3H-leucine incorporation by these bacteria was examined with a new approach that combines infrared epifluorescence microscopy and microautoradiography. The approach was used on samples from the Delaware c...

  14. Standardizing metadata and taxonomic identification in metabarcoding studies

    NARCIS (Netherlands)

    Tedersoo, Leho; Ramirez, Kelly; Nilsson, R; Kaljuvee, Aivi; Koljalg, Urmas; Abarenkov, Kessy

    2015-01-01

    High-throughput sequencing-based metabarcoding studies produce vast amounts of ecological data, but a lack of consensus on standardization of metadata and how to refer to the species recovered severely hampers reanalysis and comparisons among studies. Here we propose an automated workflow covering

  15. Metadata database and data analysis software for the ground-based upper atmospheric data developed by the IUGONET project

    Science.gov (United States)

    Hayashi, H.; Tanaka, Y.; Hori, T.; Koyama, Y.; Shinbori, A.; Abe, S.; Kagitani, M.; Kouno, T.; Yoshida, D.; Ueno, S.; Kaneda, N.; Yoneda, M.; Tadokoro, H.; Motoba, T.; Umemura, N.; Iugonet Project Team

    2011-12-01

    The Inter-university Upper atmosphere Global Observation NETwork (IUGONET) is a Japanese inter-university project by the National Institute of Polar Research (NIPR), Tohoku University, Nagoya University, Kyoto University, and Kyushu University to build a database of metadata for ground-based observations of the upper atmosphere. The IUGONET institutes/universities have been collecting various types of data by radars, magnetometers, photometers, radio telescopes, helioscopes, etc. at various locations all over the world and at various altitude layers from the Earth's surface to the Sun. The metadata database will be of great help to researchers in efficiently finding and obtaining these observational data spread over the institutes/universities. This should also facilitate synthetic analysis of multi-disciplinary data, which will lead to new types of research in the upper atmosphere. The project has also been developing software to help researchers download, visualize, and analyze the data provided by the IUGONET institutes/universities. The metadata database system is built on the platform of DSpace, an open source software package for digital repositories. The data analysis software is written in the IDL language with the TDAS (THEMIS Data Analysis Software suite) library. These products have just been released for beta testing.

  16. A metadata schema for data objects in clinical research.

    Science.gov (United States)

    Canham, Steve; Ohmann, Christian

    2016-11-24

    A large number of stakeholders have accepted the need for greater transparency in clinical research and, in the context of various initiatives and systems, have developed a diverse and expanding number of repositories for storing the data and documents created by clinical studies (collectively known as data objects). To make the best use of such resources, we assert that it is also necessary for stakeholders to agree on and deploy a simple, consistent metadata scheme. The relevant data objects and their likely storage are described, and the requirements for metadata to support data sharing in clinical research are identified. Issues concerning persistent identifiers, for both studies and data objects, are explored. A scheme is proposed that is based on the DataCite standard, with extensions to cover the needs of clinical researchers, specifically to provide (a) study identification data, including links to clinical trial registries; (b) data object characteristics and identifiers; and (c) data covering location, ownership and access to the data object. The components of the metadata scheme are described. The metadata scheme is proposed as a natural extension of a widely agreed standard to fill a gap not tackled by other standards related to clinical research (e.g., Clinical Data Interchange Standards Consortium, Biomedical Research Integrated Domain Group). The proposal could be integrated with, but is not dependent on, other moves to better structure data in clinical research.

  17. Dementia caregivers' responses to 2 Internet-based intervention programs.

    Science.gov (United States)

    Marziali, Elsa; Garcia, Linda J

    2011-02-01

    The aim of this study was to examine the impact of 2 Internet-based intervention programs on dementia caregivers' experienced stress and health status. Ninety-one dementia caregivers were given the choice of being involved in either an Internet-based chat support group or an Internet-based video conferencing support group. Pre-post outcome measures focused on distress, health status, social support, and service utilization. In contrast to the Chat Group, the Video Group showed significantly greater improvement in mental health status. Also, for the Video Group, improvements in self-efficacy, neuroticism, and social support were associated with a lower stress response to coping with the care recipient's cognitive impairment and decline in function. The results show that, of the 2 Internet-based intervention programs for dementia caregivers, the video conferencing program was more effective in improving mental health status, and improvements in personal characteristics were associated with a lower caregiver stress response.

  18. An emergent theory of digital library metadata: enrich then filter

    CERN Document Server

    Stevens, Brett

    2015-01-01

    An Emergent Theory of Digital Library Metadata is a reaction to the current digital library landscape, which is being challenged by growing online collections and changing user expectations. The theory provides the conceptual underpinnings for a new approach that moves away from expert-defined, standardised metadata towards a user-driven approach with users as metadata co-creators. Moving away from definitive, authoritative metadata to a system that reflects the diversity of users' terminologies, it shifts the current focus on metadata simplicity and efficiency to one of metadata enrichment, a continuous and evolving process of data linking, and from predefined description to information conceptualised, contextualised and filtered at the point of delivery. By presenting this shift, the book provides a coherent structure in which future technological developments can be considered.

  19. Monitoring coastal water properties and current circulation with ERTS-1. [Delaware Bay

    Science.gov (United States)

    Klemas, V.; Otley, M.; Wethe, C.; Rogers, R.

    1974-01-01

    Imagery and digital tapes from nine successful ERTS-1 passes over Delaware Bay during different portions of the tidal cycle have been analyzed with special emphasis on turbidity, current circulation, waste disposal plumes and convergent boundaries between different water masses. ERTS-1 image radiance correlated well with Secchi depth and suspended sediment concentration. Circulation patterns observed by ERTS-1 during different parts of the tidal cycle agreed well with predicted and measured currents throughout Delaware Bay. Convergent shear boundaries between different water masses were observed from ERTS-1. In several ERTS-1 frames, waste disposal plumes have been detected 36 miles off Delaware's Atlantic coast. The ERTS-1 results are being used to extend and verify hydrodynamic models of the bay, developed for predicting oil slick movement and estimating sediment transport.

  20. Design and Implementation of a Metadata-rich File System

    Energy Technology Data Exchange (ETDEWEB)

    Ames, S; Gokhale, M B; Maltzahn, C

    2010-01-19

    Despite continual improvements in the performance and reliability of large scale file systems, the management of user-defined file system metadata has changed little in the past decade. The mismatch between the size and complexity of large scale data stores and their ability to organize and query their metadata has led to a de facto standard in which raw data is stored in traditional file systems, while related, application-specific metadata is stored in relational databases. This separation of data and semantic metadata requires considerable effort to maintain consistency and can result in complex, slow, and inflexible system operation. To address these problems, we have developed the Quasar File System (QFS), a metadata-rich file system in which files, user-defined attributes, and file relationships are all first class objects. In contrast to hierarchical file systems and relational databases, QFS defines a graph data model composed of files and their relationships. QFS incorporates Quasar, an XPATH-extended query language for searching the file system. Results from our QFS prototype show the effectiveness of this approach. Compared to the de facto standard, the QFS prototype shows superior ingest performance and comparable query performance on user metadata-intensive operations and superior performance on normal file metadata operations.
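
The graph data model described in this abstract can be sketched in a few lines; the class and function names below (File, link, query_attr) are illustrative assumptions, not the actual QFS or Quasar API:

```python
# Hypothetical sketch of a metadata-rich file model in the spirit of QFS:
# files, user-defined attributes, and inter-file relationships are all
# first-class objects in a graph, rather than rows in a side database.
from dataclasses import dataclass, field

@dataclass
class File:
    name: str
    attrs: dict = field(default_factory=dict)   # user-defined metadata
    links: list = field(default_factory=list)   # outgoing typed relationships

def link(src: File, dst: File, rel_type: str, **rel_attrs) -> None:
    """Create a typed, attributed relationship between two files."""
    src.links.append({"type": rel_type, "target": dst, "attrs": rel_attrs})

def query_attr(files, key, value):
    """Toy stand-in for a Quasar-style attribute match (XPath-like //file[@key=value])."""
    return [f for f in files if f.attrs.get(key) == value]

raw = File("scan_001.dat", {"instrument": "sem", "quality": "raw"})
derived = File("scan_001_denoised.dat", {"quality": "clean"})
link(derived, raw, "derived_from", tool="denoise-v2")

assert query_attr([raw, derived], "quality", "raw") == [raw]
assert derived.links[0]["target"] is raw
```

A real implementation would persist this graph inside the file system and compile queries, but the sketch shows why provenance queries become cheap once relationships are first-class objects.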

  1. The Delaware Bay Estuary as a Classroom: A Research Experience for Future Elementary Grade-Level Teachers

    Science.gov (United States)

    Madsen, J.; Fifield, S.; Allen, D.; Shipman, H.; Ford, D.; Dagher, Z.; Brickhouse, N.

    2004-05-01

    With supplemental funding from the National Science Foundation (NSF), students from the University of Delaware's Science Semester course took part in a two-day research cruise in the Delaware Bay Estuary. The Science Semester, an NSF-funded project, is an integrated 15-credit sequence that encompasses the entire course work for the spring semester for approximately 60 sophomore-level elementary education majors. The semester includes the earth, life, and physical science content courses and the education science methods course integrated into one curriculum. In this curriculum, problem-based learning and other inquiry-based approaches are applied to foster integrated understandings of science and pedagogy that future elementary teachers need to effectively use inquiry-based approaches in their classrooms. The research cruise was conducted as part of one of the four major investigations during the course. The investigation focused on Delaware's state marine animal, Limulus polyphemus. It is one of the four remaining species of horseshoe crabs; the largest spawning population of Limulus is found in Delaware Bay. Within the problem- and inquiry-based learning approaches of the Science Semester course, the students became aware that very little data exists on the benthic habitat of Limulus polyphemus. In order to learn more about this habitat, a cohort of seven students from the course was recruited as part of the scientific party to take part in the research cruise to collect data on the floor of Delaware Bay. The data included: multibeam bathymetry/backscatter data, grab samples of bay bottom sediments, and CTD profiles. Prior to the cruise, all students in the course took part in laboratory exercises to learn about topographic maps and navigation charts using the Delaware Bay area as the region of study. While "at-sea", the cruise participants sent the ship's latitude and longitude positions as a function of time. 
The positions were used by the on-land students to

  2. Improving Access to NASA Earth Science Data through Collaborative Metadata Curation

    Science.gov (United States)

    Sisco, A. W.; Bugbee, K.; Shum, D.; Baynes, K.; Dixon, V.; Ramachandran, R.

    2017-12-01

    The NASA-developed Common Metadata Repository (CMR) is a high-performance metadata system that currently catalogs over 375 million Earth science metadata records. It serves as the authoritative metadata management system of NASA's Earth Observing System Data and Information System (EOSDIS), enabling NASA Earth science data to be discovered and accessed by a worldwide user community. The size of the EOSDIS data archive is steadily increasing, and the ability to manage and query this archive depends on the input of high quality metadata to the CMR. Metadata that does not provide adequate descriptive information diminishes the CMR's ability to effectively find and serve data to users. To address this issue, an innovative and collaborative review process is underway to systematically improve the completeness, consistency, and accuracy of metadata for approximately 7,000 data sets archived by NASA's twelve EOSDIS data centers, or Distributed Active Archive Centers (DAACs). The process involves automated and manual metadata assessment of both collection and granule records by a team of Earth science data specialists at NASA Marshall Space Flight Center. The team communicates results to DAAC personnel, who then make revisions and reingest improved metadata into the CMR. Implementation of this process relies on a network of interdisciplinary collaborators leveraging a variety of communication platforms and long-range planning strategies. Curating metadata at this scale and resolving metadata issues through community consensus improves the CMR's ability to serve current and future users and also introduces best practices for stewarding the next generation of Earth Observing System data. This presentation will detail the metadata curation process, its outcomes thus far, and also share the status of ongoing curation activities.
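
An automated completeness assessment of the kind described above can be sketched as follows; the field names are hypothetical placeholders, not the actual UMM-C schema used by the CMR:

```python
# Illustrative sketch of automated collection-metadata assessment, in the
# spirit of the EOSDIS curation effort: flag missing required fields and
# report a simple completeness score. Field names are assumptions.
REQUIRED = ["ShortName", "Abstract", "TemporalExtent", "SpatialExtent", "DOI"]

def assess(record: dict) -> dict:
    """Return missing required fields and a 0-1 completeness score."""
    missing = [f for f in REQUIRED if not record.get(f)]
    return {"missing": missing,
            "completeness": round(1 - len(missing) / len(REQUIRED), 2)}

rec = {"ShortName": "MOD021KM", "Abstract": "Level-1B radiances", "DOI": ""}
report = assess(rec)
# report["missing"] is ["TemporalExtent", "SpatialExtent", "DOI"]
```

In the actual workflow such automated checks are only a first pass; the manual review and the DAAC revise-and-reingest loop handle the issues a rule-based scan cannot catch.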

  3. System for Earth Sample Registration SESAR: Services for IGSN Registration and Sample Metadata Management

    Science.gov (United States)

    Chan, S.; Lehnert, K. A.; Coleman, R. J.

    2011-12-01

    SESAR, the System for Earth Sample Registration, is an online registry for physical samples collected for Earth and environmental studies. SESAR generates and administers the International Geo Sample Number IGSN, a unique identifier for samples that is dramatically advancing interoperability amongst information systems for sample-based data. SESAR was developed to provide the complete range of registry services, including definition of IGSN syntax and metadata profiles, registration and validation of name spaces requested by users, tools for users to submit and manage sample metadata, validation of submitted metadata, generation and validation of the unique identifiers, archiving of sample metadata, and public or private access to the sample metadata catalog. With the development of SESAR v3, we placed particular emphasis on creating enhanced tools that make metadata submission easier and more efficient for users, and that provide superior functionality for users to manage metadata of their samples in their private workspace MySESAR. For example, SESAR v3 includes a module where users can generate custom spreadsheet templates to enter metadata for their samples, then upload these templates online for sample registration. Once the content of the template is uploaded, it is displayed online in an editable grid format. Validation rules are executed in real-time on the grid data to ensure data integrity. Other new features of SESAR v3 include the capability to transfer ownership of samples to other SESAR users, the ability to upload and store images and other files in a sample metadata profile, and the tracking of changes to sample metadata profiles. In the next version of SESAR (v3.5), we will further improve the discovery, sharing, registration of samples. For example, we are developing a more comprehensive suite of web services that will allow discovery and registration access to SESAR from external systems. 
Both batch and individual registrations will be possible
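
The real-time grid validation described for SESAR v3 can be sketched per row; the rules and the IGSN pattern below are illustrative assumptions, not SESAR's actual validation logic or the IGSN specification:

```python
# Hedged sketch of row-level validation for an uploaded sample-metadata
# template: check required fields, an assumed identifier pattern, and
# coordinate ranges before registration.
import re

IGSN_RE = re.compile(r"^[A-Z]{2,5}[A-Z0-9]{1,7}$")  # assumed syntax, not the spec

def validate_row(row: dict) -> list:
    errors = []
    if not row.get("sample_name"):
        errors.append("sample_name is required")
    if "igsn" in row and not IGSN_RE.match(row["igsn"]):
        errors.append("malformed IGSN: " + row["igsn"])
    try:
        if not -90 <= float(row.get("latitude", 0)) <= 90:
            errors.append("latitude out of range")
    except ValueError:
        errors.append("latitude is not numeric")
    return errors

assert validate_row({"sample_name": "DB-01", "latitude": "38.9"}) == []
assert validate_row({"latitude": "91"}) == ["sample_name is required",
                                            "latitude out of range"]
```

Running such rules as the grid is edited is what lets data-integrity problems surface before the batch is submitted for IGSN registration.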

  4. Internet-Based Education for Prostate Cancer Screening. Addendum

    Science.gov (United States)

    2010-12-01


  5. Influenza Weekly Surveillance Reports - Delaware Health and Social Services

    Science.gov (United States)


  6. ASDC Collaborations and Processes to Ensure Quality Metadata and Consistent Data Availability

    Science.gov (United States)

    Trapasso, T. J.

    2017-12-01

    With the introduction of new tools, faster computing, and less expensive storage, increased volumes of data are expected to be managed with existing or fewer resources. Metadata management is becoming a heightened challenge due to the increase in data volume, resulting in more metadata records to be curated for each product. To address metadata availability and completeness, NASA ESDIS has taken significant strides with the creation of the Unified Metadata Model (UMM) and Common Metadata Repository (CMR). The UMM helps address hurdles arising from the increasing number of metadata dialects, and the CMR provides a primary repository for metadata so that required metadata fields can be served through a growing number of tools and services. However, metadata quality remains an issue, as metadata is not always self-evident to the end user. In response to these challenges, the NASA Atmospheric Science Data Center (ASDC) created the Collaboratory for quAlity Metadata Preservation (CAMP) and defined the Product Lifecycle Process (PLP) to work congruently. CAMP is unique in that it provides science team members with a UI to directly supply metadata that is complete, compliant, and accurate for their data products. This replaces back-and-forth communication that often results in misinterpreted metadata. Upon review by ASDC staff, metadata is submitted to CMR for broader distribution through Earthdata. Further, approval of science team metadata in CAMP automatically triggers the ASDC PLP workflow to ensure appropriate services are applied throughout the product lifecycle. This presentation will review the design elements of CAMP and PLP as well as demonstrate interfaces to each. It will show the benefits that CAMP and PLP provide to the ASDC that could potentially benefit additional NASA Earth Science Data and Information System (ESDIS) Distributed Active Archive Centers (DAACs).

  7. 76 FR 72124 - Internet-Based Telecommunications Relay Service Numbering

    Science.gov (United States)

    2011-11-22

    ... Docket No. 10-191; FCC 11-123] Internet-Based Telecommunications Relay Service Numbering AGENCY: Federal..., the information collection associated with the Commission's Internet- Based Telecommunications Relay... Telecommunications Relay Service Numbering, CG Docket No. 03-123; WC Docket No. 05-196; WC Docket No. 10-191; FCC 11...

  8. Internet-Based Interventions for Addictive Behaviours: A Systematic Review.

    Science.gov (United States)

    Chebli, Jaymee-Lee; Blaszczynski, Alexander; Gainsbury, Sally M

    2016-12-01

    Internet-based interventions have emerged as a new treatment and intervention modality for psychological disorders. Given their features of treatment flexibility, anonymity and confidentiality, this modality may be well suited in the management of addictive behaviours. A systematic literature review of the effectiveness and treatment outcomes of Internet-based interventions for smoking cessation, problematic alcohol use, substance abuse and gambling was performed. Studies were included if they met the following criteria: clients received a structured therapeutic Internet-based intervention for a problematic and addictive behaviour; included more than five clients; effectiveness was based on at least one outcome; outcome variables were measured before and immediately following the interventions; had a follow-up period; and involved at least minimal therapist contact over the course of the program. Sixteen relevant studies were found; nine addressed the effects of Internet-based interventions on smoking cessation, four on gambling, two on alcohol and one on opioid dependence. All studies demonstrated positive treatment outcomes for their respective addictive behaviours. The current review concluded that Internet-based interventions are effective in achieving positive behavioural change through reducing problematic behaviours. This mode of therapy has been found to have the capacity to provide effective and practical services for those who might have remained untreated, subsequently reducing the barriers for help-seekers. This in turn provides imperative information to treatment providers, policy makers, and academic researchers.

  9. Embedding Metadata and Other Semantics in Word Processing Documents

    Directory of Open Access Journals (Sweden)

    Peter Sefton

    2009-10-01

    Full Text Available This paper describes a technique for embedding document metadata, and potentially other semantic references, inline in word processing documents, which the authors have implemented with the help of a software development team. Several assumptions underlie the approach: it must be available across computing platforms and work with both Microsoft Word (because of its user base) and OpenOffice.org (because of its free availability). Further, the application needs to be acceptable to and usable by users, so the initial implementation covers only a small number of features, which will be extended only after user testing. Within these constraints the system provides a mechanism for encoding not only simple metadata, but for inferring hierarchical relationships between metadata elements from a 'flat' word processing file. The paper includes links to open source code implementing the techniques as part of a broader suite of tools for academic writing. This addresses tools and software, semantic web and data curation, and integrating curation into research workflows, and will provide a platform for integrating work on ontologies, vocabularies and folksonomies into word processing tools.
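
The core idea, encoding metadata as marked-up spans inside the document body so it survives editing, can be sketched with the standard library; the element and attribute names below are illustrative, not the paper's actual markup convention:

```python
# Minimal sketch of inline metadata embedding: a metadata value is stored as
# a specially-classed span inside a paragraph of the document's own XML, so
# ordinary editors round-trip it and a post-processor can extract it later.
import xml.etree.ElementTree as ET

def embed_metadata(paragraph_text: str, name: str, value: str) -> ET.Element:
    """Build a paragraph whose metadata is carried by a class-marked span."""
    p = ET.Element("p")
    p.text = paragraph_text + " "
    span = ET.SubElement(p, "span", {"class": "meta-" + name})
    span.text = value
    return p

def extract_metadata(p: ET.Element) -> dict:
    """Recover all meta-* spans as a flat name->value dictionary."""
    return {s.get("class")[5:]: s.text
            for s in p.findall("span")
            if (s.get("class") or "").startswith("meta-")}

p = embed_metadata("Title:", "dc-title", "My Thesis")
assert extract_metadata(p) == {"dc-title": "My Thesis"}
```

Inferring hierarchy, as the paper describes, would then be a matter of interpreting naming conventions among the recovered keys rather than requiring nested storage in the flat file.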

  10. Making the Case for Embedded Metadata in Digital Images

    DEFF Research Database (Denmark)

    Smith, Kari R.; Saunders, Sarah; Kejser, U.B.

    2014-01-01

    This paper discusses the standards, methods, use cases, and opportunities for using embedded metadata in digital images. In this paper we explain the past and current work engaged with developing specifications, standards for embedding metadata of different types, and the practicalities of data exchange in heritage institutions and the culture sector. Our examples and findings support the case for embedded metadata in digital images and the opportunities for such use more broadly in non-heritage sectors as well. We encourage the adoption of embedded metadata by digital image content creators and curators as well as those developing software and hardware that support the creation or re-use of digital images. We conclude that the usability of born digital images as well as physical objects that are digitized can be extended and the files preserved more readily with embedded metadata.

  11. Interpreting the ASTM 'content standard for digital geospatial metadata'

    Science.gov (United States)

    Nebert, Douglas D.

    1996-01-01

    ASTM and the Federal Geographic Data Committee have developed a content standard for spatial metadata to facilitate documentation, discovery, and retrieval of digital spatial data using vendor-independent terminology. Spatial metadata elements are identifiable quality and content characteristics of a data set that can be tied to a geographic location or area. Several Office of Management and Budget Circulars and initiatives have been issued that specify improved cataloguing of and accessibility to federal data holdings. An Executive Order further requires the use of the metadata content standard to document digital spatial data sets. Collection and reporting of spatial metadata for field investigations performed for the federal government is an anticipated requirement. This paper provides an overview of the draft spatial metadata content standard and a description of how the standard could be applied to investigations collecting spatially-referenced field data.

  12. Making the Case for Embedded Metadata in Digital Images

    DEFF Research Database (Denmark)

    Smith, Kari R.; Saunders, Sarah; Kejser, U.B.

    2014-01-01

    This paper discusses the standards, methods, use cases, and opportunities for using embedded metadata in digital images. In this paper we explain the past and current work engaged with developing specifications, standards for embedding metadata of different types, and the practicalities of data exchange in heritage institutions and the culture sector. Our examples and findings support the case for embedded metadata in digital images and the opportunities for such use more broadly in non-heritage sectors as well. We encourage the adoption of embedded metadata by digital image content creators and curators as well as those developing software and hardware that support the creation or re-use of digital images. We conclude that the usability of born digital images as well as physical objects that are digitized can be extended and the files preserved more readily with embedded metadata.

  13. DPH Healthy Living Information: Immunizations - Delaware Health and Social

    Science.gov (United States)


  14. Adherence to internet-based mobile-supported stress management

    DEFF Research Database (Denmark)

    Zarski, A C; Lehr, D.; Berking, M.

    2016-01-01

    of this study was to investigate the influence of different guidance formats (content-focused guidance, adherence-focused guidance, and administrative guidance) on adherence and to identify predictors of nonadherence in an Internet-based mobile-supported stress management intervention (ie, GET.ON Stress......) for employees. Methods: The data from the groups who received the intervention were pooled from three randomized controlled trials (RCTs) that evaluated the efficacy of the same Internet-based mobile-supported stress management intervention (N=395). The RCTs only differed in terms of the guidance format...... of the predictors significantly predicted nonadherence. Conclusions: Guidance has been shown to be an influential factor in promoting adherence to an Internet-based mobile-supported stress management intervention. Adherence-focused guidance, which included email reminders and feedback on demand, was equivalent...

  15. EUDAT B2FIND : A Cross-Discipline Metadata Service and Discovery Portal

    Science.gov (United States)

    Widmann, Heinrich; Thiemann, Hannes

    2016-04-01

    The European Data Infrastructure (EUDAT) project aims at a pan-European environment that supports a variety of research communities and individuals in managing the rising tide of scientific data through advanced data management technologies. This led to the establishment of the community-driven Collaborative Data Infrastructure, which implements common data services and storage resources to tackle the basic requirements and the specific challenges of international and interdisciplinary research data management. The metadata service B2FIND plays a central role in this context by providing a simple and user-friendly discovery portal to find research data collections stored in EUDAT data centers or in other repositories. For this we store the diverse metadata collected from heterogeneous sources in a comprehensive joint metadata catalogue and make them searchable in an open data portal. The implemented metadata ingestion workflow consists of three steps. First the metadata records - provided either by various research communities or via other EUDAT services - are harvested. Afterwards the raw metadata records are converted and mapped to unified key-value dictionaries as specified by the B2FIND schema. The semantic mapping of the non-uniform, community-specific metadata to homogeneously structured datasets is hereby the most subtle and challenging task. To assure and improve the quality of the metadata, this mapping process is accompanied by:
    • iterative and intense exchange with the community representatives,
    • usage of controlled vocabularies and community-specific ontologies, and
    • formal and semantic validation.
    Finally the mapped and checked records are uploaded as datasets to the catalogue, which is based on the open source data portal software CKAN. CKAN provides a rich RESTful JSON API and uses SOLR for dataset indexing, which enables users to query and search the catalogue. The homogenization of the community-specific data models and vocabularies enables not
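
The second ingestion step, mapping community-specific records to unified key-value dictionaries, can be sketched as a table-driven conversion; the target keys and per-community mapping tables below are illustrative assumptions, not the real B2FIND schema:

```python
# Hedged sketch of B2FIND-style semantic mapping: each community declares how
# its local field names translate into the portal's unified schema, and a
# provenance facet records which community the record came from.
MAPPINGS = {
    "communityA": {"dcTitle": "title", "authorName": "creator", "kw": "tags"},
    "communityB": {"name": "title", "owner": "creator"},
}

def map_record(community: str, raw: dict) -> dict:
    """Translate a harvested record into the unified key-value dictionary."""
    table = MAPPINGS[community]
    mapped = {table[k]: v for k, v in raw.items() if k in table}
    mapped["community"] = community  # provenance facet for faceted search
    return mapped

rec = map_record("communityA", {"dcTitle": "Ozone 2015", "kw": "ozone", "x": 1})
assert rec == {"title": "Ozone 2015", "tags": "ozone", "community": "communityA"}
```

Unmapped local fields are simply dropped here; in practice this is exactly where the iterative exchange with community representatives matters, since it decides which fields earn a place in the joint catalogue.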

  16. Positive Behavior Support in Delaware Schools: Developing Perspectives on Implementation and Outcomes. Executive Summary

    Science.gov (United States)

    Ackerman, Cheryl M.; Cooksy, Leslie J.; Murphy, Aideen; Rubright, Jonathan; Bear, George; Fifield, Steve

    2010-01-01

    In Spring 2010, the Delaware Education Research and Development Center conducted an evaluation of Delaware's PBS project, an initiative focused on developing a school-wide system of strategies to reduce behavior problems and foster a positive school climate. The study focused on facilitators and barriers to PBS implementation, and also included…

  17. Comparative status and assessment of Limulus polyphemus with emphasis on the New England and Delaware Bay populations

    Science.gov (United States)

    Smith, David; Millard, Michael J.; Carmichael, Ruth H.

    2009-01-01

    Increases in harvest of the American horseshoe crab (Limulus polyphemus) during the 1990s, particularly for whelk bait, coupled with decreases in species that depend on their eggs has reduced horseshoe crab abundance, threatened their ecological relationships, and dictated precautionary management of the horseshoe crab resource. Accordingly, population assessments and monitoring programs have been developed throughout much of the horseshoe crab’s range. We review and discuss implications for several recent assessments of Delaware Bay and New England populations and a meta-analysis of region-specific trends. These assessments show that the western Atlantic distribution of the horseshoe crab is comprised of regional or estuarine-specific meta-populations, which exhibit distinct population dynamics and require management as separate units. Modeling of Delaware Bay and Cape Cod populations confirmed that overharvest caused declines, but indicated that some harvest levels are sustainable and consistent with population growth. Coast-wide harvest was reduced by 70% from 1998 to 2006, with the greatest reductions within Delaware Bay states. Harvest regulations in Delaware Bay starting in the late 1990s, such as harvest quotas, seasonal closures, male-only harvest, voluntary use of bait-saving devices, and establishment of the Carl N. Shuster Jr. Horseshoe Crab Reserve, were followed by stabilization and recent evidence of increase in abundance of horseshoe crabs in the region. However, decreased harvest of the Delaware Bay population has redirected harvest to outlying populations, particularly in New York and New England. While the recent Delaware Bay assessments indicate positive population growth, increased harvest elsewhere is believed to be unsustainable. Two important considerations for future assessments include (1) managing Delaware Bay horseshoe crab populations within a multi-species context, for example, to help support migratory shorebirds and (2

  18. Advancements in Large-Scale Data/Metadata Management for Scientific Data.

    Science.gov (United States)

    Guntupally, K.; Devarakonda, R.; Palanisamy, G.; Frame, M. T.

    2017-12-01

    Scientific data often comes with complex and diverse metadata which are critical for data discovery and users. The Online Metadata Editor (OME) tool, which was developed by an Oak Ridge National Laboratory team, effectively manages diverse scientific datasets across several federal data centers, such as DOE's Atmospheric Radiation Measurement (ARM) Data Center and USGS's Core Science Analytics, Synthesis, and Libraries (CSAS&L) project. This presentation will focus mainly on recent developments and future strategies for refining the OME tool within these centers. The ARM OME is a standards-based tool (https://www.archive.arm.gov/armome) that allows scientists to create and maintain metadata about their data products. The tool has been improved with new workflows that help metadata coordinators and submitting investigators submit and review their data more efficiently. The ARM Data Center's newly upgraded Data Discovery Tool (http://www.archive.arm.gov/discovery) uses rich metadata generated by the OME to enable search and discovery of thousands of datasets, while also providing a citation generator and modern order-delivery techniques like Globus (using GridFTP), Dropbox and THREDDS. The Data Discovery Tool also supports incremental indexing, which allows users to find new data as and when they are added. The USGS CSAS&L search catalog employs a custom version of the OME (https://www1.usgs.gov/csas/ome), which has been upgraded with high-level Federal Geographic Data Committee (FGDC) validations and the ability to reserve and mint Digital Object Identifiers (DOIs). The USGS's Science Data Catalog (SDC) (https://data.usgs.gov/datacatalog) allows users to discover a myriad of science data holdings through a web portal. Recent major upgrades to the SDC and ARM Data Discovery Tool include improved harvesting performance and migration to new search software, such as Apache Solr 6.0, for serving up data/metadata to scientific communities. Our presentation will highlight

  19. Content-aware network storage system supporting metadata retrieval

    Science.gov (United States)

    Liu, Ke; Qin, Leihua; Zhou, Jingli; Nie, Xuejun

    2008-12-01

    Nowadays, content-based network storage has become a hot research topic in academia and industry [1]. In order to solve the problem of hit-rate decline caused by migration and to achieve content-based query, we exploit a new content-aware storage system which supports metadata retrieval to improve query performance. Firstly, we extend the SCSI command descriptor block to enable the system to understand self-defined query requests. Secondly, the extracted metadata is encoded in Extensible Markup Language to improve universality. Thirdly, according to the demands of information lifecycle management (ILM), we store data at different storage levels and use a corresponding query strategy to retrieve them. Fourthly, as the file content identifier plays an important role in locating data and calculating block correlation, we use it to fetch files and sort query results through a friendly user interface. Finally, the experiments indicate that the retrieval strategy and sort algorithm have enhanced retrieval efficiency and precision.
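    The XML encoding step described above can be sketched with Python's standard library; the element names and the example fields are hypothetical illustrations, not the paper's actual metadata schema:

```python
import xml.etree.ElementTree as ET

def encode_metadata(fields: dict) -> str:
    """Encode extracted file metadata as XML (hypothetical element names)."""
    root = ET.Element("metadata")
    for name, value in fields.items():
        child = ET.SubElement(root, name)
        child.text = str(value)
    return ET.tostring(root, encoding="unicode")

def decode_metadata(xml_text: str) -> dict:
    """Parse the XML back into a flat field dictionary."""
    return {child.tag: child.text for child in ET.fromstring(xml_text)}

# Hypothetical extracted metadata for one stored file.
record = {"content_id": "a1b2c3", "size": 4096, "type": "text/plain"}
xml_text = encode_metadata(record)
```

    Encoding metadata this way keeps it self-describing, so retrieval components on different storage levels can parse it without sharing a binary format.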

  20. A Review of Research Ethics in Internet-Based Research

    Science.gov (United States)

    Convery, Ian; Cox, Diane

    2012-01-01

    Internet-based research methods can include: online surveys, web page content analysis, videoconferencing for online focus groups and/or interviews, analysis of "e-conversations" through social networking sites, email, chat rooms, discussion boards and/or blogs. Over the last ten years, an upsurge in internet-based research (IBR) has led…

  1. Leveraging Metadata to Create Better Web Services

    Science.gov (United States)

    Mitchell, Erik

    2012-01-01

    Libraries have been increasingly concerned with data creation, management, and publication. This increase is partly driven by shifting metadata standards in libraries and partly by the growth of data and metadata repositories being managed by libraries. In order to manage these data sets, libraries are looking for new preservation and discovery…

  2. A Metadata Schema for Geospatial Resource Discovery Use Cases

    Directory of Open Access Journals (Sweden)

    Darren Hardy

    2014-07-01

    Full Text Available We introduce a metadata schema that focuses on GIS discovery use cases for patrons in a research library setting. Text search, faceted refinement, and spatial search and relevancy are among GeoBlacklight's primary use cases for federated geospatial holdings. The schema supports a variety of GIS data types and enables contextual, collection-oriented discovery applications as well as traditional portal applications. One key limitation of GIS resource discovery is the general lack of normative metadata practices, which has led to a proliferation of metadata schemas and duplicate records. The ISO 19115/19139 and FGDC standards specify metadata formats, but are intricate, lengthy, and not focused on discovery. Moreover, they require sophisticated authoring environments and cataloging expertise. Geographic metadata standards target preservation and quality measure use cases, but they do not provide for simple inter-institutional sharing of metadata for discovery use cases. To this end, our schema reuses elements from Dublin Core and GeoRSS to leverage their normative semantics, community best practices, open-source software implementations, and extensive examples already deployed in discovery contexts such as web search and mapping. Finally, we discuss a Solr implementation of the schema using a "geo" extension to MODS.
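    A discovery record of the kind described above, reusing Dublin Core and GeoRSS-derived elements, might look like the following sketch; the field names and values are illustrative assumptions, not the exact GeoBlacklight schema:

```python
import json

def make_record(title, identifier, bbox):
    """Build a minimal discovery record (hypothetical field names).

    bbox is (west, south, east, north); GeoRSS boxes are written as
    "south west north east" (lat lon lat lon).
    """
    west, south, east, north = bbox
    return {
        "dc_title": title,
        "dc_identifier": identifier,
        "dc_rights": "Public",
        "georss_box": f"{south} {west} {north} {east}",
    }

record = make_record("Delaware Bay Bathymetry", "urn:example:de-bay",
                     (-75.6, 38.8, -74.9, 39.6))
serialized = json.dumps(record)
```

    Flat key-value records like this are easy to index in a search engine such as Solr, which is what makes the reuse of simple, normative vocabularies attractive for discovery.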

  3. 77 FR 65518 - Approval and Promulgation of Air Quality Implementation Plans; Delaware; Prevention of...

    Science.gov (United States)

    2012-10-29

    ... email. The www.regulations.gov Web site is an ``anonymous access'' system, which means EPA will not know... proposed revision to the Delaware SIP. The revision is to 7 DE Admin. Code 1125--Requirements for... DE Admin. Code 1125. Final approval of Delaware's October 12, 2011 SIP revision will put in place the...

  4. Pharmaceuticals in water, fish and osprey nestlings in Delaware River and Bay

    Science.gov (United States)

    Bean, Thomas G.; Rattner, Barnett A.; Lazarus, Rebecca S.; Day, Daniel D.; Burket, S. Rebekah; Brooks, Bryan W.; Haddad, Samuel P.; Bowerman, William W.

    2018-01-01

    Exposure of wildlife to Active Pharmaceutical Ingredients (APIs) is likely to occur but studies of risk are limited. One exposure pathway that has received attention is trophic transfer of APIs in a water-fish-osprey food chain. Samples of water, fish plasma and osprey plasma were collected from Delaware River and Bay, and analyzed for 21 APIs. Only 2 of 21 analytes exceeded method detection limits in osprey plasma (acetaminophen and diclofenac) with plasma levels typically 2–3 orders of magnitude below human therapeutic concentrations (HTC). We built upon a screening level model used to predict osprey exposure to APIs in Chesapeake Bay and evaluated whether exposure levels could have been predicted in Delaware Bay had we just measured concentrations in water or fish. Use of surface water and BCFs did not predict API concentrations in fish well, likely due to fish movement patterns, and partitioning and bioaccumulation uncertainties associated with these ionizable chemicals. Input of highest measured API concentration in fish plasma combined with pharmacokinetic data accurately predicted that diclofenac and acetaminophen would be the APIs most likely detected in osprey plasma. For the majority of APIs modeled, levels were not predicted to exceed 1 ng/mL or method detection limits in osprey plasma. Based on the target analytes examined, there is little evidence that APIs represent a significant risk to ospreys nesting in Delaware Bay. If an API is present in fish orders of magnitude below HTC, sampling of fish-eating birds is unlikely to be necessary. However, several human pharmaceuticals accumulated in fish plasma within a recommended safety factor for HTC. It is now important to expand the scope of diet-based API exposure modeling to include alternative exposure pathways (e.g., uptake from landfills, dumps and wastewater treatment plants) and geographic locations (developing countries) where API contamination of the environment may represent greater risk.

  5. Utility and potential of rapid epidemic intelligence from internet-based sources.

    Science.gov (United States)

    Yan, S J; Chughtai, A A; Macintyre, C R

    2017-10-01

    Rapid epidemic detection is an important objective of surveillance to enable timely intervention, but traditional validated surveillance data may not be available in the required timeframe for acute epidemic control. Increasing volumes of data on the Internet have prompted interest in methods that could use unstructured sources to enhance traditional disease surveillance and gain rapid epidemic intelligence. We aimed to summarise Internet-based methods that use freely-accessible, unstructured data for epidemic surveillance and explore their timeliness and accuracy outcomes. Steps outlined in the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) checklist were used to guide a systematic review of research related to the use of informal or unstructured data by Internet-based intelligence methods for surveillance. We identified 84 articles published between 2006 and 2016 relating to Internet-based public health surveillance methods. Studies used search queries, social media posts and approaches derived from existing Internet-based systems for early epidemic alerts and real-time monitoring. Most studies noted improved timeliness compared to official reporting, such as in the 2014 Ebola epidemic, where epidemic alerts were generated first from ProMED-mail. Internet-based methods showed variable correlation strength with official datasets, with some methods showing reasonable accuracy. The proliferation of publicly available information on the Internet provided a new avenue for epidemic intelligence. Methodologies have been developed to collect Internet data and some systems are already used to enhance the timeliness of traditional surveillance systems. To improve the utility of Internet-based systems, the key attributes of timeliness and data accuracy should be included in future evaluations of surveillance systems. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  6. Automating standards based metadata creation using free and open source GIS tools

    NARCIS (Netherlands)

    Ellull, C.D.; Tamash, N.; Xian, F.; Stuiver, H.J.; Rickles, P.

    2013-01-01

    The importance of understanding the quality of data used in any GIS operation should not be underestimated. Metadata (data about data) traditionally provides a description of this quality information, but it is frequently deemed complex to create and maintain. Additionally, it is generally stored

  7. Regional well-log correlation in the New Mexico portion of the Delaware Basin

    International Nuclear Information System (INIS)

    Borns, D.J.; Shaffer, S.E.

    1985-09-01

    Although well logs provide the most complete record of stratigraphy and structure in the northern Delaware Basin, regional interpretations of these logs generate problems of ambiguous lithologic signatures and on-hole anomalies. Interpretation must therefore be based on log-to-log correlation rather than on inferences from single logs. In this report, logs from 276 wells were used to make stratigraphic picks of Ochoan horizons (the Rustler, Salado, and Castile Formations) in the New Mexico portion of the Delaware Basin. Current log correlation suggests that: (1) the Castile is characterized by lateral thickening and thinning; (2) some Castile thinnings are of Permian age; (3) irregular topography in the Guadalupian Bell Canyon Formation may produce apparent structures in the overlying Ochoan units; and (4) extensive dissolution of the Salado is not apparent in the area of the Waste Isolation Pilot Project (WIPP) site. 13 refs., 37 figs

  8. Internet-Based Science Learning: A Review of Journal Publications

    Science.gov (United States)

    Lee, Silvia Wen-Yu; Tsai, Chin-Chung; Wu, Ying-Tien; Tsai, Meng-Jung; Liu, Tzu-Chien; Hwang, Fu-Kwun; Lai, Chih-Hung; Liang, Jyh-Chong; Wu, Huang-Ching; Chang, Chun-Yen

    2011-01-01

    Internet-based science learning has been advocated by many science educators for more than a decade. This review examines relevant research on this topic. Sixty-five papers are included in the review. The review consists of the following two major categories: (1) the role of demographics and learners' characteristics in Internet-based science…

  9. International Metadata Initiatives: Lessons in Bibliographic Control.

    Science.gov (United States)

    Caplan, Priscilla

    This paper looks at a subset of metadata schemes, including the Text Encoding Initiative (TEI) header, the Encoded Archival Description (EAD), the Dublin Core Metadata Element Set (DCMES), and the Visual Resources Association (VRA) Core Categories for visual resources. It examines why they developed as they did, and major points of difference from…

  10. 75 FR 12168 - Approval and Promulgation of Air Quality Implementation Plans; Delaware; Control of Nitrogen...

    Science.gov (United States)

    2010-03-15

    ... Promulgation of Air Quality Implementation Plans; Delaware; Control of Nitrogen Oxide Emissions From Industrial... the State of Delaware. The revision adds a new section, Section 2--Control of Nitrogen Oxide Emissions.../SIP Regulation No. 42-- Specific Emission Control Requirements for controlling nitrogen oxide (NO X...

  11. Building a Disciplinary Metadata Standards Directory

    Directory of Open Access Journals (Sweden)

    Alexander Ball

    2014-07-01

    Full Text Available The Research Data Alliance (RDA Metadata Standards Directory Working Group (MSDWG is building a directory of descriptive, discipline-specific metadata standards. The purpose of the directory is to promote the discovery, access and use of such standards, thereby improving the state of research data interoperability and reducing duplicative standards development work. This work builds upon the UK Digital Curation Centre's Disciplinary Metadata Catalogue, a resource created with much the same aim in mind. The first stage of the MSDWG's work was to update and extend the information contained in the catalogue. In the current, second stage, a new platform is being developed in order to extend the functionality of the directory beyond that of the catalogue, and to make it easier to maintain and sustain. Future work will include making the directory more amenable to use by automated tools.

  12. Treating metadata as annotations: separating the content markup from the content

    Directory of Open Access Journals (Sweden)

    Fredrik Paulsson

    2007-11-01

    Full Text Available The use of digital learning resources creates an increasing need for semantic metadata describing the whole resource, as well as parts of resources. Traditionally, schemas such as the Text Encoding Initiative (TEI) have been used to add semantic markup for parts of resources. This is not sufficient for use in a “metadata ecology”, where metadata is distributed, conforms to different Application Profiles, and is added by different actors. A new methodology, where metadata is “pointed in” as annotations using XPointers and RDF, is proposed. A suggestion for how such an infrastructure can be implemented, using existing open standards for metadata and for the web, is presented. We argue that such a methodology and infrastructure are necessary to realize the decentralized metadata infrastructure needed for a “metadata ecology”.
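    The "pointing in" of metadata as annotations can be illustrated by emitting an RDF triple whose subject carries an XPointer fragment; the resource URI, element id and helper function below are hypothetical, and a real system would use an RDF library rather than string assembly:

```python
def annotate(resource_uri: str, xpointer: str, predicate: str, value: str) -> str:
    """Produce one N-Triples-style statement about *part* of a resource.

    The subject URI embeds an XPointer fragment, so the annotation
    lives outside the resource instead of as inline markup.
    """
    subject = f"{resource_uri}#xpointer({xpointer})"
    return (f"<{subject}> "
            f"<http://purl.org/dc/elements/1.1/{predicate}> "
            f'"{value}" .')

# Hypothetical example: describe the topic of one chapter of a resource.
triple = annotate("http://example.org/course.xml", "id('chapter2')",
                  "subject", "Photosynthesis")
```

    Because the annotation is stored separately from the content, different actors can attach metadata conforming to different Application Profiles without touching the resource itself.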

  13. Delaware Anatomy: With Linguistic, Social, and Medical Aspects

    Science.gov (United States)

    Miller, Jay

    1977-01-01

    Presents the comprehensive partonomy of anatomy in Unami Lenape or Delaware as provided by a modern Unami specialist. The primary referent is the human body, but some comparative terms referring to animals and plants are also provided. (CHK)

  14. Deploying the ATLAS Metadata Interface (AMI) on the cloud with Jenkins

    CERN Document Server

    Lambert, Fabian; The ATLAS collaboration

    2016-01-01

    The ATLAS Metadata Interface (AMI) is a mature application with more than 15 years of existence. Mainly used by the ATLAS experiment at CERN, it consists of a very generic tool ecosystem for metadata aggregation and cataloguing. AMI is used by the ATLAS production system; therefore the service must guarantee a high level of availability. We describe our monitoring system and the Jenkins-based strategy used to dynamically test and deploy cloud OpenStack nodes on demand. Moreover, we describe how to switch to a distant replica in case of downtime.

  15. Distributed metadata in a high performance computing environment

    Science.gov (United States)

    Bent, John M.; Faibish, Sorin; Zhang, Zhenhua; Liu, Xuezhao; Tang, Haiying

    2017-07-11

    A computer-executable method, system, and computer program product for managing metadata in a distributed storage system, wherein the distributed storage system includes one or more burst buffers enabled to operate with a distributed key-value store, the computer-executable method, system, and computer program product comprising receiving a request for metadata associated with a block of data stored in a first burst buffer of the one or more burst buffers in the distributed storage system, wherein the metadata is associated with a key-value, determining which of the one or more burst buffers stores the requested metadata, and upon determination that a first burst buffer of the one or more burst buffers stores the requested metadata, locating the key-value in a portion of the distributed key-value store accessible from the first burst buffer.
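    The two-step lookup described above (determine which burst buffer stores the metadata, then locate the key-value there) can be sketched as a toy in-memory model; the class, the hash-based placement scheme and the example fields are illustrative assumptions, not the patented implementation:

```python
import hashlib

class BurstBufferStore:
    """Toy model: one key-value dict per burst buffer, holding block metadata."""

    def __init__(self, n_buffers: int):
        self.buffers = [dict() for _ in range(n_buffers)]

    def _locate(self, block_id: str) -> int:
        # Deterministic placement: hash the block id to pick a buffer,
        # so any node can compute where the metadata lives.
        digest = hashlib.sha256(block_id.encode()).hexdigest()
        return int(digest, 16) % len(self.buffers)

    def put(self, block_id: str, metadata: dict) -> None:
        self.buffers[self._locate(block_id)][block_id] = metadata

    def get(self, block_id: str) -> dict:
        # Step 1: determine which buffer stores the requested metadata.
        # Step 2: locate the key-value in that buffer's store.
        return self.buffers[self._locate(block_id)][block_id]

store = BurstBufferStore(4)
store.put("block-42", {"size": 1048576, "owner": "sim01"})
```

    Deterministic placement avoids a central metadata directory: a reader recomputes the hash instead of asking a coordinator, which is what makes the scheme attractive at high performance computing scales.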

  16. Exploring Marine Science through the University of Delaware's TIDE camp

    Science.gov (United States)

    Veron, D. E.; Newton, F. A.; Veron, F.; Trembanis, A. C.; Miller, D. C.

    2012-12-01

    For the past five years, the University of Delaware has offered a two-week, residential summer camp to rising sophomores, juniors, and seniors who are interested in marine science. The camp, named TIDE (Taking an Interest in Delaware's Estuary) camp, is designed to introduce students to the breadth of marine science while providing them with a college experience. Campers participate in a variety of academic activities, which include classroom, laboratory, and field experiences, as well as numerous social activities. Two unique features of this small, focused camp are the large number of university faculty involved and the ability of students to participate in ongoing research projects. At various times students have participated in fish and dolphin counts, AUV deployment, wind-wave tank experiments, coastal water and beach studies, and ROV activities. In addition, each year campers have participated in a local service project. Through communication with former TIDE participants, it is clear that this two-week, formative experience plays a large role in students' choice of major when entering college.
    Figure captions: 2012 TIDE camp, salt marsh in southern Delaware; 2012 TIDE camp, field trip on a small boat.

  17. Development of an open metadata schema for prospective clinical research (openPCR) in China.

    Science.gov (United States)

    Xu, W; Guan, Z; Sun, J; Wang, Z; Geng, Y

    2014-01-01

    In China, deployment of electronic data capture (EDC) and clinical data management systems (CDMS) for clinical research (CR) is at a very early stage, and about 90% of clinical studies collect and submit clinical data manually. This work aims to build an open metadata schema for Prospective Clinical Research (openPCR) in China based on openEHR archetypes, in order to help Chinese researchers easily create specific data entry templates for registration, study design and clinical data collection. The Singapore Framework for Dublin Core Application Profiles (DCAP) is used to develop openPCR, and four steps are followed: defining the core functional requirements and deducing the core metadata items; developing archetype models; defining metadata terms and creating archetype records; and developing implementation syntax. The core functional requirements are divided into three categories: requirements for research registration, requirements for trial design, and requirements for case report forms (CRF). 74 metadata items are identified and their Chinese authority names are created. The minimum metadata set of openPCR includes 3 documents, 6 sections, 26 top-level data groups, 32 lower data groups and 74 data elements. The top-level container in openPCR is composed of public document, internal document and clinical document archetypes. A hierarchical structure of openPCR is established according to the Data Structure of Electronic Health Record Architecture and Data Standard of China (Chinese EHR Standard). Metadata attributes are grouped into six parts: identification, definition, representation, relation, usage guides, and administration. OpenPCR is an open metadata schema based on research registration standards, standards of the Clinical Data Interchange Standards Consortium (CDISC) and Chinese healthcare-related standards, and is to be publicly available throughout China. It considers future integration of EHR and CR by adopting data structure and data

  18. [A review on the advancement of internet-based public health surveillance program].

    Science.gov (United States)

    Zhao, Y Q; Ma, W J

    2017-02-10

    Internet data, characterized by fast updating and tremendous volume, has been introduced into the public health arena. By mining and analyzing internet data, researchers can model internet-based surveillance systems to assess the distribution of health-related events. There are two main types of internet-based surveillance systems, i.e. active and passive, distinguished by their sources of information. A passive surveillance system collects information from search engines and social media, while an active system gathers information provided by volunteers. Besides serving as a real-time and convenient complement to traditional disease, food safety and adverse drug reaction surveillance programs, internet-based surveillance systems can also play a role in health-related behavior surveillance and policy evaluation. Although several techniques have been applied to filter information, the accuracy of internet-based surveillance systems is still hampered by false positive information. In this article, we summarize the development and application of internet-based surveillance systems in public health to provide a reference for a better surveillance program in China.

  19. On the communication of scientific data: The Full-Metadata Format

    DEFF Research Database (Denmark)

    Riede, Moritz; Schueppel, Rico; Sylvester-Hvid, Kristian O.

    2010-01-01

    In this paper, we introduce a scientific format for text-based data files, which facilitates storing and communicating tabular data sets. The so-called Full-Metadata Format builds on the widely used INI-standard and is based on four principles: readable self-documentation, flexible structure, fail...

  20. Self-guided internet-based and mobile-based stress management for employees

    DEFF Research Database (Denmark)

    Ebert, D. D.; Heber, E.; Berking, M.

    2016-01-01

    Objective This randomised controlled trial (RCT) aimed to evaluate the efficacy of a self-guided internet-based stress management intervention (iSMI) for employees compared to a 6-month wait-list control group (WLC) with full access for both groups to treatment as usual. Method A sample of 264... of stressed employees. Internet-based self-guided interventions could be an acceptable, effective and potentially cost-effective approach to reduce the negative consequences associated with work-related stress.

  1. NOAA Ship Delaware II Underway Meteorological Data, Quality Controlled

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NOAA Ship Delaware II Underway Meteorological Data (delayed ~10 days for quality control) are from the Shipboard Automated Meteorological and Oceanographic System...

  2. Forensic devices for activism: Metadata tracking and public proof

    Directory of Open Access Journals (Sweden)

    Lonneke van der Velden

    2015-10-01

    Full Text Available The central topic of this paper is a mobile phone application, ‘InformaCam’, which turns metadata from a surveillance risk into a method for the production of public proof. InformaCam allows one to manage and delete metadata from images and videos in order to diminish surveillance risks related to online tracking. Furthermore, it structures and stores the metadata in such a way that the documentary material becomes better accommodated to evidentiary settings, if needed. In this paper I propose InformaCam should be interpreted as a ‘forensic device’. By using the conceptualization of forensics and work on socio-technical devices, the paper discusses how InformaCam, through a range of interventions, rearranges metadata into a technology of evidence. InformaCam explicitly recognizes mobile phones as context aware, uses their sensors, and structures metadata in order to facilitate data analysis after images are captured. Through these modifications it invents a form of ‘sensory data forensics’. By treating data in this particular way, surveillance resistance does more than seek awareness. It becomes engaged with investigatory practices. Considering the extent to which states conduct metadata surveillance, the project can be seen as a timely response to the unequal distribution of power over data.

  3. Internet-based health education in China: a content analysis of websites.

    Science.gov (United States)

    Peng, Ying; Wu, Xi; Atkins, Salla; Zwarentein, Merrick; Zhu, Ming; Zhan, Xing Xin; Zhang, Fan; Ran, Peng; Yan, Wei Rong

    2014-01-27

    The Internet is increasingly being applied in health education worldwide; however, there is little knowledge of its use in Chinese higher education institutions. The present study provides the first review and highlights the deficiencies and required future advances in Chinese Internet-based health education. Two authors independently conducted a duplicate Internet search in order to identify information regarding Internet-based health education in China. The findings showed that Internet-based education began in China in September 1998. Currently, only 16 of 150 (10.7%) health education institutions in China offer fee-based online undergraduate degree courses, awarding associate's and/or bachelor's degrees. Fifteen of the 16 institutions were located in the middle or on the eastern coast of China, regions which are more developed than others. Nursing was the most popular discipline in Internet-based health education, while some other disciplines, such as preventive medicine, were only offered at one university. Besides degree education, Chinese institutions also offered non-degree online training and free resources. The content was mainly presented in the form of PowerPoint slides or videos for self-learning. Very little online interactive mentoring was offered with any of the courses. There is considerable potential for the further development of Internet-based health education in China. These developments should include a focus on strengthening cooperation among higher education institutions in order to develop balanced online health curricula, and on enhancing distance education in low- and middle-income regions to meet extensive learning demands.

  4. A new Information publishing system Based on Internet of things

    Science.gov (United States)

    Zhu, Li; Ma, Guoguang

    2018-03-01

    A new information publishing system based on the Internet of Things is proposed, composed of a four-level hierarchical structure: the screen identification layer, the network transport layer, the service management layer and the publishing application layer. In this architecture, the screen identification layer realizes an internet of screens in which geographically dispersed independent screens are connected to the internet by customized set-top boxes. The service management layer uses the MQTT protocol to implement a lightweight broker-based publish/subscribe messaging mechanism suited to constrained environments such as the Internet of Things, addressing the bandwidth bottleneck. Meanwhile, cloud-based storage is used to store and manage the rapidly growing volume of multimedia publishing information. The paper designs and realizes a prototype, SzIoScreen, and gives some related test results.
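    The broker-based publish/subscribe pattern described above can be illustrated with a minimal in-process sketch; the `Broker` class, topic names and payload are hypothetical stand-ins for an MQTT broker and the set-top-box clients, not the SzIoScreen implementation:

```python
from collections import defaultdict

class Broker:
    """Minimal in-process stand-in for an MQTT-style publish/subscribe broker."""

    def __init__(self):
        self.subscribers = defaultdict(list)   # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, payload):
        # Deliver the payload to every subscriber of the topic; the
        # publisher never addresses screens directly.
        for callback in self.subscribers[topic]:
            callback(payload)

broker = Broker()
received = []
# A display screen subscribes to the content channel it should render.
broker.subscribe("screens/lobby", received.append)
broker.publish("screens/lobby", {"media": "promo.mp4", "duration": 30})
```

    Decoupling publishers from screens through topics is what lets a publishing layer push content to many geographically dispersed screens without tracking each one's address.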

  5. Metadata Creation, Management and Search System for your Scientific Data

    Science.gov (United States)

    Devarakonda, R.; Palanisamy, G.

    2012-12-01

    Mercury Search Systems is a set of tools for creating, searching, and retrieving biogeochemical metadata. The Mercury toolset provides orders-of-magnitude improvements in search speed, support for any metadata format, integration with Google Maps for spatial queries, multi-faceted search, search suggestions, support for RSS (Really Simple Syndication) delivery of search results, and enhanced customization to meet the needs of the multiple projects that use Mercury. Mercury's metadata editor provides an easy way to create metadata, and Mercury's search interface provides a single portal to search for data and information contained in disparate data management systems, each of which may use any metadata format, including FGDC, ISO-19115, Dublin-Core, Darwin-Core, DIF, ECHO, and EML. Mercury harvests metadata and key data from contributing project servers distributed around the world and builds a centralized index. The search interfaces then allow users to perform a variety of fielded, spatial, and temporal searches across these metadata sources. This centralized repository of metadata with distributed data sources provides extremely fast search results to the user, while allowing data providers to advertise the availability of their data and maintain complete control and ownership of that data. Mercury is used by more than 14 different projects across 4 federal agencies. It was originally developed for NASA, with continuing development funded by NASA, USGS, and DOE for a consortium of projects. Mercury search won NASA's Earth Science Data Systems Software Reuse Award in 2008. References: R. Devarakonda, G. Palanisamy, B.E. Wilson, and J.M. Green, "Mercury: reusable metadata management data discovery and access system", Earth Science Informatics, vol. 3, no. 1, pp. 87-94, May 2010. R. Devarakonda, G. Palanisamy, J.M. Green, B.E. Wilson, "Data sharing and retrieval using OAI-PMH", Earth Science Informatics, DOI: 10.1007/s12145-010-0073-0, 2010.
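    The harvest-then-index pattern described above (gather metadata from distributed providers, build one centralized index, search it fast) can be sketched as a toy inverted index; the records and field names are invented examples, not Mercury's actual metadata:

```python
from collections import defaultdict

# Hypothetical metadata records harvested from distributed project servers.
records = [
    {"id": "r1", "title": "Soil carbon flux", "format": "FGDC"},
    {"id": "r2", "title": "Carbon in peatlands", "format": "Dublin-Core"},
]

# Centralized inverted index: term -> set of record ids.
index = defaultdict(set)
for rec in records:
    for token in rec["title"].lower().split():
        index[token].add(rec["id"])

def search(term: str) -> set:
    """Return the ids of all harvested records whose title contains the term."""
    return index.get(term.lower(), set())
```

    Searching the local index instead of querying each provider at request time is what gives the harvest model its speed, while the source systems retain ownership of the actual data.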

  6. Survey data and metadata modelling using document-oriented NoSQL

    Science.gov (United States)

    Rahmatuti Maghfiroh, Lutfi; Gusti Bagus Baskara Nugraha, I.

    2018-03-01

    Survey data collected from year to year undergo metadata changes, yet they need to be stored in an integrated way to make statistical data retrieval faster and easier. A data warehouse (DW) can be used for this, but the change of variables in every period cannot be accommodated by a traditional DW, which cannot handle variable changes via Slowly Changing Dimensions (SCD). Previous research handled the change of variables in a DW by managing metadata with a multiversion DW (MVDW), designed using the relational model. Other research has found that a non-relational model in a NoSQL database offers faster reading times than the relational model. Therefore, we propose managing metadata changes using NoSQL. This study proposes a DW model to manage change and algorithms to retrieve data with metadata changes. Evaluation of the proposed models and algorithms shows that a database with the proposed design can retrieve data with metadata changes properly. This paper contributes a comprehensive data analysis with metadata changes (especially survey data) in integrated storage.
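    Retrieval across metadata versions, as described above, can be sketched with plain dictionaries standing in for documents; the variable names, years and version map are invented, and a real implementation would live in a document-oriented store such as MongoDB rather than in-memory lists:

```python
# Hypothetical metadata versions: the survey variable "income" was
# renamed to "monthly_income" in the 2017 collection.
variable_map = {
    2016: {"income": "income"},
    2017: {"income": "monthly_income"},
}

# Document-oriented storage: each survey record embeds its year,
# which identifies the metadata version it conforms to.
documents = [
    {"year": 2016, "data": {"income": 1200}},
    {"year": 2017, "data": {"monthly_income": 1350}},
]

def get_values(variable: str) -> list:
    """Retrieve one logical variable across all metadata versions."""
    values = []
    for doc in documents:
        # Resolve the logical variable to its per-version field name.
        field = variable_map[doc["year"]][variable]
        values.append(doc["data"][field])
    return values
```

    The schema flexibility of a document store makes this natural: each year's documents can carry different fields, and the version map does the reconciliation at query time.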

  7. PREFERENCES ON INTERNET BASED LEARNING ENVIRONMENTS IN STUDENT-CENTERED EDUCATION

    Directory of Open Access Journals (Sweden)

    Zuhal CUBUKCU

    2008-10-01

    Full Text Available Nowadays, educational systems are being questioned in the search for effective solutions to the problems they face, and discussions center on ways of restructuring those systems to overcome difficulties. Among the consequences of the traditional teaching approach are that the taught material is not long-lasting but easily forgotten, that students do not sufficiently acquire the knowledge and skills the curriculum aims to develop, and that students fail to transfer their knowledge to real life. Today, individuals prefer to use educational resources where and when they want, in line with their individual skills and abilities. Because the Internet infrastructure has developed quite rapidly throughout the world, it has been offered as an alternative route to a rich learning and teaching environment. This study aims at determining teacher candidates' preferences regarding internet-based learning environments in student-centered education, involving the teacher candidates enrolled at Osmangazi University, Faculty of Education, in the Primary School Teaching, Mathematics Teaching, and Computer and Educational Technologies Education programmes. This is a descriptive study. The data collection instrument is the "Constructivist Internet-based Education of Science Scale (CILES-S)". The teacher candidates in the sample differed in their preferences regarding internet-based learning in student-centered education. The candidates scored higher in the internet-based learning environments of Cognitive Development and Critical Judgement. The lowest average scores of the sample group were observed in the internet-based learning environment of Epistemological Awareness.

  8. Collaborative Metadata Curation in Support of NASA Earth Science Data Stewardship

    Science.gov (United States)

    Sisco, Adam W.; Bugbee, Kaylin; le Roux, Jeanne; Staton, Patrick; Freitag, Brian; Dixon, Valerie

    2018-01-01

    A growing collection of NASA Earth science data is archived and distributed by EOSDIS's 12 Distributed Active Archive Centers (DAACs). Each collection and granule is described by a metadata record housed in the Common Metadata Repository (CMR). Multiple metadata standards are in use, and the core elements of each are mapped to and from a common model, the Unified Metadata Model (UMM). This work is carried out by the Analysis and Review of CMR (ARC) team.
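
The mapping of core elements between native standards and a common model can be illustrated with a toy crosswalk. The field names below are invented for illustration and are not NASA's actual UMM mappings.

```python
# Toy crosswalk mapping core fields of two native standards onto a shared
# model, in the spirit of the UMM approach (field names invented).
CROSSWALKS = {
    "DIF": {"Entry_Title": "title", "Data_Center": "provider"},
    "ISO-19115": {"citation.title": "title", "pointOfContact": "provider"},
}

def to_common(standard, record):
    """Translate a native record into the common model's element names."""
    xwalk = CROSSWALKS[standard]
    return {common: record[native] for native, common in xwalk.items() if native in record}

rec = to_common("DIF", {"Entry_Title": "MODIS Snow Cover", "Data_Center": "NSIDC"})
```

Because each standard maps to and from the same common elements, records in any supported standard become comparable and reviewable against one set of quality rules.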

  9. 76 FR 59087 - Approval and Promulgation of Air Quality Implementation Plans; Delaware; Adhesives and Sealants Rule

    Science.gov (United States)

    2011-09-23

    ... Promulgation of Air Quality Implementation Plans; Delaware; Adhesives and Sealants Rule AGENCY: Environmental... manufacture, sale, use, or application of adhesives, sealants, primers, and solvents. This action is being... consists of Delaware's regulation for reducing VOCs from commercially-used adhesive and sealant products by...

  10. Case-control study of tobacco smoke exposure and breast cancer risk in Delaware

    Directory of Open Access Journals (Sweden)

    Hathcock H Leroy

    2008-06-01

    Full Text Available Abstract Background Tobacco smoke exposure may be associated with increased breast cancer risk, although the evidence supporting the association is inconclusive. We conducted a case-control study in Delaware, incorporating detailed exposure assessment for active and secondhand smoke at home and in the workplace. Methods Primary invasive breast cancer cases diagnosed among female Delaware residents, ages 40–79, in 2000–2002 were identified through the Delaware cancer registry (n = 287). Delaware driver's license and Health Care Finance Administration records were used to select age frequency-matched controls for women Results A statistically significant increased risk of breast cancer was observed for ever having smoked cigarettes (odds ratio = 1.43, 95% confidence interval = 1.03–1.99). However, there was no evidence of a dose-response relationship between breast cancer risk and total years smoked, cigarettes per day, or pack-years. Neither residential nor workplace secondhand smoke exposure was associated with breast cancer. Recalculations of active smoking risks using a purely unexposed reference group of women who were not exposed to active or secondhand smoking did not indicate increased risks of breast cancer. Conclusion These findings do not support an association between smoking and breast cancer.

  11. Association between recruitment methods and attrition in Internet-based studies.

    Directory of Open Access Journals (Sweden)

    Paolo Bajardi

    Full Text Available Internet-based systems for epidemiological studies have advantages over traditional approaches as they can potentially recruit and monitor a wider range of individuals in a relatively inexpensive fashion. We studied the association between communication strategies used for recruitment (offline, online, face-to-face and follow-up participation in nine Internet-based cohorts: the Influenzanet network of platforms for influenza surveillance which includes seven cohorts in seven different European countries, the Italian birth cohort Ninfea and the New Zealand birth cohort ELF. Follow-up participation varied from 43% to 89% depending on the cohort. Although there were heterogeneities among studies, participants who became aware of the study through an online communication campaign compared with those through traditional offline media seemed to have a lower follow-up participation in 8 out of 9 cohorts. There were no clear differences in participation between participants enrolled face-to-face and those enrolled through other offline strategies. An Internet-based campaign for Internet-based epidemiological studies seems to be less effective than an offline one in enrolling volunteers who keep participating in follow-up questionnaires. This suggests that even for Internet-based epidemiological studies an offline enrollment campaign would be helpful in order to achieve a higher participation proportion and limit the cohort attrition.

  12. EPA Metadata Style Guide Keywords and EPA Organization Names

    Science.gov (United States)

    The keywords and EPA organization names listed below, along with EPA's Metadata Style Guide, are intended to provide suggestions and guidance to assist with the standardization of metadata records.

  13. ATLAS Metadata Task Force

    Energy Technology Data Exchange (ETDEWEB)

    ATLAS Collaboration; Costanzo, D.; Cranshaw, J.; Gadomski, S.; Jezequel, S.; Klimentov, A.; Lehmann Miotto, G.; Malon, D.; Mornacchi, G.; Nemethy, P.; Pauly, T.; von der Schmitt, H.; Barberis, D.; Gianotti, F.; Hinchliffe, I.; Mapelli, L.; Quarrie, D.; Stapnes, S.

    2007-04-04

    This document provides an overview of the metadata, which are needed to characterize ATLAS event data at different levels (a complete run, data streams within a run, luminosity blocks within a run, individual events).

  14. Application of Advanced Reservoir Characterization, Simulation, and Production Optimization Strategies to Maximize Recovery in Slope and Basin Clastic Reservoirs, West Texas (Delaware Basin), Class III

    Energy Technology Data Exchange (ETDEWEB)

    Dutton, Shirley P.; Flanders, William A.; Mendez, Daniel L.

    2001-05-08

    The objective of this Class III project was to demonstrate that detailed reservoir characterization of slope and basin clastic reservoirs in sandstones of the Delaware Mountain Group in the Delaware Basin of West Texas and New Mexico is a cost-effective way to recover oil more economically through geologically based field development. This project focused on East Ford field, a Delaware Mountain Group field that produced from the upper Bell Canyon Formation (Ramsey sandstone). The field, discovered in 1960, is operated by Orla Petco, Inc., as the East Ford unit. A CO2 flood was being conducted in the unit, and this flood is the Phase 2 demonstration for the project.

  15. Examination of contaminant exposure and reproduction of ospreys (Pandion haliaetus) nesting in Delaware Bay and River in 2015.

    Science.gov (United States)

    Rattner, Barnett A; Lazarus, Rebecca S; Bean, Thomas G; McGowan, Peter C; Callahan, Carl R; Erickson, Richard A; Hale, Robert C

    2018-05-22

    A study of ospreys (Pandion haliaetus) nesting in the coastal Inland Bays of Delaware, and the Delaware Bay and Delaware River in 2015 examined spatial and temporal trends in contaminant exposure, food web transfer and reproduction. Concentrations of organochlorine pesticides and metabolites, polychlorinated biphenyls (PCBs), coplanar PCB toxic equivalents, polybrominated diphenyl ethers (PBDEs) and other flame retardants in sample eggs were generally greatest in the Delaware River. Concentrations of legacy contaminants in 2015 Delaware Bay eggs were lower than values observed in the 1970s through early 2000s. Several alternative brominated flame retardants were rarely detected, with only TBPH [bis(2-ethylhexyl)-tetrabromophthalate)] present in 5 of 27 samples at <5 ng/g wet weight. No relation was found between p,p'-DDE, total PCBs or total PBDEs in eggs with egg hatching, eggs lost from nests, nestling loss, fledging and nest success. Osprey eggshell thickness recovered to pre-DDT era values, and productivity was adequate to sustain a stable population. Prey fish contaminant concentrations were generally less than those in osprey eggs, with detection frequencies and concentrations greatest in white perch (Morone americana) from Delaware River compared to the Bay. Biomagnification factors from fish to eggs for p,p'-DDE and total PCBs were generally similar to findings from several Chesapeake Bay tributaries. Overall, findings suggest that there have been improvements in Delaware Estuary waterbird habitat compared to the second half of the 20th century. This trend is in part associated with mitigation of some anthropogenic contaminant threats. Copyright © 2018 Elsevier B.V. All rights reserved.

  16. Data catalog project—A browsable, searchable, metadata system

    International Nuclear Information System (INIS)

    Stillerman, Joshua; Fredian, Thomas; Greenwald, Martin; Manduchi, Gabriele

    2016-01-01

    Modern experiments are typically conducted by large, extended groups, where researchers rely on other team members to produce much of the data they use. The experiments record very large numbers of measurements that can be difficult for users to find, access, and understand. We are developing a system for users to annotate their data products with structured metadata, providing data consumers with a discoverable, browsable data index. Machine-understandable metadata captures the underlying semantics of the recorded data, which can then be consumed both by programs and interactively by users. Collaborators can use these metadata to select and understand recorded measurements. The data catalog project is a data dictionary and index which enables users to record general descriptive metadata, use cases, and rendering information, as well as providing them a transparent data access mechanism (URI). Users describe their diagnostic, including references, text descriptions, units, labels, example data instances, author contact information, and data access URIs. The list of possible attribute labels is extensible, but limiting the vocabulary of names increases the utility of the system. The data catalog is focused on the data products and complements process-based systems like the Metadata Ontology Provenance project [Greenwald, 2012; Schissel, 2015]. This system can be coupled with MDSplus to provide a simple platform for data-driven display and analysis programs. Sites which use MDSplus can describe tree branches and, if desired, create ‘processed data trees’ with homogeneous node structures for measurements. Sites not currently using MDSplus can either use the database to reference local data stores, or construct an MDSplus tree whose leaves reference the local data store. A data catalog system can provide a useful roadmap of data acquired from experiments or simulations, making it easier for researchers to find and access important data and understand the meaning of the data.
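
The annotation scheme described above (descriptive attributes drawn from a deliberately limited vocabulary, plus a transparent data-access URI) might look like this in miniature; the attribute names, values, and URI are illustrative, not the project's actual vocabulary.

```python
# A catalog entry: descriptive attributes drawn from a limited, extensible
# vocabulary, plus a transparent data-access URI. Names are illustrative.
VOCABULARY = {"description", "units", "label", "author", "uri"}

def make_entry(**attrs):
    """Reject attribute labels outside the controlled vocabulary."""
    unknown = set(attrs) - VOCABULARY
    if unknown:
        raise ValueError(f"attributes outside vocabulary: {unknown}")
    return attrs

entry = make_entry(
    label="ne_te",
    units="eV",
    description="Edge electron temperature",
    uri="mdsplus://server/tree/path",  # hypothetical access URI
)
```

Limiting the label vocabulary, as the abstract notes, is what makes entries from many diagnostics comparable and browsable.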

  17. Data catalog project—A browsable, searchable, metadata system

    Energy Technology Data Exchange (ETDEWEB)

    Stillerman, Joshua, E-mail: jas@psfc.mit.edu [MIT Plasma Science and Fusion Center, Cambridge, MA (United States); Fredian, Thomas; Greenwald, Martin [MIT Plasma Science and Fusion Center, Cambridge, MA (United States); Manduchi, Gabriele [Consorzio RFX, Euratom-ENEA Association, Corso Stati Uniti 4, Padova 35127 (Italy)

    2016-11-15

    Modern experiments are typically conducted by large, extended groups, where researchers rely on other team members to produce much of the data they use. The experiments record very large numbers of measurements that can be difficult for users to find, access, and understand. We are developing a system for users to annotate their data products with structured metadata, providing data consumers with a discoverable, browsable data index. Machine-understandable metadata captures the underlying semantics of the recorded data, which can then be consumed both by programs and interactively by users. Collaborators can use these metadata to select and understand recorded measurements. The data catalog project is a data dictionary and index which enables users to record general descriptive metadata, use cases, and rendering information, as well as providing them a transparent data access mechanism (URI). Users describe their diagnostic, including references, text descriptions, units, labels, example data instances, author contact information, and data access URIs. The list of possible attribute labels is extensible, but limiting the vocabulary of names increases the utility of the system. The data catalog is focused on the data products and complements process-based systems like the Metadata Ontology Provenance project [Greenwald, 2012; Schissel, 2015]. This system can be coupled with MDSplus to provide a simple platform for data-driven display and analysis programs. Sites which use MDSplus can describe tree branches and, if desired, create ‘processed data trees’ with homogeneous node structures for measurements. Sites not currently using MDSplus can either use the database to reference local data stores, or construct an MDSplus tree whose leaves reference the local data store. A data catalog system can provide a useful roadmap of data acquired from experiments or simulations, making it easier for researchers to find and access important data and understand the meaning of the data.

  18. OntoFire: an ontology-based geo-portal for wildfires

    Science.gov (United States)

    Kalabokidis, K.; Athanasis, N.; Vaitis, M.

    2011-12-01

    With the proliferation of geospatial technologies on the Internet, the role of geo-portals (i.e. gateways to Spatial Data Infrastructures) in wildfire management emerges. However, keyword-based techniques often frustrate users looking for data of interest in geo-portal environments, and little attention has been paid to shifting from conventional keyword-based to navigation-based mechanisms. The OntoFire system presented here is an ontology-based geo-portal for wildfires. Through the proposed navigation mechanisms, relationships between the data can be discovered that would otherwise not be revealed by conventional querying techniques alone. End users can use the browsing interface to find resources of interest through the navigation mechanisms provided, while data providers can use the publishing interface to submit new metadata and to modify or remove metadata in the catalogue. The proposed approach can improve the discovery of valuable information needed to set priorities for disaster mitigation and prevention strategies. OntoFire aspires to be a focal point for the integration and management of a very large amount of information, contributing in this way to the dissemination of knowledge and to the preparedness of operational stakeholders.

  19. A Shared Infrastructure for Federated Search Across Distributed Scientific Metadata Catalogs

    Science.gov (United States)

    Reed, S. A.; Truslove, I.; Billingsley, B. W.; Grauch, A.; Harper, D.; Kovarik, J.; Lopez, L.; Liu, M.; Brandt, M.

    2013-12-01

    The vast amount of science metadata can be overwhelming and highly complex. Comprehensive analysis and sharing of metadata is difficult since institutions often publish to their own repositories. There are many disjoint standards used for publishing scientific data, making it difficult to discover and share information from different sources. Services that publish metadata catalogs often have different protocols, formats, and semantics. The research community is limited by the exclusivity of separate metadata catalogs, so federated search interfaces capable of unified queries across multiple sources are desirable. Aggregation of metadata catalogs also enables users to critique metadata more rigorously. With these motivations in mind, the National Snow and Ice Data Center (NSIDC) and the Advanced Cooperative Arctic Data and Information Service (ACADIS) implemented two search interfaces for the community. Both the NSIDC Search and the ACADIS Arctic Data Explorer (ADE) use a common infrastructure, which keeps maintenance costs low. The search clients are designed to make OpenSearch requests against Solr, an Open Source search platform. Solr applies indexes to specific fields of the metadata, which in this instance optimizes queries containing keywords, spatial bounds, and temporal ranges. NSIDC metadata is reused by both search interfaces, but the ADE also brokers additional sources. Users can quickly find relevant metadata with minimal effort, which ultimately lowers research costs. This presentation highlights the reuse of data and code between NSIDC and ACADIS, discusses challenges and milestones for each project, and identifies the creation and use of Open Source libraries.
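
The fielded, spatial, and temporal queries described above can be sketched as a client composing a Solr-style request. The endpoint and field names (keyword, bbox, start_date) are hypothetical, not NSIDC's actual schema.

```python
from urllib.parse import urlencode

def build_query(base, keyword, bbox, start, end):
    """Compose a Solr-style request with keyword, spatial, and temporal filters.

    bbox is (west, south, east, north); Solr range syntax is used for the
    filter queries. All field names here are invented for illustration.
    """
    q = f'keyword:"{keyword}"'
    fq = [
        f"bbox:[{bbox[1]},{bbox[0]} TO {bbox[3]},{bbox[2]}]",   # spatial bounds
        f"start_date:[{start} TO {end}]",                        # temporal range
    ]
    return base + "?" + urlencode([("q", q)] + [("fq", f) for f in fq])

url = build_query("http://example.org/solr/select",
                  "sea ice", (-180, 60, 180, 90), "2010-01-01", "2010-12-31")
```

A federated client like the ADE would issue the same request shape against each brokered source and merge the results.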

  20. Evaluation of Internet-Based Interventions on Waist Circumference Reduction: A Meta-Analysis.

    Science.gov (United States)

    Seo, Dong-Chul; Niu, Jingjing

    2015-07-21

    Internet-based interventions are more cost-effective than conventional interventions and can provide immediate, easy-to-access, and individually tailored support for behavior change. Waist circumference is a strong predictor of an increased risk for a host of diseases, such as hypertension, diabetes, and dyslipidemia, independent of body mass index. To date, no study has examined the effect of Internet-based lifestyle interventions on waist circumference change. This study aimed to systematically review the effect of Internet-based interventions on waist circumference change among adults. This meta-analysis reviewed randomized controlled trials (N=31 trials and 8442 participants) that used the Internet as a main intervention approach and reported changes in waist circumference. Internet-based interventions showed a significant reduction in waist circumference (mean change -2.99 cm, 95% CI -3.68 to -2.30, I(2)=93.3%) and significantly better effects on waist circumference loss (mean loss 2.38 cm, 95% CI 1.61-3.25, I(2)=97.2%) than minimal interventions such as information-only groups. Meta-regression results showed that baseline waist circumference, gender, and the presence of social support in the intervention were significantly associated with waist circumference reduction. Internet-based interventions have a significant and promising effect on waist circumference change. Incorporating social support into an Internet-based intervention appears to be useful in reducing waist circumference. Considerable heterogeneity exists among the effects of Internet-based interventions. The design of an intervention may have a significant impact on the effectiveness of the intervention.

  1. Study of intelligent building system based on the internet of things

    Science.gov (United States)

    Wan, Liyong; Xu, Renbo

    2017-03-01

    To address problems of bus-type building management systems such as isolated subsystems, weak inter-system linkage, and limited expandability, this paper studies technologies related to modern intelligent buildings and the Internet of Things, and designs a system architecture for intelligent buildings based on the Internet of Things. The paper also analyzes wireless networking modes, wireless communication protocols, and wireless routing protocols for intelligent buildings based on the Internet of Things.

  2. 33 CFR 162.40 - Inland waterway from Delaware River to Chesapeake Bay, Del. and Md. (Chesapeake and Delaware Canal).

    Science.gov (United States)

    2010-07-01

    ...., between Reedy Point, Delaware River, and Old Town Point Wharf, Elk River. (b) Speed. No vessel in the..., are required to travel at all times at a safe speed throughout the canal and its approaches so as to... Point and Welch Point. (f) Sailboats. Transiting the canal by vessels under sail is not permitted...

  3. Obtaining Application-based and Content-based Internet Traffic Statistics

    DEFF Research Database (Denmark)

    Bujlow, Tomasz; Pedersen, Jens Myrup

    2012-01-01

    the Volunteer-Based System for Research on the Internet, developed at Aalborg University, is capable of providing detailed statistics of Internet usage. Since an increasing amount of HTTP traffic has been observed during the last few years, the system also supports creating statistics of different kinds of HTTP...... traffic, like audio, video, file transfers, etc. All statistics can be obtained for individual users of the system, for groups of users, or for all users altogether. This paper presents results with real data collected from a limited number of real users over six months. We demonstrate that the system can...

  4. WATER QUALITY ANALYSIS OF AGRICULTURALLY IMPACTED TIDAL BLACKBIRD CREEK, DELAWARE

    Directory of Open Access Journals (Sweden)

    Matthew Stone

    2016-11-01

    Full Text Available Blackbird Creek, Delaware is a small watershed in northern Delaware with a significant proportion of land designated for agricultural use. The Blackbird Creek water monitoring program was initiated in 2012 to assess the condition of the watershed's habitats using multiple measures of water quality. Habitats were identified based on the percentage of adjacent agricultural land use. Five to fourteen study sites were sampled biweekly between April and November, 2012-2015. Data were analyzed using principal component analysis and generalized linear modeling. Results from these first four years of data documented no significant differences in water quality parameters (dissolved oxygen, pH, temperature, salinity, inorganic nitrate, nitrite, ammonia, orthophosphate, alkalinity, and turbidity) between the two habitats, although both orthophosphate and turbidity were elevated beyond EPA-recommended values. There were statistically significant differences for all of the parameters between agricultural seasons. The lack of notable differences between habitats suggests that, while the watershed is dominated by agricultural land use, these practices appear to have no impact on the surface water chemistry. Because there were no differences between habitats, it was concluded that the seasonal differences were likely due to basic seasonal variation and were not a function of agricultural land use practices.
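
As a reminder of what the principal component analysis step does with a sample-by-parameter matrix like this study's, here is a generic PCA via SVD on centered data; the numbers are synthetic, not the Blackbird Creek measurements.

```python
import numpy as np

rng = np.random.default_rng(1)
# rows = water samples, columns = parameters (e.g. DO, pH, salinity, turbidity)
X = rng.normal(size=(40, 4))

Xc = X - X.mean(axis=0)                      # center each parameter
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
var_explained = s**2 / (s**2).sum()          # share of variance per component
scores = Xc @ Vt.T                           # sample coordinates on the PCs
```

The leading components summarize correlated parameters into a few axes, which is what makes between-habitat and between-season comparisons tractable.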

  5. Mutton Traceability Method Based on Internet of Things

    Directory of Open Access Journals (Sweden)

    Wu Min-Ning

    2014-01-01

    Full Text Available To improve the efficiency of mutton traceability in the Internet of Things and solve its data-transmission problems, this paper analyzes existing tracking algorithms and proposes a food traceability application model, a Petri-net model of food traceability, and an improved K-means algorithm for food traceability time-series data based on the Internet of Things. The application model converts, integrates, and mines heterogeneous information to implement food safety traceability information management. The Petri-net model analyzes and simulates state transitions in the food traceability process and provides a theoretical basis for describing behavior in, and designing the structure of, a food traceability system. Experiments on simulated data show that the proposed traceability method based on the Internet of Things handles mutton traceability data more effectively than traditional K-means methods.
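
To make the clustering step concrete, here is a plain K-means pass (standard Lloyd iterations, not the paper's improved variant) over short traceability time series; the data, imagined as cold-chain temperature logs per shipment, are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
# 6 temperature readings logged along the supply chain per shipment (synthetic)
cold = rng.normal(4.0, 0.3, size=(10, 6))   # well-refrigerated shipments
warm = rng.normal(9.0, 0.3, size=(10, 6))   # shipments with a cooling fault
X = np.vstack([cold, warm])

def kmeans(X, centers, iters=20):
    """Standard Lloyd iterations: assign to nearest center, recompute means."""
    centers = centers.copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(len(centers))])
    return labels, centers

# Deterministic init: one seed point from each end of the data
labels, centers = kmeans(X, centers=X[[0, -1]])
```

Clustering the logged series separates normal shipments from anomalous ones, which is the kind of grouping a traceability system needs before flagging suspect batches.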

  6. Command and Control of Space Assets Through Internet-Based Technologies Demonstrated

    Science.gov (United States)

    Foltz, David A.

    2002-01-01

    The NASA Glenn Research Center successfully demonstrated a transmission-control-protocol/Internet-protocol- (TCP/IP) based approach to the command and control of on-orbit assets over a secure network. This is a significant accomplishment because future NASA missions will benefit by using Internet-standards-based protocols. Benefits of this Internet-based space command and control system architecture include reduced mission costs and increased mission efficiency. The demonstration proved that this communications architecture is viable for future NASA missions. This demonstration was a significant feat involving multiple NASA organizations and industry. Phillip Paulsen, from Glenn's Project Development and Integration Office, served as the overall project lead, and David Foltz, from Glenn's Satellite Networks and Architectures Branch, provided the hybrid networking support for the required Internet connections. The goal was to build a network that would emulate a connection between a space experiment on the International Space Station and a researcher accessing the experiment from anywhere on the Internet, as shown. The experiment was interfaced to a wireless 802.11 network inside the demonstration area. The wireless link provided connectivity to the Tracking and Data Relay Satellite System (TDRSS) Internet Link Terminal (TILT) satellite uplink terminal located 300 ft away in a parking lot on top of a panel van. TILT provided a crucial link in this demonstration. Leslie Ambrose, NASA Goddard Space Flight Center, provided the TILT/TDRSS support. The TILT unit transmitted the signal to TDRS 6 and was received at the White Sands Second TDRSS Ground Station. This station provided the gateway to the Internet. Coordination also took place at the White Sands station to install a Veridian Firewall and automated security incident measurement (ASIM) system at the Second TDRSS Ground Station Internet gateway. The firewall provides a trusted network for the simulated space experiment.

  7. 33 CFR 100.T05-0443 - Safety Zone; Fireworks Display, Delaware River, New Hope, PA.

    Science.gov (United States)

    2010-07-01

    ..., Delaware River, New Hope, PA. 100.T05-0443 Section 100.T05-0443 Navigation and Navigable Waters COAST GUARD... Safety Zone; Fireworks Display, Delaware River, New Hope, PA. (a) Location. The safety zone will restrict.... Bridge located in New Hope, PA, and 400 ft east of the shoreline of New Hope, PA. (b) Regulations. (1) No...

  8. The relevance of music information representation metadata from the perspective of expert users

    Directory of Open Access Journals (Sweden)

    Camila Monteiro de Barros

    Full Text Available The general goal of this research was to verify which metadata elements of music information representation are relevant to its retrieval from the perspective of expert music users. Based on bibliographic research, a comprehensive metadata set for music information representation was developed and transformed into a questionnaire for data collection, which was applied to students and professors of the Graduate Program in Music at the Federal University of Rio Grande do Sul. The results show that the information most relevant to expert music users relates to identification and authorship responsibilities. Respondents from the Composition and Interpretative Practice areas agree with these results, while respondents from the Musicology/Ethnomusicology and Music Education areas also consider the metadata related to the historical context of composition relevant.

  9. Composition and temporal patterns of larval fish communities in Chesapeake and Delaware Bays

    Directory of Open Access Journals (Sweden)

    Filipe Ribeiro

    2015-11-01

    Full Text Available Comparing larval fish assemblages in different estuaries provides insights about the coastal distribution of larval populations, larval transport, and adult spawning locations (Ribeiro et al. 2015). We simultaneously compared the larval fish assemblages entering two Middle Atlantic Bight (MAB) estuaries (Delaware Bay and Chesapeake Bay, USA) through weekly sampling from 2007 to 2009. In total, 43 taxa (32 families) and 36 taxa (24 families) were collected in Delaware and Chesapeake Bays, respectively. Mean taxonomic diversity, mean richness, and evenness were generally lower in Delaware Bay. Communities of both bays were dominated by Anchoa spp., Gobiosoma spp., Micropogonias undulatus, and Brevoortia tyrannus; Paralichthys spp. was more abundant in Delaware Bay and Microgobius thalassinus was more abundant in Chesapeake Bay. Inter-annual variation in the larval fish communities was low at both sites, with a relatively consistent composition across years, but strong seasonal (intra-annual) variation in species composition occurred in both bays. Two groups were identified in Chesapeake Bay: a ‘winter’ group dominated by shelf-spawned species (e.g. M. undulatus) and a ‘summer’ group comprising obligate estuarine species and coastal species (e.g. Gobiosoma spp. and Cynoscion regalis, respectively). In Delaware Bay, 4 groups were identified: a ‘summer’ group of mainly obligate estuarine fishes (e.g. Menidia sp.) being replaced by a ‘fall’ group (e.g. Ctenogobius boleosoma and Gobionellus oceanicus); ‘winter’ and ‘spring’ groups were dominated by shelf-spawned species (e.g. M. undulatus and Paralichthys spp.) and obligate estuarine species (e.g. Leiostomus xanthurus and Pseudopleuronectes americanus), respectively. This study demonstrates that inexpensive and simultaneous sampling in different estuaries provides important insights into the variability in community structure of fish assemblages at large spatial scales.

  10. DEVELOPMENT OF A METADATA MANAGEMENT SYSTEM FOR AN INTERDISCIPLINARY RESEARCH PROJECT

    Directory of Open Access Journals (Sweden)

    C. Curdt

    2012-07-01

    Full Text Available In every interdisciplinary, long-term research project it is essential to manage and archive all of the heterogeneous research data produced by the project participants during the funding period. This includes sustainable storage, description with metadata, easy and secure provision, backup, and visualisation of all data. To ensure the accurate description of all project data with corresponding metadata, the design and implementation of a metadata management system is a significant duty; the sustainable use and searchability of all research results during and after the project depend on it. This paper describes the practical experience gained during the development of a scientific research data management system (called the TR32DB), including the corresponding metadata management system, for the multidisciplinary research project Transregional Collaborative Research Centre 32 (CRC/TR32) 'Patterns in Soil-Vegetation-Atmosphere Systems'. The entire system was developed according to the requirements of the funding agency, the users, and the project, as well as to current standards and principles. The TR32DB is essentially a combination of data storage, a database, and a web interface. The metadata management system was designed, realized, and implemented to describe and access all project data via accurate metadata. Since the quantity and kind of descriptive metadata depend on the type of data, a user-friendly multi-level approach was chosen to cover these requirements: the self-developed CRC/TR32 metadata framework, a combination of general, CRC/TR32-specific, and data-type-specific properties.
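
The multi-level approach can be shown in miniature: a record's required metadata is the union of general, project-specific, and data-type-specific properties. The property names below are invented for illustration; the real CRC/TR32 framework differs.

```python
# Invented property sets for the three metadata levels.
GENERAL = {"title", "creator", "date"}
PROJECT = {"crc_subproject", "funding_phase"}
BY_TYPE = {"raster": {"resolution", "crs"}, "publication": {"doi"}}

def required_fields(data_type):
    """A record's requirements = general + project + data-type properties."""
    return GENERAL | PROJECT | BY_TYPE.get(data_type, set())

def missing(record, data_type):
    """Which required properties a submitted record still lacks."""
    return required_fields(data_type) - set(record)

gap = missing({"title": "LAI map", "creator": "X", "date": "2012"}, "raster")
```

Layering the levels this way lets every data type share the general core while only carrying the extra properties that actually apply to it.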

  11. Metadata Exporter for Scientific Photography Management

    Science.gov (United States)

    Staudigel, D.; English, B.; Delaney, R.; Staudigel, H.; Koppers, A.; Hart, S.

    2005-12-01

    Photographs have become an increasingly important medium, especially with the advent of digital cameras. It has become inexpensive to take photographs and quickly post them on a website. However informative photos may be, they still need to be displayed in a convenient way, and be cataloged in such a manner that makes them easily locatable. Managing the great number of photographs that digital cameras allow and creating a format for efficient dissemination of the information related to the photos is a tedious task. Products such as Apple's iPhoto have greatly eased the task of managing photographs. However, they often have limitations. Un-customizable metadata fields and poor metadata extraction tools limit their scientific usefulness. A solution to this persistent problem is a customizable metadata exporter. On the ALIA expedition, we successfully managed the thousands of digital photos we took. We did this with iPhoto and a version of the exporter that is now available to the public under the name "CustomHTMLExport" (http://www.versiontracker.com/dyn/moreinfo/macosx/27777), currently undergoing formal beta testing. This software allows the use of customized metadata fields (including description, time, date, GPS data, etc.), which are exported along with the photo. It can also produce webpages with this data straight from iPhoto, in a much more flexible way than is already allowed. With this tool it becomes very easy to manage and distribute scientific photos.

  12. A searching and reporting system for relational databases using a graph-based metadata representation.

    Science.gov (United States)

    Hewitt, Robin; Gobbi, Alberto; Lee, Man-Ling

    2005-01-01

    Relational databases are the current standard for storing and retrieving data in the pharmaceutical and biotech industries. However, retrieving data from a relational database requires specialized knowledge of the database schema and of the SQL query language. At Anadys, we have developed an easy-to-use system for searching and reporting data in a relational database to support our drug discovery project teams. This system is fast and flexible and allows users to access all data without having to write SQL queries. This paper presents the hierarchical, graph-based metadata representation and SQL-construction methods that, together, are the basis of this system's capabilities.
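The record above describes a pattern that is easy to make concrete: model the schema as a graph whose nodes are tables and whose edges are foreign-key joins, search for a join path, and construct the SQL from it. A minimal sketch, with invented table and column names (not Anadys's actual schema):

```python
from collections import deque

# Hypothetical schema graph: nodes are tables, edges carry the join columns.
SCHEMA = {
    "compound": {"assay_result": ("compound.id", "assay_result.compound_id")},
    "assay_result": {
        "compound": ("assay_result.compound_id", "compound.id"),
        "assay": ("assay_result.assay_id", "assay.id"),
    },
    "assay": {"assay_result": ("assay.id", "assay_result.assay_id")},
}

def join_path(start, goal):
    """Breadth-first search for the shortest chain of joins between two tables."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in SCHEMA[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

def build_sql(start, goal, columns):
    """Construct a SELECT statement over the join path found in the graph."""
    path = join_path(start, goal)
    sql = f"SELECT {', '.join(columns)} FROM {path[0]}"
    for a, b in zip(path, path[1:]):
        left, right = SCHEMA[a][b]
        sql += f" JOIN {b} ON {left} = {right}"
    return sql

print(build_sql("compound", "assay", ["compound.name", "assay.name"]))
```

Because users pick only tables and columns, the path search hides the schema knowledge that hand-written SQL would require.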

  13. Languages for Metadata

    NARCIS (Netherlands)

    Brussee, R.; Veenstra, M.; Blanken, Henk; de Vries, A.P.; Blok, H.E.; Feng, L.

    2007-01-01

    The term meta originates from the Greek word μετά, meaning after. The word Metaphysics is the title of Aristotle’s book coming after his book on nature called Physics. This has given meta the modern connotation of a nature of a higher order or of a more fundamental kind [1]. Literally, metadata is

  14. Ground-water resources in the tri-state region adjacent to the Lower Delaware River

    Science.gov (United States)

    Barksdale, Henry C.; Greenman, David W.; Lang, Solomon Max; Hilton, George Stockbridge; Outlaw, Donald E.

    1958-01-01

    The purpose of this report is to appraise and evaluate the groundwater resources of a tri-state region adjacent to the lower Delaware River that is centered around Philadelphia, Pa., and Camden, N. J., and includes Wilmington, Del., and Trenton, N.J. Specifically, the region includes New Castle County, Del.; Burlington, Camden, Gloucester, Mercer, and Salem Counties in New Jersey; and Bucks, Chester, Delaware, Montgomery, and Philadelphia Counties in Pennsylvania.

  15. Metadata capture in an electronic notebook: How to make it as simple as possible?

    Directory of Open Access Journals (Sweden)

    Menzel, Julia

    2015-09-01

    Full Text Available In the last few years electronic laboratory notebooks (ELNs have become popular. ELNs offer the great possibility to capture metadata automatically. Due to the high documentation effort, metadata documentation is neglected in science. To close the gap between good data documentation and high documentation effort for the scientists, a first user-friendly solution to capture metadata in an easy way was developed. At first, different protocols for the Western Blot were collected within the Collaborative Research Center 1002 and analyzed. Together with existing metadata standards identified in a literature search, a first version of the metadata scheme was developed. Secondly, the metadata scheme was customized for future users, including the implementation of default values for automated metadata documentation. Twelve protocols for the Western Blot were used to construct one standard protocol with ten different experimental steps. Three already existing metadata standards were used as models to construct the first version of the metadata scheme, consisting of 133 data fields in ten experimental steps. Through a revision with future users, the final metadata scheme was shortened to 90 items in three experimental steps. Using individualized default values, 51.1% of the metadata can be captured with present values in the ELN. This lowers the data documentation effort. At the same time, researchers could benefit by providing standardized metadata for data sharing and re-use.
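The approach above — a multi-level scheme where fields with default values are pre-filled so the scientist only types the rest — can be sketched as follows. The experimental steps and field names are invented for illustration, not the actual CRC 1002 scheme:

```python
# Hypothetical metadata scheme: fields with a default are auto-filled by the
# ELN; fields whose default is None must be entered by the researcher.
SCHEME = {
    "sample_preparation": {
        "lysis_buffer": {"default": "RIPA"},
        "protein_amount_ug": {"default": None},  # manual entry
    },
    "electrophoresis": {
        "gel_percentage": {"default": "10%"},
        "run_voltage_V": {"default": "120"},
    },
    "detection": {
        "primary_antibody": {"default": None},   # manual entry
        "exposure_time_s": {"default": "60"},
    },
}

def prefill(scheme):
    """Apply defaults and report the share of fields captured automatically."""
    record, filled, total = {}, 0, 0
    for step, fields in scheme.items():
        record[step] = {}
        for name, spec in fields.items():
            total += 1
            if spec["default"] is not None:
                record[step][name] = spec["default"]
                filled += 1
    return record, filled / total

record, coverage = prefill(SCHEME)
print(f"{coverage:.0%} of fields captured by defaults")  # 67% here
```

The reported 51.1% automatic capture corresponds to the `coverage` figure for the real scheme.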

  16. The Value of Data and Metadata Standardization for Interoperability in Giovanni

    Science.gov (United States)

    Smit, C.; Hegde, M.; Strub, R. F.; Bryant, K.; Li, A.; Petrenko, M.

    2017-12-01

    Giovanni (https://giovanni.gsfc.nasa.gov/giovanni/) is a data exploration and visualization tool at the NASA Goddard Earth Sciences Data Information Services Center (GES DISC). It has been around in one form or another for more than 15 years. Giovanni calculates simple statistics and produces 22 different visualizations for more than 1600 geophysical parameters from more than 90 satellite and model products. Giovanni relies on external data format standards to ensure interoperability, including the NetCDF CF Metadata Conventions. Unfortunately, these standards were insufficient to make Giovanni's internal data representation truly simple to use. Finding and working with dimensions can be convoluted with the CF Conventions. Furthermore, the CF Conventions are silent on machine-friendly descriptive metadata such as the parameter's source product and product version. In order to simplify analyzing disparate earth science data parameters in a unified way, we developed Giovanni's internal standard. First, the format standardizes parameter dimensions and variables so they can be easily found. Second, the format adds all the machine-friendly metadata Giovanni needs to present our parameters to users in a consistent and clear manner. At a glance, users can grasp all the pertinent information about parameters both during parameter selection and after visualization. This poster gives examples of how our metadata and data standards, both external and internal, have both simplified our code base and improved our users' experiences.
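The core idea — keep the CF-style attributes but add the machine-friendly fields (source product, version, findable dimension names) that CF alone leaves unspecified — can be sketched as below. The attribute names are illustrative, not Giovanni's actual internal format:

```python
# A minimal sketch of an internal metadata convention layered on CF-style
# variable attributes. Attribute names here are assumptions for illustration.
def standardize(cf_attrs, product, version, dims):
    attrs = dict(cf_attrs)                 # keep the original CF attributes
    attrs["source_product"] = product      # machine-friendly provenance
    attrs["product_version"] = version
    attrs["dimension_order"] = list(dims)  # dimensions findable by name
    return attrs

var = standardize(
    {"units": "mm/hr", "long_name": "Precipitation rate"},
    product="TRMM_3B42", version="7", dims=("time", "lat", "lon"),
)
print(var["source_product"], var["dimension_order"])
```

Once every parameter carries the same extra fields, downstream code can present provenance uniformly instead of special-casing each product.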

  17. The use of interactive graphical maps for browsing medical/health Internet information resources

    Directory of Open Access Journals (Sweden)

    Boulos Maged

    2003-01-01

    Full Text Available Abstract As online information portals accumulate metadata descriptions of Web resources, it becomes necessary to develop effective ways for visualising and navigating the resultant huge metadata repositories as well as the different semantic relationships and attributes of described Web resources. Graphical maps provide a good method to visualise, understand and navigate a world that is too large and complex to be seen directly, like the Web. Several examples of maps designed as a navigational aid for Web resources are presented in this review with an emphasis on maps of medical and health-related resources. The latter include HealthCyberMap maps http://healthcybermap.semanticweb.org/, which can be classified as conceptual information space maps, and the very abstract and geometric Visual Net maps of PubMed http://map.net (for demos). Information resources can be also organised and navigated based on their geographic attributes. Some of the maps presented in this review use a Kohonen Self-Organising Map algorithm, and only HealthCyberMap uses a Geographic Information System to classify Web resource data and render the maps. Maps based on familiar metaphors taken from users' everyday life are much easier to understand. Associative and pictorial map icons that enable instant recognition and comprehension are preferred to geometric ones and are key to successful maps for browsing medical/health Internet information resources.

  18. Ambulatory orthopaedic surgery patients' knowledge with internet-based education.

    Science.gov (United States)

    Heikkinen, Katja; Leino-Kilpi, H; Salanterä, S

    2012-01-01

    There is a growing need for patient education and an evaluation of its outcomes. The aim of this study was to compare ambulatory orthopaedic surgery patients' knowledge with Internet-based education and face-to-face education with a nurse. The following hypothesis was proposed: Internet-based patient education (experiment) is as effective as face-to-face education with a nurse (control) in increasing patients' level of knowledge and sufficiency of knowledge. In addition, the correlations of demographic variables were tested. The patients were randomized to either an experiment group (n = 72) or a control group (n = 75). Empirical data were collected with two instruments. Patients in both groups showed improvement in their knowledge during their care. Patients in the experiment group improved their knowledge level significantly more in total than those patients in the control group. There were no differences in patients' sufficiency of knowledge between the groups. Knowledge was correlated especially with patients' age, gender and earlier ambulatory surgeries. As a conclusion, positive results concerning patients' knowledge could be achieved with the Internet-based education. The Internet is a viable method in ambulatory care.

  19. Metadata Schema Used in OCLC Sampled Web Pages

    Directory of Open Access Journals (Sweden)

    Fei Yu

    2005-12-01

    Full Text Available The tremendous growth of Web resources has made information organization and retrieval more and more difficult. As one approach to this problem, metadata schemas have been developed to characterize Web resources. However, many questions have been raised about the use of metadata schemas, such as: which metadata schemas have been used on the Web? How did they describe Web accessible information? What is the distribution of these metadata schemas among Web pages? Do certain schemas dominate the others? To address these issues, this study analyzed 16,383 Web pages with meta tags extracted from 200,000 OCLC sampled Web pages in 2000. It found that only 8.19% of Web pages used meta tags; description tags, keyword tags, and Dublin Core tags were the only three schemas used in the Web pages. This article revealed the use of meta tags in terms of their function distribution, syntax characteristics, granularity of the Web pages, and the length distribution and word number distribution of both description and keywords tags.
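A survey like this can be reproduced in miniature with the standard library: parse each page, collect its meta tags, and bin them by schema. The classification rules below are a simplification of the study's categories:

```python
from html.parser import HTMLParser

class MetaTagCollector(HTMLParser):
    """Collect the attribute dictionaries of all <meta> tags in a page."""
    def __init__(self):
        super().__init__()
        self.tags = []

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            self.tags.append(dict(attrs))

def classify(meta_attrs):
    """Bin a meta tag as description, keywords, Dublin Core, or other."""
    name = (meta_attrs.get("name") or "").lower()
    if name.startswith("dc."):
        return "dublin_core"
    if name in ("description", "keywords"):
        return name
    return "other"

page = """<html><head>
<meta name="description" content="Digital library metadata study">
<meta name="keywords" content="metadata, schema, OCLC">
<meta name="DC.Creator" content="Fei Yu">
</head></html>"""

parser = MetaTagCollector()
parser.feed(page)
counts = {}
for m in parser.tags:
    counts[classify(m)] = counts.get(classify(m), 0) + 1
print(counts)  # {'description': 1, 'keywords': 1, 'dublin_core': 1}
```

Run over a crawl sample, the `counts` dictionary yields the schema-distribution figures the study reports.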

  20. Evolution of Web Services in EOSDIS: Search and Order Metadata Registry (ECHO)

    Science.gov (United States)

    Mitchell, Andrew; Ramapriyan, Hampapuram; Lowe, Dawn

    2009-01-01

    During 2005 through 2008, NASA defined and implemented a major evolutionary change in its Earth Observing System Data and Information System (EOSDIS) to modernize its capabilities. This implementation was based on a vision for 2015 developed during 2005. The EOSDIS 2015 Vision emphasizes increased end-to-end data system efficiency and operability; increased data usability; improved support for end users; and decreased operations costs. One key feature of the Evolution plan was achieving higher operational maturity (ingest, reconciliation, search and order, performance, error handling) for NASA's Earth Observing System Clearinghouse (ECHO). The ECHO system is an operational metadata registry through which the scientific community can easily discover and exchange NASA's Earth science data and services. ECHO contains metadata for 2,726 data collections comprising over 87 million individual data granules and 34 million browse images, drawn from NASA's EOSDIS Data Centers and the United States Geological Survey's Landsat Project holdings. ECHO is a middleware component based on a Service Oriented Architecture (SOA). The system comprises a set of infrastructure services that enable the fundamental SOA functions: publish, discover, and access Earth science resources. It also provides additional services such as user management, data access control, and order management. The ECHO system has a data registry and a services registry. The data registry enables organizations to publish EOS and other Earth-science related data holdings to a common metadata model. These holdings are described through metadata in terms of datasets (types of data) and granules (specific data items of those types). ECHO also supports browse images, which provide a visual representation of the data. The published metadata can be mapped to and from existing standards (e.g., FGDC, ISO 19115).
With ECHO, users can find the metadata stored in the data registry and then access the data either

  1. Application of Advanced Reservoir Characterization, Simulation, and Production Optimization Strategies to Maximize Recovery in Slope and Basin Clastic Reservoirs, West Texas (Delaware Basin)

    International Nuclear Information System (INIS)

    Dutton, Shirley

    1999-01-01

    The objective of this Class 3 project was to demonstrate that detailed reservoir characterization of slope and basin clastic reservoirs in sandstones of the Delaware Mountain Group in the Delaware Basin of West Texas and New Mexico is a cost-effective way to recover a higher percentage of the original oil in place through strategic placement of infill wells and geologically based field development. Project objectives are divided into two main phases. The original objectives of the reservoir-characterization phase of the project were (1) to provide a detailed understanding of the architecture and heterogeneity of two representative fields of the Delaware Mountain Group, Geraldine Ford and Ford West, which produce from the Bell Canyon and Cherry Canyon Formations, respectively, (2) to choose a demonstration area in one of the fields, and (3) to simulate a CO2 flood in the demonstration area

  2. 75 FR 33690 - Safety Zone, Lights on the River Fireworks Display, Delaware River, New Hope, PA

    Science.gov (United States)

    2010-06-15

    ... scenario with potential for loss of life and property. Basis and Purpose The New Hope Chamber of Commerce... to protect life and property operating on the navigable waterways of the Delaware River in New Hope...-AA00 Safety Zone, Lights on the River Fireworks Display, Delaware River, New Hope, PA AGENCY: Coast...

  3. Internet-Based Mobile Ad Hoc Networking (Preprint)

    National Research Council Canada - National Science Library

    Corson, M. S; Macker, Joseph P; Cirincione, Gregory H

    1999-01-01

    Internet-based Mobile Ad Hoc Networking is an emerging technology that supports self-organizing, mobile networking infrastructures, and is one which appears well-suited for use in future commercial...

  4. Habitat-Lite: A GSC case study based on free text terms for environmental metadata

    Energy Technology Data Exchange (ETDEWEB)

    Kyrpides, Nikos; Hirschman, Lynette; Clark, Cheryl; Cohen, K. Bretonnel; Mardis, Scott; Luciano, Joanne; Kottmann, Renzo; Cole, James; Markowitz, Victor; Kyrpides, Nikos; Field, Dawn

    2008-04-01

    There is an urgent need to capture metadata on the rapidly growing number of genomic, metagenomic and related sequences, such as 16S ribosomal genes. This need is a major focus within the Genomic Standards Consortium (GSC), and Habitat is a key metadata descriptor in the proposed 'Minimum Information about a Genome Sequence' (MIGS) specification. The goal of the work described here is to provide a light-weight, easy-to-use (small) set of terms ('Habitat-Lite') that captures high-level information about habitat while preserving a mapping to the recently launched Environment Ontology (EnvO). Our motivation for building Habitat-Lite is to meet the needs of multiple users, such as annotators curating these data, database providers hosting the data, and biologists and bioinformaticians alike who need to search and employ such data in comparative analyses. Here, we report a case study based on semi-automated identification of terms from GenBank and GOLD. We estimate that the terms in the initial version of Habitat-Lite would provide useful labels for over 60% of the kinds of information found in the GenBank isolation-source field, and around 85% of the terms in the GOLD habitat field. We present a revised version of Habitat-Lite and invite the community's feedback on its further development in order to provide a minimum list of terms to capture high-level habitat information and to provide classification bins needed for future studies.
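The binning of free-text isolation-source strings into a small set of high-level habitat terms can be sketched as a keyword lookup. The term list and keyword rules below are toy assumptions, not the real Habitat-Lite vocabulary or EnvO mapping:

```python
# Illustrative high-level habitat bins with keyword triggers (assumed, not
# the actual GSC term list). First matching bin wins.
HABITAT_KEYWORDS = {
    "soil": ["soil", "rhizosphere", "compost"],
    "water": ["seawater", "lake", "river", "marine", "freshwater"],
    "host": ["gut", "intestine", "blood", "tissue", "patient"],
    "sediment": ["sediment", "mud", "sludge"],
}

def bin_habitat(free_text):
    """Map a free-text isolation-source string to a high-level habitat bin."""
    text = free_text.lower()
    for term, keywords in HABITAT_KEYWORDS.items():
        if any(k in text for k in keywords):
            return term
    return "unclassified"

sources = [
    "rhizosphere of Zea mays",
    "marine sediment, 2000 m depth",  # matches 'water' before 'sediment'
    "human gut biopsy",
    "deep subsurface brine",
]
labels = [bin_habitat(s) for s in sources]
coverage = sum(l != "unclassified" for l in labels) / len(labels)
print(labels, f"{coverage:.0%}")
```

The `coverage` figure is the analogue of the paper's estimate that Habitat-Lite labels cover over 60% of GenBank isolation-source entries.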

  5. An internet graph model based on trade-off optimization

    Science.gov (United States)

    Alvarez-Hamelin, J. I.; Schabanel, N.

    2004-03-01

    This paper presents a new model for the Internet graph (AS graph) based on the concept of heuristic trade-off optimization, introduced by Fabrikant, Koutsoupias and Papadimitriou in [CITE] to grow a random tree with a heavily tailed degree distribution. We propose here a generalization of this approach to generate a general graph, as a candidate for modeling the Internet. We present the results of our simulations and an analysis of the standard parameters measured in our model, compared with measurements from the physical Internet graph.
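The underlying trade-off tree of Fabrikant, Koutsoupias and Papadimitriou is simple to simulate: each new node i attaches to the existing node j that minimizes alpha * dist(i, j) + h(j), where h(j) is j's hop distance to the root. A sketch with illustrative parameter values:

```python
import random, math

def fkp_tree(n, alpha, seed=0):
    """Grow a heuristic trade-off tree: node i joins the node j minimizing
    alpha * Euclidean distance + hop distance to the root (node 0)."""
    rng = random.Random(seed)
    pts = [(rng.random(), rng.random()) for _ in range(n)]  # positions
    hops = [0]              # hop distance of each node to the root
    parent = [None]         # parent[0] is the root's (absent) parent
    degree = [0] * n
    for i in range(1, n):
        best = min(
            range(i),
            key=lambda j: alpha * math.dist(pts[i], pts[j]) + hops[j],
        )
        parent.append(best)
        hops.append(hops[best] + 1)
        degree[best] += 1
        degree[i] += 1
    return parent, degree

parent, degree = fkp_tree(2000, alpha=10)
print("max degree:", max(degree))  # moderate alpha yields a few large hubs
```

Sweeping `alpha` reproduces the regimes discussed in [CITE]: very small alpha gives a star, very large alpha a geometric tree, and intermediate values the heavy-tailed degree distribution the model was built for.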

  6. Towards a best practice of modeling unit of measure and related statistical metadata

    CERN Document Server

    Grossmann, Wilfried

    2011-01-01

    Data and metadata exchange between organizations requires a common language for describing structure and content of statistical data and metadata. The SDMX consortium develops content oriented guidelines (COG) recommending harmonized cross-domain concepts and terminology to increase the efficiency of (meta-) data exchange. A recent challenge is a recommended code list for the unit of measure. Based on examples from SDMX sponsor organizations this paper analyses the diversity of "unit of measure" as used in practice, including potential breakdowns and interdependencies of the respective meta-

  7. Internet-based communications in radiation oncology

    International Nuclear Information System (INIS)

    Goldwein, Joel W.

    1996-01-01

    Currently, it is estimated that 40 million Americans have access to the Internet. The emergence of widely available software, inexpensive hardware and affordable connectivity has led to an explosive growth in its use. Medicine in general and radiation oncology specifically are deriving great benefits from this technology. The use of this technology will result in a paradigm shift that is likely to change the way we all communicate. An understanding of the technology is therefore mandatory. The objectives of the course are to provide a practical introduction to the use of Internet technologies as they relate to our profession. The following topics will be reviewed: (1) a brief history of the Internet; (2) getting connected to the Internet; (3) Internet venues (the Web, ftp, USENETs, ...); (4) basic software tools (email, browsers, ...); (5) specific Internet resources; (6) advanced Internet utilization; (7) business and the Internet; (8) intranet utilization; (9) philosophical and medicolegal issues; (10) predictions of the future. Upon completion, the attendee will be familiar with the Internet, how it works, and how it can be used to fulfill the research, educational, and clinical care missions of our profession

  8. Structure constrained by metadata in networks of chess players.

    Science.gov (United States)

    Almeira, Nahuel; Schaigorodsky, Ana L; Perotti, Juan I; Billoni, Orlando V

    2017-11-09

    Chess is an emblematic sport that stands out because of its age, popularity and complexity. It has served to study human behavior from the perspective of a wide number of disciplines, from cognitive skills such as memory and learning, to aspects like innovation and decision-making. Given that an extensive documentation of chess games played throughout history is available, it is possible to perform detailed and statistically significant studies about this sport. Here we use one of the most extensive chess databases in the world to construct two networks of chess players. One of the networks includes games that were played over-the-board and the other contains games played on the Internet. We study the main topological characteristics of the networks, such as degree distribution and correlations, transitivity and community structure. We complement the structural analysis by incorporating players' level of play as node metadata. Although both networks are topologically different, we show that in both cases players gather in communities according to their expertise and that an emergent rich-club structure, composed by the top-rated players, is also present.
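The construction described above can be sketched in a few lines: players are nodes, an edge links two players who have faced each other, and each node carries its rating as metadata. The games and ratings below are invented for illustration:

```python
from itertools import combinations

games = [("Ana", "Bo"), ("Ana", "Cy"), ("Bo", "Cy"), ("Cy", "Dee"), ("Dee", "Ed")]
rating = {"Ana": 2500, "Bo": 2480, "Cy": 2510, "Dee": 1650, "Ed": 1600}

# Build the undirected player network as an adjacency map.
adj = {}
for a, b in games:
    adj.setdefault(a, set()).add(b)
    adj.setdefault(b, set()).add(a)

def transitivity(adj):
    """Global clustering: closed triples over connected triples (each
    triangle is counted three times, once per vertex, matching 3T/triples)."""
    closed = triples = 0
    for v, nbrs in adj.items():
        triples += len(nbrs) * (len(nbrs) - 1) // 2
        closed += sum(1 for a, b in combinations(sorted(nbrs), 2) if b in adj[a])
    return closed / triples if triples else 0.0

def edge_rating_gap(adj, rating):
    """Mean rating difference across edges: small values mean players of
    similar strength tend to meet, as in the expertise communities above."""
    edges = {frozenset((u, v)) for u in adj for v in adj[u]}
    return sum(abs(rating[min(e)] - rating[max(e)]) for e in edges) / len(edges)

print(transitivity(adj), edge_rating_gap(adj, rating))
```

On the real database the same two measurements quantify the reported transitivity and the gathering of players into communities by expertise.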

  9. Contingent approach to Internet-based supply network integration

    Science.gov (United States)

    Ho, Jessica; Boughton, Nick; Kehoe, Dennis; Michaelides, Zenon

    2001-10-01

    The Internet is playing an increasingly important role in enhancing the operations of supply networks as many organizations begin to recognize the benefits of Internet-enabled supply arrangements. However, the developments and applications to date do not extend significantly beyond the dyadic model, whereas the real advantages are to be made with the external and network models to support a coordinated and collaborative based approach. The DOMAIN research group at the University of Liverpool is currently defining new Internet-enabled approaches to enable greater collaboration across supply chains. Different e-business models and tools are focusing on different applications. Using inappropriate e-business models, tools or techniques will bring negative results instead of benefits to all the tiers in the supply network. Thus there are a number of issues to be considered before addressing Internet based supply network integration, in particular an understanding of supply chain management, the emergent business models and evaluating the effects of deploying e-business to the supply network or a particular tier. It is important to utilize a contingent approach to selecting the right e-business model to meet the specific supply chain requirements. This paper addresses the issues and provides a case study on the indirect materials supply networks.

  10. Extent and frequency of floods on Delaware River in vicinity of Belvidere, New Jersey

    Science.gov (United States)

    Farlekas, George M.

    1966-01-01

    A stream overflowing its banks is a natural phenomenon. This natural phenomenon of flooding has occurred on the Delaware River in the past and will occur in the future. The resulting inundation of large areas can cause property damage, business losses and possible loss of life, and may result in emergency costs for protection, rescue, and salvage work. For optimum development of the river valley consistent with the flood risk, an evaluation of flood conditions is necessary. Basic data and the interpretation of the data on the regimen of the streams, particularly the magnitude of floods to be expected, the frequency of their occurrence, and the areas inundated, are essential for planning and development of flood-prone areas. This report presents information relative to the extent, depth, and frequency of floods on the Delaware River and its tributaries in the vicinity of Belvidere, N.J. Flooding on the tributaries detailed in the report pertains only to the effect of backwater from the Delaware River. Data are presented for several past floods with emphasis given to the floods of August 19, 1955 and May 24, 1942. In addition, information is given for a hypothetical flood based on the flood of August 19, 1955 modified by completed (since 1955) and planned flood-control works. By use of relations presented in this report the extent, depth, and frequency of flooding can be estimated for any site along the reach of the Delaware River under study. Flood data and the evaluation of the data are presented so that local and regional agencies, organizations, and individuals may have a technical basis for making decisions on the use of flood-prone areas. The Delaware River Basin Commission and the U.S. Geological Survey regard this program of flood-plain inundation studies as a positive step toward flood-damage prevention. Flood-plain inundation studies, when followed by appropriate land-use regulations, are a valuable and economical supplement to physical works for flood

  11. Successful implementation of controlled aerobic bioremediation technology at hydrocarbon contaminated sites in the state of Delaware

    International Nuclear Information System (INIS)

    Harmon, C.D.; Hiller, A.V.; Carberry, J.B.

    1994-01-01

    WIK Associates, Inc. of New Castle, Delaware, has been working over the last two years to improve and advance a cost-effective method of treating hydrocarbon contaminated soils. The first section of this paper describes treatment methods and associated benefits such as increased control over environmental parameters. The second part of this paper describes work performed in attempting to predict degradation rates for varying types of hydrocarbon contamination under varying conditions. This research is based on data gathered in performing on-site bioremediation as described. A third section included in this paper describes the unique perspective of a State regulator responsible for overseeing remediation efforts evolving from leaking underground storage tanks. This section describes regulatory issues and procedures in Delaware and how the Department handles the submission and implementation of corrective action work plans, through project closure with thorough documentation of the remediation.

  12. Conditions and configuration metadata for the ATLAS experiment

    International Nuclear Information System (INIS)

    Gallas, E J; Pachal, K E; Tseng, J C L; Albrand, S; Fulachier, J; Lambert, F; Zhang, Q

    2012-01-01

    In the ATLAS experiment, a system called COMA (Conditions/Configuration Metadata for ATLAS), has been developed to make globally important run-level metadata more readily accessible. It is based on a relational database storing directly extracted, refined, reduced, and derived information from system-specific database sources as well as information from non-database sources. This information facilitates a variety of unique dynamic interfaces and provides information to enhance the functionality of other systems. This presentation will give an overview of the components of the COMA system, enumerate its diverse data sources, and give examples of some of the interfaces it facilitates. We list important principles behind COMA schema and interface design, and how features of these principles create coherence and eliminate redundancy among the components of the overall system. In addition, we elucidate how interface logging data has been used to refine COMA content and improve the value and performance of end-user reports and browsers.

  13. Conditions and configuration metadata for the ATLAS experiment

    CERN Document Server

    Gallas, E J; Albrand, S; Fulachier, J; Lambert, F; Pachal, K E; Tseng, J C L; Zhang, Q

    2012-01-01

    In the ATLAS experiment, a system called COMA (Conditions/Configuration Metadata for ATLAS), has been developed to make globally important run-level metadata more readily accessible. It is based on a relational database storing directly extracted, refined, reduced, and derived information from system-specific database sources as well as information from non-database sources. This information facilitates a variety of unique dynamic interfaces and provides information to enhance the functionality of other systems. This presentation will give an overview of the components of the COMA system, enumerate its diverse data sources, and give examples of some of the interfaces it facilitates. We list important principles behind COMA schema and interface design, and how features of these principles create coherence and eliminate redundancy among the components of the overall system. In addition, we elucidate how interface logging data has been used to refine COMA content and improve the value and performance of end-user...

  14. Biological, chemical, and physical data collected in Delaware Bay from 2 Sep 1997 to 8 Oct 1997 (NODC Accession 0118720)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This study was based on the sediment quality triad (SQT) approach. A stratified probabilistic sampling design was utilized to characterize the Delaware Bay system in...

  15. Creating an Early Warning System: Predictors of Dropout in Delaware. REL Mid-Atlantic Technical Assistance Brief. REL MA 1.2.75-10

    Science.gov (United States)

    Uekawa, Kazuaki; Merola, Stacey; Fernandez, Felix; Porowski, Allan

    2010-01-01

    This Technical Brief presents an historical analysis of key indicators of dropout for Delaware students in grades 9-12. Cut points for key risk indicators of high school dropout for the State of Delaware are provided. Using data provided by the Delaware Department of Education (DDOE), relationships between student dropout and several student…

  16. Automated Metadata Extraction

    Science.gov (United States)

    2008-06-01

    Store [4]. The files purchased from the iTunes Music Store include the following metadata: name, email address of purchaser, year, and album. Music: MP3 and AAC. Tagged Image File Format. Moving Picture Experts Group (MPEG): set of standards for music encoding. Open Document Format (ODF): an open, license-free, and clearly documented file format

  17. Obtaining Internet Flow Statistics by Volunteer-Based System

    DEFF Research Database (Denmark)

    Pedersen, Jens Myrup; Bujlow, Tomasz

    2012-01-01

    In this paper we demonstrate how the Volunteer Based System for Research on the Internet, developed at Aalborg University, can be used for creating statistics of Internet usage. Since the data is collected on individual machines, the statistics can be made on the basis of both individual users..., and average flow durations. The paper is concluded with a discussion on what further statistics can be made, and the further development of the system....

  18. The emergence of internet-based virtual private networks in international safeguards

    International Nuclear Information System (INIS)

    Smartt, Heidi Anne

    2001-01-01

    Full text: The costs associated with secure data transmission can be an obstacle to International Safeguards. Typical communication methods are priced by distance and may include telephone lines, frame relay, and ISDN. It is therefore costly to communicate globally. The growth of the Internet has provided an extensive backbone for global communications; however, the Internet does not provide intrinsic security measures. Combining the Internet with Virtual Private Network technology, which encrypts and authenticates data, creates a secure and potentially cost-effective data transmission path, as well as achieving other benefits such as reliability and scalability. Access to the Internet can be achieved by connecting to a local Internet Service Provider, which can be preferable to installing a static link between two distant points. The cost-effectiveness of the Internet-based Virtual Private Network is dependent on such factors as data amount, current operational costs, and the specifics of the Internet connection, such as user proximity to an Internet Service Provider or existing access to the Internet. This paper will introduce Virtual Private Network technology, the benefits of Internet communication, and the emergence of Internet-based Virtual Private Networks throughout the International Safeguards community. Specific projects to be discussed include: The completed demonstration of secure remote monitoring data transfer via the Internet between STUK in Helsinki, Finland, and the IAEA in Vienna, Austria; The demonstration of secure remote access to IAEA resources by traveling inspectors with Virtual Private Network software loaded on laptops; The proposed Action Sheets between ABACC/DOE and ARN/DOE, which will provide a link between Rio de Janeiro and Buenos Aires; The proposed use at the HIFAR research reactor, located in Australia, to provide remote monitoring data to the IAEA; The use of Virtual Private Networks by JRC, Ispra, Italy. (author)

  19. Automating the Extraction of Metadata from Archaeological Data Using iRods Rules

    Directory of Open Access Journals (Sweden)

    David Walling

    2011-10-01

    Full Text Available The Texas Advanced Computing Center and the Institute for Classical Archaeology at the University of Texas at Austin developed a method that uses iRods rules and a Jython script to automate the extraction of metadata from digital archaeological data. The first step was to create a record-keeping system to classify the data. The record-keeping system employs file and directory hierarchy naming conventions designed specifically to maintain the relationship between the data objects and map the archaeological documentation process. The metadata implicit in the record-keeping system is automatically extracted upon ingest, combined with additional sources of metadata, and stored alongside the data in the iRods preservation environment. This method enables a more organized workflow for the researchers, helps them archive their data close to the moment of data creation, and avoids error-prone manual metadata input. We describe the types of metadata extracted and provide technical details of the extraction process and storage of the data and metadata.

  20. Internet-Based Indoor Navigation Services

    OpenAIRE

    Zeinalipour-Yazti, Demetrios; Laoudias, Christos; Georgiou, Kyriakos

    2017-01-01

    Smartphone advances are leading to a class of Internet-based Indoor Navigation (IIN) services. IIN services rely on geolocation databases that store indoor models, comprising floor maps and points of interest, along with wireless, light, and magnetic signals for localizing users. Developing IIN services creates new information management challenges - such as crowdsourcing indoor models, acquiring and fusing big data velocity signals, localization algorithms, and custodians' location privacy. Here, ...

  1. Trichotillomania: the impact of treatment history on the outcome of an Internet-based intervention.

    Science.gov (United States)

    Weidt, Steffi; Bruehl, Annette Beatrix; Delsignore, Aba; Zai, Gwyneth; Kuenburg, Alexa; Klaghofer, Richard; Rufer, Michael

    2017-01-01

    Many patients suffering from trichotillomania (TTM) have never undergone treatment. Without treatment, TTM often presents with a chronic course. Characteristics of TTM individuals who have never been treated (untreated) remain largely unknown, and whether treatment history impacts Internet-based interventions has not yet been investigated. We aimed to answer whether Internet-based interventions can reach untreated individuals, and whether treatment history is associated with certain characteristics and impacts the outcome of an Internet-based intervention. We provided Internet-based interventions. Subjects were characterized at three time points using the Massachusetts General Hospital Hairpulling Scale, Hamilton Depression Rating Scale, and the World Health Organization Quality of Life questionnaire. Of 105 individuals, 34 were untreated. Health-related quality of life (HRQoL) was markedly impaired in both untreated and treated individuals. Symptom severity did not differ between untreated and treated individuals. Nontreatment was associated with fewer depressive symptoms (P=0.002). Treatment history demonstrated no impact on the outcome of Internet-based interventions. The results demonstrate that Internet-based interventions can reach untreated TTM individuals, and that untreated individuals benefit as much as treated individuals from such interventions. Future Internet-based interventions should focus on how best to reach and support untreated individuals with TTM. Additionally, future studies may examine whether Internet-based interventions can reach and help untreated individuals suffering from other psychiatric disorders.

  2. Internet Connection Control based on Idle Time Using User Behavior Pattern Analysis

    Directory of Open Access Journals (Sweden)

    Fadilah Fahrul Hardiansyah

    2014-12-01

    Full Text Available The growing capabilities of smartphones are rapidly increasing their power consumption. Many methods have been proposed to reduce smartphone power consumption; most of them control the internet connection based on the remaining battery level, regardless of when and where energy is actually wasted. This paper proposes a new approach that controls the internet connection during idle time, using user behavior pattern analysis to predict idle time duration. During idle time, the internet connection is periodically switched on and off at a fixed interval. This method effectively reduces wasted energy, and because the control operates only during idle time, it does not interfere with the user. Keywords: Smartphone, User Behavior, Pattern Recognition, Idle Time, Internet Connection Control
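
    As an illustration of the on/off duty-cycling idea, the energy saved over a predicted idle window can be estimated roughly as follows. The interval lengths and the 50 mW idle radio draw in the usage note are made-up figures, not values from the paper:

```python
def duty_cycled_savings_joules(idle_minutes, on_minutes, off_minutes, idle_power_mw):
    """Rough energy saving from keeping the radio off for part of each
    on/off cycle during a predicted idle window (ignores radio tail energy)."""
    off_fraction = off_minutes / (on_minutes + off_minutes)
    idle_seconds = idle_minutes * 60
    # mW x seconds = mJ; divide by 1000 to get joules
    return idle_seconds * idle_power_mw * off_fraction / 1000.0
```

    With a 30-minute idle window, a 1-minute-on / 4-minutes-off cycle, and a 50 mW idle draw, this yields roughly 72 J saved; real savings depend heavily on the radio's wake-up and tail energy, which this sketch ignores.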

  3. Applying Internet-based Technologies to Teaching Corporate Finance and Investments

    Directory of Open Access Journals (Sweden)

    Zhuoming “Joe” Peng, Ph.D.

    2006-07-01

    Full Text Available Finance faculty are increasingly encouraged to use internet-based technologies in teaching. This paper examines students’ perceptions of finance faculty who use internet-based technologies and the impact on their learning experiences in undergraduate introductory corporate finance, investments, and MBA investments courses. The results suggest that offering all course materials online may enhance students’ learning experiences; however, the technologies may be best thought of as teaching tools. A better methodology for finance course delivery may be in-classroom interaction between an instructor and the students, with all pertinent course materials available online throughout the semester. There is a statistically significant difference between MBA (Master of Business Administration) students and undergraduate business students in terms of their desire to use the internet for learning finance. Consistent with previous research, results indicate that it may not be common practice among faculty to use internet-based technologies, and that assistant professors tend to use technologies in teaching more often than their higher-ranked colleagues do.

  4. Studies of Big Data metadata segmentation between relational and non-relational databases

    Science.gov (United States)

    Golosova, M. V.; Grigorieva, M. A.; Klimentov, A. A.; Ryabinkin, E. A.; Dimitrov, G.; Potekhin, M.

    2015-12-01

    In recent years the concept of Big Data has become well established in IT. Systems managing large data volumes produce metadata that describe the data and workflows. These metadata are used to obtain information about current system state and for statistical and trend analysis of the processes these systems drive. Over time, the amount of stored metadata can grow dramatically. In this article we present our studies demonstrating how metadata storage scalability and performance can be improved by using a hybrid RDBMS/NoSQL architecture.
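
    The hybrid RDBMS/NoSQL split can be illustrated with a toy store that keeps active task metadata in SQLite and offloads finished records to a document-style archive. The schema is invented for illustration, and a plain dict stands in for the NoSQL backend:

```python
import sqlite3

class HybridMetadataStore:
    """Illustrative hybrid store: hot task metadata lives in an RDBMS,
    while finished, rarely-updated records are segmented off to a
    document store (a dict standing in for e.g. Cassandra or HBase)."""

    def __init__(self):
        self.rdbms = sqlite3.connect(":memory:")
        self.rdbms.execute("CREATE TABLE tasks (id INTEGER PRIMARY KEY, state TEXT)")
        self.archive = {}  # stand-in for the NoSQL side

    def record(self, task_id, state):
        # Active workflow metadata goes to the relational side.
        self.rdbms.execute("INSERT OR REPLACE INTO tasks VALUES (?, ?)", (task_id, state))

    def archive_finished(self):
        # Move completed records out of the RDBMS to keep it small and fast.
        cur = self.rdbms.execute("SELECT id, state FROM tasks WHERE state = 'done'")
        for task_id, state in cur.fetchall():
            self.archive[task_id] = {"state": state}
            self.rdbms.execute("DELETE FROM tasks WHERE id = ?", (task_id,))
```

    The design choice this mirrors is that relational queries stay cheap on a bounded working set, while the archive side absorbs unbounded growth.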

  5. Studies of Big Data metadata segmentation between relational and non-relational databases

    CERN Document Server

    Golosova, M V; Klimentov, A A; Ryabinkin, E A; Dimitrov, G; Potekhin, M

    2015-01-01

    In recent years the concept of Big Data has become well established in IT. Systems managing large data volumes produce metadata that describe the data and workflows. These metadata are used to obtain information about current system state and for statistical and trend analysis of the processes these systems drive. Over time, the amount of stored metadata can grow dramatically. In this article we present our studies demonstrating how metadata storage scalability and performance can be improved by using a hybrid RDBMS/NoSQL architecture.

  6. Generating Explanations for Internet-based Business Games

    Directory of Open Access Journals (Sweden)

    Martin Fischer

    2007-06-01

    Full Text Available It is widely established that debriefing in business games is important and influences students' learning performance, yet most games provide only game statistics rather than explanations of solution paths. We suggest the automatic generation of explanations for internet-mediated business games to improve debriefing quality. As a proof of concept, we developed a prototype of an internet-based auction game embedding an open simulation model and an automatic explanation component that helps students and teachers analyse the decision-making process. This paper describes the usefulness of automated explanations and the underlying generic software architecture.

  7. OntoFire: an ontology-based geo-portal for wildfires

    Directory of Open Access Journals (Sweden)

    K. Kalabokidis

    2011-12-01

    Full Text Available With the proliferation of geospatial technologies on the Internet, the role of geo-portals (i.e. gateways to Spatial Data Infrastructures) in the area of wildfire management is emerging. However, keyword-based techniques often frustrate users looking for data of interest in geo-portal environments, and little attention has been paid to shifting from conventional keyword-based to navigation-based mechanisms. The presented OntoFire system is an ontology-based geo-portal about wildfires. Through the proposed navigation mechanisms, relationships between the data can be discovered that would otherwise not be visible using conventional querying techniques alone. End users can use the browsing interface to find resources of interest via the navigation mechanisms provided, while data providers can use the publishing interface to submit new metadata, or to modify or remove metadata in the catalogue. The proposed approach can improve the discovery of valuable information that is necessary to set priorities for disaster mitigation and prevention strategies. OntoFire aspires to be a focal point for the integration and management of a very large amount of information, contributing in this way to the dissemination of knowledge and to the preparedness of operational stakeholders.

  8. FSA 2002 Digital Orthophoto Metadata

    Data.gov (United States)

    Minnesota Department of Natural Resources — Metadata for the 2002 FSA Color Orthophotos Layer. Each orthophoto is represented by a Quarter 24k Quad tile polygon. The polygon attributes contain the quarter-quad...

  9. Internet-Based Education for Prostate Cancer Screening

    National Research Council Canada - National Science Library

    Taylor, Kathryn L

    2007-01-01

    .... Abundant evidence documents the expanding role of the Internet in increasing access to and understanding of health information and the need for systematic evaluations of Internet-based interventions. The print- and web-based interventions have been.

  10. Ethical Issues in Designing Internet-Based Research: Recommendations for Good Practice

    Science.gov (United States)

    Gupta, Shikha

    2017-01-01

    This article presents an overview of internet-based research, highlighting the absence of a standard terminology to define and classify such research. The label internet-based research or online research can cover a diverse range of research designs and methods, involving different degrees of ethical concern regarding privacy, transparency,…

  11. 47 CFR 64.613 - Numbering directory for internet-based TRS users.

    Science.gov (United States)

    2010-10-01

    ... Uniform Resource Identifier (URI). (2) For each record associated with a VRS user, the URI shall contain.... (3) Only the TRS Numbering Administrator and Internet-based TRS providers may access the TRS...-governmental entity that is impartial and not an affiliate of any Internet-based TRS provider. (ii) Neither the...

  12. Metadata as a means for correspondence on digital media

    NARCIS (Netherlands)

    Stouffs, R.; Kooistra, J.; Tuncer, B.

    2004-01-01

    Metadata derive their action from their association to data and from the relationship they maintain with this data. An interpretation of this action is that the metadata lays claim to the data collection to which it is associated, where the claim is successful if the data collection gains quality as

  13. 75 FR 21653 - Commercial Leasing for Wind Power on the Outer Continental Shelf (OCS) Offshore Delaware-Request...

    Science.gov (United States)

    2010-04-26

    ... Leasing for Wind Power on the Outer Continental Shelf (OCS) Offshore Delaware--Request for Interest (RFI... proposal. In June 2008, Bluewater Wind Delaware LLC announced that it signed a 25-year power purchase agreement with Delmarva Power to sell up to 200 megawatts (MW) of power to the utility from an offshore wind...

  14. Finding Atmospheric Composition (AC) Metadata

    Science.gov (United States)

    Strub, Richard F..; Falke, Stefan; Fiakowski, Ed; Kempler, Steve; Lynnes, Chris; Goussev, Oleg

    2015-01-01

    The Atmospheric Composition Portal (ACP) is an aggregator and curator of information related to remotely sensed atmospheric composition data and analysis. It uses existing tools and technologies and, where needed, enhances those capabilities to provide interoperable access, tools, and contextual guidance for scientists and value-adding organizations using remotely sensed atmospheric composition data. The initial focus is on Essential Climate Variables identified by the Global Climate Observing System: CH4, CO, CO2, NO2, O3, SO2, and aerosols. This poster addresses our efforts in building the ACP Data Table, an interface to help discover and understand remotely sensed data that are related to atmospheric composition science and applications. We harvested the GCMD, CWIC, and GEOSS metadata catalogs using machine-to-machine technologies (OpenSearch, Web Services). We also manually investigated the plethora of CEOS data provider portals and other catalogs where that data might be aggregated. This poster presents our experience of the excellence, variety, and challenges we encountered. Conclusions: 1. The significant benefits that the major catalogs provide are their machine-to-machine tools, like OpenSearch and Web Services, rather than any GUI usability improvements, owing to the large amount of data in their catalogs. 2. There is a trend at the large catalogs toward simulating small data provider portals through advanced services. 3. Populating metadata catalogs using ISO 19115 is too complex for users to do in a consistent way, difficult to parse visually or with XML libraries, and too complex for Java XML binders like CASTOR. 4. The ability to search for IDs first and then for data (GCMD and ECHO) is better for machine-to-machine operations than the timeouts experienced when returning the entire metadata entry at once. 5. Metadata harvest and export activities between the major catalogs have led to a significant amount of duplication (this is currently being addressed). 6. Most (if not all

  15. A Flexible Online Metadata Editing and Management System

    Energy Technology Data Exchange (ETDEWEB)

    Aguilar, Raul [Arizona State University; Pan, Jerry Yun [ORNL; Gries, Corinna [Arizona State University; Inigo, Gil San [University of New Mexico, Albuquerque; Palanisamy, Giri [ORNL

    2010-01-01

    A metadata editing and management system is being developed employing state-of-the-art XML technologies. A modular and distributed design was chosen for scalability, flexibility, options for customization, and the possibility of adding more functionality at a later stage. The system consists of a desktop design tool, or schema walker, used to generate code for the actual online editor; a native XML database; and an online user access management application. The design tool is a Java Swing application that reads an XML schema and provides the designer with options to combine input fields into online forms and to give the fields user-friendly tags. Based on the design decisions, the tool generates code for the online metadata editor; the generated code is an implementation of the XForms standard using the Orbeon Framework. The design tool fulfills two requirements: first, data entry forms based on one schema may be customized at design time; and second, data entry applications may be generated for any valid XML schema without relying on custom information in the schema. The customization information generated at design time is saved in a configuration file, which may be re-used and changed again in the design tool. Future developments will add functionality to the design tool to integrate help text, tool tips, project-specific keyword lists, and thesaurus services. Additional styling of the finished editor is accomplished via cascading style sheets, which may be further customized, and different look-and-feels may be accumulated through the community process. The customized editor produces XML files in compliance with the original schema; however, data from the current page is saved into a native XML database whenever the user moves to the next screen or pushes the save button, regardless of validity. Currently the system uses the open source XML database eXist for storage and management, which comes with third-party online and desktop management tools. However, access to
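
    A drastically simplified "schema walker" can be sketched with the standard library: read an XML schema and list the named elements that a form generator would turn into input fields. This is only a toy; the system described above does far more (XForms generation, customization, styling):

```python
import xml.etree.ElementTree as ET

# XML Schema namespace, as used in xs:element declarations
XSD_NS = "{http://www.w3.org/2001/XMLSchema}"

def form_fields(xsd_text: str) -> list:
    """Walk an XML schema and collect the named elements, in document
    order, that a form generator would render as input fields."""
    root = ET.fromstring(xsd_text)
    return [el.get("name") for el in root.iter(f"{XSD_NS}element") if el.get("name")]
```

    A real schema walker would also follow type references and occurrence constraints before emitting the XForms markup.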

  16. Internet-based self-management in asthma

    NARCIS (Netherlands)

    Meer, Victor van der

    2010-01-01

    This thesis describes the role of internet-based support in the delivery of an asthma self management program. First, the compliance and reliability of home lung function monitoring, one of the key features of asthma self-management, was studied and appeared to be high over a 4-week period. Second,

  17. Separation of metadata and pixel data to speed DICOM tag morphing.

    Science.gov (United States)

    Ismail, Mahmoud; Philbin, James

    2013-01-01

    The DICOM information model combines pixel data and metadata in a single DICOM object, so it is not possible to access the metadata separately from the pixel data. There are use cases where only the metadata is accessed, and the current DICOM object format increases their running time. Tag morphing is one such use case: it involves the deletion, insertion, or manipulation of one or more metadata attributes. It is typically used for order reconciliation on study acquisition, or to localize the issuer of patient ID (IPID) and the patient ID attributes when data from one domain is transferred to a different domain. In this work, we propose using Multi-Series DICOM (MSD) objects, which separate metadata from pixel data and remove duplicate attributes, to reduce the time required for tag morphing. The time required to update a set of study attributes in each format is compared. The results show that the MSD format significantly reduces the time required for tag morphing.
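
    The three tag-morphing operations (insert, manipulate, delete) can be illustrated on a plain dict standing in for the DICOM header; the attribute keywords are real DICOM names, but this function is an illustration, not the paper's implementation:

```python
def morph_tags(metadata: dict, issuer: str) -> dict:
    """Localize patient-ID attributes for a new domain.
    The pixel data is never touched -- which is exactly why separating
    metadata from pixel data (as in MSD) speeds this operation up."""
    morphed = dict(metadata)
    morphed["IssuerOfPatientID"] = issuer                       # insertion
    morphed["PatientID"] = f"{issuer}-{morphed['PatientID']}"   # manipulation
    morphed.pop("OtherPatientIDs", None)                        # deletion
    return morphed
```

    In the conventional single-object format, the same edit requires parsing past (or rewriting around) the pixel data, which dominates the object's size.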

  18. Characteristics and Help-Seeking Behaviors of Internet Gamblers Based on Most Problematic Mode of Gambling

    Science.gov (United States)

    2015-01-01

    Background Previous studies of problem Internet gamblers have failed to distinguish whether their problem gambling relates to Internet or land-based gambling modes. Therefore, characteristics and help-seeking behaviors of people whose gambling problems relate specifically to Internet gambling are unknown, but could inform the optimal alignment of treatment and support services with the needs and preferences of problem gamblers. Objective This study aimed to compare (1) characteristics of problem Internet gamblers and problem land-based gamblers and (2) uptake of different types and modes of help between problem Internet gamblers and problem land-based gamblers. Hypothesis 1 was that problem Internet gamblers are less likely to seek help. Hypothesis 2 was that problem Internet gamblers are more likely to use online modes of help. Methods A sample of 620 respondents meeting criteria for problem gambling was drawn from an online survey of 4594 Australian gamblers. Respondents were recruited through advertisements on gambling and gambling help websites, Facebook, and Google. Measures consisted of gambling participation; proportion of gambling on the Internet; most problematic mode of gambling; help seeking from 11 different sources of formal help, informal help, and self-help for gambling problems; psychological distress (Kessler 6); problem gambling severity (Problem Gambling Severity Index, PGSI); and demographics. Results Problem Internet gamblers were significantly more likely than problem land-based gamblers to be male (χ2 1=28.3, Pgambling helplines, online groups, self-exclusion from land-based venues, family or friends, and self-help strategies. Both problem Internet and problem land-based gamblers had similarly low use of online help. However, problem land-based gamblers (37.6%, 126/335) were significantly more likely to have sought land-based formal help compared to problem Internet gamblers (23.5%, 67/285; χ2 1=14.3, Pgambling help by problem Internet

  19. Leveraging Python to improve ebook metadata selection, ingest, and management

    Directory of Open Access Journals (Sweden)

    Kelly Thompson

    2017-10-01

    Full Text Available Libraries face many challenges in managing descriptive metadata for ebooks, including quality control, completeness of coverage, and ongoing management. The recent emergence of library management systems that automatically provide descriptive metadata for e-resources activated in system knowledge bases means that ebook management models are moving toward both greater efficiency and more complex implementation and maintenance choices. Automated and data-driven processes for ebook management have always been desirable, but in the current environment, they become necessary. In addition to initial selection of a record source, automation can be applied to quality control processes and ongoing maintenance in order to keep manual, eyes-on work to a minimum while providing the best possible discovery and access. In this article, we describe how we are using Python scripts to address these challenges.
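
    A minimal version of the quality-control pass described above might look like this; the required-field list and record shape are assumptions for illustration, not the authors' actual schema:

```python
# Hypothetical set of descriptive fields every ebook record must carry
REQUIRED = ("title", "creator", "identifier", "publisher")

def audit_records(records):
    """Flag ebook metadata records that are missing required fields,
    keyed by identifier, so they can be queued for manual review."""
    problems = {}
    for rec in records:
        missing = [field for field in REQUIRED if not rec.get(field)]
        if missing:
            problems[rec.get("identifier", "<no id>")] = missing
    return problems
```

    Runs like this can be scheduled against each vendor record load, keeping eyes-on work limited to the records the script flags.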

  20. 76 FR 4716 - Commercial Leasing for Wind Power on the Outer Continental Shelf (OCS) Off Delaware, Notice of...

    Science.gov (United States)

    2011-01-26

    ... No. BOEM-2010-0075] Commercial Leasing for Wind Power on the Outer Continental Shelf (OCS) Off... commercial wind development on the OCS off Delaware and requests submission of indications of competitive... received two nominations of proposed lease areas: One from Bluewater Wind Delaware LLC (Bluewater) and...

  1. Concentrations of metals in blood and feathers of nestling ospreys (Pandion haliaetus) in Chesapeake and Delaware Bays

    Science.gov (United States)

    Rattner, B.A.; Golden, N.H.; Toschik, P.C.; McGowan, P.C.; Custer, T.W.

    2008-01-01

    In 2000, 2001, and 2002, blood and feather samples were collected from 40-45-day-old nestling ospreys (Pandion haliaetus) from Chesapeake Bay and Delaware Bay and River. Concentrations of 18 metals, metalloids, and other elements were determined in these samples by inductively coupled plasma-mass spectrometry, and Hg concentrations were measured by cold vapor atomic absorption spectroscopy. When compared to concurrent reference areas (South, West, and Rhode Rivers), mean As and Hg concentrations in blood were greater (p nestlings from the highly industrialized Elizabeth River compared to the rural reference area. When compared to the concurrent reference area, mean Al, Ba, Hg, Mn, and Pb concentrations in feathers were substantially greater (p nestlings from northern Delaware Bay and River had greater concentrations (p nestling feathers from Delaware were frequently greater than in the Chesapeake. The present findings and those of related reproductive studies suggest that concentrations of several heavy metals (e.g., Cd, Hg, Pb) in nestling blood and feathers from Chesapeake and Delaware Bays were below toxicity thresholds and do not seem to be affecting chick survival during the nestling period.

  2. Internet Self-Efficacy Preferences of Internet Based Environments and Achievement of Prospective Teachers

    Science.gov (United States)

    Ozyalcin Oskay, Ozge

    2011-01-01

    The aims of this study are to determine prospective chemistry teachers' internet self-efficacy and preferences of constructivist internet-assisted environments and to examine the relationship between their internet self-efficacy and their preferences for constructivist internet-assisted environments, the relationship between their achievement in…

  3. The significant impact of education, poverty, and race on Internet-based research participant engagement.

    Science.gov (United States)

    Hartz, Sarah M; Quan, Tiffany; Ibiebele, Abiye; Fisher, Sherri L; Olfson, Emily; Salyer, Patricia; Bierut, Laura J

    2017-02-01

    Internet-based technologies are increasingly being used for research studies. However, it is not known whether Internet-based approaches will effectively engage participants from diverse racial and socioeconomic backgrounds. A total of 967 participants were recruited and offered genetic ancestry results. We evaluated viewing Internet-based genetic ancestry results among participants who expressed high interest in obtaining the results. Of the participants, 64% stated that they were very or extremely interested in their genetic ancestry results. Among interested participants, individuals with a high school diploma (n = 473) viewed their results 19% of the time relative to 4% of the 145 participants without a diploma (P Internet-based research was low despite high reported interest. This suggests that explicit strategies should be developed to increase diversity in Internet-based research.Genet Med 19 2, 240-243.

  4. Application of advanced reservoir characterization, simulation, and production optimization strategies to maximize recovery in slope and basin clastic reservoirs, West Texas (Delaware Basin), Class III

    Energy Technology Data Exchange (ETDEWEB)

    Dutton, Shirley P.; Flanders, William A.; Zirczy, Helena H.

    2000-05-24

    The objective of this Class 3 project was to demonstrate that detailed reservoir characterization of slope and basin clastic reservoirs in sandstones of the Delaware Mountain Group in the Delaware Basin of West Texas and New Mexico is a cost-effective way to recover a higher percentage of the original oil in place through strategic placement of infill wells and geologically based field development. Phase 1 of the project, reservoir characterization, was completed this year, and Phase 2 began. The project is focused on East Ford field, a representative Delaware Mountain Group field that produces from the upper Bell Canyon Formation (Ramsey sandstone). The field, discovered in 1960, is operated by Oral Petco, Inc., as the East Ford unit. A CO2 flood is being conducted in the unit, and this flood is the Phase 2 demonstration for the project.

  5. Architectural and Mobility Management Designs in Internet-Based Infrastructure Wireless Mesh Networks

    Science.gov (United States)

    Zhao, Weiyi

    2011-01-01

    Wireless mesh networks (WMNs) have recently emerged to be a cost-effective solution to support large-scale wireless Internet access. They have numerous applications, such as broadband Internet access, building automation, and intelligent transportation systems. One research challenge for Internet-based WMNs is to design efficient mobility…

  6. Research on technology environment improvement of related industries based on internet governance

    Science.gov (United States)

    Zhang, Jing; Guan, Zhongliang

    2017-08-01

    Internet technology is an important factor in the development of industry, and constructing a good technical environment is the foundation for the growth of the Internet and its related industries. This paper demonstrates the necessity of building and improving the technology environment of the Internet and related industries by comparing the current situation of those industries, and points out that China urgently needs to improve its Internet technology environment. The paper establishes technology demand patterns for different related industries and explores strategies for constructing and improving the Internet technology environment according to the differing demands that strongly and weakly Internet-related industries place on it. The paper analyzes the factors that threaten the security of the Internet and demonstrates methods and tactics for establishing and improving the technology environment of Internet hardware, the Internet, and related industries in China within the framework of comprehensive Internet management. It also studies the construction and improvement of the comprehensive management technology environment of the Internet-based industry in China.

  7. 76 FR 64959 - Delaware; Major Disaster and Related Determinations

    Science.gov (United States)

    2011-10-19

    ... resulting from Hurricane Irene during the period of August 25-31, 2011, is of sufficient severity and... State of Delaware have been designated as adversely affected by this major disaster: Kent and Sussex... Unemployment Assistance (DUA); 97.046, Fire Management Assistance Grant; 97.048, Disaster Housing Assistance to...

  8. Geography-based structural analysis of the Internet

    Energy Technology Data Exchange (ETDEWEB)

    Kasiviswanathan, Shiva [Los Alamos National Laboratory; Eidenbenz, Stephan [Los Alamos National Laboratory; Yan, Guanhua [Los Alamos National Laboratory

    2010-01-01

    In this paper, we study some geographic aspects of the Internet. We base our analysis on a large set of geolocated IP hop-level session data (including about 300,000 backbone routers, 150 million end hosts, and 1 billion sessions) that we synthesized from a variety of different input sources such as US census data, computer usage statistics, Internet market share data, IP geolocation data sets, CAIDA's Skitter data set for backbone connectivity, and BGP routing tables. We use this model to perform a nationwide and statewide geographic analysis of the Internet. Our main observations are: (1) There is a dominant coast-to-coast pattern in the US Internet traffic. In fact, in many instances even if the end-devices are not near either coast, still the traffic between them takes a long detour through the coasts. (2) More than half of the Internet paths are inflated by 100% or more compared to their corresponding geometric straight-line distance. This circuitousness makes the average ratio between the routing distance and geometric distance large (around 10). (3) The weighted mean hop count is around 5, but the hop counts are very loosely correlated with the distances. The weighted mean AS count (number of ASes traversed) is around 3. (4) The AS size and the AS location number distributions are heavy-tailed and strongly correlated. Most of the ASes are medium sized and there is a wide variability in the geographic dispersion size (measured in terms of the convex hull area) of these ASes.
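
    The circuitousness measure in observation (2) — routed distance over straight-line distance — can be reproduced from geolocated hop coordinates. The three example coordinates in the test are illustrative, not taken from the authors' data set:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # 6371 km: mean Earth radius

def circuitousness(hops):
    """Ratio of routed distance (sum of hop-to-hop great circles)
    to the straight-line distance between the path's endpoints."""
    routed = sum(haversine_km(*hops[i], *hops[i + 1]) for i in range(len(hops) - 1))
    direct = haversine_km(*hops[0], *hops[-1])
    return routed / direct
```

    A path whose hops detour through a coastal exchange point produces a ratio well above 1, matching the inflation the authors report.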

  9. Value-based metrics and Internet-based enterprises

    Science.gov (United States)

    Gupta, Krishan M.

    2001-10-01

    Within the last few years, a host of value-based metrics like EVA, MVA, TBR, CFROI, and TSR have evolved. This paper attempts to analyze the validity and applicability of EVA and the Balanced Scorecard for Internet-based organizations. Despite the collapse of the dot-com model, firms engaged in e-commerce continue to struggle to find new ways to account for customer base, technology, employees, knowledge, etc., as part of the value of the firm. While some metrics, like the Balanced Scorecard, are geared towards internal use, others, like EVA, are for external use. Value-based metrics are used for performing internal audits as well as comparing firms against one another, and can also be effectively utilized by individuals outside the firm looking to determine whether the firm is creating value for its stakeholders.
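
    For reference, the basic EVA formula the paper analyzes is simple to state: EVA = NOPAT − WACC × invested capital, i.e. operating profit after taxes minus a charge for the capital employed. A one-line sketch (the figures in the test are invented):

```python
def economic_value_added(nopat, invested_capital, wacc):
    """EVA: net operating profit after taxes minus the capital charge.
    Positive EVA indicates the firm is creating value above its cost of capital."""
    return nopat - wacc * invested_capital
```

    The difficulty the paper points to is not the arithmetic but the inputs: for Internet-based firms, much of the invested capital (customer base, technology, knowledge) is intangible and hard to measure.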

  10. OAI-PMH repositories : quality issues regarding metadata and protocol compliance, tutorial 1

    CERN Multimedia

    CERN. Geneva; Cole, Tim

    2005-01-01

    This tutorial will provide an overview of emerging guidelines and best practices for OAI data providers and how they relate to the expectations and needs of service providers. The audience should already be familiar with OAI protocol basics and have at least some experience with either data provider or service provider implementations. The speakers will present both protocol compliance best practices and general recommendations for creating and disseminating high-quality "shareable metadata". The protocol best practices discussion will include coverage of OAI identifiers, datestamps, deleted records, sets, resumption tokens, about containers, branding, error conditions, HTTP server issues, and repository lifecycle issues. Discussion of what makes for good, shareable metadata will cover topics including character encoding, namespace and XML schema issues, metadata crosswalk issues, support of multiple metadata formats, general metadata authoring recommendations, specific recommendations for use of Dublin Core elemen...
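As a concrete illustration of the protocol mechanics the tutorial covers, the sketch below builds a ListRecords request and extracts the resumptionToken from a (heavily abbreviated) response; the endpoint URL and token value are made up:

```python
from urllib.parse import urlencode
from xml.etree import ElementTree as ET

OAI_NS = "{http://www.openarchives.org/OAI/2.0/}"

def list_records_url(base_url, metadata_prefix="oai_dc", token=None):
    """Build a ListRecords request; a resumptionToken replaces all other arguments."""
    args = {"verb": "ListRecords"}
    if token:
        args["resumptionToken"] = token
    else:
        args["metadataPrefix"] = metadata_prefix
    return base_url + "?" + urlencode(args)

def resumption_token(response_xml):
    """Return the resumptionToken from a ListRecords response, or None on the last page."""
    el = ET.fromstring(response_xml).find(f".//{OAI_NS}resumptionToken")
    return el.text if el is not None and el.text else None

# Abbreviated sample response (real responses also carry <record> elements)
sample = """<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords><resumptionToken>page2</resumptionToken></ListRecords>
</OAI-PMH>"""
print(list_records_url("https://example.org/oai", token=resumption_token(sample)))
```

A harvester loops on exactly this pattern: issue ListRecords, store the records, and repeat with the returned token until the response carries none.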

  11. The University of Delaware Carlson International Polar Year Events: Collaborative and Educational Outreach

    Science.gov (United States)

    Nelson, F. E.; Bryant, T.; Wellington, P.; Dooley, J.; Bird, M.

    2008-12-01

    Delaware is a small state with, by virtue of its coastal location, a large stake in climatic change in the polar regions. The University of Delaware has maintained a strong presence in cold-regions research since the mid-1940s, when William Samuel Carlson, a highly accomplished Arctic explorer, military strategist, and earth scientist, was named 20th President (1946-50) of the University. Carlson played a leading role in two of the University of Michigan's Greenland expeditions in the late 1920s and early 1930s. As Director of the Arctic, Desert, and Tropic Branch of the US Army Air Forces Tactical Center during World War II, Colonel Carlson played a role in developing several air transportation routes through the Arctic that helped to facilitate the Allied victory in Europe. Carlson authored many scientific and popular publications concerned with the Arctic, including the books Greenland Lies North (1940) and Lifelines Through the Arctic (1962). Although the University of Delaware has maintained a vigorous and continuous program of polar research since Carlson's tenure, the faculty, staff, and students involved are diffused throughout the University's colleges and departments, without an institutional focal point. Consequently, although many of these individuals are well known in their respective fields, the institution has not until recently been perceived widely as a center of polar-oriented research. The goals of the Carlson International Polar Year Events are to: (a) develop a sense of community among UD's diffuse polar-oriented researchers and educators; (b) create a distinctive and highly visible role for UD in the milieu of IPY activities; (c) promote interest in and knowledge about the polar regions in the State of Delaware, at all educational levels; (d) forge a close relationship between UD and the American Geographical Society, a national organization involved closely with previous International Polar Years; and (e) create a new basis for development

  12. Fast processing of digital imaging and communications in medicine (DICOM) metadata using multiseries DICOM format.

    Science.gov (United States)

    Ismail, Mahmoud; Philbin, James

    2015-04-01

    The digital imaging and communications in medicine (DICOM) information model combines pixel data and its metadata in a single object. There are user scenarios that only need metadata manipulation, such as deidentification and study migration. Most picture archiving and communication systems use a database to store and update the metadata rather than updating the raw DICOM files themselves. The multiseries DICOM (MSD) format separates metadata from pixel data and eliminates duplicate attributes. This work promotes storing DICOM studies in MSD format to reduce metadata processing time. A set of experiments is performed that updates the metadata of a set of DICOM studies for deidentification and migration. The studies are stored in both the traditional single-frame DICOM (SFD) format and the MSD format. The results show that it is faster to update a study's metadata in MSD format than in SFD format, because the bulk data is separated in MSD and is not retrieved from the storage system. In addition, it is space-efficient to store deidentified studies in MSD format, as they share the same bulk data object with the original study. In summary, separating metadata from pixel data using the MSD format provides fast metadata access and speeds up applications that process only the metadata.
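The core idea, metadata-only edits that never touch (or copy) the bulk pixel data, can be modeled in a few lines. This is an illustrative toy, not the actual MSD on-disk format; the class and attribute names are invented:

```python
class Study:
    """Toy model of the MSD idea: metadata lives apart from the bulk pixel data,
    so metadata-only operations never read or duplicate the pixels."""
    def __init__(self, metadata, bulk_data):
        self.metadata = metadata      # small, frequently edited
        self.bulk_data = bulk_data    # large, immutable, shareable

    def deidentified(self):
        """New study with scrubbed metadata, sharing the same bulk data object."""
        scrubbed = {k: v for k, v in self.metadata.items()
                    if k not in {"PatientName", "PatientID", "PatientBirthDate"}}
        return Study(scrubbed, self.bulk_data)  # no pixel copy

pixels = bytes(512 * 512)  # stand-in for pixel data
study = Study({"PatientName": "DOE^JANE", "PatientID": "123", "Modality": "CT"}, pixels)
anon = study.deidentified()
print("PatientName" in anon.metadata, anon.bulk_data is study.bulk_data)  # False True
```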

  13. The Use of Metadata Visualisation Assist Information Retrieval

    Science.gov (United States)

    2007-10-01

    ...that person. Music files also have metadata tags, in a format called ID3. This usually contains information such as the artist, the song title, the album title, the track length and the genre of music. Again, any of these pieces of information can be used to quickly search and locate specific tracks, to provide more information about the entire music collection, or to find similar or diverse tracks within the collection. Metadata is...
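ID3v1, the simplest of the ID3 tag formats, stores this metadata as a fixed 128-byte block at the end of the file, which makes it easy to read without any library. A minimal parser sketch (the synthetic tag bytes are fabricated for the demonstration; modern files more often carry the richer ID3v2 format at the start of the file):

```python
def parse_id3v1(data: bytes):
    """Parse an ID3v1 tag (the last 128 bytes of an MP3 file), if present."""
    tag = data[-128:]
    if len(tag) < 128 or not tag.startswith(b"TAG"):
        return None
    field = lambda a, b: tag[a:b].rstrip(b"\x00 ").decode("latin-1")
    return {"title": field(3, 33), "artist": field(33, 63),
            "album": field(63, 93), "year": field(93, 97)}

# Synthetic file: arbitrary audio bytes followed by a hand-built ID3v1 tag
tag = (b"TAG" + b"My Song".ljust(30, b"\x00") + b"Some Artist".ljust(30, b"\x00")
       + b"An Album".ljust(30, b"\x00") + b"1999" + b"\x00" * 31)
print(parse_id3v1(b"\xff\xfb audio..." + tag))
```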

  14. Revision of IRIS/IDA Seismic Station Metadata

    Science.gov (United States)

    Xu, W.; Davis, P.; Auerbach, D.; Klimczak, E.

    2017-12-01

    Trustworthy data quality assurance has always been one of the goals of seismic network operators and data management centers. This task is considerably complex and evolving due to the huge quantities as well as the rapidly changing characteristics and complexities of seismic data. Published metadata usually reflect instrument response characteristics and their accuracies, which include the zero-frequency sensitivity of both the seismometer and the data logger as well as other, frequency-dependent elements. In this work, we focus mainly on studying the variation with time of the seismometer sensitivity of IRIS/IDA seismic recording systems, with the goal of improving the metadata accuracy for the history of the network. There are several ways to measure the accuracy of seismometer sensitivity for seismic stations in service. An effective practice developed recently is to collocate a reference seismometer nearby to verify the in-situ sensors' calibration. For those stations with a secondary broadband seismometer, IRIS' MUSTANG metric computation system introduced a transfer function metric that reflects the two sensors' gain ratio in the microseism frequency band. In addition, a simulation approach based on M2 tidal measurements has been proposed and proven effective. In this work, we compare and analyze the results from the three methods and conclude that the collocated-sensor method is the most stable and reliable, with the smallest uncertainties throughout. For epochs without either a collocated sensor or a secondary seismometer, we rely on the results of the tide method. For the data since 1992 at IDA stations, we computed over 600 revised seismometer sensitivities covering all the IRIS/IDA network calibration epochs. Further revision procedures should help to guarantee that the metadata of these stations accurately reflect the data.
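The transfer-function idea, comparing two colocated sensors' amplitudes in the microseism band to track relative gain, can be sketched with a single-frequency discrete Fourier sum. The 0.2 Hz test frequency, the sample rate, and the 7% gain drift below are all assumed for illustration:

```python
from math import sin, cos, pi, sqrt

def band_amplitude(x, dt, f):
    """Discrete-Fourier amplitude of trace x (sample interval dt) at frequency f."""
    n = len(x)
    re = sum(v * cos(2 * pi * f * i * dt) for i, v in enumerate(x))
    im = sum(v * sin(2 * pi * f * i * dt) for i, v in enumerate(x))
    return 2 * sqrt(re * re + im * im) / n

def gain_ratio(primary, reference, dt, f=0.2):
    """Relative gain of two colocated sensors, estimated from their amplitudes
    at a single microseism-band frequency (0.2 Hz here, an assumption)."""
    return band_amplitude(primary, dt, f) / band_amplitude(reference, dt, f)

dt = 0.1  # 10 samples per second
ground = [sin(2 * pi * 0.2 * i * dt) for i in range(6000)]   # 600 s of signal
primary = [1.07 * v for v in ground]                         # gain drifted by +7%
print(round(gain_ratio(primary, ground, dt), 3))  # ~1.07
```

In practice the ratio would be averaged over the whole microseism band and over many windows, but the principle, amplitude ratio of simultaneous recordings of the same ground motion, is the same.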

  15. Settlement to Improve Water Quality in Delaware River, Philadelphia-Area Creeks

    Science.gov (United States)

    EPA and the U.S. Department of Justice have reached agreement with a major water utility in the greater Philadelphia area to significantly reduce sewage discharges to the Delaware River and local creeks.

  16. Digital Learning Compass: Distance Education State Almanac 2017. Delaware

    Science.gov (United States)

    Seaman, Julia E.; Seaman, Jeff

    2017-01-01

    This brief report uses data collected under the U.S. Department of Education's National Center for Educational Statistics (NCES) Integrated Postsecondary Education Data System (IPEDS) Fall Enrollment survey to highlight distance education data in the state of Delaware. The sample for this analysis is comprised of all active, degree-granting…

  17. Linked data for libraries, archives and museums how to clean, link and publish your metadata

    CERN Document Server

    Hooland, Seth van

    2014-01-01

    This highly practical handbook teaches you how to unlock the value of your existing metadata through cleaning, reconciliation, enrichment and linking, and how to streamline the process of new metadata creation. Libraries, archives and museums are facing up to the challenge of providing access to fast-growing collections whilst managing cuts to budgets. Key to this is the creation, linking and publishing of good-quality metadata as Linked Data that will allow their collections to be discovered, accessed and disseminated in a sustainable manner. Metadata experts Seth van Hooland and Ruben Verborgh introduce the key concepts of metadata standards and Linked Data and how they can be practically applied to existing metadata, giving readers the tools and understanding to achieve maximum results with limited re...

  18. An Automatic Indicator of the Reusability of Learning Objects Based on Metadata That Satisfies Completeness Criteria

    Science.gov (United States)

    Sanz-Rodríguez, Javier; Margaritopoulos, Merkourios; Margaritopoulos, Thomas; Dodero, Juan Manuel; Sánchez-Alonso, Salvador; Manitsaris, Athanasios

    The search for learning objects in open repositories is currently a tedious task, owing to the vast number of resources available and the fact that most of them do not have associated ratings to help users make a choice. In order to tackle this problem, we propose a reusability indicator which can be calculated automatically from the metadata that describes the objects, allowing us to select those materials most likely to be reused. For this reusability indicator to be applied, metadata records must reach a certain level of completeness, guaranteeing that the material is adequately described. The reusability indicator is tested in two studies on the Merlot and eLera repositories, and the results obtained offer evidence of its effectiveness.
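The two-stage logic, check metadata completeness first and only then compute the indicator, can be sketched as follows. The required fields, threshold, and scoring heuristic here are invented placeholders; the paper's actual indicator is built from specific learning-object metadata elements:

```python
REQUIRED = ("title", "description", "format", "rights", "educational_level")

def completeness(record):
    """Fraction of required metadata elements that are present and non-empty."""
    return sum(1 for f in REQUIRED if record.get(f)) / len(REQUIRED)

def reusability(record, min_completeness=0.8):
    """Return None when the record is too sparsely described to be judged;
    otherwise score a couple of (hypothetical) reuse-friendly properties."""
    if completeness(record) < min_completeness:
        return None
    score = 0.5 if record.get("format") in ("text/html", "application/pdf") else 0.2
    score += 0.5 if "open" in record.get("rights", "").lower() else 0.2
    return score

sparse = {"title": "Algebra drill"}
rich = {"title": "Algebra drill", "description": "Practice set", "format": "text/html",
        "rights": "Open licence", "educational_level": "secondary"}
print(reusability(sparse), reusability(rich))  # None 1.0
```

The completeness gate is what makes the indicator trustworthy: a high score from a half-empty record would say more about missing metadata than about the object itself.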

  19. Standardizing metadata and taxonomic identification in metabarcoding studies.

    Science.gov (United States)

    Tedersoo, Leho; Ramirez, Kelly S; Nilsson, R Henrik; Kaljuvee, Aivi; Kõljalg, Urmas; Abarenkov, Kessy

    2015-01-01

    High-throughput sequencing-based metabarcoding studies produce vast amounts of ecological data, but a lack of consensus on standardization of metadata and how to refer to the species recovered severely hampers reanalysis and comparisons among studies. Here we propose an automated workflow covering data submission, compression, storage and public access to allow easy data retrieval and inter-study communication. Such standardized and readily accessible datasets facilitate data management, taxonomic comparisons and compilation of global metastudies.

  20. Incorporating Internet-based Interventions into Couple Therapy: Available Resources and Recommended Uses.

    Science.gov (United States)

    Cicila, Larisa N; Georgia, Emily J; Doss, Brian D

    2014-12-01

    Although there are a number of highly efficacious in-person treatments designed to ameliorate relationship distress, only a small proportion of distressed couples seek out in-person treatment. Recently developed internet-based interventions based on these in-person treatments are a promising way to circumvent common barriers to in-person treatment and give more distressed couples access to these efficacious interventions. The overarching aims of this review are to provide couple and family therapists with a broad overview of the available internet-based interventions and provide suggestions about how these interventions might be utilized before, during, or after in-person treatment. First, we review internet-based interventions targeting individual psychopathology (e.g. anxiety and depression). These interventions would be particularly useful as an adjunctive resource for in-person couple or family therapy when referrals for a concurrent in-person individual therapist are not feasible (because of time, financial, or geographic constraints). The majority of the review centers on internet-based interventions for distressed couples and covers four distinct types of resources: relationship advice websites, assessment/feedback interventions, enrichment interventions for satisfied couples, and interventions targeting at-risk or distressed couples. We close with a case study of one couple's journey through a newly developed intervention targeting at-risk couples, OurRelationship.com, and provide two appendices with information on currently available internet-based interventions.

  1. Overview of long-term field experiments in Germany - metadata visualization

    Science.gov (United States)

    Muqit Zoarder, Md Abdul; Heinrich, Uwe; Svoboda, Nikolai; Grosse, Meike; Hierold, Wilfried

    2017-04-01

    BonaRes ("soil as a sustainable resource for the bioeconomy") is collecting data and metadata from agricultural long-term field experiments (LTFEs) in Germany. It is funded by the German Federal Ministry of Education and Research (BMBF) under the umbrella of the National Research Strategy BioEconomy 2030. BonaRes consists of ten interdisciplinary research project consortia and the 'BonaRes - Centre for Soil Research'. The BonaRes Data Centre is responsible for collecting all LTFE data and the associated metadata in an enterprise database with a high level of security, and for visualizing the data and metadata through a data portal. Within the BonaRes project, we are compiling an overview of long-term field experiments in Germany based on a literature review, the results of an online survey, and direct contacts with LTFE operators. Information about the research topic, contact person, website, experiment setup and analyzed parameters is collected. Based on the collected LTFE data, an enterprise geodatabase was developed and a GIS-based web information system on LTFEs in Germany was set up. Various aspects of the LTFEs, such as experiment type, land-use type, agricultural category and duration of experiment, are presented in thematic maps. This information system is dynamically linked to the database, which means changes in the data directly affect the presentation. An easy data search by LTFE name, location or operator and dynamic layer selection ensure a user-friendly web application. Dispersing and visualizing overlapping LTFE points on the overview map is also challenging; this is automated at every zoom level as a consistent part of the application. The application provides both the spatial location and the meta-information of LTFEs, backed by an enterprise geodatabase, a GIS server hosting the map services, and a JavaScript API for web application development.

  2. Shared Geospatial Metadata Repository for Ontario University Libraries: Collaborative Approaches

    Science.gov (United States)

    Forward, Erin; Leahey, Amber; Trimble, Leanne

    2015-01-01

    Successfully providing access to special collections of digital geospatial data in academic libraries relies upon complete and accurate metadata. Creating and maintaining metadata using specialized standards is a formidable challenge for libraries. The Ontario Council of University Libraries' Scholars GeoPortal project, which created a shared…

  3. Measuring Macrobenthos Biodiversity at Oyster Aquaculture Sites in the Delaware Inland Bays

    Science.gov (United States)

    Fuoco, M. J.; Ozbay, G.

    2016-12-01

    The Delaware Inland Bays consist of three shallow coastal bays located in the southern portion of Delaware. Anthropogenic activities have led to the degradation of water quality, because the bays are surrounded by highly developed areas and have low flushing rates. This results in a loss of biodiversity and abundance of organisms. Ongoing degradation of the bays has led to a dramatic decline in local oyster populations since the late 1800s. Oysters are a keystone species, which provide habitats for organisms and help to improve water quality. This study aims to determine whether the introduction of oyster aquaculture improves the local biodiversity and abundance of macrobenthos. The study was conducted in Rehoboth Bay, Indian River Bay and Little Assawoman Bay. Aquaculture gear was placed at one location in each of the bays and 24 sediment core samples were taken once a month. From these core samples, all worms were fixed and stained in a 10% formalin Rose Bengal solution and preserved in 70% ethanol for later identification. Stable carbon and nitrogen isotope analysis of oyster tissue will also be performed to assess the health of the bays. The goals of this research are to better understand the role of oyster aquaculture in restoring the viability and health of the Delaware Inland Bays.

  4. Metadata Harvesting in Regional Digital Libraries in the PIONIER Network

    Science.gov (United States)

    Mazurek, Cezary; Stroinski, Maciej; Werla, Marcin; Weglarz, Jan

    2006-01-01

    Purpose: The paper aims to present the concept of the functionality of metadata harvesting for regional digital libraries, based on the OAI-PMH protocol. This functionality is a part of regional digital libraries platform created in Poland. The platform was required to reach one of main objectives of the Polish PIONIER Programme--to enrich the…

  5. An outline of compilation and processing of metadata in agricultural database management system WebAgris

    Directory of Open Access Journals (Sweden)

    Tomaž Bartol

    2008-01-01

    Full Text Available The paper tackles the international information system for agriculture, Agris, and the local processing of metadata with the database management software WebAgris. Operations are coordinated by the central repository at the FAO in Rome. Based on international standards and a unified methodology, national and regional centers collect and process local publications, and then send the records to the central unit, which makes the data globally accessible on the web. The earlier DOS-based application was built on the Agrin CDS/ISIS package. The current package, WebAgris, runs on web servers. Database construction tools and instructions are accessible on the FAO web pages. Data are entered through unified input masks. International consistency is achieved through authority control of certain elements, such as author or corporate affiliation. Central authority control is provided for subject headings, such as descriptors and subject categories. Subject indexing is based on the controlled multilingual thesaurus Agrovoc, also freely available on the Internet. This glossary has become an important tool in the area of international agricultural ontology. The data are exported to the central unit in XML format. The global database is currently accessible to everyone. This international cooperative information network combines elements of a document repository, electronic publishing, open archiving and full-text open access. Links with Google Scholar provide a good opportunity for the international promotion of publishing.

  6. Playing the Metadata Game: Technologies and Strategies Used by Climate Diagnostics Center for Cataloging and Distributing Climate Data.

    Science.gov (United States)

    Schweitzer, R. H.

    2001-05-01

    The Climate Diagnostics Center maintains a collection of gridded climate data primarily for use by local researchers. Because this data is available on fast digital storage, and because it has been converted to netCDF using a standard metadata convention (called COARDS), we recognize that this data collection is also useful to the community at large. At CDC we try to use technology and metadata standards to reduce the costs associated with making these data available to the public. The World Wide Web has been an excellent technology platform for meeting that goal. Specifically, we have developed web-based user interfaces that allow users to search, plot and download subsets from the data collection. We have also been exploring the use of the Pacific Marine Environmental Laboratory's Live Access Server (LAS) as an engine for this task. This would result in further savings by allowing us to concentrate on customizing the LAS where needed, rather than developing and maintaining our own system. One such customization currently under development is the use of Java Servlets and JavaServer Pages in conjunction with a metadata database to produce a hierarchical user interface to LAS. In addition to these web-based user interfaces, all of our data are available via the Distributed Oceanographic Data System (DODS). This allows other sites using LAS, and individuals using DODS-enabled clients, to use our data as if it were a local file. All of these technology systems are driven by metadata. When we began to create netCDF files, we collaborated with several other agencies to develop a netCDF convention (COARDS) for metadata. At CDC we have extended that convention to incorporate additional metadata elements to make the netCDF files as self-describing as possible. Part of the local metadata is a set of controlled names for the variable, level in the atmosphere and ocean, statistic, and data set for each netCDF file. To allow searching and easy reorganization of these metadata, we loaded...
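Controlled names are what make simple, reliable faceted search over such a catalogue possible: every file is described by the same small vocabulary, so queries reduce to exact matches. A toy sketch (the catalogue entries and field values are invented, loosely echoing CDC-style naming):

```python
# Hypothetical catalogue of controlled-name metadata, one entry per netCDF file
CATALOG = [
    {"variable": "air", "level": "surface", "statistic": "mean", "dataset": "reanalysis"},
    {"variable": "air", "level": "500mb",   "statistic": "mean", "dataset": "reanalysis"},
    {"variable": "slp", "level": "surface", "statistic": "mean", "dataset": "reanalysis"},
]

def search(**criteria):
    """Return catalogue entries matching every given controlled-name field."""
    return [e for e in CATALOG if all(e.get(k) == v for k, v in criteria.items())]

print(len(search(variable="air")), len(search(level="surface", variable="slp")))  # 2 1
```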

  7. Metadata In, Library Out. A Simple, Robust Digital Library System

    Directory of Open Access Journals (Sweden)

    Tonio Loewald

    2010-06-01

    Full Text Available Tired of being held hostage to expensive systems that did not meet our needs, the University of Alabama Libraries developed an XML schema-agnostic, lightweight digital library delivery system based on the principles of "Keep It Simple, Stupid!" Metadata and derivatives reside in openly accessible web directories, which support the development of web agents and new usability software, as well as modification and complete retrieval at any time. The file name structure is echoed in the file system structure, enabling the delivery software to make inferences about relationships, sequencing, and complex object structure without having to encapsulate files in complex metadata schemas. The web delivery system, Acumen, is built with PHP, JSON, JavaScript and HTML5, using MySQL to support fielded searching. Recognizing that spreadsheets are more user-friendly than XML, an accompanying widget, Archivists Utility, transforms spreadsheets into MODS based on rules selected by the user. Acumen, Archivists Utility, and all supporting software scripts will be made available as open source.
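The idea of inferring relationships and sequencing from the file name structure alone can be sketched in a few lines; the 'collection_item_page.ext' convention below is a hypothetical stand-in for Acumen's actual naming rules:

```python
from collections import defaultdict

def group_complex_objects(filenames):
    """Infer complex-object membership and page order from a hypothetical
    'collection_item_page.ext' naming convention (no metadata wrapper needed)."""
    objects = defaultdict(list)
    for name in filenames:
        stem = name.rsplit(".", 1)[0]
        collection, item, page = stem.split("_")
        objects[(collection, item)].append((int(page), name))
    # sort each object's files by page number
    return {k: [n for _, n in sorted(v)] for k, v in objects.items()}

files = ["u23_0042_002.jpg", "u23_0042_001.jpg", "u23_0043_001.jpg"]
print(group_complex_objects(files)[("u23", "0042")])
```

Because the structure is carried by the names themselves, any web agent that can list a directory can reconstruct the objects without parsing a metadata schema.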

  8. Behaviour of uranium during mixing in the Delaware and Chesapeake estuaries

    International Nuclear Information System (INIS)

    Sarin, M.M.; Church, T.M.

    1994-01-01

    Unequivocal evidence is presented for the removal of uranium in two major estuarine systems of the north-eastern United States: the Delaware and Chesapeake Bays. In both estuaries, during all seasons but mostly in summer, dissolved uranium shows distinctly non-conservative behaviour at salinities ≤ 5. At salinities above 5, there are no deviations from the ideal dilution line. In these two estuaries, as much as 22% of dissolved uranium is removed at low salinities, around salinity 2. This pronounced removal of uranium at low salinities has been investigated in terms of other chemical properties measured in the Delaware Estuary. In the zone of uranium removal, dissolved oxygen is significantly depleted and pH goes through a minimum, down to 6.8. In the same low-salinity regime, total alkalinity shows a negative deviation from the linear dilution line and phosphate is removed. Humic acids, dissolved iron and manganese are also rapidly removed during estuarine mixing in this low-salinity region. Thus, it appears that the removal of uranium is most likely related to those properties of the alkalinity and acid-base system of the upper estuary that may destabilize the uranium-carbonate complex. Under these conditions, uranium may associate strongly with phosphates or humic substances, be removed onto particulate phases, and be deposited within upper estuarine sediments. (author)
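Non-conservative behaviour is diagnosed against the ideal dilution line: the conservative concentration at a given salinity is a linear mix of the river and seawater end-members, and the shortfall of the observed value gives the percent removed. A sketch with hypothetical end-member concentrations (order-of-magnitude typical for dissolved uranium in nM, not the paper's measurements):

```python
def conservative_concentration(salinity, c_river, c_sea, s_sea=35.0):
    """Ideal-dilution (conservative mixing) concentration at a given salinity."""
    return c_river + (c_sea - c_river) * salinity / s_sea

def percent_removed(observed, salinity, c_river, c_sea):
    """Shortfall of the observed concentration below the ideal dilution line."""
    expected = conservative_concentration(salinity, c_river, c_sea)
    return 100.0 * (expected - observed) / expected

# Hypothetical sample at salinity 2: river ~0.2 nM, seawater ~13.6 nM end-members
print(round(percent_removed(0.75, 2.0, 0.2, 13.6), 1))  # ~22% removal
```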

  9. Use of internet library based services by the students of Imo State ...

    African Journals Online (AJOL)

    Findings show that students utilize internet-based library services in their academic work, for their intellectual development, and for communicating with their lecturers and other contacts about their day-to-day information needs. It is recommended that university libraries should provide and offer internet-based library ...

  10. Internet-based Interactive Construction Management Learning System.

    Science.gov (United States)

    Sawhney, Anil; Mund, Andre; Koczenasz, Jeremy

    2001-01-01

    Describes a way to incorporate practical content into the construction engineering and management curricula: the Internet-based Interactive Construction Management Learning System, which uses interactive and adaptive learning environments to train students in the areas of construction methods, equipment and processes using multimedia, databases,…

  11. Implementation of a framework for multi-species, multi-objective adaptive management in Delaware Bay

    Science.gov (United States)

    McGowan, Conor P.; Smith, David R.; Nichols, James D.; Lyons, James E.; Sweka, John A.; Kalasz, Kevin; Niles, Lawrence J.; Wong, Richard; Brust, Jeffrey; Davis, Michelle C.; Spear, Braddock

    2015-01-01

    Decision analytic approaches have been widely recommended as well suited to solving disputed and ecologically complex natural resource management problems with multiple objectives and high uncertainty. However, the difference between theory and practice is substantial, as there are very few actual resource management programs that represent formal applications of decision analysis. We applied the process of structured decision making to Atlantic horseshoe crab harvest decisions in the Delaware Bay region to develop a multispecies adaptive management (AM) plan, which is currently being implemented. Horseshoe crab harvest has been a controversial management issue since the late 1990s. A largely unregulated horseshoe crab harvest caused a decline in crab spawning abundance. That decline coincided with a major decline in migratory shorebird populations that consume horseshoe crab eggs on the sandy beaches of Delaware Bay during spring migration. Our approach incorporated multiple stakeholders, including fishery and shorebird conservation advocates, to account for diverse management objectives and varied opinions on ecosystem function. Through consensus building, we devised an objective statement and quantitative objective function to evaluate alternative crab harvest policies. We developed a set of competing ecological models accounting for the leading hypotheses on the interaction between shorebirds and horseshoe crabs. The models were initially weighted based on stakeholder confidence in these hypotheses, but weights will be adjusted based on monitoring and Bayesian model weight updating. These models were used together to predict the effects of management actions on the crab and shorebird populations. Finally, we used a dynamic optimization routine to identify the state dependent optimal harvest policy for horseshoe crabs, given the possible actions, the stated objectives and our competing hypotheses about system function. The AM plan was reviewed, accepted and
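The Bayesian model-weight updating mentioned above follows directly from Bayes' rule: each model's weight is multiplied by the likelihood of the new monitoring data under that model, then the weights are renormalized. A sketch with made-up priors and likelihoods:

```python
def update_weights(prior_weights, likelihoods):
    """Bayesian model-weight update: posterior weight is proportional to the
    prior weight times the likelihood of the monitoring data under each model."""
    posterior = [w * l for w, l in zip(prior_weights, likelihoods)]
    total = sum(posterior)
    return [p / total for p in posterior]

# Three competing ecological hypotheses start with stakeholder-elicited weights;
# the second model predicted this year's monitoring data best (values assumed)
weights = update_weights([0.4, 0.4, 0.2], [0.1, 0.6, 0.3])
print([round(w, 3) for w in weights])  # [0.118, 0.706, 0.176]
```

Iterating this each monitoring cycle is what lets the adaptive management plan shift credibility, and hence the optimal harvest policy, toward whichever hypothesis keeps predicting the data.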

  12. Estimated use of water in the Delaware River Basin in Delaware, New Jersey, New York, and Pennsylvania, 2010

    Science.gov (United States)

    Hutson, Susan S.; Linsey, Kristin S.; Ludlow, Russell A.; Reyes, Betzaida; Shourds, Jennifer L.

    2016-11-07

    The Delaware River Basin (DRB) was selected as a Focus Area Study in 2011 by the U.S. Geological Survey (USGS) as part of the USGS National Water Census. The National Water Census is a USGS research program that focuses on national water availability and use and then develops new water accounting tools and assesses water availability at both the regional and national scales. One of the water management needs that the DRB study addressed, and that was identified by stakeholder groups from the DRB, was to improve the integration of state water use and water-supply data and to provide the compiled water use information to basin users. This water use information was also used in the hydrologic modeling and ecological components of the study. Instream and offstream water use was calculated for 2010 for the DRB based on information received from Delaware, New Jersey, New York, and Pennsylvania. Water withdrawal, interbasin transfers, return flow, and hydroelectric power generation release data were compiled for 11 categories by hydrologic subregion, basin, subbasin, and subwatershed. Data availability varied by state. Site-specific data were used whenever possible to calculate public supply, irrigation (golf courses, nurseries, sod farms, and crops), aquaculture, self-supplied industrial, commercial, mining, thermoelectric, and hydroelectric power withdrawals. Where site-specific data were not available, primarily for crop irrigation, livestock, and domestic use, various techniques were used to estimate water withdrawals. Total water withdrawals in the Delaware River Basin were calculated to be about 7,130 million gallons per day (Mgal/d) in 2010. Calculations of withdrawals by source indicate that freshwater withdrawals were about 4,130 Mgal/d (58 percent of the total) and the remaining 3,000 Mgal/d (42 percent) were from saline water. Total surface-water withdrawals were calculated to be 6,590 Mgal/d, or 92 percent of the total; about 54 percent (3,590 Mgal/d) of surface...
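A quick arithmetic check confirms the percentages reported in the abstract (the 54 percent figure is read as a share of total surface-water withdrawals, which fits the truncated sentence):

```python
total = 4130 + 3000               # freshwater + saline withdrawals, Mgal/d
print(total)                      # 7130, the reported basin total
print(round(100 * 4130 / total))  # 58 -> percent freshwater
print(round(100 * 6590 / total))  # 92 -> percent surface water
print(round(100 * 3590 / 6590))   # 54 -> the "54 percent (3,590 Mgal/d)" share of surface water
```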

  13. A Comparative Study on Metadata Scheme of Chinese and American Open Data Platforms

    Directory of Open Access Journals (Sweden)

    Yang Sinan

    2018-01-01

    Full Text Available [Purpose/significance] Open government data is conducive to the rational development and utilization of data resources. It can encourage social innovation and promote economic development. In order to ensure the effective utilization and social increment of open government data, high-quality metadata schemes are necessary. [Method/process] Firstly, this paper analyzed related research on open government data at home and abroad. Then, it investigated the open metadata schemes of several major Chinese local governments' data platforms and compared them with the metadata standard of American open government data. [Result/conclusion] This paper reveals several shortcomings of Chinese local government open data that reduce its usefulness: different governments use different metadata schemes, dataset descriptions are too simple for further utilization, and data are usually presented in HTML web-page format with low machine-readability. Therefore, the government should develop a standardized metadata scheme, drawing on mature and effective international metadata standards, to meet the social need for high-quality, high-value data.

  14. Internet Technologies for Space-based Communications: State of the Art and Challenges

    Science.gov (United States)

    Bhasin, K.; DePaula, R.; Edwards, C.

    2000-01-01

    The Internet is rapidly changing the ways we communicate information around the globe. The desire to provide Internet-based services to anyone, anywhere, anytime has brought satellite communications to the forefront as an integral part of the Internet. In spite of the distances involved, satellite links are proving capable of providing Internet services based on the Internet protocol (TCP/IP) stack. This development has led to a question, particularly at NASA: can satellites and other space platforms become Internet nodes in space? This would allow information to be transferred directly from space to users on Earth, and even allow the spacecraft and its instruments to be controlled remotely. NASA also wants to extend the near-Earth space Internet to deep space applications, where scientists and the public here on Earth may view space exploration in real time via the Internet. NASA's future solar system exploration will involve intensive in situ investigations of planets, moons, asteroids, and comets. While past missions typically involved a single fly-by or orbiting science spacecraft, future missions will begin to use fleets of small, highly intelligent robotic vehicles to carry out collaborative investigations. The resulting multi-spacecraft topologies will effectively create a wide area network spanning the solar system. However, this will require significant development in Internet technologies for space use. This paper provides the status of the Internet for near-Earth applications and the potential extension of the Internet for use in deep space planetary exploration. The paper discusses the overall challenges of implementing the space Internet and how it will integrate with the complex terrestrial systems that form the Internet of today in a hybrid set of networks. We envision extending to the deep space environment such Internet concepts as a well-designed layered architecture. 
This effort will require an ability to

  15. Latest developments for the IAGOS database: Interoperability and metadata

    Science.gov (United States)

    Boulanger, Damien; Gautron, Benoit; Thouret, Valérie; Schultz, Martin; van Velthoven, Peter; Broetz, Bjoern; Rauthe-Schöch, Armin; Brissebrat, Guillaume

    2014-05-01

    In-service Aircraft for a Global Observing System (IAGOS, http://www.iagos.org) aims at the provision of long-term, frequent, regular, accurate, and spatially resolved in situ observations of the atmospheric composition. IAGOS observation systems are deployed on a fleet of commercial aircraft. The IAGOS database is an essential part of the global atmospheric monitoring network. Data access is handled by open access policy based on the submission of research requests which are reviewed by the PIs. Users can access the data through the following web sites: http://www.iagos.fr or http://www.pole-ether.fr as the IAGOS database is part of the French atmospheric chemistry data centre ETHER (CNES and CNRS). The database is in continuous development and improvement. In the framework of the IGAS project (IAGOS for GMES/COPERNICUS Atmospheric Service), major achievements will be reached, such as metadata and format standardisation in order to interoperate with international portals and other databases, QA/QC procedures and traceability, CARIBIC (Civil Aircraft for the Regular Investigation of the Atmosphere Based on an Instrument Container) data integration within the central database, and the real-time data transmission. IGAS work package 2 aims at providing the IAGOS data to users in a standardized format including the necessary metadata and information on data processing, data quality and uncertainties. We are currently redefining and standardizing the IAGOS metadata for interoperable use within GMES/Copernicus. The metadata are compliant with the ISO 19115, INSPIRE and NetCDF-CF conventions. IAGOS data will be provided to users in NetCDF or NASA Ames format. We also are implementing interoperability between all the involved IAGOS data services, including the central IAGOS database, the former MOZAIC and CARIBIC databases, Aircraft Research DLR database and the Jülich WCS web application JOIN (Jülich OWS Interface) which combines model outputs with in situ data for
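    The metadata standardisation described above can be pictured with a small sketch. The global attribute names below are the NetCDF-CF convention's recommended set ("Conventions", "title", "institution", "source", "history"); the attribute values, and the idea of representing a file's metadata as a plain dictionary, are illustrative assumptions rather than the actual IAGOS encoding.

```python
# Recommended CF global attributes: "Conventions" identifies the convention
# version, the others describe the dataset. Representing file metadata as a
# plain dict is a simplification -- a real file would be written with a
# NetCDF library.
CF_RECOMMENDED = ("Conventions", "title", "institution", "source", "history")

def cf_attribute_report(attrs: dict) -> dict:
    """Map each recommended CF global attribute to whether it is present."""
    return {name: name in attrs for name in CF_RECOMMENDED}

flight_attrs = {
    "Conventions": "CF-1.6",
    "title": "Example in situ time series from a single flight",
    "institution": "(illustrative)",
    "source": "in situ airborne observation",
}

report = cf_attribute_report(flight_attrs)
missing = [name for name, present in report.items() if not present]
print(missing)  # -> ['history']
```

    A data centre can run this kind of check at ingestion time, rejecting or flagging files whose metadata would not interoperate with other portals.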

  16. Strategies for building a customer base on the internet: Symbiotic marketing

    OpenAIRE

    Lockett, A; Blackman, ID

    2001-01-01

    The advent of the Internet is leading a re-evaluation of existing business practice as the methods employed in a non-virtual world may not necessarily be as effective in a virtual environment. The present paper examines the different strategic options facing organizations as they attempt to build a customer base on the Internet. The traditional site-centric approach of directing traffic to a central Internet site suffers from the problems of the increasing costs and decreasing effectiveness o...

  17. Development of the Internet addiction scale based on the Internet Gaming Disorder criteria suggested in DSM-5.

    Science.gov (United States)

    Cho, Hyun; Kwon, Min; Choi, Ji-Hye; Lee, Sang-Kyu; Choi, Jung Seok; Choi, Sam-Wook; Kim, Dai-Jin

    2014-09-01

    This study was conducted to develop and validate a standardized self-diagnostic Internet addiction (IA) scale based on the diagnostic criteria for Internet Gaming Disorder (IGD) in the Diagnostic and Statistical Manual of Mental Disorders, 5th edition (DSM-5). Items based on the IGD diagnostic criteria were developed using items from previous Internet addiction scales. Data were collected from a community sample. The data were divided into two sets, and confirmatory factor analysis (CFA) was performed repeatedly. The model was modified after discussion with professionals based on the first CFA results, after which the second CFA was performed. Internal consistency reliability was generally good. Items that showed significantly low item-total correlations within each factor were excluded. After the first CFA, some factors and items were excluded; seven factors and 26 items were retained for the final model. The second CFA results showed good general factor loading, squared multiple correlation (SMC), and model fit. The model fit of the final model was good, but some factors were very highly correlated, so it is recommended that they be refined through further studies. Copyright © 2014. Published by Elsevier Ltd.

  18. Leucine incorporation by aerobic anoxygenic phototrophic bacteria in the Delaware estuary

    Science.gov (United States)

    Stegman, Monica R; Cottrell, Matthew T; Kirchman, David L

    2014-01-01

    Aerobic anoxygenic phototrophic (AAP) bacteria are well known to be abundant in estuaries, coastal regions and in the open ocean, but little is known about their activity in any aquatic ecosystem. To explore the activity of AAP bacteria in the Delaware estuary and coastal waters, single-cell 3H-leucine incorporation by these bacteria was examined with a new approach that combines infrared epifluorescence microscopy and microautoradiography. The approach was used on samples from the Delaware coast from August through December and on transects through the Delaware estuary in August and November 2011. The percent of active AAP bacteria was up to twofold higher than the percentage of active cells in the rest of the bacterial community in the estuary. Likewise, the silver grain area around active AAP bacteria in microautoradiography preparations was larger than the area around cells in the rest of the bacterial community, indicating higher rates of leucine consumption by AAP bacteria. The cell size of AAP bacteria was 50% bigger than the size of other bacteria, about the same difference on average as measured for activity. The abundance of AAP bacteria was negatively correlated and their activity positively correlated with light availability in the water column, although light did not affect 3H-leucine incorporation in light–dark experiments. Our results suggest that AAP bacteria are bigger and more active than other bacteria, and likely contribute more to organic carbon fluxes than indicated by their abundance. PMID:24824666

  20. Reach and uptake of Internet- and phone-based smoking cessation interventions

    DEFF Research Database (Denmark)

    Skov-Ettrup, L S; Dalum, P; Ekholm, O

    2014-01-01

    To study whether demographic and smoking-related characteristics are associated with participation (reach) in a smoking cessation trial and subsequent use (uptake) of two specific smoking cessation interventions (an Internet-based program and proactive telephone counseling).

  1. FSA 2003-2004 Digital Orthophoto Metadata

    Data.gov (United States)

    Minnesota Department of Natural Resources — Metadata for the 2003-2004 FSA Color Orthophotos Layer. Each orthophoto is represented by a Quarter 24k Quad tile polygon. The polygon attributes contain the...

  2. Sesotho Online : Establishing an internet-based language ...

    African Journals Online (AJOL)

    It is against this background that the status, presentation and representation of African languages are being investigated. This article reports on the contribution of the website Sesotho Online to the establishment of an internet-based language knowledge community for the Sesotho language. In its literature review the article ...

  3. eLearning: a review of Internet-based continuing medical education.

    Science.gov (United States)

    Wutoh, Rita; Boren, Suzanne Austin; Balas, E Andrew

    2004-01-01

    The objective was to review the effect of Internet-based continuing medical education (CME) interventions on physician performance and health care outcomes. Data sources included searches of MEDLINE (1966 to January 2004), CINAHL (1982 to December 2003), ACP Journal Club (1991 to July/August 2003), and the Cochrane Database of Systematic Reviews (third quarter, 2003). Studies were included in the analyses if they were randomized controlled trials of Internet-based education in which participants were practicing health care professionals or health professionals in training. CME interventions were categorized according to the nature of the intervention, sample size, and other information about educational content and format. Sixteen studies met the eligibility criteria. Six studies generated positive changes in participant knowledge over traditional formats; only three studies showed a positive change in practices. The remainder of the studies showed no difference in knowledge levels between Internet-based interventions and traditional formats for CME. The results demonstrate that Internet-based CME programs are just as effective in imparting knowledge as traditional formats of CME. Little is known as to whether these positive changes in knowledge are translated into changes in practice. Subjective reports of change in physician behavior should be confirmed through chart review or other objective measures. Additional studies need to be performed to assess how long these new learned behaviors could be sustained. eLearning will continue to evolve as new innovations and more interactive modes are incorporated into learning.

  4. National Dam Inspection Program. Ingham Creek (Aquetong Lake) Dam (NDI ID PA 00224, PA DER 9-49) Delaware River Basin, Ingham Creek, Pennsylvania. Phase I Inspection Report,

    Science.gov (United States)

    1981-04-01

    Delaware River Basin, Ingham Creek, Pennsylvania. Phase I Inspection. ... about 1.5H:1V and an unknown upstream slope below the water surface. The dam impounds a reservoir with a normal pool surface area of 12.4 acres and a ... deep. It was once used to direct water to a mill downstream of the dam and is now in poor condition. The Spillway Design Flood (SDF) chosen for this

  5. USGS Digital Orthophoto Quad (DOQ) Metadata

    Data.gov (United States)

    Minnesota Department of Natural Resources — Metadata for the USGS DOQ Orthophoto Layer. Each orthophoto is represented by a Quarter 24k Quad tile polygon. The polygon attributes contain the quarter-quad tile...

  6. Is Internet search better than structured instruction for web-based health education?

    Science.gov (United States)

    Finkelstein, Joseph; Bedra, McKenzie

    2013-01-01

    The Internet provides access to vast amounts of comprehensive information on any health-related subject, and patients increasingly use it for health education, typically relying on a search engine to identify educational materials. An alternative approach to health education via the Internet is based on a verified website that provides structured, interactive education guided by adult learning theories. These two approaches had not been systematically compared in older patients. The aim of this study was to compare the efficacy of a web-based computer-assisted education (CO-ED) system versus searching the Internet for learning about hypertension. Sixty hypertensive older adults (age 45+) were randomized into control or intervention groups. The control patients spent 30 to 40 minutes searching the Internet using a search engine for information about hypertension. The intervention patients spent 30 to 40 minutes using the CO-ED system, which provided computer-assisted instruction about major hypertension topics. Analysis of pre- and post-knowledge scores indicated a significant improvement among CO-ED users (14.6%) as opposed to Internet users (2%). Additionally, patients using the CO-ED program rated their learning experience more positively than those using the Internet.

  7. Research on Application of Automatic Weather Station Based on Internet of Things

    Science.gov (United States)

    Jianyun, Chen; Yunfan, Sun; Chunyan, Lin

    2017-12-01

    In this paper, the Internet of Things (IoT) is briefly introduced and its application in weather stations is studied. A method of data acquisition and transmission based on the NB-IoT communication mode is proposed. Using IoT technology, digital sensors, and independent power supplies as the technical basis, the automatic weather station achieves intelligent interconnection, forming an automatic weather station based on the Internet of Things. A network architecture for such a station is constructed to realize the independent operation of intelligent sensors and wireless data transmission. The work studies the networked collection and dissemination of meteorological data: observations are analyzed on a data platform, preliminary standards for publishing meteorological information are prepared, and a data interface is provided for receiving terminals, serving the goals of smart cities and smart meteorological services.
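    The data acquisition and transmission path can be sketched as follows, with the NB-IoT link abstracted to "bytes out": the station frames its sensor readings compactly and the platform verifies them on receipt. The field names and the checksum scheme are assumptions for illustration, not the paper's actual protocol.

```python
import json
import zlib

def encode_frame(station_id: str, readings: dict) -> bytes:
    """Serialise readings compactly and append a CRC32 so the receiver
    can detect corruption on the constrained link."""
    body = json.dumps({"station": station_id, "obs": readings},
                      separators=(",", ":")).encode()
    return body + zlib.crc32(body).to_bytes(4, "big")

def decode_frame(frame: bytes) -> dict:
    """Verify the checksum, then recover the readings."""
    body, crc = frame[:-4], frame[-4:]
    if zlib.crc32(body).to_bytes(4, "big") != crc:
        raise ValueError("corrupted frame")
    return json.loads(body)

# One observation cycle: frame the readings, "transmit", and decode.
frame = encode_frame("AWS-042", {"t_air_c": 21.4, "rh_pct": 63, "p_hpa": 1013.2})
received = decode_frame(frame)
print(received["station"], received["obs"]["t_air_c"])  # -> AWS-042 21.4
```

    Compact separators keep the frame small, which matters on narrowband links such as NB-IoT where payload sizes are tightly constrained.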

  8. That obscure object of desire: multimedia metadata on the Web, part 2

    NARCIS (Netherlands)

    F.-M. Nack (Frank); J.R. van Ossenbruggen (Jacco); L. Hardman (Lynda)

    2003-01-01

    textabstractThis article discusses the state of the art in metadata for audio-visual media in large semantic networks, such as the Semantic Web. Our discussion is predominantly motivated by the two most widely known approaches towards machine-processable and semantic-based content description,

  9. That obscure object of desire: multimedia metadata on the Web, part 1

    NARCIS (Netherlands)

    F.-M. Nack (Frank); J.R. van Ossenbruggen (Jacco); L. Hardman (Lynda)

    2003-01-01

    textabstractThis article discusses the state of the art in metadata for audio-visual media in large semantic networks, such as the Semantic Web. Our discussion is predominantly motivated by the two most widely known approaches towards machine-processable and semantic-based content description,

  10. Research on Intelligent Agriculture Greenhouses Based on Internet of Things Technology

    OpenAIRE

    Shang Ying; Fu An-Ying

    2017-01-01

    The Internet of Things is a hot research topic that has received much attention and represents a future development trend of the network. It has a wide range of applications in modern agriculture because of its efficient and reliable information transmission. In a greenhouse, the controlled conditions determine crop quality, yield, and many other outcomes. This paper studies intelligent agricultural greenhouses based on the Internet of Things, mainly research on h...

  11. Network-based analysis reveals functional connectivity related to internet addiction tendency

    Directory of Open Access Journals (Sweden)

    Tanya eWen

    2016-02-01

    Full Text Available Introduction: Preoccupation with and compulsive use of the internet can have negative psychological effects, such that it is increasingly being recognized as a mental disorder. The present study employed network-based statistics to explore how whole-brain functional connections at rest are related to the extent of an individual's internet addiction, indexed by a self-rated questionnaire. We identified two topologically significant networks: one with connections positively correlated with internet addiction tendency, and one with connections negatively correlated with it. The two networks are interconnected mostly at frontal regions, which might reflect alterations in the frontal region for different aspects of cognitive control (i.e., for control of internet usage and gaming skills). Next, we categorized the brain into several large regional subgroupings and found that the majority of connections in the two networks correspond to the cerebellar model of addiction, which encompasses the four-circuit model. Lastly, we observed that the brain regions with the most inter-regional connections associated with internet addiction tendency replicate those often seen in the addiction literature, as corroborated by our meta-analysis of internet addiction studies. This research provides a better understanding of the large-scale networks involved in internet addiction tendency and shows that pre-clinical levels of internet addiction are associated with regions and connections similar to those in clinical cases of addiction.

  12. Assuring the Quality of Agricultural Learning Repositories: Issues for the Learning Object Metadata Creation Process of the CGIAR

    Science.gov (United States)

    Zschocke, Thomas; Beniest, Jan

    The Consultative Group on International Agricultural Research (CGIAR) has established a digital repository to share its teaching and learning resources along with descriptive educational information based on the IEEE Learning Object Metadata (LOM) standard. Quality metadata are a critical component of any digital repository: they not only enable users to find the resources they require more easily, but also support the operation and interoperability of the repository itself. Studies show that repositories have difficulty obtaining good quality metadata from their contributors, especially when this process involves many different stakeholders, as is the case with the CGIAR as an international organization. To address this issue, the CGIAR began investigating Open ECBCheck as well as the ISO/IEC 19796-1 standard to establish quality protocols for its training. The paper highlights the implications and challenges posed by strengthening the metadata creation workflow for disseminating learning objects of the CGIAR.
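    One way to operationalize part of such a quality protocol is a completeness check over contributed metadata. The sketch below uses a small subset of IEEE LOM element names in dotted category.element form; which elements a repository actually mandates is a policy decision, so this particular required set is an assumption.

```python
# A small subset of IEEE LOM elements; treating exactly these as mandatory
# is an assumption for illustration, not CGIAR's actual policy.
REQUIRED_LOM = {
    "general.title",
    "general.description",
    "general.language",
    "educational.intendedEndUserRole",
    "rights.copyrightAndOtherRestrictions",
}

def completeness(record: dict) -> float:
    """Fraction of required LOM elements that are filled in (non-empty)."""
    return sum(1 for name in REQUIRED_LOM if record.get(name)) / len(REQUIRED_LOM)

contribution = {
    "general.title": "Soil fertility basics",
    "general.description": "Introductory module on soil nutrient cycles.",
    "general.language": "en",
    "educational.intendedEndUserRole": "learner",
    "rights.copyrightAndOtherRestrictions": "",  # left blank by the contributor
}
print(completeness(contribution))  # -> 0.8
```

    Scores like this can be reported back to contributors at submission time, catching gaps before a record enters the repository.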

  13. MMI's Metadata and Vocabulary Solutions: 10 Years and Growing

    Science.gov (United States)

    Graybeal, J.; Gayanilo, F.; Rueda-Velasquez, C. A.

    2014-12-01

    The Marine Metadata Interoperability project (http://marinemetadata.org) held its public opening at AGU's 2004 Fall Meeting. For 10 years since that debut, the MMI guidance and vocabulary sites have served over 100,000 visitors, with 525 community members and continuous Steering Committee leadership. Originally funded by the National Science Foundation, over the years multiple organizations have supported the MMI mission: "Our goal is to support collaborative research in the marine science domain, by simplifying the incredibly complex world of metadata into specific, straightforward guidance. MMI encourages scientists and data managers at all levels to apply good metadata practices from the start of a project, by providing the best guidance and resources for data management, and developing advanced metadata tools and services needed by the community." Now hosted by the Harte Research Institute at Texas A&M University at Corpus Christi, MMI continues to provide guidance and services to the community, and is planning for marine science and technology needs for the next 10 years. In this presentation we will highlight our major accomplishments, describe our recent achievements and imminent goals, and propose a vision for improving marine data interoperability for the next 10 years, including Ontology Registry and Repository (http://mmisw.org/orr) advancements and applications (http://mmisw.org/cfsn).

  14. Mind the Gap - Building Profitable Community Based Businesses on the Internet

    OpenAIRE

    Krieger,Bernhard; Müller,Philipp

    2001-01-01

    Building Internet communities will become a strategic tool both as a stand-alone model and as a supplement to sustain competitive advantage for "normal" businesses. Community based business models aim to profit from the value, which is created when Internet communities solve problems of collective action, by controlling access, aggregating data, or realizing side-payments. The current literature on community based business models refers to rational choices by individuals to explain why member...

  15. Feasibility of Internet-based Parent Training for Low-income Parents of Young Children.

    Science.gov (United States)

    McGoron, Lucy; Hvizdos, Erica; Bocknek, Erika L; Montgomery, Erica; Ondersma, Steven J

    2018-01-01

    Parent training programs promote positive parenting and benefit low-income children, but are rarely used. Internet-based delivery may help expand the reach of parent training programs, although feasibility among low-income populations is still unclear. We examined the feasibility of internet-based parent training, in terms of internet access/use and engagement, through two studies. In Study 1, 160 parents recruited from Women, Infants, and Children (WIC) centers completed a brief paper survey regarding internet access and use (all parents received government aid). We found high levels of access, openness, and comfort with the internet and internet-enabled devices. In Study 2, a pilot study, we assessed use of an online parenting program with a sample of 89 predominantly low-income parents (75% received government aid). Parents learned about a new online parenting program (the "5-a-Day Parenting Program") and provided ratings of level of interest and program use 2 weeks and 4 weeks later. Local website traffic was also monitored. At baseline, parents were very interested in using the web-based program, and the majority of parents (69.6%) reported visiting the website at least once. However, in-depth use was rare (only 9% of parents reported frequent use of the online program). Results support the feasibility of internet-based parent training for low-income parents, as most parents were able to use the program and were interested in doing so. However, results also suggest the need to develop strategies to promote in-depth program use.

  16. What Information Does Your EHR Contain? Automatic Generation of a Clinical Metadata Warehouse (CMDW) to Support Identification and Data Access Within Distributed Clinical Research Networks.

    Science.gov (United States)

    Bruland, Philipp; Doods, Justin; Storck, Michael; Dugas, Martin

    2017-01-01

    Data dictionaries provide structural meta-information about data definitions in health information technology (HIT) systems. In this regard, reusing healthcare data for secondary purposes offers several advantages (e.g. reduced documentation time or increased data quality). Prerequisites for data reuse are its quality, availability, and identical meaning of data. In diverse projects, research data warehouses serve as core components between heterogeneous clinical databases and various research applications. Given the complexity (high number of data elements) and dynamics (regular updates) of electronic health record (EHR) data structures, we propose a clinical metadata warehouse (CMDW) based on a metadata registry standard. Metadata of two large hospitals were automatically inserted into two CMDWs containing 16,230 forms and 310,519 data elements. Automatic updates of metadata are possible, as are semantic annotations. A CMDW allows metadata discovery, data quality assessment and similarity analyses. Common data models for distributed research networks can be established based on similarity analyses.
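    One of the analyses a CMDW enables, similarity between form definitions, can be sketched with a Jaccard measure over data-element identifiers. The element names below are invented examples, not actual EHR content, and a real analysis would first normalise names via semantic annotations.

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard similarity of two sets of data-element identifiers."""
    return len(a & b) / len(a | b) if (a | b) else 1.0

# Data elements of two hypothetical vital-signs forms from different hospitals.
form_hospital_a = {"systolic_bp", "diastolic_bp", "heart_rate", "weight"}
form_hospital_b = {"systolic_bp", "diastolic_bp", "height", "weight"}

similarity = jaccard(form_hospital_a, form_hospital_b)
print(similarity)  # 3 shared of 5 distinct elements -> 0.6
```

    Ranking form pairs by such a score highlights candidate elements for a common data model across the network.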

  17. Predicting the Continued Use of Internet-Based Learning Technologies: The Role of Habit

    Science.gov (United States)

    Limayem, Moez; Cheung, Christy M. K.

    2011-01-01

    The proliferation and advance of Internet-based technologies create expanded opportunities for educators to provide students with better learning experiences. Although current studies focus mostly on the learning processes and learning outcomes, this article examines the students' usage behaviour with Internet-based learning technologies across…

  18. Nurse-Moderated Internet-Based Support for New Mothers: Non-Inferiority, Randomized Controlled Trial.

    Science.gov (United States)

    Sawyer, Michael G; Reece, Christy E; Bowering, Kerrie; Jeffs, Debra; Sawyer, Alyssa C P; Mittinty, Murthy; Lynch, John W

    2017-07-24

    Internet-based interventions moderated by community nurses have the potential to improve support offered to new mothers, many of whom now make extensive use of the Internet to obtain information about infant care. However, evidence from population-based randomized controlled trials is lacking. The aim of this study was to test the non-inferiority of outcomes for mothers and infants who received a clinic-based postnatal health check plus nurse-moderated, Internet-based group support when infants were aged 1-7 months as compared with outcomes for those who received standard care consisting of postnatal home-based support provided by a community nurse. The design of the study was a pragmatic, preference, non-inferiority randomized controlled trial. Participants were recruited from mothers contacted for their postnatal health check, which is offered to all mothers in South Australia. Mothers were assigned to clinic+Internet or home-based support groups either (1) on the basis of their preference (n=328), or (2) randomly, if they declared no strong preference (n=491). The overall response rate was 44.8% (819/1827). The primary outcome was parenting self-competence, as measured by the Parenting Stress Index (PSI) Competence subscale and the Karitane Parenting Confidence Scale scores. Secondary outcome measures included PSI Isolation, Interpersonal Support Evaluation List-Short Form, Maternal Support Scale, Ages and Stages Questionnaire: Social-Emotional, and MacArthur Communicative Development Inventory (MCDI) scores. Assessments were completed offline via self-assessment questionnaires at enrolment (mean child age=4.1 weeks, SD 1.3) and again when infants were aged 9, 15, and 21 months. Generalized estimating equations adjusting for post-randomization baseline imbalances showed that differences in outcomes between mothers in the clinic+Internet and home-based support groups did not exceed the pre-specified margin of

  19. Definition of an ISO 19115 metadata profile for SeaDataNet II Cruise Summary Reports and its XML encoding

    Science.gov (United States)

    Boldrini, Enrico; Schaap, Dick M. A.; Nativi, Stefano

    2013-04-01

    SeaDataNet implements a distributed pan-European infrastructure for Ocean and Marine Data Management whose nodes are maintained by 40 national oceanographic and marine data centers from 35 countries riparian to all European seas. A unique portal makes possible distributed discovery, visualization and access of the available sea data across all the member nodes. Geographic metadata play an important role in such an infrastructure, enabling efficient documentation and discovery of the resources of interest. In particular: - Common Data Index (CDI) metadata describe the sea datasets, including identification information (e.g. product title, area of interest), evaluation information (e.g. data resolution, constraints) and distribution information (e.g. download endpoint, download protocol); - Cruise Summary Reports (CSR) metadata describe cruises and field experiments at sea, including identification information (e.g. cruise title, name of the ship) and acquisition information (e.g. instruments used, number of samples taken). In the context of the second phase of SeaDataNet (SeaDataNet 2 EU FP7 project, grant agreement 283607, started on October 1st, 2011 for a duration of 4 years) a major target is the setting, adoption and promotion of common international standards, to the benefit of outreach and interoperability with the international initiatives and communities (e.g. OGC, INSPIRE, GEOSS, …). A standardization effort conducted by CNR with the support of MARIS, IFREMER, STFC, BODC and ENEA has led to the creation of an ISO 19115 metadata profile of CDI and its XML encoding based on ISO 19139. The CDI profile is now in its stable version and it is being implemented and adopted by the SeaDataNet community tools and software. The effort then continued to produce an ISO-based metadata model and its XML encoding also for CSR. The metadata elements included in the CSR profile belong to different models: - ISO 19115: E.g. cruise identification information, including
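    The XML encoding work described above can be pictured with a minimal sketch: a single file identifier wrapped in the standard ISO 19139 gmd/gco namespaces, built with Python's standard library. A real CDI or CSR record carries far more content (identification, extent, acquisition details), and the identifier value here is invented.

```python
import xml.etree.ElementTree as ET

GMD = "http://www.isotc211.org/2005/gmd"  # ISO 19139 metadata namespace
GCO = "http://www.isotc211.org/2005/gco"  # ISO 19139 basic types namespace
ET.register_namespace("gmd", GMD)
ET.register_namespace("gco", GCO)

# MD_Metadata with only a file identifier; the identifier value is invented.
root = ET.Element(f"{{{GMD}}}MD_Metadata")
file_id = ET.SubElement(root, f"{{{GMD}}}fileIdentifier")
ET.SubElement(file_id, f"{{{GCO}}}CharacterString").text = "urn:example:csr:1234"

xml_bytes = ET.tostring(root)
print(xml_bytes.decode())
```

    Emitting records in a shared, namespaced encoding like this is what lets a portal harvest and merge metadata from many member nodes without per-node parsers.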

  20. Weatherization Builds on Delaware's Innovative Past: Weatherization Assistance Close-Up Fact Sheet

    International Nuclear Information System (INIS)

    2001-01-01

    Delaware demonstrates its commitment to technology and efficiency through the Weatherization Program. Weatherization uses advanced technologies and techniques to reduce energy costs for low-income families by increasing the energy efficiency of their homes.

  1. An Analysis of the Charter School Facility Landscape in Delaware

    Science.gov (United States)

    Hesla, Kevin; Johnson, Jessica M.; Massett, Kendall; Ziebarth, Todd

    2018-01-01

    In the spring of 2016, the National Charter School Resource Center (NCSRC), the Colorado League of Charter Schools (the League), the Delaware Charter Schools Network (DCSN), and the National Alliance for Public Charter Schools (the Alliance) collaborated to collect data and information about charter school facilities and facilities expenditures in…

  2. Internet-based data interchange with XML

    Science.gov (United States)

    Fuerst, Karl; Schmidt, Thomas

    2000-12-01

    In this paper, a complete concept for Internet Electronic Data Interchange (EDI) - a well-known buzzword in the area of logistics and supply chain management, denoting the automation of interactions between companies and their partners - using XML (eXtensible Markup Language) is proposed. The approach is based on the Internet and XML because the implementation of traditional EDI (e.g. EDIFACT, ANSI X.12) is usually too costly for small and medium-sized enterprises that want to integrate their suppliers and customers in a supply chain. The paper also presents the results of implementing a prototype of such a system, developed for an industrial partner to improve the current situation of parts delivery. The main functions of this system are an early-warning system to detect problems during the parts delivery process as early as possible, and a transport tracking system to follow shipments in transit.
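The XML-based interchange described in this record can be sketched in a few lines of Python using only the standard library; the element names (DeliveryNotice, OrderID, Item) are illustrative and not the schema used in the prototype:

```python
import xml.etree.ElementTree as ET

def build_delivery_notice(order_id, supplier, items):
    """Build a minimal XML delivery notice (hypothetical schema)."""
    root = ET.Element("DeliveryNotice")
    ET.SubElement(root, "OrderID").text = order_id
    ET.SubElement(root, "Supplier").text = supplier
    items_el = ET.SubElement(root, "Items")
    for part_no, qty in items:
        # each line item carries its part number and quantity as attributes
        ET.SubElement(items_el, "Item", partNo=part_no, quantity=str(qty))
    return ET.tostring(root, encoding="unicode")

def parse_delivery_notice(xml_text):
    """Parse the notice back into plain Python values on the receiving side."""
    root = ET.fromstring(xml_text)
    return {
        "order_id": root.findtext("OrderID"),
        "supplier": root.findtext("Supplier"),
        "items": [(i.get("partNo"), int(i.get("quantity")))
                  for i in root.find("Items")],
    }

msg = build_delivery_notice("4711", "ACME GmbH", [("P-100", 20), ("P-200", 5)])
print(parse_delivery_notice(msg))
```

Because both sides agree only on the XML vocabulary, a partner can generate or consume such messages with any XML toolkit, which is the cost advantage over classical EDIFACT/ANSI X.12 stacks that the paper points to.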

  3. Trichotillomania: the impact of treatment history on the outcome of an Internet-based intervention

    Directory of Open Access Journals (Sweden)

    Weidt S

    2017-04-01

    Steffi Weidt,1 Annette Beatrix Bruehl,2,3 Aba Delsignore,1 Gwyneth Zai,2,4–6 Alexa Kuenburg,1 Richard Klaghofer,1 Michael Rufer1 1Department of Psychiatry and Psychotherapy, University Hospital Zurich, University of Zurich, Zurich, Switzerland; 2Department of Psychiatry, Behavioural and Clinical Neuroscience Institute, University of Cambridge, Cambridge, UK; 3Department of Psychiatry, Psychotherapy and Psychosomatics, University Hospital of Psychiatry, Zurich, Switzerland; 4Department of Psychiatry, Institute of Medical Science, University of Toronto, 5Neurogenetics Section, Centre for Addiction and Mental Health, 6Department of Psychiatry, Frederick W. Thompson Anxiety Disorders Centre, Sunnybrook Health Sciences Centre, Toronto, ON, Canada Background: Many patients suffering from trichotillomania (TTM) have never undergone treatment. Without treatment, TTM often presents with a chronic course. Characteristics of TTM individuals who have never been treated (untreated) remain largely unknown. Whether treatment history impacts Internet-based interventions has not yet been investigated. We aimed to answer whether Internet-based interventions can reach untreated individuals and whether treatment history is associated with certain characteristics and impacts the outcome of an Internet-based intervention. Methods: We provided Internet-based interventions. Subjects were characterized at three time points using the Massachusetts General Hospital Hairpulling Scale, Hamilton Depression Rating Scale, and the World Health Organization Quality of Life questionnaire. Results: Of 105 individuals, 34 were untreated. Health-related quality of life (HRQoL) was markedly impaired in untreated and treated individuals. Symptom severity did not differ between untreated and treated individuals. Nontreatment was associated with fewer depressive symptoms (P=0.002). Treatment history demonstrated no impact on the outcome of Internet-based interventions. Conclusion: Results…

  4. The digital divide in Internet-based patient education materials.

    Science.gov (United States)

    Sun, Gordon H

    2012-11-01

    The ubiquity of the Internet has led to the widespread availability of health-related information to the public, and the subsequent empowerment of patients has fundamentally altered the patient-physician relationship. Among several concerns of physicians is the possibility that patients may be misinformed by information obtained from the Internet. One opportunity for health care providers to address this problem exists within Internet-based patient education materials (IPEMs). According to recent research in Otolaryngology-Head and Neck Surgery, IPEMs found within professional otolaryngology websites are written at the 8th- to 18th-grade reading comprehension level, essentially unchanged over the past 3 years. This greatly exceeds the fourth- to sixth-grade reading level recommended by the National Institutes of Health. Benefits, strategies, and challenges to improving the readability of IPEMs are discussed.
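The grade-level figures quoted in this record come from standard readability formulas. As a rough sketch (not the instrument used in the cited research), the Flesch-Kincaid grade level can be computed with a naive vowel-run syllable count:

```python
import re

def syllables(word):
    """Naive syllable estimate: count runs of vowels, minimum one."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text):
    """Flesch-Kincaid grade = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    n_words = max(1, len(words))
    n_syllables = sum(syllables(w) for w in words)
    return 0.39 * n_words / sentences + 11.8 * n_syllables / n_words - 15.59

simple = "The cat sat. It was fat."
dense = ("Comprehensive otolaryngological documentation necessitates "
         "considerable institutional standardization.")
print(round(fk_grade(simple), 1), round(fk_grade(dense), 1))
```

Longer sentences and longer words both push the grade up, which is why patient-education text aimed at a fourth- to sixth-grade level favors short sentences and plain vocabulary.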

  5. The utilization of oncology web-based resources in Spanish-speaking Internet users.

    Science.gov (United States)

    Simone, Charles B; Hampshire, Margaret K; Vachani, Carolyn; Metz, James M

    2012-12-01

    There currently are few web-based resources written in Spanish providing oncology-specific information. This study examines utilization of Spanish-language oncology web-based resources and evaluates oncology-related Internet browsing practices of Spanish-speaking patients. OncoLink (http://www.oncolink.org) is the oldest and among the largest Internet-based cancer information resources. In September 2005, OncoLink pioneered OncoLink en español (OEE) (http://es.oncolink.org), a Spanish translation of OncoLink. Internet utilization data on these sites for 2006 to 2007 were compared. Visits to OncoLink rose from 4,440,843 in 2006 to 5,125,952 in 2007. OEE had 204,578 unique visitors and 240,442 visits in 2006, and 351,228 visitors and 412,153 visits in 2007. Although there was no time predilection for viewing OncoLink, less relative browsing on OEE was conducted during weekends and early morning hours. Although OncoLink readers searched for information on the most common cancers in the United States, OEE readers most often searched for gastric, vaginal, osteosarcoma, leukemia, penile, cervical, and testicular malignancies. Average visit duration on OEE was shorter, and fewer readers surveyed OEE for more than 15 minutes (4.5% vs. 14.9%, P < …). Spanish-speaking users of web-based oncology resources are increasingly using the Internet to supplement their cancer knowledge. Limited available resources written in Spanish contribute to disparities in information access and disease outcomes. Spanish-speaking oncology readers differ from English-speaking readers in day and time of Internet browsing, visit duration, Internet search patterns, and types of cancers searched. By acknowledging these differences, content of web-based oncology resources can be developed to best target the needs of Spanish-speaking viewers.

  6. Disaster management: using Internet-based technology.

    Science.gov (United States)

    Dimitruk, Paul

    2007-01-01

    Disasters impose operational challenges and substantial financial burdens on hospitals. Internet-based disaster management technology can help. This technology should: Capture, analyze, and track relevant data. Be available 24/7. Guide decision makers in setting up an incident command center and monitor the completion of jobs by ICC role. Provide assistance in areas that hospitals are not used to dealing with, e.g., chemical or bio-terror agents.

  7. Genetic-linked Inattentiveness Protects Individuals from Internet Overuse: A Genetic Study of Internet Overuse Evaluating Hypotheses Based on Addiction, Inattention, Novelty-seeking and Harm-avoidance

    Directory of Open Access Journals (Sweden)

    Cheng Sun

    2016-06-01

    The all-pervasive Internet has created serious problems, such as Internet overuse, which has triggered considerable debate over its relationship with addiction. To further explore its genetic susceptibilities and alternative explanations for Internet overuse, we proposed and evaluated four hypotheses, each based on existing knowledge of the biological bases of addiction, inattention, novelty-seeking, and harm-avoidance. Four genetic loci including DRD4 VNTR, DRD2 Taq1A, COMT Val158Met and 5-HTTLPR length polymorphisms were screened from seventy-three individuals. Our results showed that the DRD4 4R/4R individuals scored significantly higher than the 2R or 7R carriers on the Internet Addiction Test (IAT). The 5-HTTLPR short/short males scored significantly higher on the IAT than the long-variant carriers. Bayesian analysis showed that the hypothesis most compatible with the observed genetic results was the one based on attention (69.8%), whereas hypotheses based on harm-avoidance (21.6%), novelty-seeking (7.8%) and addiction (0.9%) received little support. Our study suggests that carriers of alleles (DRD4 2R and 7R, 5-HTTLPR long) associated with inattentiveness are more likely to experience disrupted patterns and reduced durations of Internet use, protecting them from Internet overuse. Furthermore, our study suggests that Internet overuse should be categorized differently from addiction due to the lack of shared genetic contributions.

  8. Hindcasting of Storm Surges, Currents, and Waves at Lower Delaware Bay during Hurricane Isabel

    Science.gov (United States)

    Salehi, M.

    2017-12-01

    Hurricanes are a major threat to coastal communities and infrastructures, including nuclear power plants located in low-lying coastal zones. In response, their sensitive elements should be protected by smart design to withstand the drastic impact of such natural phenomena. An accurate and reliable estimate of hurricane attributes is the first step in that effort. Numerical models have grown extensively over the past few years and are effective tools for modeling large-scale natural events such as hurricanes. The impact of low-probability hurricanes on the lower Delaware Bay is investigated using dynamically coupled meteorological, hydrodynamic, and wave components of the Delft3D software. Efforts are made to significantly reduce the computational burden of performing such analysis for the industry while keeping the same level of accuracy in the area of study (AOS). The model is comprised of overall and nested domains. The overall model domain includes portions of the Atlantic Ocean and the Delaware and Chesapeake bays. The nested model domain includes Delaware Bay, its floodplain, and a portion of the continental shelf. This study is part of a larger modeling effort to study the impact of low-probability hurricanes on sensitive infrastructures located in coastal zones prone to hurricane activity. The AOS is located on the east bank of Delaware Bay almost 16 miles upstream of its mouth. Model-generated wind speed, significant wave height, water surface elevation, and current are calibrated for hurricane Isabel (2003). The model calibration results agreed reasonably well with field observations. Furthermore, the sensitivity of surge and wave responses to various hurricane parameters was tested. In line with findings from other researchers, the accuracy of the wind field played a major role in hindcasting the hurricane attributes.

  9. Development of Energy Management System Based on Internet of Things Technique

    OpenAIRE

    Wen-Jye Shyr; Chia-Ming Lin; Hung-Yun Feng

    2017-01-01

    The purpose of this study was to develop an energy management system for university campuses based on the Internet of Things (IoT) technique. The proposed IoT system, built on WebAccess, is accessed through a web browser (Internet Explorer) and uses the TCP/IP protocol. A case study of an IoT-based lighting energy usage management system is presented. The structure of the proposed IoT system comprises a perception layer, an equipment layer, a control layer, an application layer and a network layer.

  10. DataNet: A flexible metadata overlay over file resources

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    Managing and sharing data stored in files is a challenge due to the data volumes produced by various scientific experiments [1]. While solutions such as Globus Online [2] focus on file transfer and synchronization, in this work we propose an additional layer of metadata over file resources which helps to categorize and structure the data, as well as to integrate it efficiently with web-based research gateways. A basic concept of the proposed solution [3] is a data model consisting of entities built from primitive types such as numbers and texts, as well as from files and relationships among different entities. This allows for building complex data structure definitions and mixing metadata and file data into a single model tailored to a given scientific field. A data model becomes actionable after being deployed as a data repository, which is done automatically by the proposed framework using one of the available PaaS (platform-as-a-service) platforms, and is exposed to the world as a REST service, which...
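The entity model this record describes (primitives, file references, and relationships combined in one schema) could look roughly like the following sketch; all class and field names are illustrative, not the actual DataNet API:

```python
from dataclasses import dataclass, field

@dataclass
class FileRef:
    """A pointer to a file resource, e.g. in grid or cloud storage."""
    path: str
    checksum: str = ""

@dataclass
class Sample:
    """A domain entity mixing primitive fields with a file reference."""
    sample_id: str
    temperature: float
    raw_data: FileRef

@dataclass
class Experiment:
    """A relationship: one experiment owns many samples."""
    name: str
    samples: list = field(default_factory=list)

exp = Experiment("run-42")
exp.samples.append(Sample("s1", 293.15, FileRef("/data/s1.dat")))
print(exp.name, len(exp.samples), exp.samples[0].raw_data.path)
```

Once such a model is defined, a framework like the one proposed can generate a repository and a REST endpoint from it automatically, so the metadata (here `temperature`) stays queryable while the bulk data stays in files.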

  11. Fast processing of digital imaging and communications in medicine (DICOM) metadata using multiseries DICOM format

    OpenAIRE

    Ismail, Mahmoud; Philbin, James

    2015-01-01

    The digital imaging and communications in medicine (DICOM) information model combines pixel data and its metadata in a single object. There are user scenarios that only need metadata manipulation, such as deidentification and study migration. Most picture archiving and communication system use a database to store and update the metadata rather than updating the raw DICOM files themselves. The multiseries DICOM (MSD) format separates metadata from pixel data and eliminates duplicate attributes...
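The metadata/pixel separation that MSD exploits can be illustrated with a toy example: metadata-only operations such as deidentification never need to read or rewrite the bulk pixel data. The field names below stand in for real DICOM attributes and are illustrative only:

```python
# Sketch: metadata kept separate from pixel data, so deidentification
# touches only the (small) metadata record, never the (large) pixels.

def deidentify(metadata):
    """Return a copy of the metadata with patient-identifying fields blanked."""
    cleaned = dict(metadata)
    for tag in ("PatientName", "PatientID"):
        if tag in cleaned:
            cleaned[tag] = "ANONYMIZED"
    return cleaned

pixels = bytes(512 * 512)  # bulk pixel data, deliberately untouched
meta = {"PatientName": "DOE^JANE", "PatientID": "12345", "Modality": "CT"}

clean_meta = deidentify(meta)
print(clean_meta["PatientName"], clean_meta["Modality"], len(pixels))
```

In a combined single-object format, the same operation would require parsing and rewriting the whole file including the pixels, which is the overhead the MSD format is designed to avoid.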

  12. Examining the Internet-Based Free Talk in College English Classes from the Motivation Perspective

    Directory of Open Access Journals (Sweden)

    Li Ming

    2017-01-01

    Free Talk is recognized as an effective approach to teaching college English in China to improve students’ English speaking. With the popularity of the Internet around the world, Internet-based Free Talk demonstrates clear advantages in motivating students to engage in English learning. In this paper, the author compares the main features of Internet-based Free Talk with the five components of the MUSIC Model of Motivation synthesized from current research and theory in the field of motivation. Furthermore, the author illustrates how the Internet facilitates Free Talk through an online writing service system and an online QQ community. The comparison reveals that the success of Internet-based Free Talk is consistent with key motivation principles. This paper indicates that professors and researchers in higher education could design and evaluate their instruction according to the components of the MUSIC Model of Motivation.

  13. Internet addiction or excessive internet use.

    Science.gov (United States)

    Weinstein, Aviv; Lejoyeux, Michel

    2010-09-01

    Problematic Internet addiction or excessive Internet use is characterized by excessive or poorly controlled preoccupations, urges, or behaviors regarding computer use and Internet access that lead to impairment or distress. Currently, there is no recognition of Internet addiction within the spectrum of addictive disorders and, therefore, no corresponding diagnosis. It has, however, been proposed for inclusion in the next version of the Diagnostic and Statistical Manual of Mental Disorders (DSM). To review the literature on Internet addiction over the topics of diagnosis, phenomenology, epidemiology, and treatment. Review of published literature between 2000 and 2009 in Medline and PubMed using the term "internet addiction". Surveys in the United States and Europe have indicated prevalence rates between 1.5% and 8.2%, although the diagnostic criteria and assessment questionnaires used for diagnosis vary between countries. Cross-sectional studies on samples of patients report high comorbidity of Internet addiction with psychiatric disorders, especially affective disorders (including depression), anxiety disorders (generalized anxiety disorder, social anxiety disorder), and attention deficit hyperactivity disorder (ADHD). Several factors are predictive of problematic Internet use, including personality traits, parenting and familial factors, alcohol use, and social anxiety. Although Internet-addicted individuals have difficulty suppressing their excessive online behaviors in real life, little is known about the patho-physiological and cognitive mechanisms responsible for Internet addiction. Due to the lack of methodologically adequate research, it is currently impossible to recommend any evidence-based treatment of Internet addiction.

  14. Understanding Patient Experience Using Internet-based Email Surveys: A Feasibility Study at Mount Sinai Hospital.

    Science.gov (United States)

    Morgan, Matthew; Lau, Davina; Jivraj, Tanaz; Principi, Tania; Dietrich, Sandra; Bell, Chaim M

    2015-01-01

    Email is becoming a widely accepted communication tool in healthcare settings. This study sought to test the feasibility of Internet-based email surveys of patient experience in the ambulatory setting. We conducted a study of email Internet-based surveys sent to patients in selected ambulatory clinics at Mount Sinai Hospital in Toronto, Canada. Our findings suggest that email links to Internet surveys are a feasible, timely and efficient method to solicit patient feedback about their experience. Further research is required to optimally leverage Internet-based email surveys as a tool to better understand the patient experience.

  15. GEO Label Web Services for Dynamic and Effective Communication of Geospatial Metadata Quality

    Science.gov (United States)

    Lush, Victoria; Nüst, Daniel; Bastin, Lucy; Masó, Joan; Lumsden, Jo

    2014-05-01

    We present demonstrations of the GEO label Web services and their integration into a prototype extension of the GEOSS portal (http://scgeoviqua.sapienzaconsulting.com/web/guest/geo_home), the GMU portal (http://gis.csiss.gmu.edu/GADMFS/) and a GeoNetwork catalog application (http://uncertdata.aston.ac.uk:8080/geonetwork/srv/eng/main.home). The GEO label is designed to communicate, and facilitate interrogation of, geospatial quality information with a view to supporting efficient and effective dataset selection on the basis of quality, trustworthiness and fitness for use. The GEO label which we propose was developed and evaluated according to a user-centred design (UCD) approach in order to maximise the likelihood of user acceptance once deployed. The resulting label is dynamically generated from producer metadata in ISO or FGDC format, and incorporates user feedback on dataset usage, ratings and discovered issues, in order to supply a highly informative summary of metadata completeness and quality. The label was easily incorporated into a community portal as part of the GEO Architecture Implementation Programme (AIP-6) and has been successfully integrated into a prototype extension of the GEOSS portal, as well as the popular metadata catalog and editor, GeoNetwork. The design of the GEO label was based on 4 user studies conducted to: (1) elicit initial user requirements; (2) investigate initial user views on the concept of a GEO label and its potential role; (3) evaluate prototype label visualizations; and (4) evaluate and validate physical GEO label prototypes. The results of these studies indicated that users and producers support the concept of a label with drill-down interrogation facility, combining eight geospatial data informational aspects, namely: producer profile, producer comments, lineage information, standards compliance, quality information, user feedback, expert reviews, and citations information. These are delivered as eight facets of a wheel…
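A minimal sketch of the idea of deriving the eight label facets from a producer metadata record follows; the key names are hypothetical stand-ins, not the actual GEO label service schema:

```python
# Sketch: mark each of the eight GEO label facets as available or not,
# based on whether the metadata record carries content for it.

FACETS = ["producer_profile", "producer_comments", "lineage",
          "standards_compliance", "quality", "user_feedback",
          "expert_reviews", "citations"]

def geo_label(record):
    """Map a metadata record to per-facet availability flags."""
    return {facet: bool(record.get(facet)) for facet in FACETS}

record = {"producer_profile": "NERC", "lineage": "Landsat L1T",
          "quality": "RMSE 0.3", "citations": []}
label = geo_label(record)
print(sum(label.values()), "of", len(FACETS), "facets available")
```

A renderer could then draw each wheel facet filled or empty from these flags, with drill-down links into the underlying ISO/FGDC metadata for the facets that are present.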

  16. Internet from Above.

    Science.gov (United States)

    Sullivan, Laura

    1998-01-01

    Explains how fast and reliable Internet access can be obtained by using satellite communications based on experiences at a high school in Mississippi. Discusses Internet communications; how it was implemented in the media center; local area networks; the need for Ethernet-based connection to the Internet; and price. (LRW)

  17. Using a linked data approach to aid development of a metadata portal to support Marine Strategy Framework Directive (MSFD) implementation

    Science.gov (United States)

    Wood, Chris

    2016-04-01

    Under the Marine Strategy Framework Directive (MSFD), EU Member States are mandated to achieve or maintain 'Good Environmental Status' (GES) in their marine areas by 2020, through a series of Programmes of Measures (PoMs). The Celtic Seas Partnership (CSP), an EU LIFE+ project, aims to support policy makers, special-interest groups, users of the marine environment, and other interested stakeholders on MSFD implementation in the Celtic Seas geographical area. As part of this support, a metadata portal has been built to provide a signposting service to datasets that are relevant to MSFD within the Celtic Seas. To ensure that the metadata have the widest possible reach, a linked data approach was employed to construct the database. Although the metadata are stored in a traditional RDBMS, they are exposed as linked data via the D2RQ platform, allowing virtual RDF graphs to be generated. SPARQL queries can be executed against the endpoint, allowing any user to manipulate the metadata. D2RQ's mapping language, based on Turtle, was used to map a wide range of relevant ontologies to the metadata (e.g. the Provenance Ontology (prov-o), Ocean Data Ontology (odo), Dublin Core Elements and Terms (dc & dcterms), Friend of a Friend (foaf), and geospatial ontologies (geo)), allowing users to browse the metadata either via SPARQL queries or via D2RQ's HTML interface. The metadata were further enhanced by mapping relevant parameters to the NERC Vocabulary Server, itself built on a SPARQL endpoint. Additionally, a custom web front-end was built to enable users to browse the metadata and express queries through an intuitive graphical user interface that requires no prior knowledge of SPARQL. As well as providing means to browse the data via MSFD-related parameters (Descriptor, Criteria, and Indicator), the metadata records include the dataset's country of origin, the list of organisations involved in the management of the data, and links to any relevant INSPIRE…
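A D2RQ-style relational-to-RDF mapping of the kind described in this record can be approximated in a few lines: each database row becomes a set of triples serialized here as simple Turtle lines. The base URI and column names are invented for illustration; only the Dublin Core predicate names reflect the ontologies listed above:

```python
# Sketch: expose rows of a relational metadata table as RDF triples.

BASE = "http://example.org/csp/dataset/"  # hypothetical base URI

def row_to_turtle(row):
    """Serialize one metadata row as Turtle-style triple lines."""
    subject = f"<{BASE}{row['id']}>"
    triples = [
        f'{subject} dcterms:title "{row["title"]}" .',
        f'{subject} dcterms:spatial "{row["area"]}" .',
        f'{subject} dc:creator "{row["org"]}" .',
    ]
    return "\n".join(triples)

row = {"id": "42", "title": "Celtic Seas benthic survey",
       "area": "Celtic Seas", "org": "CSP"}
print(row_to_turtle(row))
```

D2RQ does this virtually at query time rather than materializing the triples, which is what lets the SPARQL endpoint sit directly on top of the existing RDBMS.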

  18. A Metadata Model for E-Learning Coordination through Semantic Web Languages

    Science.gov (United States)

    Elci, Atilla

    2005-01-01

    This paper reports on a study aiming to develop a metadata model for e-learning coordination based on semantic web languages. A survey of e-learning modes are done initially in order to identify content such as phases, activities, data schema, rules and relations, etc. relevant for a coordination model. In this respect, the study looks into the…

  19. Document Classification in Support of Automated Metadata Extraction Form Heterogeneous Collections

    Science.gov (United States)

    Flynn, Paul K.

    2014-01-01

    A number of federal agencies, universities, laboratories, and companies are placing their documents online and making them searchable via metadata fields such as author, title, and publishing organization. To enable this, every document in the collection must be catalogued using the metadata fields. Though time consuming, the task of identifying…

  20. Using the Internet for information about breast cancer: a questionnaire-based study.

    Science.gov (United States)

    Littlechild, Sophie Anna; Barr, Lester

    2013-09-01

    To identify the proportion of breast cancer patients that used the Internet for breast cancer information; to classify patterns of use based on patient demographics; and to evaluate whether using the Internet for this purpose was beneficial or problematic. Also to recognize whether a specific demographic group was more likely to experience problems when using the Internet for breast cancer information. A 10-item questionnaire was given to patients who attended the breast unit at the University Hospital of South Manchester between May and June 2011 following breast cancer treatment within the last 5 years. 200 questionnaires were completed. 50.5% of patients had used the Internet for breast cancer information, with younger (p < …) and higher-income (p < …) patients more likely to have done so; patients from ethnic minorities were more likely to experience problems when using the Internet for breast cancer information. Health professionals need to include a discussion about Internet use in consultations with breast cancer patients. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  1. Advertising, Internet Based Networking Websites (IBNWs) and New Ventures

    OpenAIRE

    Jara, Carlos; Wayburne, Terence

    2012-01-01

    With the explosion of technology, we are finding that our methods of communication are changing rapidly from year to year. The way that we interact with each other, from personal levels to more formal business, is all being affected. With the birth of the Internet we have seen continuous growth of communication methods via this medium, and most recent is the boom of Internet Based Networking Websites (IBNWs), which allow their more than 300 million users to interact with each other. Websites like ...

  2. The Genomic Observatories Metadatabase (GeOMe): A new repository for field and sampling event metadata associated with genetic samples

    Science.gov (United States)

    Deck, John; Gaither, Michelle R.; Ewing, Rodney; Bird, Christopher E.; Davies, Neil; Meyer, Christopher; Riginos, Cynthia; Toonen, Robert J.; Crandall, Eric D.

    2017-01-01

    The Genomic Observatories Metadatabase (GeOMe, http://www.geome-db.org/) is an open access repository for geographic and ecological metadata associated with biosamples and genetic data. Whereas public databases have served as vital repositories for nucleotide sequences, they do not accession all the metadata required for ecological or evolutionary analyses. GeOMe fills this need, providing a user-friendly, web-based interface for both data contributors and data recipients. The interface allows data contributors to create a customized yet standard-compliant spreadsheet that captures the temporal and geospatial context of each biosample. These metadata are then validated and permanently linked to archived genetic data stored in the National Center for Biotechnology Information’s (NCBI’s) Sequence Read Archive (SRA) via unique persistent identifiers. By linking ecologically and evolutionarily relevant metadata with publicly archived sequence data in a structured manner, GeOMe sets a gold standard for data management in biodiversity science. PMID:28771471

  3. The Genomic Observatories Metadatabase (GeOMe: A new repository for field and sampling event metadata associated with genetic samples.

    Directory of Open Access Journals (Sweden)

    John Deck

    2017-08-01

    The Genomic Observatories Metadatabase (GeOMe, http://www.geome-db.org/) is an open access repository for geographic and ecological metadata associated with biosamples and genetic data. Whereas public databases have served as vital repositories for nucleotide sequences, they do not accession all the metadata required for ecological or evolutionary analyses. GeOMe fills this need, providing a user-friendly, web-based interface for both data contributors and data recipients. The interface allows data contributors to create a customized yet standard-compliant spreadsheet that captures the temporal and geospatial context of each biosample. These metadata are then validated and permanently linked to archived genetic data stored in the National Center for Biotechnology Information's (NCBI's) Sequence Read Archive (SRA) via unique persistent identifiers. By linking ecologically and evolutionarily relevant metadata with publicly archived sequence data in a structured manner, GeOMe sets a gold standard for data management in biodiversity science.
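The validation step these two GeOMe records describe might look roughly like the following sketch, which checks that each spreadsheet row carries a usable geospatial and temporal context before linking to an SRA accession. Column names and rules are illustrative assumptions, not GeOMe's actual schema:

```python
# Sketch: validate one spreadsheet row of sampling-event metadata.

REQUIRED = ("sample_id", "lat", "lon", "collection_date", "sra_accession")

def validate_row(row):
    """Return a list of problems; an empty list means the row is valid."""
    problems = [f"missing {k}" for k in REQUIRED if not row.get(k)]
    if row.get("lat") is not None and not -90 <= float(row["lat"]) <= 90:
        problems.append("lat out of range")
    if row.get("lon") is not None and not -180 <= float(row["lon"]) <= 180:
        problems.append("lon out of range")
    return problems

good = {"sample_id": "MBIO-1", "lat": 21.3, "lon": -157.8,
        "collection_date": "2015-06-01", "sra_accession": "SRR123456"}
bad = {"sample_id": "MBIO-2", "lat": 123.0, "lon": -157.8}
print(validate_row(good), validate_row(bad))
```

Only rows that pass such checks would be accessioned and permanently linked to the archived sequence data, which is what makes the combined record usable for downstream ecological analyses.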

  4. Health literacy: a study of internet-based information on advance directives.

    Science.gov (United States)

    Stuart, Peter

    2017-11-28

    The aim of this study was to evaluate the quality and value of web-based information on advance directives. Internet-based information on advance directives was selected because, if it is inaccurate or difficult to understand, patients risk making decisions about their care that may not be followed in practice. Two validated health information evaluation tools, the Suitability Assessment of Materials and DISCERN, and a focus group were used to assess credibility, user orientation and effectiveness. Only one of the 34 internet-based information items on advance directives reviewed fulfilled the study criteria and 30% of the sites were classed as unreadable. In terms of learning and informing, 79% of the sites were considered unsuitable. Using health literacy tools to evaluate internet-based health information highlights that often it is not at a functional literacy level and neither informs nor empowers users to make independent and valid healthcare decisions. ©2017 RCN Publishing Company Ltd. All rights reserved. Not to be copied, transmitted or recorded in any way, in whole or part, without prior permission of the publishers.

  5. Concurrent array-based queue

    Science.gov (United States)

    Heidelberger, Philip; Steinmacher-Burow, Burkhard

    2015-01-06

    According to one embodiment, a method for implementing an array-based queue in memory of a memory system that includes a controller includes configuring, in the memory, metadata of the array-based queue. The configuring comprises defining, in metadata, an array start location in the memory for the array-based queue, defining, in the metadata, an array size for the array-based queue, defining, in the metadata, a queue top for the array-based queue and defining, in the metadata, a queue bottom for the array-based queue. The method also includes the controller serving a request for an operation on the queue, the request providing the location in the memory of the metadata of the queue.
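The scheme in this record can be sketched as a ring buffer whose metadata (array start location, array size, queue top, queue bottom) fully describes the queue state. This is an illustrative single-threaded model of the data layout, not the patented controller logic or its concurrency handling:

```python
# Sketch: an array-based queue living inside a flat "memory" array,
# described entirely by four metadata fields.

class ArrayQueue:
    def __init__(self, memory, start, size):
        self.memory = memory      # backing storage shared with other data
        self.start = start        # array start location in memory
        self.size = size          # array size (capacity)
        self.top = 0              # monotonically increasing write count
        self.bottom = 0           # monotonically increasing read count

    def enqueue(self, value):
        if self.top - self.bottom >= self.size:
            raise IndexError("queue full")
        self.memory[self.start + (self.top % self.size)] = value
        self.top += 1

    def dequeue(self):
        if self.top == self.bottom:
            raise IndexError("queue empty")
        value = self.memory[self.start + (self.bottom % self.size)]
        self.bottom += 1
        return value

mem = [None] * 16
q = ArrayQueue(mem, start=4, size=4)
for v in (10, 20, 30):
    q.enqueue(v)
print(q.dequeue(), q.dequeue())  # prints: 10 20
```

Keeping `top` and `bottom` as counters (reduced modulo the array size only at access time) makes full and empty states unambiguous, a common choice for hardware-managed queues like the one claimed here.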

  6. Internet MEMS design tools based on component technology

    Science.gov (United States)

    Brueck, Rainer; Schumer, Christian

    1999-03-01

    The micro electromechanical systems (MEMS) industry in Europe is characterized by small and medium-sized enterprises specialized in products that solve problems in specific domains like medicine, automotive sensor technology, etc. In this field of business the technology-driven design approach known from microelectronics is not appropriate. Instead, each design problem aims at its own specific technology to be used for the solution. The variety of technologies at hand, like Si-surface, Si-bulk, LIGA, laser, and precision engineering, requires a huge set of different design tools to be available. No single SME can afford to hold licenses for all these tools. This calls for a new and flexible way of designing, implementing and distributing design software. The Internet provides a flexible manner of offering software access along with methodologies of flexible licensing, e.g. on a pay-per-use basis. New communication technologies like ADSL or TV cable or satellites as carriers promise to offer sufficient bandwidth even for interactive tools with graphical interfaces in the near future. INTERLIDO is an experimental tool suite for process specification and layout verification for lithography-based MEMS technologies to be accessed via the Internet. The first version provides a Java implementation even including a graphical editor for process specification. Currently, a new version is brought into operation that is based on JavaBeans component technology. JavaBeans offers the possibility to realize independent interactive design assistants, like a design rule checking assistant, a process consistency checking assistant, a technology definition assistant, a graphical editor assistant, etc., that may reside distributed over the Internet, communicating via Internet protocols. Each potential user is thus able to configure his own dedicated version of a design tool set dedicated to the requirements of the current problem to be solved.

  7. Radiological dose and metadata management

    International Nuclear Information System (INIS)

    Walz, M.; Madsack, B.; Kolodziej, M.

    2016-01-01

    This article describes the features of management systems currently available in Germany for extraction, registration and evaluation of metadata from radiological examinations, particularly in the digital imaging and communications in medicine (DICOM) environment. In addition, the probable relevant developments in this area concerning radiation protection legislation, terminology, standardization and information technology are presented. (orig.) [de

  8. Development of RESTful services and map-based user interface tools for access and delivery of data and metadata from the Marine-Geo Digital Library

    Science.gov (United States)

    Morton, J. J.; Ferrini, V. L.

    2015-12-01

    The Marine Geoscience Data System (MGDS, www.marine-geo.org) operates an interactive digital data repository and metadata catalog that provides access to a variety of marine geology and geophysical data from throughout the global oceans. Its Marine-Geo Digital Library includes common marine geophysical data types and supporting data and metadata, as well as complementary long-tail data. The Digital Library also includes community data collections and custom data portals for the GeoPRISMS, MARGINS and Ridge2000 programs, for active source reflection data (Academic Seismic Portal), and for marine data acquired by the US Antarctic Program (Antarctic and Southern Ocean Data Portal). Ensuring that these data are discoverable not only through our own interfaces but also through standards-compliant web services is critical for enabling investigators to find data of interest. Over the past two years, MGDS has developed several new RESTful web services that enable programmatic access to metadata and data holdings. These web services are compliant with the EarthCube GeoWS Building Blocks specifications and are currently used to drive our own user interfaces. New web applications have also been deployed to provide a more intuitive user experience for searching, accessing and browsing metadata and data. Our new map-based search interface combines components of the Google Maps API with our web services for dynamic searching and exploration of geospatially constrained data sets. Direct introspection of nearly all data formats for the hundreds of thousands of data files curated in the Marine-Geo Digital Library has yielded precise geographic bounds, enabling geographic searches to an extent not previously possible. All MGDS map interfaces utilize the web services of the Global Multi-Resolution Topography (GMRT) synthesis for displaying global basemap imagery and for dynamically providing depth values at the cursor location.
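    The geospatially constrained search described above reduces, at its core, to a bounding-box intersection test over per-file geographic bounds. A minimal sketch in Python; the record IDs and field names are invented for illustration and do not reproduce the actual Marine-Geo web service API:

    ```python
    # Sketch of a map-driven search: keep only records whose geographic
    # bounds intersect the user's search box. Records are hypothetical.

    def within_bbox(record, west, south, east, north):
        """True if a record's bounding box intersects the search box."""
        return not (record["east"] < west or record["west"] > east or
                    record["north"] < south or record["south"] > north)

    records = [
        {"id": "MGL0910", "west": -130.0, "east": -120.0, "south": 40.0, "north": 48.0},
        {"id": "EW0210",  "west": 10.0,   "east": 20.0,   "south": -5.0, "north": 5.0},
    ]

    # A search box over the northeast Pacific matches only the first record.
    hits = [r["id"] for r in records if within_bbox(r, -135.0, 35.0, -115.0, 50.0)]
    print(hits)  # ['MGL0910']
    ```

    Precise per-file bounds, rather than whole-cruise extents, are what make this kind of filter return tight result sets.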

  9. Internet-based intelligent information processing systems

    CERN Document Server

    Tonfoni, G; Ichalkaranje, N S

    2003-01-01

    The Internet/WWW has made it possible to easily access quantities of information never available before. However, both the amount of information and the variation in quality pose obstacles to the efficient use of the medium. Artificial intelligence techniques can be useful tools in this context. Intelligent systems can be applied to searching the Internet and data-mining, interpreting Internet-derived material, the human-Web interface, remote condition monitoring and many other areas. This volume presents the latest research on the interaction between intelligent systems (neural networks, adap

  10. Internet-Based Approaches to Building Stakeholder Networks for Conservation and Natural Resource Management

    OpenAIRE

    Kreakie, B. J.; Hychka, K. C.; Belaire, J. A.; Minor, E.; Walker, H. A.

    2015-01-01

    Social network analysis (SNA) is based on a conceptual network representation of social interactions and is an invaluable tool for conservation professionals to increase collaboration, improve information flow, and increase efficiency. We present two approaches to constructing internet-based social networks, and use an existing traditional (survey-based) case study to illustrate in a familiar context the deviations in methods and results. Internet-based approaches to SNA offer a means to over...

  11. Trip attraction rates of shopping centers in Northern New Castle County, Delaware.

    Science.gov (United States)

    2004-07-01

    This report presents the trip attraction rates of the shopping centers in Northern New Castle County in Delaware. The study aims to provide an alternative to the ITE Trip Generation Manual (1997) for computing the trip attraction of shopping centers ...

  12. 36 CFR 7.71 - Delaware Water Gap National Recreation Area.

    Science.gov (United States)

    2010-07-01

    ... THE INTERIOR SPECIAL REGULATIONS, AREAS OF THE NATIONAL PARK SYSTEM § 7.71 Delaware Water Gap National... route begins at the Smithfield Beach parking area and is in two loops. Loop One is a small trail... number of axles and wheels on a vehicle, regardless of load or weight, as follows: (i) Two-axle car, van...

  13. A Comparison of Internet-Based Participant Recruitment Methods: Engaging the Hidden Population of Cannabis Users in Research

    Directory of Open Access Journals (Sweden)

    Elizabeth Clare Temple

    2011-01-01

    Full Text Available While a growing number of researchers are embracing Internet-based data collection methods, the adoption of Internet-based recruitment methods has been relatively slow. This may be because little is known regarding the relative strengths and weaknesses of different methods of Internet-based participant recruitment, nor how these different recruitment strategies impact on the data collected. These issues are addressed in this article with reference to a study comparing the effectiveness of three Internet-based strategies in recruiting cannabis users for an online study. Consideration of the recruitment data leads us to recommend that researchers use multipronged Internet-based recruitment campaigns with appropriately detailed recruitment messages tailored to the population of interest and located carefully to ensure they reach the intended audience. Further, we suggest that building rapport directly with potential participants, or utilising derived rapport and implicit endorsements, is an important aspect of successful Internet-based participant recruitment strategies.

  14. Batch metadata assignment to archival photograph collections using facial recognition software

    Directory of Open Access Journals (Sweden)

    Kyle Banerjee

    2013-07-01

    Full Text Available Useful metadata is essential to giving individual images meaning and value within the context of a greater image collection, as well as to making them more discoverable. However, often little information is available about the photos themselves, so adding consistent metadata to large collections of digital and digitized photographs is a time-consuming process requiring highly experienced staff. By using facial recognition software, staff can identify individuals more quickly and reliably. Knowledge of the individuals in photos helps staff determine when and where photos were taken and also improves understanding of the subject matter. This article demonstrates simple techniques for using facial recognition software and command-line tools to assign, modify, and read metadata for large archival photograph collections.
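    The batch-assignment step described above can be sketched as a simple mapping from facial-recognition matches to authorised name forms. The match data, person IDs, and name authority below are hypothetical illustrations, not tooling from the article:

    ```python
    # Minimal sketch: given facial-recognition matches (photo -> IDs of
    # people recognised in it), produce a consistent subject list per
    # file, resolved against a name authority. All data is invented.

    name_authority = {"p001": "Jane Doe", "p002": "John Smith"}

    matches = {
        "box12/img_0041.jpg": ["p001", "p002"],
        "box12/img_0042.jpg": ["p002"],
    }

    def assign_subjects(matches, authority):
        """Resolve recognised person IDs to authorised name forms per photo."""
        return {photo: sorted(authority[pid] for pid in pids)
                for photo, pids in matches.items()}

    metadata = assign_subjects(matches, name_authority)
    print(metadata["box12/img_0041.jpg"])  # ['Jane Doe', 'John Smith']
    ```

    In practice a command-line tagger such as ExifTool would then write these subject lists into the image files; that step is omitted here.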

  15. Phosphorus retention data and metadata

    Science.gov (United States)

    Phosphorus retention in wetlands data and metadata. This dataset is associated with the following publication: Lane, C., and B. Autrey. Phosphorus retention of forested and emergent marsh depressional wetlands in differing land uses in Florida, USA. Wetlands Ecology and Management (Springer), 24(1): 45-60, (2016).

  16. NCPP's Use of Standard Metadata to Promote Open and Transparent Climate Modeling

    Science.gov (United States)

    Treshansky, A.; Barsugli, J. J.; Guentchev, G.; Rood, R. B.; DeLuca, C.

    2012-12-01

    The National Climate Predictions and Projections (NCPP) Platform is developing comprehensive regional and local information about the evolving climate to inform decision making and adaptation planning. This includes both creating and providing tools to create metadata about the models and processes used to create its derived data products. NCPP is using the Common Information Model (CIM), an ontology developed by a broad set of international partners in climate research, as its metadata language. This use of a standard ensures interoperability within the climate community as well as permitting access to the ecosystem of tools and services emerging alongside the CIM. The CIM itself is divided into a general-purpose (UML & XML) schema which structures metadata documents, and a project- or community-specific (XML) Controlled Vocabulary (CV) which constrains the content of metadata documents. NCPP has already modified the CIM Schema to accommodate downscaling models, simulations, and experiments. NCPP is currently developing a CV for use by the downscaling community. Incorporating downscaling into the CIM will lead to several benefits: easy access to the existing CIM documents describing the CMIP5 models and simulations that are being downscaled; access to software tools that have been developed to search, manipulate, and visualize CIM metadata; and coordination with national and international efforts such as ES-DOC that are working to make climate model descriptions and datasets interoperable. Providing detailed metadata descriptions which include the full provenance of derived data products will contribute to making that data (and the models and processes which generated that data) more open and transparent to the user community.
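    The schema-versus-CV split described above can be illustrated with a toy validator: the schema fixes which fields a metadata document carries, while the Controlled Vocabulary constrains their values. The vocabulary terms and document fields below are invented for illustration, not the actual downscaling CV under development:

    ```python
    # Toy sketch of CV-constrained metadata content. A field passes if
    # its value is one of the terms the community vocabulary allows.
    # Both the vocabulary and the sample document are hypothetical.

    controlled_vocab = {
        "downscaling_method": {"statistical", "dynamical", "hybrid"},
        "calendar": {"gregorian", "noleap", "360_day"},
    }

    def validate(doc, cv):
        """Return the fields whose values fall outside the CV."""
        return [field for field, value in doc.items()
                if field in cv and value not in cv[field]]

    doc = {"downscaling_method": "statistical", "calendar": "julian"}
    print(validate(doc, controlled_vocab))  # ['calendar']
    ```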

  17. Conviviality of internet social networks: An exploratory study of internet campaigns in Iran

    OpenAIRE

    Ameripour, Aghil; Nicholson, Brian; Newman, Michael

    2010-01-01

    In this study, we focus on the relationship between Internet social networks and societal change by examining case studies of the impact of Internet-based campaigns in Iran. Ivan Illich's theory of Conviviality of Tools enables an analysis of the conviviality of the Internet. Subsequently, this conceptual lens is used to examine empirical data from two Internet-based campaigns. The paper contributes theoretical and practical implications regarding conviviality of Internet social networks and ...

  18. New Tools to Document and Manage Data/Metadata: Example NGEE Arctic and ARM

    Science.gov (United States)

    Crow, M. C.; Devarakonda, R.; Killeffer, T.; Hook, L.; Boden, T.; Wullschleger, S.

    2017-12-01

    Tools used for documenting, archiving, cataloging, and searching data are critical pieces of informatics. This poster describes tools being used in several projects at Oak Ridge National Laboratory (ORNL), with a focus on the U.S. Department of Energy's Next Generation Ecosystem Experiment in the Arctic (NGEE Arctic) and Atmospheric Radiation Measurements (ARM) project, and their usage at different stages of the data lifecycle. The Online Metadata Editor (OME) is used for the documentation and archival stages while a Data Search tool supports indexing, cataloging, and searching. The NGEE Arctic OME Tool [1] provides a method by which researchers can upload their data and provide original metadata with each upload while adhering to standard metadata formats. The tool is built upon a Java SPRING framework to parse user input into, and from, XML output. Many aspects of the tool require use of a relational database including encrypted user-login, auto-fill functionality for predefined sites and plots, and file reference storage and sorting. The Data Search Tool conveniently displays each data record in a thumbnail containing the title, source, and date range, and features a quick view of the metadata associated with that record, as well as a direct link to the data. The search box incorporates autocomplete capabilities for search terms and sorted keyword filters are available on the side of the page, including a map for geo-searching. These tools are supported by the Mercury [2] consortium (funded by DOE, NASA, USGS, and ARM) and developed and managed at Oak Ridge National Laboratory. Mercury is a set of tools for collecting, searching, and retrieving metadata and data. Mercury collects metadata from contributing project servers, then indexes the metadata to make it searchable using Apache Solr, and provides access to retrieve it from the web page. Metadata standards that Mercury supports include: XML, Z39.50, FGDC, Dublin-Core, Darwin-Core, EML, and ISO-19115.
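    Among the metadata standards Mercury supports is Dublin Core, whose records are plain XML and straightforward to parse. A minimal sketch of reading such a record with the Python standard library; the sample record itself is invented for illustration:

    ```python
    # Sketch of extracting fields from a simple Dublin Core metadata
    # record, one of the standards Mercury harvests and indexes.
    import xml.etree.ElementTree as ET

    record = """<metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
      <dc:title>Soil temperature, Barrow site</dc:title>
      <dc:creator>NGEE Arctic</dc:creator>
      <dc:date>2014-08-01</dc:date>
    </metadata>"""

    root = ET.fromstring(record)
    # Strip the namespace prefix from each child tag to get plain field names.
    fields = {elem.tag.split("}")[1]: elem.text for elem in root}
    print(fields["title"])  # Soil temperature, Barrow site
    ```

    A harvester would run this kind of extraction over each contributed record before handing the fields to the indexer.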

  19. Advanced reservoir characterization for improved oil recovery in a New Mexico Delaware basin project

    Energy Technology Data Exchange (ETDEWEB)

    Martin, F.D.; Kendall, R.P.; Whitney, E.M. [Dave Martin and Associates, Inc., Socorro, NM (United States)] [and others]

    1997-08-01

    The Nash Draw Brushy Canyon Pool in Eddy County, New Mexico is a field demonstration site in the Department of Energy Class III program. The basic problem at the Nash Draw Pool is the low recovery typically observed in similar Delaware fields. By comparing a control area using standard infill drilling techniques to a pilot area developed using advanced reservoir characterization methods, the goal of the project is to demonstrate that advanced technology can significantly improve oil recovery. During the first year of the project, four new producing wells were drilled, serving as data acquisition wells. Vertical seismic profiles and a 3-D seismic survey were acquired to assist in interwell correlations and facies prediction. Limited surface access at the Nash Draw Pool, caused by the proximity of underground potash mining and surface playa lakes, limits development with conventional drilling. Combinations of vertical and horizontal wells with selective completions are being evaluated to optimize production performance. Based on the production response of similar Delaware fields, pressure maintenance is a likely requirement at the Nash Draw Pool. A detailed reservoir model of the pilot area was developed, and enhanced recovery options, including waterflooding, lean gas, and carbon dioxide injection, are being evaluated.

  20. Secure Web-based Ground System User Interfaces over the Open Internet

    Science.gov (United States)

    Langston, James H.; Murray, Henry L.; Hunt, Gary R.

    1998-01-01

    A prototype has been developed which makes use of commercially available products in conjunction with the Java programming language to provide a secure user interface for command and control over the open Internet. This paper reports successful demonstration of: (1) Security over the Internet, including encryption and certification; (2) Integration of Java applets with a COTS command and control product; (3) Remote spacecraft commanding using the Internet. The Java-based Spacecraft Web Interface to Telemetry and Command Handling (Jswitch) ground system prototype provides these capabilities. This activity demonstrates the use and integration of current technologies to enable a spacecraft engineer or flight operator to monitor and control a spacecraft from a user interface communicating over the open Internet using standard World Wide Web (WWW) protocols and commercial off-the-shelf (COTS) products. The core command and control functions are provided by the COTS Epoch 2000 product. The standard WWW tools and browsers are used in conjunction with the Java programming technology. Security is provided with the current encryption and certification technology. This system prototype is a step in the direction of giving scientists and flight operators Web-based access to instrument, payload, and spacecraft data.

  1. The Environmental Assessment and Management (TEAM) Guide: Delaware Supplement

    Science.gov (United States)

    2010-01-01

    Department of Health and Social Services - Reports to or from, or investigations by, the Delaware Department of Transportation... Satyrium kingi) Rare Skipper (Problema bulenta) Mulberry Wing (Poanes massasoit chermocki) 5-20 Natural Resources Management Mammals... Schools T2.20.1.DE. Radon Management According to Guidelines for Persons Qualified to Provide Radon Services of the Delaware Health and Social Services

  2. Internet-based guided self-help for posttraumatic stress disorder (PTSD): Randomized controlled trial.

    Science.gov (United States)

    Lewis, Catrin E; Farewell, Daniel; Groves, Vicky; Kitchiner, Neil J; Roberts, Neil P; Vick, Tracey; Bisson, Jonathan I

    2017-06-01

    There are numerous barriers that limit access to evidence-based treatment for posttraumatic stress disorder (PTSD). Internet-based guided self-help is a treatment option that may help widen access to effective intervention, but the approach has not been sufficiently explored for the treatment of PTSD. Forty-two adults with DSM-5 PTSD of mild to moderate severity were randomly allocated to internet-based self-help with up to 3 h of therapist assistance, or to a delayed treatment control group. The internet-based program included eight modules that focused on psychoeducation, grounding, relaxation, behavioural activation, real-life and imaginal exposure, cognitive therapy, and relapse prevention. The primary outcome measure was reduction in clinician-rated traumatic stress symptoms using the Clinician-Administered PTSD Scale for DSM-5 (CAPS-5). Secondary outcomes were self-reported PTSD symptoms, depression, anxiety, alcohol use, perceived social support, and functional impairment. Posttreatment, the internet-based guided self-help group had significantly lower clinician-assessed PTSD symptoms than the delayed treatment control group (between-group effect size Cohen's d = 1.86). The difference was maintained at 1-month follow-up and dissipated once both groups had received treatment. Similar patterns of difference between the two groups were found for depression, anxiety, and functional impairment. The average contact with treating clinicians was 2½ h. Internet-based trauma-focused guided self-help for PTSD is a promising treatment option that requires far less therapist time than current first-line face-to-face psychological therapy. © 2017 Wiley Periodicals, Inc.

  3. Leveraging Metadata to Create Interactive Images... Today!

    Science.gov (United States)

    Hurt, Robert L.; Squires, G. K.; Llamas, J.; Rosenthal, C.; Brinkworth, C.; Fay, J.

    2011-01-01

    The image gallery for NASA's Spitzer Space Telescope has been newly rebuilt to fully support the Astronomy Visualization Metadata (AVM) standard, creating a new user experience both on the website and in other applications. We encapsulate all the key descriptive information for a public image, including color representations and astronomical and sky coordinates, and make it accessible in a user-friendly form on the website, while also embedding the same metadata within the image files themselves. Thus, images downloaded from the site carry with them all their descriptive information. Real-world benefits include display of general metadata when such images are imported into image editing software (e.g. Photoshop) or image catalog software (e.g. iPhoto). More advanced support in Microsoft's WorldWide Telescope can open a tagged image after it has been downloaded and display it in its correct sky position, allowing comparison with observations from other observatories. An increasing number of software developers are implementing AVM support in applications, and an online image archive for tagged images is under development at the Spitzer Science Center. Tagging images following the AVM offers ever-increasing benefits to public-friendly imagery in all its standard forms (JPEG, TIFF, PNG). The AVM standard is one part of the Virtual Astronomy Multimedia Project (VAMP); http://www.communicatingastronomy.org
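    AVM tags travel inside the image file as an XMP packet, which is why downloaded images keep their descriptive information. A minimal sketch of locating such a packet in raw image bytes by its standard `<?xpacket begin=` / `<?xpacket end=` delimiters; the embedded packet below is a fabricated minimal example, not real Spitzer metadata:

    ```python
    # Sketch: find an embedded XMP packet in image bytes by scanning for
    # the standard xpacket delimiters. The "image" here is fake bytes.

    def extract_xmp(data):
        """Return the XMP packet bytes, or None if no packet is found."""
        start = data.find(b"<?xpacket begin=")
        if start == -1:
            return None
        end = data.find(b"<?xpacket end=", start)
        if end == -1:
            return None
        end = data.find(b"?>", end)
        return data[start:end + 2] if end != -1 else None

    fake_jpeg = (b"\xff\xd8...image data..."
                 b'<?xpacket begin="\xef\xbb\xbf" id="W5M0MpCehiHzreSzNTczkc9d"?>'
                 b"<x:xmpmeta xmlns:x='adobe:ns:meta/'>...</x:xmpmeta>"
                 b'<?xpacket end="w"?>'
                 b"...more image data...")

    packet = extract_xmp(fake_jpeg)
    print(packet is not None and b"xmpmeta" in packet)  # True
    ```

    Real readers such as Photoshop or WorldWide Telescope parse the packet's RDF/XML payload rather than scanning bytes, but the packet delimiters are the same.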

  4. 226Ra and 228Ra in the mixing zones of the Pee Dee River-Winyah Bay, Yangtze River and Delaware Bay Estuaries

    International Nuclear Information System (INIS)

    Elsinger, R.J.; Moore, W.S.

    1984-01-01

    226Ra and 228Ra have non-conservative excess concentrations in the mixing zones of the Pee Dee River-Winyah Bay estuary, the Yangtze River estuary, and the Delaware Bay estuary. Laboratory experiments, using Pee Dee River sediment, indicate desorption of 226Ra to increase with increasing salinities up to 20 per mille. In Winyah Bay desorption from river-borne sediments could contribute almost all of the increases for both isotopes. Desorption adds only a portion of the excess 228Ra measured in the Yangtze River and adjacent shelf waters and Delaware Bay. In the Yangtze River the mixing zone extends over a considerable portion of the continental shelf, where 228Ra is added to the water column by diffusion from bottom sediments, while 226Ra concentrations decrease from dilution. Diffusion of 228Ra from bottom sediments in Delaware Bay primarily occurs in the upper part of the bay; a 228Ra flux of 0.33 dpm cm-2 per year was determined for Delaware Bay. (author)

  5. Drivers of Adoption and Implementation of Internet-Based Marketing Channels

    DEFF Research Database (Denmark)

    Nielsen, Jørn Flohr; Mols, Niels Peter; Høst, Viggo

    2007-01-01

    This chapter analyses factors influencing manufacturers' adoption and implementation of Internet-based marketing channels, using models based on marketing channel and organisational innovation theory. Survey data from 1163 Danish, Finnish, and Swedish manufacturers form the empirical basis for te...

  6. Dealing with metadata quality: the legacy of digital library efforts

    OpenAIRE

    Tani, Alice; Candela, Leonardo; Castelli, Donatella

    2013-01-01

    In this work, we elaborate on the meaning of metadata quality by surveying efforts and experiences matured in the digital library domain. In particular, an overview of the frameworks developed to characterize such a multi-faceted concept is presented. Moreover, the most common quality-related problems affecting metadata both during the creation and the aggregation phase are discussed together with the approaches, technologies and tools developed to mitigate them. This survey on digital librar...

  7. Making Information Visible, Accessible, and Understandable: Meta-Data and Registries

    Science.gov (United States)

    2007-07-01

    the data created, the length of play time, album name, and the genre. Without resource metadata, portable digital music players would not be so... notion of a catalog card in a library. An example of metadata is the description of a music file specifying the creator, the artist that performed the song... describe structure and formatting, which are critical to interoperability and the management of databases. Going back to the portable music player example...

  8. Cooperative Cloud Service Aware Mobile Internet Coverage Connectivity Guarantee Protocol Based on Sensor Opportunistic Coverage Mechanism

    Directory of Open Access Journals (Sweden)

    Qin Qin

    2015-01-01

    Full Text Available In order to improve the Internet coverage ratio and provide connectivity guarantees, we proposed a coverage connectivity guarantee protocol for the mobile Internet, based on a sensor opportunistic coverage mechanism and cooperative cloud services. In this scheme, based on the opportunistic covering rules, a network coverage algorithm with high reliability and real-time security was achieved by using the opportunities of sensor nodes and mobile Internet nodes. Then, a cloud service business support platform was created based on Internet application service management capabilities and wireless sensor network communication service capabilities, forming the architecture of the cloud support layer. A cooperative cloud service aware model was proposed. Finally, we proposed the mobile Internet coverage connectivity guarantee protocol. The results of experiments demonstrate that the proposed algorithm has excellent performance in terms of Internet security, stability, and coverage connectivity.

  9. SM4AM: A Semantic Metamodel for Analytical Metadata

    DEFF Research Database (Denmark)

    Varga, Jovan; Romero, Oscar; Pedersen, Torben Bach

    2014-01-01

    Next generation BI systems emerge as platforms where traditional BI tools meet semi-structured and unstructured data coming from the Web. In these settings, the user-centric orientation represents a key characteristic for the acceptance and wide usage by numerous and diverse end users in their data... We present SM4AM, a Semantic Metamodel for Analytical Metadata created as an RDF formalization of the Analytical Metadata artifacts needed for user assistance exploitation purposes in next generation BI systems. We consider the Linked Data initiative and its relevance for user assistance...

  10. The Politics of Race and Educational Disparities in Delaware's Public Schools

    Science.gov (United States)

    Davis, Theodore J., Jr.

    2017-01-01

    Delaware has long played a pivotal role in the nation's struggle to end school segregation and promote educational equality. This article discusses racial disparities in educational achievement and outcomes by examining the state's political history and the politics of race in public education. This article explores educational disparities from a…

  11. CHARMe Commentary metadata for Climate Science: collecting, linking and sharing user feedback on climate datasets

    Science.gov (United States)

    Blower, Jon; Lawrence, Bryan; Kershaw, Philip; Nagni, Maurizio

    2014-05-01

    The research process can be thought of as an iterative activity, initiated on the basis of prior domain knowledge as well as a number of external inputs, and producing a range of outputs including datasets, studies and peer-reviewed publications. These outputs may describe the problem under study, the methodology used, the results obtained, etc. In any new publication, the author may cite or comment on other papers or datasets in order to support their research hypothesis. However, as their work progresses, the researcher may draw from many other latent channels of information. These could include, for example, a private conversation following a lecture or during a social dinner, or an opinion expressed concerning some significant event such as an earthquake or a satellite failure. In addition, public sources of grey literature are important, such as informal papers (e.g. arXiv deposits), reports and studies. The climate science community is no exception to this pattern; the CHARMe project, funded under the European FP7 framework, is developing an online system for collecting and sharing user feedback on climate datasets. This is to help users judge how suitable such climate data are for an intended application. The user feedback could be comments about assessments, citations, or provenance of the dataset, or other information such as descriptions of uncertainty or data quality. We define this as a distinct category of metadata called commentary or C-metadata. We link C-metadata with target climate datasets using a Linked Data approach via the Open Annotation data model. In the context of Linked Data, C-metadata plays the role of a resource which, depending on its nature, may be accessed as simple text or as more structured content. The project is implementing a range of software tools to create, search or visualize C-metadata, including a JavaScript plugin enabling this functionality to be integrated in situ with data provider portals
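    In the Open Annotation model used above, a piece of commentary is an annotation whose body is the user's comment and whose target is the dataset it concerns. A minimal sketch of such a record as a JSON-LD-style dictionary; the dataset URI and comment text are hypothetical, not CHARMe data:

    ```python
    # Sketch of C-metadata as an Open Annotation: a body (the comment)
    # linked to a target (the climate dataset). URIs are invented.

    annotation = {
        "@context": "http://www.w3.org/ns/oa-context-20130208.json",
        "@type": "oa:Annotation",
        "hasBody": {
            "@type": "cnt:ContentAsText",
            "chars": "Known dry bias over the Sahel before 1983.",
        },
        "hasTarget": {"@id": "http://example.org/datasets/precip-v2"},
        "motivatedBy": "oa:commenting",
    }

    print(annotation["hasTarget"]["@id"])  # http://example.org/datasets/precip-v2
    ```

    Because body and target are separate resources, the same comment can later be linked to further targets (e.g. a derived product), which is what makes the Linked Data approach a good fit for commentary.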

  12. Teachers' Attitudes toward Web-Based Professional Development, with Relation to Internet Self-Efficacy and Beliefs about Web-Based Learning

    Science.gov (United States)

    Kao, Chia-Pin; Tsai, Chin-Chung

    2009-01-01

    This study was conducted to explore the relationships between teachers' Internet self-efficacy, beliefs about web-based learning and attitudes toward web-based professional development. The sample of this study included 421 teachers, coming from 20 elementary schools in Taiwan. The three instruments used to assess teachers' Internet self-efficacy…

  13. Structural Metadata Research in the Ears Program

    National Research Council Canada - National Science Library

    Liu, Yang; Shriberg, Elizabeth; Stolcke, Andreas; Peskin, Barbara; Ang, Jeremy; Hillard, Dustin; Ostendorf, Mari; Tomalin, Marcus; Woodland, Phil; Harper, Mary

    2005-01-01

    Both human and automatic processing of speech require recognition of more than just words. In this paper we provide a brief overview of research on structural metadata extraction in the DARPA EARS rich transcription program...

  14. Building a High Performance Metadata Broker using Clojure, NoSQL and Message Queues

    Science.gov (United States)

    Truslove, I.; Reed, S.

    2013-12-01

    In practice, Earth and Space Science Informatics often relies on getting more done with less: fewer hardware resources, less IT staff, fewer lines of code. As a capacity-building exercise focused on rapid development of high-performance geoinformatics software, the National Snow and Ice Data Center (NSIDC) built a prototype metadata brokering system using a new JVM language, modern database engines and virtualized or cloud computing resources. The metadata brokering system was developed with the overarching goals of (i) demonstrating a technically viable product with as little development effort as possible, (ii) using very new yet very popular tools and technologies in order to get the most value from the least legacy-encumbered code bases, and (iii) being a high-performance system by using scalable subcomponents, and implementation patterns typically used in web architectures. We implemented the system using the Clojure programming language (an interactive, dynamic, Lisp-like JVM language), Redis (a fast in-memory key-value store) as both the data store for original XML metadata content and as the provider for the message queueing service, and ElasticSearch for its search and indexing capabilities to generate search results. On evaluating the results of the prototyping process, we believe that the technical choices did in fact allow us to do more for less, due to the expressive nature of the Clojure programming language and its easy interoperability with Java libraries, and the successful reuse or re-application of high performance products or designs. This presentation will describe the architecture of the metadata brokering system, cover the tools and techniques used, and describe lessons learned, conclusions, and potential next steps.
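    The broker's pipeline - archive the original XML, queue a message, index the record, then search the index - can be sketched in a few lines. Here Redis and ElasticSearch are stood in for by a deque and a plain inverted-index dict, and the records are invented; this illustrates the data flow, not NSIDC's Clojure implementation:

    ```python
    # In-memory analogue of the metadata broker's store/queue/index flow.
    from collections import defaultdict, deque

    queue = deque()            # stands in for the Redis message queue
    store = {}                 # stands in for the Redis key-value store
    index = defaultdict(set)   # stands in for the ElasticSearch index

    def ingest(doc_id, xml_text, title):
        store[doc_id] = xml_text        # archive the original XML as-is
        queue.append((doc_id, title))   # hand off to the indexing worker

    def index_worker():
        while queue:
            doc_id, title = queue.popleft()
            for term in title.lower().split():
                index[term].add(doc_id)

    def search(term):
        return sorted(index[term.lower()])

    ingest("nsidc-0051", "<dif>...</dif>", "Sea Ice Concentrations")
    ingest("nsidc-0478", "<dif>...</dif>", "Ice Sheet Velocity")
    index_worker()
    print(search("ice"))  # ['nsidc-0051', 'nsidc-0478']
    ```

    Decoupling ingest from indexing through the queue is what lets the real system scale the two stages independently.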

  15. Internet user behaviour

    Directory of Open Access Journals (Sweden)

    Radbâță, A.

    2011-01-01

    Full Text Available The Internet is a useful tool for everybody in a technologically advanced world. As the Internet appears and develops, it creates a totally new network environment. The development of commerce on the Internet based on virtual communities has become one of the most successful business models in the world. After analyzing the concept of the Internet, the e-commerce market and its marketing mix, and the benefits and limitations of the Internet, we present a few studies on Internet user behaviour. Furthermore, the paper looks at a representative sample of Romanian Internet users. The results reveal that Romanians use the Internet especially for information gathering, e-mail, entertainment and social networking.

  16. Research on the cultivation path of smart home-based care service mode in Internet+ vision

    Directory of Open Access Journals (Sweden)

    Peng Qingchao

    2016-01-01

    Full Text Available Home-based care for the aged is an effective way to solve the problem of caring for the aged in China. This thesis analyzes problems existing in the development of current home-based care services for the aged in our country and the positive effects that Internet+ brings to home-based care services. It proposes a new service mode of care for the aged, Internet+ home-based care, and explains the establishment of this system and the responsibilities of the participants. It also explores the path to establishing the Internet+ home-based care service mode so as to promote the healthy development of home-based care services in China.

  17. Virtual Environments for Visualizing Structural Health Monitoring Sensor Networks, Data, and Metadata.

    Science.gov (United States)

    Napolitano, Rebecca; Blyth, Anna; Glisic, Branko

    2018-01-16

    Visualization of sensor networks, data, and metadata is becoming one of the most pivotal aspects of the structural health monitoring (SHM) process. Without the ability to communicate efficiently and effectively between disparate groups working on a project, an SHM system can be underused, misunderstood, or even abandoned. For this reason, this work seeks to evaluate visualization techniques in the field, identify flaws in current practices, and devise a new method for visualizing and accessing SHM data and metadata in 3D. More precisely, the work presented here reflects a method and digital workflow for integrating SHM sensor networks, data, and metadata into a virtual reality environment by combining spherical imaging and informational modeling. Both intuitive and interactive, this method fosters communication on a project, enabling diverse practitioners of SHM to efficiently consult and use the sensor networks, data, and metadata. The method is presented through its implementation on a case study, Streicker Bridge on the Princeton University campus. To illustrate the efficiency of the new method, the time and data file size were compared to those of other potential methods used for visualizing and accessing SHM sensor networks, data, and metadata in 3D. Additionally, feedback from civil engineering students familiar with SHM is used for validation. Recommendations on how different groups working together on an SHM project can create an SHM virtual environment and convey data to the proper audiences are also included.

  18. An object-oriented programming system for the integration of internet-based bioinformatics resources.

    Science.gov (United States)

    Beveridge, Allan

    2006-01-01

    The Internet consists of a vast inhomogeneous reservoir of data. Developing software that can integrate a wide variety of different data sources is a major challenge that must be addressed for the realisation of the full potential of the Internet as a scientific research tool. This article presents a semi-automated object-oriented programming system for integrating web-based resources. We demonstrate that the current Internet standards (HTML, CGI [common gateway interface], Java, etc.) can be exploited to develop a data retrieval system that scans existing web interfaces and then uses a set of rules to generate new Java code that can automatically retrieve data from the Web. The validity of the software has been demonstrated by testing it on several biological databases. We also examine the current limitations of the Internet and discuss the need for the development of universal standards for web-based data.
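
A toy version of the wrapper-generation idea described above (not the authors' system, which scanned real CGI/HTML interfaces and emitted Java) can be sketched in a few lines: scan a form's markup for its action URL and input fields, then emit retrieval code from a template. The form markup below is invented.

```python
import re

# Invented example of a CGI form such as the system might scan.
form_html = """
<form action="/cgi-bin/blast" method="get">
  <input name="sequence"><input name="database">
</form>
"""

def generate_wrapper(html):
    """Emit Python source for a stub that queries the scanned interface."""
    action = re.search(r'action="([^"]+)"', html).group(1)
    fields = re.findall(r'<input name="([^"]+)"', html)
    params = ", ".join(fields)
    body = ", ".join(repr(f) + ": " + f for f in fields)
    return (f"def fetch({params}):\n"
            f"    params = {{{body}}}\n"
            f"    return ('{action}', params)\n")

code = generate_wrapper(form_html)
print(code)   # source code for a fetch(sequence, database) stub
```

The generated stub only assembles the request; a real wrapper would also issue it and parse the response, which is where the rule set the article describes comes in.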

  19. Economic analysis of an internet-based depression prevention intervention.

    Science.gov (United States)

    Ruby, Alexander; Marko-Holguin, Monika; Fogel, Joshua; Van Voorhees, Benjamin W

    2013-09-01

    The transition through adolescence places adolescents at increased risk of depression, yet care-seeking in this population is low, and treatment is often ineffective. In response, we developed an Internet-based depression prevention intervention (CATCH-IT) targeting at-risk adolescents. We explore CATCH-IT program costs, especially safety costs, in the context of an Accountable Care Organization, as well as the perceived value of the Internet program. Total and per-patient costs of development were calculated using an assumed cohort of a 5,000-patient Accountable Care Organization. Total and per-patient costs of implementation were calculated from grant data and the Medicare Resource-Based Relative Value Scale (RBRVS) and were compared to the willingness-to-pay for CATCH-IT and to the cost of current treatment options. The cost-effectiveness of the safety protocol was assessed using the number of safety calls placed and the percentage of patients receiving at least one safety call. The willingness-to-pay for CATCH-IT, a measure of its perceived value, was assessed using post-study questionnaires and was compared to the development cost for a break-even point. We found the total cost of developing the intervention to be USD 138,683.03. Of the total, 54% was devoted to content development, with a per-patient cost of USD 27.74. The total cost of implementation was found to be USD 49,592.25, with a per-patient cost of USD 597.50. Safety costs accounted for 35% of the total cost of implementation. For comparison, the cost of a 15-session group cognitive behavioral therapy (CBT) intervention aimed at at-risk adolescents was USD 1,632 per patient. Safety calls were successfully placed to 96.4% of the study participants. The cost per call was USD 40.51, with a cost per participant of USD 197.99. The willingness-to-pay for the Internet portion of CATCH-IT had a median of USD 40. The break-even point to offset the cost of development was 3,468 individuals.
Developing Internet-based
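
The reported figures can be checked with simple arithmetic: the per-patient development cost is the total development cost spread over the assumed 5,000-patient cohort, and the break-even point is the development cost divided by the median willingness-to-pay, rounded up. A quick Python check:

```python
import math

# Figures as reported in the abstract (all USD).
development_total = 138_683.03      # total development cost
cohort = 5_000                      # assumed ACO cohort size
per_patient_dev = development_total / cohort
print(round(per_patient_dev, 2))    # 27.74, matching the reported figure

median_wtp = 40                     # median willingness-to-pay (Internet portion)
break_even = math.ceil(development_total / median_wtp)
print(break_even)                   # 3468 individuals, as reported
```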

  20. Metadata Access Tool for Climate and Health

    Science.gov (United States)

    Trtanji, J.

    2012-12-01

    The need for health information resources to support climate change adaptation and mitigation decisions is growing, both in the United States and around the world, as the manifestations of climate change become more evident and widespread. In many instances, these information resources are not specific to a changing climate, but have either been developed or are highly relevant for addressing health issues related to existing climate variability and weather extremes. To help address the need for more integrated data, the Interagency Cross-Cutting Group on Climate Change and Human Health, a working group of the U.S. Global Change Research Program, has developed the Metadata Access Tool for Climate and Health (MATCH). MATCH is a gateway to relevant information that can be used to solve problems at the nexus of climate science and public health by facilitating research, enabling scientific collaborations in a One Health approach, and promoting data stewardship that will enhance the quality and application of climate and health research. MATCH is a searchable clearinghouse of publicly available Federal metadata including monitoring and surveillance data sets, early warning systems, and tools for characterizing the health impacts of global climate change. Examples of relevant databases include the Centers for Disease Control and Prevention's Environmental Public Health Tracking System and NOAA's National Climate Data Center's national and state temperature and precipitation data. This presentation will introduce the audience to this new web-based geoportal and demonstrate its features and potential applications.

  1. A study of pricing and trading model of Blockchain & Big data-based Energy-Internet electricity

    Science.gov (United States)

    Fan, Tao; He, Qingsu; Nie, Erbao; Chen, Shaozhen

    2018-01-01

    The development of the Energy-Internet is currently hindered by a series of issues, such as the conflicts among high capital requirements, low cost and high efficiency, the widening gap between capital demand and supply, and the lagging trading and valuation mechanisms, any of which could impede the Energy-Internet's evolution. However, with the development of Blockchain and big-data technology, it is possible to work out solutions for these issues. Based on the current situation of the Energy-Internet and its requirements for future progress, this paper demonstrates the validity of employing blockchain technology to solve the problems encountered by the Energy-Internet during its development. It proposes applying blockchain and big-data technologies to pricing and trading energy products through the Energy-Internet and to accomplishing the transformation of cyber-based energy and power from physical products to financial assets.

  2. The roles of social factor and internet self-efficacy in nurses' web-based continuing learning.

    Science.gov (United States)

    Chiu, Yen-Lin; Tsai, Chin-Chung

    2014-03-01

    This study was conducted to explore the relationships among social factor, Internet self-efficacy and attitudes toward web-based continuing learning in a clinical nursing setting. The participants recruited were 244 in-service nurses from hospitals in Taiwan. Three instruments were used to assess their perceptions of social factor, Internet self-efficacy (including basic and advanced Internet self-efficacy) and attitudes toward web-based continuing learning (including perceived usefulness, perceived ease of use, affection and behavior). Structural equation modeling (SEM) was utilized to test the hypothesized structural model. The results of this study support that social factor is significantly correlated with Internet self-efficacy and attitudes toward web-based continuing learning (including perceived usefulness, perceived ease of use and affection). In addition, nurses' basic Internet self-efficacy plays a key role in attitudes including perceived usefulness, perceived ease of use and affection. However, advanced self-efficacy was not correlated with any of the attitudes. The behavior dimension was not linked to social factor or Internet self-efficacy, but was linked to perceived ease of use and affection. Copyright © 2013 Elsevier Ltd. All rights reserved.

  3. Generation of Multiple Metadata Formats from a Geospatial Data Repository

    Science.gov (United States)

    Hudspeth, W. B.; Benedict, K. K.; Scott, S.

    2012-12-01

    The Earth Data Analysis Center (EDAC) at the University of New Mexico is partnering with the CYBERShARE and Environmental Health Group from the Center for Environmental Resource Management (CERM), located at the University of Texas, El Paso (UTEP), the Biodiversity Institute at the University of Kansas (KU), and the New Mexico Geo-Epidemiology Research Network (GERN) to provide a technical infrastructure that enables investigation of a variety of climate-driven human/environmental systems. Two significant goals of this NASA-funded project are: a) to increase the use of NASA Earth observational data at EDAC by various modeling communities through enabling better discovery, access, and use of relevant information, and b) to expose these communities to the benefits of provenance for improving understanding and usability of heterogeneous data sources and derived model products. To realize these goals, EDAC has leveraged the core capabilities of its Geographic Storage, Transformation, and Retrieval Engine (Gstore) platform, developed with support of the NSF EPSCoR Program. The Gstore geospatial services platform provides general purpose web services based upon the REST service model, and is capable of data discovery, access, and publication functions, metadata delivery functions, data transformation, and auto-generated OGC services for those data products that can support those services. Central to the NASA ACCESS project is the delivery of geospatial metadata in a variety of formats, including ISO 19115-2/19139, FGDC CSDGM, and the Proof Markup Language (PML). This presentation details the extraction and persistence of relevant metadata in the Gstore data store, and their transformation into multiple metadata formats that are increasingly utilized by the geospatial community to document not only core library catalog elements (e.g. title, abstract, publication data, geographic extent, projection information, and database elements), but also the processing steps used to
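
As a rough illustration of the multi-format delivery described above (not Gstore's actual code), the sketch below serializes one core metadata record into two simplified XML flavors. The element names are drastically simplified stand-ins for the much richer ISO 19115 and FGDC CSDGM schemas, and the record itself is invented.

```python
import xml.etree.ElementTree as ET

# One core catalog record; real Gstore records carry many more fields.
record = {"title": "MODIS Land Surface Temperature",
          "abstract": "Daily 1-km LST over New Mexico"}

def to_fgdc(rec):
    """Serialize the core record in an FGDC-flavored layout (simplified)."""
    root = ET.Element("metadata")
    idinfo = ET.SubElement(root, "idinfo")
    ET.SubElement(idinfo, "title").text = rec["title"]
    ET.SubElement(idinfo, "abstract").text = rec["abstract"]
    return ET.tostring(root, encoding="unicode")

def to_iso(rec):
    """Serialize the same record in an ISO-flavored layout (simplified)."""
    root = ET.Element("MD_Metadata")
    ident = ET.SubElement(root, "identificationInfo")
    ET.SubElement(ident, "title").text = rec["title"]
    ET.SubElement(ident, "abstract").text = rec["abstract"]
    return ET.tostring(root, encoding="unicode")

print(to_fgdc(record))
print(to_iso(record))
```

The point of the pattern is a single authoritative store with per-format serializers, so adding a new standard (e.g. PML for provenance) means adding one serializer, not re-ingesting the data.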

  4. Towards an Interoperable Field Spectroscopy Metadata Standard with Extended Support for Marine Specific Applications

    Directory of Open Access Journals (Sweden)

    Barbara A. Rasaiah

    2015-11-01

    Full Text Available This paper presents an approach to developing robust metadata standards for specific applications that serves to ensure a high level of reliability and interoperability for a spectroscopy dataset. The challenges of designing a metadata standard that meets the unique requirements of specific user communities are examined, including in situ measurement of reflectance underwater, using coral as a case in point. Metadata schema mappings from seven existing metadata standards demonstrate that they consistently fail to meet the needs of field spectroscopy scientists for general and specific applications (μ = 22%, σ = 32% conformance with the core metadata requirements, and μ = 19%, σ = 18% for the special case of a benthic (e.g., coral) reflectance metadata set). Issues such as field measurement methods, instrument calibration, and data representativeness for marine field spectroscopy campaigns are investigated within the context of submerged benthic measurements. The implications of semantics and syntax for a robust and flexible metadata standard are also considered. A hybrid standard that serves as a “best of breed”, incorporating useful modules and parameters from within the existing standards, is proposed. This paper is Part 3 in a series of papers in this journal examining the issues central to a metadata standard for field spectroscopy datasets. The results presented in this paper are an important step towards field spectroscopy metadata standards that address the specific needs of field spectroscopy data stakeholders while facilitating dataset documentation, quality assurance, discoverability and data exchange within large-scale information sharing platforms.

  5. 78 FR 13496 - Approval and Promulgation of Air Quality Implementation Plans; Delaware; Prevention of...

    Science.gov (United States)

    2013-02-28

    ... are listed in the www.regulations.gov Web site. Although listed in the electronic docket, some... modifies Delaware's PSD program at 7 DE Admin. Code 1125 to establish appropriate emission thresholds for...

  6. MCM generator: a Java-based tool for generating medical metadata.

    Science.gov (United States)

    Munoz, F; Hersh, W

    1998-01-01

    In a previous paper we introduced the need for a mechanism to facilitate the discovery of relevant Web medical documents. We maintained that the use of META tags, specifically ones that define the medical subject and resource type of a document, helps towards this goal. We have now developed a tool to facilitate the generation of these tags for the authors of medical documents. Written entirely in Java, this tool makes use of the SAPHIRE server and helps the author identify the Medical Subject Heading terms that most appropriately describe the subject of the document. Furthermore, it allows the author to generate metadata tags for the 15 elements that the Dublin Core considers core to the description of a document. This paper describes the use of this tool in the cataloguing of Web and non-Web medical documents, such as image, movie, and sound files.
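
The Dublin Core portion of such a tool ultimately reduces to emitting one META tag per element, using the conventional "DC." name prefix. A minimal sketch (the field values are hypothetical, and this is not the MCM Generator's own code):

```python
def dublin_core_meta(fields):
    """Render a dict of Dublin Core elements as HTML <meta> tags."""
    lines = []
    for element, value in fields.items():
        lines.append(f'<meta name="DC.{element}" content="{value}">')
    return "\n".join(lines)

# Hypothetical record for a medical patient-education document.
tags = dublin_core_meta({"title": "Asthma Patient Handout",
                         "subject": "Asthma",
                         "type": "Text"})
print(tags)
```

A production tool would also escape quotes and special characters in the content values and validate element names against the 15-element Dublin Core set.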

  7. Internet-Based Self-Help Intervention for ICD-11 Adjustment Disorder: Preliminary Findings.

    Science.gov (United States)

    Eimontas, Jonas; Rimsaite, Zivile; Gegieckaite, Goda; Zelviene, Paulina; Kazlauskas, Evaldas

    2018-06-01

    Adjustment disorder is one of the most frequently diagnosed mental disorders. However, there is a lack of studies of specialized internet-based psychosocial interventions for adjustment disorder. We aimed to analyze the outcomes of an internet-based unguided self-help psychosocial intervention, BADI, for adjustment disorder in a two-armed randomized controlled trial with a waiting-list control group. In total, 284 adult participants were randomized in this study. We measured adjustment disorder as the primary outcome, and psychological well-being as a secondary outcome, at pre-intervention (T1) and one month after the intervention (T2). We found a medium effect size of the intervention on adjustment disorder symptoms in the completer sample. The intervention was effective for those participants who used it at least once in a 30-day period. Our results reveal the potential of an unguided internet-based self-help intervention for adjustment disorder. However, the high dropout rate in the study limits the generalization of the outcomes to completers only.

  8. Going Multi-viral: Synthedemic Modelling of Internet-based Spreading Phenomena

    Directory of Open Access Journals (Sweden)

    Marily Nika

    2015-02-01

    Full Text Available Epidemics of a biological and technological nature pervade modern life. For centuries, scientific research focused on biological epidemics, with simple compartmental epidemiological models emerging as the dominant explanatory paradigm. Yet there has been limited translation of this effort to explain internet-based spreading phenomena. Indeed, single-epidemic models are inadequate to explain the multimodal nature of complex phenomena. In this paper we propose a novel paradigm for modelling internet-based spreading phenomena based on the composition of multiple compartmental epidemiological models. Our approach is inspired by Fourier analysis, but rather than trigonometric wave forms, our components are compartmental epidemiological models. We show results on simulated multiple-epidemic data, swine flu data and BitTorrent downloads of a popular music artist. Our technique can characterise these multimodal data sets using a parsimonious number of subepidemic models.
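
The compositional idea above can be sketched with logistic waves standing in for the compartmental subepidemic models: the observed multimodal series is modelled as a sum of simple unimodal components, analogous to summing sinusoids in Fourier analysis. All parameters below are illustrative, not fitted values from the paper.

```python
import math

def logistic_wave(t, final_size, midpoint, rate):
    """Cumulative cases of one subepidemic at time t (logistic growth)."""
    return final_size / (1 + math.exp(-rate * (t - midpoint)))

def synthedemic(t, components):
    """Total cumulative cases: the sum of all subepidemic components."""
    return sum(logistic_wave(t, *c) for c in components)

# Two overlapping invented outbreaks: (final_size, midpoint, rate).
waves = [(1000, 10, 0.8), (500, 30, 0.5)]
series = [synthedemic(t, waves) for t in range(0, 50, 5)]
print([round(x) for x in series])
```

Fitting such a model to data means choosing the number of components and their parameters; the paper's contribution is doing that parsimoniously rather than with one wave per bump by eye.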

  9. Designing Internet Learning for Novice Users - Paper Based on an Action Research Project in India

    DEFF Research Database (Denmark)

    Purushothaman, Aparna

    2012-01-01

    The paper centres on an Action Research project undertaken in India aimed at empowering female students through Internet use. The paper discusses the design elements of Internet training for first-time users with limited Internet access, based on Bloom's Digital Taxonomy of Learning Domains. The paper also illustrates the identity formation of the students through learning to use the Internet, drawing on Wenger's social theory of learning and the empirical data.

  10. A network analysis using metadata to investigate innovation in clean-tech – Implications for energy policy

    International Nuclear Information System (INIS)

    Marra, Alessandro; Antonelli, Paola; Dell’Anna, Luca; Pozzi, Cesare

    2015-01-01

    Clean technology (clean-tech) is a large and growing sector. Research and development (R&D) is the lifeline of the industry, and innovation is fostered by a plethora of high-tech start-ups and small and medium-sized enterprises (SMEs). Any empirically based attempt to detect the pattern of technological innovation in the industry is challenging. This paper proposes an investigation of innovation in clean-tech using metadata provided by CrunchBase. Metadata reveal information on the markets, products, services and technologies driving innovation in the clean-tech industry worldwide and in San Francisco, the leader in clean-tech innovation with more than two hundred specialised companies. A network analysis using metadata is the employed methodology, and the main metrics of the resulting networks are discussed from an economic point of view. The purpose of the paper is to understand the specializations and technological complementarities underlying innovative companies, detect emerging industrial clusters at the global and local/metropolitan level and, finally, suggest a way to determine whether observed start-ups, SMEs and clusters follow a technological path of complementary innovation and market opportunity or, instead, present a risk of lock-in. The discussion of the results of the network analysis shows interesting implications for energy policy, particularly useful from an operational point of view. - Highlights: • Metadata provide information on companies' products and technologies. • A network analysis enables detection of specializations and complementarities. • An investigation of the network allows identification of emerging industrial clusters. • Metrics help to appreciate complementary innovation and market opportunity. • Results of the network analysis show interesting policy implications.
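
The kind of metadata-driven network analysis described above can be sketched by linking companies that share market or technology tags, with the number of shared tags as the edge weight. The companies and tags below are invented; this is only a toy version of the CrunchBase analysis.

```python
from itertools import combinations
from collections import Counter

# Invented company -> tag metadata, loosely in the CrunchBase style.
companies = {
    "SolarCo":   {"solar", "storage"},
    "GridSoft":  {"smart-grid", "storage"},
    "WindWorks": {"wind", "smart-grid"},
}

# Build a weighted co-occurrence network: an edge joins two companies
# that share at least one tag; the weight is the shared-tag count.
edges = Counter()
for a, b in combinations(sorted(companies), 2):
    shared = companies[a] & companies[b]
    if shared:
        edges[(a, b)] = len(shared)

print(dict(edges))
```

On real data, standard graph metrics over this network (degree, clustering, community detection) are what surface the specializations, complementarities and emerging clusters the paper discusses.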

  11. Readability assessment of internet-based consumer health information.

    Science.gov (United States)

    Walsh, Tiffany M; Volsko, Teresa A

    2008-10-01

    A substantial amount of consumer health-related information is available on the Internet. Studies suggest that consumer comprehension may be compromised if content exceeds a 7th-grade reading level, which is the average American reading level identified by the United States Department of Health and Human Services (USDHHS). To determine the readability of Internet-based consumer health information offered by organizations that represent the top 5 medical-related causes of death in America. We hypothesized that the average readability (reading grade level) of Internet-based consumer health information on heart disease, cancer, stroke, chronic obstructive pulmonary disease, and diabetes would exceed the USDHHS-recommended reading level. From the Web sites of the American Heart Association, American Cancer Society, American Lung Association, American Diabetes Association, and American Stroke Association we randomly gathered 100 consumer-health-information articles. We assessed each article with 3 readability-assessment tools: SMOG (Simple Measure of Gobbledygook), Gunning FOG (Frequency of Gobbledygook), and Flesch-Kincaid Grade Level. We also categorized the articles per the USDHHS readability categories: easy to read (below 6th-grade level), average difficulty (7th- to 9th-grade level), and difficult (above 9th-grade level). Most of the articles exceeded the 7th-grade reading level and were in the USDHHS "difficult" category. The mean ± SD readability score ranges were: SMOG 11.80 ± 2.44 to 14.40 ± 1.47, Flesch-Kincaid 9.85 ± 2.25 to 11.55 ± 0.76, and Gunning FOG 13.10 ± 3.42 to 16.05 ± 2.31. The articles from the American Lung Association had the lowest reading-level scores with each of the readability-assessment tools. Our findings confirm that Web-based medical information intended for consumer use is written above USDHHS-recommended reading levels. Compliance with these recommendations may increase the likelihood of consumer comprehension.
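
Of the three instruments used, the Flesch-Kincaid Grade Level has a compact published formula: 0.39 * (words per sentence) + 11.8 * (syllables per word) - 15.59. The sketch below implements it with a crude vowel-group syllable counter, so its scores only approximate those of the validated tools used in the study.

```python
import re

def count_syllables(word):
    """Crude heuristic: count runs of vowels; at least one per word."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_kincaid_grade(text):
    """Flesch-Kincaid Grade Level with heuristic syllable counting."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / sentences)
            + 11.8 * (syllables / len(words))
            - 15.59)

sample = ("Chronic obstructive pulmonary disease progressively diminishes "
          "respiratory capacity. Patients experience persistent dyspnea.")
print(round(flesch_kincaid_grade(sample), 1))
```

Dense clinical prose like the sample scores far above short plain-language sentences, which is exactly the gap between typical consumer health pages and the USDHHS 7th-grade target.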

  12. Delaware's Wellness Program: Motivating Employees Improves Health and Saves Money.

    Science.gov (United States)

    Davis, Jennifer J J

    2008-09-01

    Every year, employers around the country evaluate their company benefits package in the hopes of finding a solution to the ever-rising cost of health insurance premiums. For many business executives, the only logical choice is to pass along those costs to the employee. As an employer, our goal in Delaware has always been to come up with innovative solutions to drive down the cost of health insurance premiums while encouraging our employees to take responsibility for their own health and wellness by living a healthy and active lifestyle, and provide them with the necessary tools. The DelaWELL program (N = 68,000) was launched in 2007, after being tested in initial (N = 100) and expanded (N = 1500) pilot programs from 2004 to 2006 in which 3 similar groups were compared before and after the pilot. Employee health risk assessment, education, and incentives provided employees the necessary tools we had assumed would help them make healthier lifestyle choices. In the first pilot, fewer emergency department visits and lower blood pressure levels resulted in direct savings of more than $62,000. In the expanded pilot, in all 3 groups blood pressure was significantly reduced (P employees participating in DelaWELL had a combined weight loss of 5162 lb. Decision makers in the State of Delaware have come up with an innovative solution to controlling costs while offering employees an attractive benefits package. The savings from its employee benefit program have allowed the state to pass along the savings to employees by maintaining employee-paid health insurance contributions at the same level for the past 3 years. DelaWELL has already confirmed our motto, "Although it may seem an unusual business investment to pay for healthcare before the need arises, in Delaware we concluded that this makes perfect sense." This promising approach to improving health and reducing healthcare costs could potentially be applied to other employer groups.

  13. Delaware's Wellness Program: Motivating Employees Improves Health and Saves Money

    Science.gov (United States)

    Davis, Jennifer “J. J.”

    2008-01-01

    Background Every year, employers around the country evaluate their company benefits package in the hopes of finding a solution to the ever-rising cost of health insurance premiums. For many business executives, the only logical choice is to pass along those costs to the employee. Objectives As an employer, our goal in Delaware has always been to come up with innovative solutions to drive down the cost of health insurance premiums while encouraging our employees to take responsibility for their own health and wellness by living a healthy and active lifestyle, and provide them with the necessary tools. Methods The DelaWELL program (N = 68,000) was launched in 2007, after being tested in initial (N = 100) and expanded (N = 1500) pilot programs from 2004 to 2006 in which 3 similar groups were compared before and after the pilot. Employee health risk assessment, education, and incentives provided employees the necessary tools we had assumed would help them make healthier lifestyle choices. Results In the first pilot, fewer emergency department visits and lower blood pressure levels resulted in direct savings of more than $62,000. In the expanded pilot, in all 3 groups blood pressure was significantly reduced (P employees participating in DelaWELL had a combined weight loss of 5162 lb. Conclusions Decision makers in the State of Delaware have come up with an innovative solution to controlling costs while offering employees an attractive benefits package. The savings from its employee benefit program have allowed the state to pass along the savings to employees by maintaining employee-paid health insurance contributions at the same level for the past 3 years. DelaWELL has already confirmed our motto, “Although it may seem an unusual business investment to pay for healthcare before the need arises, in Delaware we concluded that this makes perfect sense.” This promising approach to improving health and reducing healthcare costs could potentially be applied to other

  14. An Internet-based tailored hearing protection intervention for firefighters: development process and users' feedback.

    Science.gov (United States)

    Hong, OiSaeng; Eakin, Brenda L; Chin, Dal Lae; Feld, Jamie; Vogel, Stephen

    2013-07-01

    Noise-induced hearing loss is a significant occupational injury for firefighters exposed to intermittent noise on the job. It is important to educate firefighters about using hearing protection devices whenever they are exposed to loud noise. Computer technology is a relatively new health education approach and can be useful for tailoring specific aspects of behavioral change training. The purpose of this study is to present the development process of an Internet-based tailored intervention program and to assess its efficacy. The intervention programs were implemented for 372 firefighters (mean age = 44 years, Caucasian = 82%, male = 95%) in three states (California, Illinois, and Indiana). The efficacy was assessed from firefighters' feedback through an Internet-based survey. A multimedia Internet-based training program was developed through (a) determining program content and writing scripts, (b) developing decision-making algorithms for tailoring, (c) graphic design and audio and video productions, (d) creating computer software and a database, and (e) postproduction quality control and pilot testing. Participant feedback regarding the training has been very positive. Participants reported that they liked completing the training via computer (83%) and also that the Internet-based training program was well organized (97%), easy to use (97%), and effective (98%) and held their interest (79%). Almost all (95%) would recommend this Internet training program to other firefighters. Interactive multimedia computer technology using the Internet was a feasible mode of delivery for a hearing protection intervention among firefighters. Participants' favorable feedback strongly supports the continued utilization of this approach for designing and developing interventions to promote healthy behaviors.

  15. Carbon Monoxide Photoproduction from Particles and Solutes in the Delaware Estuary under Contrasting Hydrological Conditions.

    Science.gov (United States)

    Song, Guisheng; Richardson, John D; Werner, James P; Xie, Huixiang; Kieber, David J

    2015-12-15

    Full-spectrum, ultraviolet (UV), and visible broadband apparent quantum yields (AQYs) for carbon monoxide (CO) photoproduction from chromophoric dissolved organic matter (CDOM) and particulate organic matter (POM) were determined in the Delaware Estuary in two hydrologically contrasting seasons in 2012: an unusually low flow in August and a storm-driven high flow in November. Average AQYs for CDOM and POM in November were 10 and 16 times the corresponding AQYs in August. Maximum AQYs in November occurred in a midestuary particle absorption maximum zone. Although POM AQYs were generally smaller than CDOM AQYs, the ratio of the former to the latter increased substantially from the UV to the visible. In both seasons, UV solar radiation was the primary driver for CO photoproduction from CDOM whereas visible light was the principal contributor to POM-based CO photoproduction. CDOM dominated CO photoproduction in the uppermost water layer while POM prevailed at deeper depths. On a depth-integrated basis, the Delaware Estuary shifted from a CDOM-dominated system in August to a POM-dominated system in November with respect to CO photoproduction. This study reveals that flood events may enhance photochemical cycling of terrigenous organic matter and switch the primary photochemical driver from CDOM to POM.

  16. Educational benefits of Internet and computer-based programmes for prostate cancer patients: a systematic review.

    Science.gov (United States)

    Salonen, Anne; Ryhänen, Anne M; Leino-Kilpi, Helena

    2014-01-01

    This study aims to review systematically the available literature on Internet and computer-based patient education programmes, assess the quality of these studies and analyze the benefit of these programmes for prostate cancer patients. Complete databases were searched. Studies were included if they concerned patient education of prostate cancer patients, were qualitative or quantitative and examined Internet or interactive CD-ROM use. Eighteen studies met the inclusion criteria. The majority of the studies reported a significant increase in the knowledge of the disease, satisfaction with treatment options and support for men. The benefit of the programmes was that the patients felt more empowered and obtained a heightened sense of control over their disease. The Internet or computer-based programmes had a positive impact on prostate cancer patient education. Most papers reported that the programmes were beneficial, but few presented data from studies with rigorous research methodologies to support these claims. Internet and computer-based programmes can be useful tools in prostate cancer patient education. In order to improve the benefits of the programmes, more Internet and computer-based programmes need to be developed and studied. Crown Copyright © 2013. Published by Elsevier Ireland Ltd. All rights reserved.

  17. Fuzzy knowledge bases integration based on ontology

    OpenAIRE

    Ternovoy, Maksym; Shtogrina, Olena

    2012-01-01

    The paper describes an approach to integrating fuzzy knowledge bases using an ontology. The approach relies on a metadata base to integrate different knowledge bases through a common ontology. The design process for the metadata base is described.
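The record above describes integration through a metadata base that maps each knowledge base's local terms onto a shared ontology. A minimal sketch of that idea, assuming invented names throughout (`MetadataBase`, `integrate`, the concept labels, and the use of `max` as the fuzzy union operator are all illustrative assumptions, not details from the paper):

```python
# Illustrative sketch: a metadata base mapping terms from separate fuzzy
# knowledge bases onto a shared ontology so their facts can be merged.
# All names and the merge operator are hypothetical.

class MetadataBase:
    """Registry mapping (kb_name, local_term) -> shared ontology concept."""

    def __init__(self):
        self._map = {}

    def register(self, kb_name, local_term, concept):
        self._map[(kb_name, local_term)] = concept

    def concept_for(self, kb_name, local_term):
        return self._map.get((kb_name, local_term))


def integrate(metadata, *knowledge_bases):
    """Merge fuzzy facts from several KBs, keyed by ontology concept.

    Each KB is a (name, facts) pair where facts maps a local term to a
    membership degree in [0, 1]. Conflicting degrees for the same concept
    are combined with max (a common fuzzy union operator).
    """
    merged = {}
    for kb_name, facts in knowledge_bases:
        for term, degree in facts.items():
            concept = metadata.concept_for(kb_name, term)
            if concept is None:
                continue  # term not described in the metadata base
            merged[concept] = max(merged.get(concept, 0.0), degree)
    return merged


# Usage: two KBs use different local terms for the same ontology concept.
md = MetadataBase()
md.register("kb_a", "hot", "Temperature.High")
md.register("kb_b", "elevated_temp", "Temperature.High")
result = integrate(md, ("kb_a", {"hot": 0.7}), ("kb_b", {"elevated_temp": 0.9}))
print(result)  # {'Temperature.High': 0.9}
```

The metadata base is the only component that knows both vocabularies, which is what lets the individual knowledge bases stay unchanged during integration.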

  18. Ethical problems inherent in psychological research based on internet communication as stored information

    DEFF Research Database (Denmark)

    Øhrstrøm, Peter; Dyhrberg, Johan

    2007-01-01

    This paper deals with certain ethical problems inherent in psychological research based on internet communication as stored information. Section 1 contains an analysis of research on Internet debates. In particular, it takes into account a famous example of deception for psychology research purposes. In section 2, the focus is on research on personal data in texts published on the Internet. Section 3 includes an attempt to formulate some ethical principles and guidelines, which should be regarded as fundamental in research on stored information.

  19. Internet and Fuzzy Based Control System for Rotary Kiln in Cement Manufacturing Plant

    Directory of Open Access Journals (Sweden)

    Hanane Zermane

    2017-01-01

    Full Text Available This paper develops an Internet-based fuzzy control system for an industrial process plant to enable remote fuzzy control in cement factories in Algeria. The remote process consists of controlling the plant, diagnosing alarms as they occur, and maintaining and synchronizing different regulation loops. Fuzzy control of the kiln ensures that the system remains operational at all times, with minimal downtime, while Internet technology provides the remote access. The system reduces downtime and can be guided by operators in the main control room or via the Internet.
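The fuzzy kiln control described above can be illustrated with a minimal Mamdani-style rule step. This is a sketch only: the membership functions, rule set, and numeric ranges below are invented for illustration and are not taken from the paper's controller.

```python
# Illustrative sketch: one fuzzy inference step mapping a kiln temperature
# error to a fuel-rate adjustment. Membership shapes, rules, and ranges
# are assumptions, not the paper's actual design.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)


def fuel_adjustment(temp_error):
    """Map temperature error (setpoint - measured, in deg C) to a
    normalized fuel-rate change via three rules and a weighted-average
    (singleton centroid) defuzzification."""
    # Fuzzify the error into three linguistic terms.
    negative = tri(temp_error, -100.0, -50.0, 0.0)
    zero = tri(temp_error, -50.0, 0.0, 50.0)
    positive = tri(temp_error, 0.0, 50.0, 100.0)
    # Rules: error negative -> decrease fuel; zero -> hold; positive -> increase.
    weights = [negative, zero, positive]
    outputs = [-1.0, 0.0, 1.0]  # singleton output centroids
    total = sum(weights)
    if total == 0.0:
        return 0.0  # error outside the defined universe: hold steady
    return sum(w * o for w, o in zip(weights, outputs)) / total


print(fuel_adjustment(25.0))  # positive: kiln cooler than setpoint, add fuel
```

In a deployed system this step would run inside a regulation loop, with the remote (Internet-based) layer supervising setpoints and alarms rather than computing each control action.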

  20. Nurses' experiences of the use of an Internet-based support system for adolescents with depressive disorders.

    Science.gov (United States)

    Kurki, Marjo; Anttila, Minna; Koivunen, Marita; Marttunen, Mauri; Välimäki, Maritta

    2018-09-01

    Internet-based applications are potentially useful and effective interventions to reach and support adolescents with mental health problems. Adolescents' commitment to the use of a new Internet-based intervention is closely related to the support they receive from healthcare professionals. This study describes nurses' experiences of the use of an Internet-based support system for adolescents with depressive disorders. Qualitative descriptive study design including individual interviews with nine nurses at two psychiatric outpatient clinics. The Technology Acceptance Model (TAM) was used as the theoretical background of the study. Nurses described several benefits of using the Internet-based support system in the care of adolescents with depressive disorders if the nurses integrate it into daily nursing practices. As perceived disadvantages the nurses thought that an adolescent's mental status might be a barrier to working with the support system. Perceived enablers could be organizational support, nurses' attitudes, and technology-related factors. Nurses' attitudes were identified as a barrier to supporting adolescents' use of the Internet-based support system. The findings suggest that the implementation plan and support from the organization, including that from nurse managers, are crucial in the process of implementing a technology-based support system.