WorldWideScience

Sample records for islands california metadata

  1. Human responses to Middle Holocene climate change on California's Channel Islands

    Science.gov (United States)

    Kennett, Douglas J.; Kennett, James P.; Erlandson, Jon M.; Cannariato, Kevin G.

    2007-02-01

    High-resolution archaeological and paleoenvironmental records from California's Channel Islands provide a unique opportunity to examine potential relationships between climatically induced environmental changes and prehistoric human behavioral responses. Available climate records in western North America (7-3.8 ka) indicate a severe dry interval between 6.3 and 4.8 ka embedded within a generally warm and dry Middle Holocene. Very dry conditions in western North America between 6.3 and 4.8 ka correlate with cold to moderate sea-surface temperatures (SST) along the southern California Coast evident in Ocean Drilling Program (ODP) Core 893A/B (Santa Barbara Basin). An episode of inferred high marine productivity between 6.3 and 5.8 ka corresponds with the coldest estimated SSTs of the Middle Holocene, otherwise marked by warm/low productivity marine conditions (7.5-3.8 ka). The impact of this severe aridity on humans was different between the northern and southern Channel Islands, apparently related to degree of island isolation, size and productivity of islands relative to population, fresh water availability, and on-going social relationships between island and continental populations. Northern Channel Islanders seem to have been largely unaffected by this severe arid phase. In contrast, cultural changes on the southern Channel Islands were likely influenced by the climatically induced environmental changes. We suggest that productive marine conditions coupled with a dry terrestrial climate between 6.3 and 5.8 ka stimulated early village development and intensified fishing on the more remote southern islands. Contact with people on the adjacent southern California Coast increased during this time with increased participation in a down-the-line trade network extending into the western Great Basin and central Oregon. Genetic similarities between Middle Holocene burial populations on the southern Channel Islands and modern California Uto-Aztecan populations suggest

  2. 2010 U.S. Geological Survey (USGS) Topographic Lidar: Channel Islands, California

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Terrapoint collected LiDAR for 197 square miles covering five islands off the coast of Los Angeles, California. These islands are part of the Channel Islands...

  3. AFSC/NMML/CCEP: Survival Rate of California sea lions at San Miguel Island, California from 1987-2009

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The dataset contains initial capture and marking data for California sea lion (Zalophus californianus) pups at San Miguel Island, California and subsequent...

  4. for presence of hookworms (Uncinaria spp.) on San Miguel Island, California

    Directory of Open Access Journals (Sweden)

    Lyons E. T.

    2016-06-01

    Necropsy and extensive parasitological examination of dead northern elephant seal (NES) pups were performed on San Miguel Island, California, in February 2015. The main interest in the current study was to determine whether hookworms were present in NESs on San Miguel Island, where two hookworm species of the genus Uncinaria are known to be present - Uncinaria lyonsi in California sea lions and Uncinaria lucasi in northern fur seals. Hookworms were not detected in any of the NESs examined: stomachs or intestines of 16 pups, blubber of 13 pups, and blubber of one bull. The results obtained in the present study of NESs on San Miguel Island, plus similar findings at Año Nuevo State Reserve and The Marine Mammal Center, provide strong indication that NESs are not appropriate hosts for Uncinaria spp. Hookworm free-living third-stage larvae, developed from eggs of California sea lions and northern fur seals, were recovered from sand. It seems that, at this time, further searching for hookworms in NESs would be nonproductive.

  5. Mitochondrial genomes suggest rapid evolution of dwarf California Channel Islands foxes (Urocyon littoralis).

    Science.gov (United States)

    Hofman, Courtney A; Rick, Torben C; Hawkins, Melissa T R; Funk, W Chris; Ralls, Katherine; Boser, Christina L; Collins, Paul W; Coonan, Tim; King, Julie L; Morrison, Scott A; Newsome, Seth D; Sillett, T Scott; Fleischer, Robert C; Maldonado, Jesus E

    2015-01-01

    Island endemics are typically differentiated from their mainland progenitors in behavior, morphology, and genetics, often resulting from long-term evolutionary change. To examine mechanisms for the origins of island endemism, we present a phylogeographic analysis of whole mitochondrial genomes from the endangered island fox (Urocyon littoralis), endemic to California's Channel Islands, and mainland gray foxes (U. cinereoargenteus). Previous genetic studies suggested that foxes first appeared on the islands >16,000 years ago, before human arrival (~13,000 cal BP), while archaeological and paleontological data supported a colonization >7000 cal BP. Our results are consistent with initial fox colonization of the northern islands probably by rafting or human introduction ~9200-7100 years ago, followed quickly by human translocation of foxes from the northern to southern Channel Islands. Mitogenomes indicate that island foxes are monophyletic and most closely related to gray foxes from northern California that likely experienced a Holocene climate-induced range shift. Our data document rapid morphological evolution of island foxes (in ~2000 years or less). Despite evidence for bottlenecks, island foxes have generated and maintained multiple mitochondrial haplotypes. This study highlights the intertwined evolutionary history of island foxes and humans, and illustrates a new approach for investigating the evolutionary histories of other island endemics.

  6. Development and characterization of 12 microsatellite markers for the Island Night Lizard (Xantusia riversiana), a threatened species endemic to the Channel Islands, California, USA

    Science.gov (United States)

    O'Donnell, Ryan P.; Drost, Charles A.; Mock, Karen E.

    2014-01-01

    The Island Night Lizard is a federally threatened species endemic to the Channel Islands of California. Twelve microsatellite loci were developed for use in this species and screened in 197 individuals from across San Nicolas Island, California. The number of alleles per locus ranged from 6 to 21. Observed heterozygosities ranged from 0.520 to 0.843. These microsatellite loci will be used to investigate population structure, effective population size, and gene flow across the island, to inform protection and management of this species.

  7. Uta stansburiana and Elgaria multicarinata on the California Channel Islands: Natural dispersal or artificial introduction?

    Science.gov (United States)

    Mahoney, Meredith J.; Parks, Duncan S.M.; Fellers, Gary M.

    2003-01-01

    Uta stansburiana and Elgaria multicarinata occur on several California Channel Islands, and recent introduction of some populations has been suggested because of similarity in life-history traits and body size to mainland populations. We sequenced representatives of each species from mainland southern California and some of the islands on which they occur. For each species, cytochrome b sequence divergence is low across the narrow geographic area sampled. Analyses of 14 haplotypes of U. stansburiana suggest long-established residency on Santa Catalina and San Clemente Islands but more recent arrival on San Nicolas and Santa Cruz Islands. Analyses of eight haplotypes of E. multicarinata suggest these lizards may have been recently transported to San Nicolas Island.

  8. Status of the Island Night Lizard and Two Non-Native Lizards on Outlying Landing Field San Nicolas Island, California

    Science.gov (United States)

    Fellers, Gary M.; Drost, Charles A.; Murphey, Thomas G.

    2008-01-01

    More than 900 individually marked island night lizards (Xantusia riversiana) were captured on San Nicolas Island, California, between 1984 and 2007 as part of an ongoing study to monitor the status of this threatened species. Our data suggest that at least a few lizards are probably more than 20 years old, and one lizard would be 31.5 years old if it grew at an average rate for the population. Ages of 20 and 30 years seem reasonable given the remarkably slow growth during capture intervals of more than a decade for five of the lizards that we estimated to be 20 or more years old. Like other lizards, island night lizard growth rates vary by size, with larger lizards growing more slowly. In general, growth rates were somewhat greater on San Nicolas Island (compared with Santa Barbara Island), and this increase was sustained through all of the intermediate size classes. The higher growth rate may account for the somewhat larger lizards present on San Nicolas Island, although we cannot discount the possibility that night lizards on San Nicolas are merely living longer. The high percentage of small lizards in the Eucalyptus habitat might seem to reflect a healthy population in that habitat, but the high proportion of small lizards appears to be caused by good reproduction in the 1990s and substantially poorer reproduction in subsequent years. The Eucalyptus habitat has dried quite a bit in recent years. Night lizards in the Haplopappus/Grassland habitat have shown an increase in the proportion of larger lizards since 2000. There has also been an increase in the proportion of large lizards in the Rock Cobble habitat at Redeye Beach. However, there has been some change in habitat, with more elephant seals occupying the same area just above the high-tide line as the night lizards. Southern alligator lizards and side-blotched lizards are both non-native on San Nicolas Island. Neither lizard causes obvious harm to island night lizards, and management time and effort should

  9. A programmatic view of metadata, metadata services, and metadata flow in ATLAS

    International Nuclear Information System (INIS)

    Malon, D; Albrand, S; Gallas, E; Stewart, G

    2012-01-01

    The volume and diversity of metadata in an experiment of the size and scope of ATLAS are considerable. Even the definition of metadata may seem context-dependent: data that are primary for one purpose may be metadata for another. ATLAS metadata services must integrate and federate information from inhomogeneous sources and repositories, map metadata about logical or physics constructs to deployment and production constructs, provide a means to associate metadata at one level of granularity with processing or decision-making at another, offer a coherent and integrated view to physicists, and support both human use and programmatic access. In this paper we consider ATLAS metadata, metadata services, and metadata flow principally from the illustrative perspective of how disparate metadata are made available to executing jobs and, conversely, how metadata generated by such jobs are returned. We describe how metadata are read, how metadata are cached, and how metadata generated by jobs and the tasks of which they are a part are communicated, associated with data products, and preserved. We also discuss the principles that guide decision-making about metadata storage, replication, and access.
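
    The flow described above, in which jobs read cached metadata and return job-produced metadata for association with data products, can be pictured with a small sketch. This is a hypothetical illustration in Python, not ATLAS code; the class and field names (MetadataCache, JobReport, data_taking_period) are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class MetadataCache:
    """Hypothetical read-through cache a job might consult for configuration metadata."""
    backend: dict                       # stands in for a remote metadata service
    _local: dict = field(default_factory=dict)

    def get(self, key):
        # Read from the local cache first; fall back to the (remote) backend.
        if key not in self._local:
            self._local[key] = self.backend[key]
        return self._local[key]

@dataclass
class JobReport:
    """Metadata produced by a job, to be associated with its output dataset."""
    job_id: str
    output_dataset: str
    produced: dict = field(default_factory=dict)

def run_job(job_id, cache, output_dataset):
    # Job-side view: read configuration metadata, do work, emit metadata about the products.
    period = cache.get("data_taking_period")
    report = JobReport(job_id, output_dataset)
    report.produced["input_period"] = period
    report.produced["events_processed"] = 1000          # placeholder result
    return report

if __name__ == "__main__":
    cache = MetadataCache(backend={"data_taking_period": "2011-B"})
    report = run_job("job-42", cache, "dataset.AOD.0001")
    # Upstream services would persist this association: dataset -> job-produced metadata.
    print(report.output_dataset, report.produced)
```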

  10. Airborne dust transport to the eastern Pacific Ocean off southern California: Evidence from San Clemente Island

    Science.gov (United States)

    Muhs, D.R.; Budahn, J.; Reheis, M.; Beann, J.; Skipp, G.; Fisher, E.

    2007-01-01

    Islands are natural dust traps, and San Clemente Island, California, is a good example. Soils on marine terraces cut into Miocene andesite on this island are clay-rich Vertisols or Alfisols with vertic properties. These soils are overlain by silt-rich mantles, 5-20 cm thick, that contrast sharply with the underlying clay-rich subsoils. The silt mantles have a mineralogy that is distinct from the island bedrock. Silt mantles are rich in quartz, which is rare in the island andesite. The clay fraction of the silt mantles is dominated by mica, also absent from local andesite, and contrasts with the subsoils, dominated by smectite. Ternary plots of immobile trace elements (Sc-Th-La and Ta-Nd-Cr) show that the island andesite has a composition intermediate between average upper continental crust and average oceanic crust. In contrast, the silt and, to a lesser extent, clay fractions of the silt mantles have compositions closer to average upper continental crust. The silt mantles have particle size distributions similar to loess and Mojave Desert dust, but are coarser than long-range-transported Asian dust. We infer from these observations that the silt mantles are derived from airborne dust from the North American mainland, probably river valleys in the coastal mountains of southern California and/or the Mojave Desert. Although average winds are from the northwest in coastal California, easterly winds occur numerous times of the year when "Santa Ana" conditions prevail, caused by a high-pressure cell centered over the Great Basin. Examination of satellite imagery shows that easterly Santa Ana winds carry abundant dust to the eastern Pacific Ocean and the California Channel Islands. Airborne dust from mainland North America may be an important component of the offshore sediment budget in the easternmost Pacific Ocean, a finding of potential biogeochemical and climatic significance.

  11. Metadata

    CERN Document Server

    Zeng, Marcia Lei

    2016-01-01

    Metadata remains the solution for describing the explosively growing, complex world of digital information, and continues to be of paramount importance for information professionals. Providing a solid grounding in the variety and interrelationships among different metadata types, Zeng and Qin's thorough revision of their benchmark text offers a comprehensive look at the metadata schemas that exist in the world of library and information science and beyond, as well as the contexts in which they operate. Cementing its value as both an LIS text and a handy reference for professionals already in the field, this book: * Lays out the fundamentals of metadata, including principles of metadata, structures of metadata vocabularies, and metadata descriptions * Surveys metadata standards and their applications in distinct domains and for various communities of metadata practice * Examines metadata building blocks, from modelling to defining properties, and from designing application profiles to implementing value vocabu...
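
    As a concrete illustration of the kind of descriptive metadata the book surveys, the sketch below builds a minimal Dublin Core record in Python. The element names (title, creator, date, subject) are standard Dublin Core elements; the record content is invented for the example and is not taken from the book.

```python
import xml.etree.ElementTree as ET

# Dublin Core element namespace; title, creator, date, and subject are standard DC elements.
DC = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("dc", DC)

def dublin_core_record(fields):
    """Build a minimal Dublin Core description from a dict of element -> value."""
    root = ET.Element("metadata")
    for name, value in fields.items():
        el = ET.SubElement(root, f"{{{DC}}}{name}")
        el.text = value
    return ET.tostring(root, encoding="unicode")

print(dublin_core_record({
    "title": "Metadata",
    "creator": "Zeng, Marcia Lei",
    "date": "2016",
    "subject": "metadata schemas; application profiles",
}))
```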

  12. Geochemical evidence for airborne dust additions to soils in Channel Islands National Park, California

    Science.gov (United States)

    Muhs, D.R.; Budahn, J.R.; Johnson, D.L.; Reheis, M.; Beann, J.; Skipp, G.; Fisher, E.; Jones, J.A.

    2008-01-01

    There is an increasing awareness that dust plays important roles in climate change, biogeochemical cycles, nutrient supply to ecosystems, and soil formation. In Channel Islands National Park, California, soils are clay-rich Vertisols or Alfisols and Mollisols with vertic properties. The soils are overlain by silt-rich mantles that contrast sharply with the underlying clay-rich horizons. Silt mantles contain minerals that are rare or absent in the volcanic rocks that dominate these islands. Immobile trace elements (Sc-Th-La and Ta-Nd-Cr) and rare-earth elements show that the basalt and andesite on the islands have a composition intermediate between upper-continental crust and oceanic crust. In contrast, the silt fractions and, to a lesser extent, clay fractions of the silt mantle have compositions closer to average upper-continental crust and very similar to Mojave Desert dust. Island shelves, exposed during the last glacial period, could have provided a source of eolian sediment for the silt mantles, but this is not supported by mineralogical data. We hypothesize that a more likely source for the silt-rich mantles is airborne dust from mainland California and Baja California, either from the Mojave Desert or from the continental shelf during glacial low stands of sea. Although average winds are from the northwest in coastal California, easterly winds occur numerous times of the year when "Santa Ana" conditions prevail, caused by a high-pressure cell centered over the Great Basin. The eolian silt mantles constitute an important medium of plant growth and provide evidence that abundant eolian silt and clay may be delivered to the eastern Pacific Ocean from inland desert sources. © 2007 Geological Society of America.

  13. Department of the Interior metadata implementation guide—Framework for developing the metadata component for data resource management

    Science.gov (United States)

    Obuch, Raymond C.; Carlino, Jennifer; Zhang, Lin; Blythe, Jonathan; Dietrich, Christopher; Hawkinson, Christine

    2018-04-12

    The Department of the Interior (DOI) is a Federal agency with over 90,000 employees across 10 bureaus and 8 agency offices. Its primary mission is to protect and manage the Nation’s natural resources and cultural heritage; provide scientific and other information about those resources; and honor its trust responsibilities or special commitments to American Indians, Alaska Natives, and affiliated island communities. Data and information are critical in day-to-day operational decision making and scientific research. DOI is committed to creating, documenting, managing, and sharing high-quality data and metadata in and across its various programs that support its mission. Documenting data through metadata is essential in realizing the value of data as an enterprise asset. The completeness, consistency, and timeliness of metadata affect users’ ability to search for and discover the most relevant data for the intended purpose, and facilitate the interoperability and usability of these data among DOI bureaus and offices. Fully documented metadata describe data usability, quality, accuracy, provenance, and meaning. Across DOI, there are different maturity levels and phases of information and metadata management implementations. The Department has organized a committee consisting of bureau-level points of contact to collaborate on the development of more consistent, standardized, and more effective metadata management practices and guidance to support this shared mission and the information needs of the Department. DOI’s metadata implementation plans establish key roles and responsibilities associated with metadata management processes, procedures, and a series of actions defined in three major metadata implementation phases: (1) Getting started—Planning Phase, (2) Implementing and Maintaining Operational Metadata Management Phase, and (3) the Next Steps towards Improving Metadata Management Phase. DOI’s phased approach for metadata management addresses

  14. Vegetation (MCV / NVCS) Mapping Projects - California [ds515

    Data.gov (United States)

    California Natural Resource Agency — This metadata layer shows the footprint of vegetation mapping projects completed in California that have used the Manual of California Vegetation (MCV, 1st edition)...

  15. Census Snapshot: California's Asian/Pacific Islander LGB Population

    OpenAIRE

    Ramos, Christopher; Gates, Gary J

    2008-01-01

    This report provides a general overview of Asian and Pacific Islanders (API) in same-sex couples as well as the broader API lesbian, gay, and bisexual (LGB) population in California. We use data from the 2005/2006 American Community Survey (ACS), conducted by the U.S. Census Bureau, to compare the characteristics of APIs in same-sex couples to their different-sex married counterparts. In all cases, when this report describes characteristics of couples, the data source is the ACS. Whi...

  16. A Programmatic View of Metadata, Metadata Services, and Metadata Flow in ATLAS

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    The volume and diversity of metadata in an experiment of the size and scope of ATLAS is considerable. Even the definition of metadata may seem context-dependent: data that are primary for one purpose may be metadata for another. Trigger information and data from the Large Hadron Collider itself provide cases in point, but examples abound. Metadata about logical or physics constructs, such as data-taking periods and runs and luminosity blocks and events and algorithms, often need to be mapped to deployment and production constructs, such as datasets and jobs and files and software versions, and vice versa. Metadata at one level of granularity may have implications at another. ATLAS metadata services must integrate and federate information from inhomogeneous sources and repositories, map metadata about logical or physics constructs to deployment and production constructs, provide a means to associate metadata at one level of granularity with processing or decision-making at another, offer a coherent and ...

  17. Genetic diversity of Bactrocera dorsalis (Diptera: Tephritidae) on the Hawaiian islands: Implications for an introduction pathway into California

    International Nuclear Information System (INIS)

    Barr, Norman B.; Ledezma, Lisa A.; Bartels, David W.; Garza, Daniel; Leblanc, Luc; Jose, Michael San; Rubinoff, Daniel; Geib, Scott M.; Fujita, Brian; Kerr, Peter; Hauser, Martin; Gaimari, Stephen

    2015-01-01

    Population genetic diversity of the oriental fruit fly, Bactrocera dorsalis (Hendel), on the Hawaiian islands of Oahu, Maui, Kauai, and Hawaii (the Big Island) was estimated using DNA sequences of the mitochondrial cytochrome c oxidase subunit I gene. In total, 932 flies representing 36 sampled sites across the four islands were sequenced for a 1,500-bp fragment of the gene named the C1500 marker. Genetic variation was low on the Hawaiian Islands with >96% of flies having just two haplotypes: C1500-Haplotype 1 (63.2%) or C1500-Haplotype 2 (33.3%). The other 33 flies (3.5%) had haplotypes similar to the two dominant haplotypes. No population structure was detected among the islands or within islands. The two haplotypes were present at similar frequencies at each sample site, suggesting that flies on the various islands can be considered one population. Comparison of the Hawaiian data set to DNA sequences of 165 flies from outbreaks in California between 2006 and 2012 indicates that a single-source introduction pathway of Hawaiian origin cannot explain many of the flies in California. Hawaii, however, could not be excluded as a maternal source for 69 flies. There was no clear geographic association for Hawaiian or non-Hawaiian haplotypes in the Bay Area or Los Angeles Basin over time. This suggests that California experienced multiple, independent introductions from different sources. (author)

  18. On the importance of stratigraphic control for vertebrate fossil sites in Channel Islands National Park, California, USA: Examples from new Mammuthus finds on San Miguel Island

    Science.gov (United States)

    Pigati, Jeffery S.; Muhs, Daniel R.; McGeehin, John P.

    2016-01-01

    Quaternary vertebrate fossils, most notably mammoth remains, are relatively common on the northern Channel Islands of California. Well-preserved cranial, dental, and appendicular elements of Mammuthus exilis (pygmy mammoth) and Mammuthus columbi (Columbian mammoth) have been recovered from hundreds of localities on the islands during the past half-century or more. Despite this paleontological wealth, the geologic context of the fossils is described in the published literature only briefly or not at all, which has hampered the interpretation of associated 14C ages and reconstruction of past environmental conditions. We recently discovered a partial tusk, several large bones, and a tooth enamel plate (all likely mammoth) at two sites on the northwest flank of San Miguel Island, California. At both localities, we documented the stratigraphic context of the fossils, described the host sediments in detail, and collected charcoal and terrestrial gastropod shells for radiocarbon dating. The resulting 14C ages indicate that the mammoths were present on San Miguel Island between ∼20 and 17 ka as well as between ∼14 and 13 ka (thousands of calibrated 14C years before present), similar to other mammoth sites on San Miguel, Santa Cruz, and Santa Rosa Islands. In addition to documenting the geologic context and ages of the fossils, we present a series of protocols for documenting and reporting geologic and stratigraphic information at fossil sites on the California Channel Islands in general, and in Channel Islands National Park in particular, so that pertinent information is collected prior to excavation of vertebrate materials, thus maximizing their scientific value.

  19. Sea-level rise and refuge habitats for tidal marsh species: can artificial islands save the California Ridgway's rail?

    Science.gov (United States)

    Overton, Cory T.; Takekawa, John Y.; Casazza, Michael L.; Bui, Thuy-Vy D.; Holyoak, Marcel; Strong, Donald R.

    2014-01-01

    Terrestrial species living in intertidal habitats experience refuge limitation during periods of tidal inundation, which may be exacerbated by seasonal variation in vegetation structure, tidal cycles, and land-use change. Sea-level rise projections indicate the severity of refuge limitation may increase. Artificial habitats that provide escape cover during tidal inundation have been proposed as a temporary solution to alleviate these limitations. We tested for evidence of refuge habitat limitation in a population of endangered California Ridgway's rail (Rallus obsoletus obsoletus; hereafter California rail) through use of artificial floating island habitats provided during two winters. Previous studies demonstrated that California rail mortality was especially high during the winter and periods of increased tidal inundation, suggesting that tidal refuge habitat is critical to survival. In our study, California rail regularly used artificial islands during higher tides and daylight hours. When tide levels inundated the marsh plain, use of artificial islands was at least 300 times more frequent than would be expected if California rails used artificial habitats proportional to their availability (0.016%). Probability of use varied among islands, and low levels of use were observed at night. These patterns may result from anti-predator behaviors and heterogeneity in either rail density or availability of natural refuges. Endemic saltmarsh species are increasingly at risk from habitat change resulting from sea-level rise and development of adjacent uplands. Escape cover during tidal inundation may need to be supplemented if species are to survive. Artificial habitats may provide effective short-term mitigation for habitat change and sea-level rise in tidal marsh environments, particularly for conservation-reliant species such as California rails.

  20. Metadata

    CERN Document Server

    Pomerantz, Jeffrey

    2015-01-01

    When "metadata" became breaking news, appearing in stories about surveillance by the National Security Agency, many members of the public encountered this once-obscure term from information science for the first time. Should people be reassured that the NSA was "only" collecting metadata about phone calls -- information about the caller, the recipient, the time, the duration, the location -- and not recordings of the conversations themselves? Or does phone call metadata reveal more than it seems? In this book, Jeffrey Pomerantz offers an accessible and concise introduction to metadata. In the era of ubiquitous computing, metadata has become infrastructural, like the electrical grid or the highway system. We interact with it or generate it every day. It is not, Pomerantz tell us, just "data about data." It is a means by which the complexity of an object is represented in a simpler form. For example, the title, the author, and the cover art are metadata about a book. When metadata does its job well, it fades i...

  1. Phylogeography and genetic structure of endemic Acmispon argophyllus and A. dendroideus (Fabaceae) across the California Channel Islands.

    Science.gov (United States)

    Wallace, Lisa E; Wheeler, Gregory L; McGlaughlin, Mitchell E; Bresowar, Gerald; Helenurm, Kaius

    2017-05-01

    Taxa inhabiting the California Channel Islands exhibit variation in their degree of isolation, but few studies have considered patterns across the entire archipelago. We studied phylogeography of insular Acmispon argophyllus and A. dendroideus to determine whether infraspecific taxa are genetically divergent and to elucidate patterns of diversification across these islands. DNA sequences were collected from nuclear (ADH) and plastid genomes (rpL16, ndhA, psbD-trnT) from >450 samples on the Channel Islands and California. We estimated population genetic diversity and structure, phylogenetic patterns among populations, and migration rates, and tested for population growth. Populations of northern island A. argophyllus var. niveus are genetically distinct from conspecific populations on southern islands. On the southern islands, A. argophyllus var. argenteus populations on Santa Catalina are phylogenetically distinct from populations of var. argenteus and var. adsurgens on the other southern islands. For A. dendroideus, we found the varieties to be monophyletic. Populations of A. dendroideus var. traskiae on San Clemente are genetically differentiated from other conspecific populations, whereas populations on the northern islands and Santa Catalina show varying degrees of gene flow. Evidence of population growth was found in both species. Oceanic barriers between islands have had a strong influence on population genetic structure in both Acmispon species, although the species have differing phylogeographic patterns. This study provides a contrasting pattern of dispersal on a near-island system that does not follow a strict stepping-stone model, commonly found on isolated island systems. © 2017 Botanical Society of America.

  2. Log-Less Metadata Management on Metadata Server for Parallel File Systems

    Directory of Open Access Journals (Sweden)

    Jianwei Liao

    2014-01-01

    This paper presents a novel metadata management mechanism on the metadata server (MDS) for parallel and distributed file systems. In this technique, the client file system backs up the metadata requests it has sent and the metadata server has already handled, so that the MDS does not need to log metadata changes to nonvolatile storage in order to provide highly available metadata service, and metadata processing performance improves. Because the client file system backs up these sent requests in its own memory, the overhead of handling the backups is much smaller than the overhead the metadata server incurs when it uses logging or journaling to provide highly available metadata service. The experimental results show that this newly proposed mechanism can significantly improve the speed of metadata processing and deliver better I/O data throughput than conventional metadata management schemes, that is, logging or journaling on the MDS. In addition, complete metadata recovery can be achieved by replaying the backup logs cached by all involved clients when the metadata server crashes or otherwise becomes non-operational.
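
    A minimal sketch of the mechanism described above, assuming the paper's division of labour: the client keeps an in-memory backup of each metadata request the server has acknowledged, the server applies requests without journaling, and recovery replays the clients' backups. All class and field names are invented for illustration.

```python
class MetadataServer:
    """In-memory MDS that does not journal metadata changes to non-volatile storage."""
    def __init__(self):
        self.namespace = {}            # path -> metadata attributes (e.g., size, owner)

    def apply(self, request):
        op, path, attrs = request
        if op == "create":
            self.namespace[path] = attrs
        elif op == "update":
            self.namespace[path].update(attrs)
        return "ok"

class ClientFS:
    """Client file system that backs up each request once the MDS acknowledges it."""
    def __init__(self, server):
        self.server = server
        self.backup_log = []           # kept in client memory, per the scheme

    def send(self, request):
        reply = self.server.apply(request)
        if reply == "ok":
            self.backup_log.append(request)
        return reply

def recover(clients):
    """Rebuild MDS state after a crash by replaying every client's backup log.
    A real system would also need a consistent global ordering across clients."""
    fresh = MetadataServer()
    for c in clients:
        for request in c.backup_log:
            fresh.apply(request)
    return fresh

if __name__ == "__main__":
    mds = MetadataServer()
    client = ClientFS(mds)
    client.send(("create", "/data/a.dat", {"size": 0}))
    client.send(("update", "/data/a.dat", {"size": 4096}))
    recovered = recover([client])      # simulate an MDS crash followed by replay
    print(recovered.namespace)
```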

  3. Oak habitat recovery on California's largest islands: Scenarios for the role of corvid seed dispersal

    Science.gov (United States)

    Pesendorfer, Mario B.; Baker, Christopher M.; Stringer, Martin; McDonald-Madden, Eve; Bode, Michael; McEachern, A. Kathryn; Morrison, Scott A.; Sillett, T. Scott

    2018-01-01

    Seed dispersal by birds is central to the passive restoration of many tree communities. Reintroduction of extinct seed dispersers can therefore restore degraded forests and woodlands. To test this, we constructed a spatially explicit simulation model, parameterized with field data, to consider the effect of different seed dispersal scenarios on the extent of oak populations. We applied the model to two islands in California's Channel Islands National Park (USA), one of which has lost a key seed disperser. We used an ensemble modelling approach to simulate island scrub oak (Quercus pacifica) demography. The model was developed and trained to recreate known population changes over a 20-year period on 250-km² Santa Cruz Island, and incorporated acorn dispersal by island scrub-jays (Aphelocoma insularis), deer mice (Peromyscus maniculatus) and gravity, as well as seed predation. We applied the trained model to 215-km² Santa Rosa Island to examine how reintroducing island scrub-jays would affect the rate and pattern of oak population expansion. Oak habitat on Santa Rosa Island has been greatly reduced from its historical extent due to past grazing by introduced ungulates, the last of which were removed by 2011. Our simulation model predicts that a seed dispersal scenario including island scrub-jays would increase the extent of the island scrub oak population on Santa Rosa Island by 281% over 100 years, and by 544% over 200 years. Scenarios without jays would result in little expansion. Simulated long-distance seed dispersal by jays also facilitates establishment of discontinuous patches of oaks, and increases their elevational distribution. Synthesis and applications. Scenario planning provides powerful decision support for conservation managers. We used ensemble modelling of plant demographic and seed dispersal processes to investigate whether the reintroduction of seed dispersers could provide cost-effective means of achieving broader ecosystem restoration goals on

  4. Creating preservation metadata from XML-metadata profiles

    Science.gov (United States)

    Ulbricht, Damian; Bertelmann, Roland; Gebauer, Petra; Hasler, Tim; Klump, Jens; Kirchner, Ingo; Peters-Kottig, Wolfgang; Mettig, Nora; Rusch, Beate

    2014-05-01

    Registration of dataset DOIs at DataCite makes research data citable and comes with the obligation to keep the data accessible in the future. In addition, many universities and research institutions measure data that are unique and not repeatable, such as the data produced by an observational network, and they want to keep these data for future generations. Consequently, such data should be ingested into preservation systems that automatically handle file format changes. Open-source preservation software developed along the definitions of the ISO OAIS reference model is available, but during ingest of data and metadata there are still problems to be solved. File format validation is difficult: format validators are not only remarkably slow, but because of the variety of file formats, different validators return conflicting identification profiles for identical data. These conflicts are hard to resolve. Preservation systems also have a deficit in their support for custom metadata. Furthermore, data producers are sometimes not aware that quality metadata is a key issue for the re-use of data. In the project EWIG, a university institute and a research institute work together with Zuse Institute Berlin, which acts as an infrastructure facility, to develop exemplary workflows for moving research data into OAIS-compliant archives, with emphasis on the geosciences. The Institute for Meteorology provides time-series data from an urban monitoring network, whereas GFZ Potsdam delivers file-based data from research projects. To identify problems in existing preservation workflows, the technical work is complemented by interviews with data practitioners. Policies for handling data and metadata are developed. Furthermore, university teaching material is created to raise future scientists' awareness of research data management. As a testbed for ingest workflows, the digital preservation system Archivematica [1] is used. During the ingest process metadata is generated that is compliant to the
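
    The ingest problems mentioned above (conflicting format identification, fixity, custom metadata) can be made concrete with a small sketch that records a checksum alongside whatever format identifications the validators returned. This is an illustrative workflow fragment under assumed field names, not Archivematica code, and the conflict-flagging rule is an assumption.

```python
import hashlib
import json
from pathlib import Path

def ingest_record(path, format_ids):
    """Build a minimal preservation-metadata stub for one file.

    format_ids: identifications returned by different validators, which may conflict.
    """
    data = Path(path).read_bytes()
    return {
        "file": str(path),
        "size": len(data),
        "sha256": hashlib.sha256(data).hexdigest(),     # fixity for later audits
        "format_candidates": sorted(set(format_ids)),   # keep all candidates; do not guess
        "format_conflict": len(set(format_ids)) > 1,    # flag for manual resolution
    }

if __name__ == "__main__":
    p = Path("example.csv")
    p.write_text("time,temperature\n2014-05-01,12.3\n")
    # Two hypothetical validators disagree about the format of the same file.
    print(json.dumps(ingest_record(p, ["text/csv", "text/plain"]), indent=2))
```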

  5. Minimum area thresholds for rattlesnakes and colubrid snakes on islands in the Gulf of California, Mexico.

    Science.gov (United States)

    Meik, Jesse M; Makowsky, Robert

    2018-01-01

    We expand a framework for estimating minimum area thresholds to elaborate biogeographic patterns between two groups of snakes (rattlesnakes and colubrid snakes) on islands in the western Gulf of California, Mexico. The minimum area thresholds for supporting single species versus coexistence of two or more species relate to hypotheses of the relative importance of energetic efficiency and competitive interactions within groups, respectively. We used ordinal logistic regression probability functions to estimate minimum area thresholds after evaluating the influence of island area, isolation, and age on rattlesnake and colubrid occupancy patterns across 83 islands. Minimum area thresholds for islands supporting one species were nearly identical for rattlesnakes and colubrids (~1.7 km²), suggesting that selective tradeoffs for distinctive life history traits between rattlesnakes and colubrids did not result in any clear advantage of one life history strategy over the other on islands. However, the minimum area threshold for supporting two or more species of rattlesnakes (37.1 km²) was over five times greater than it was for supporting two or more species of colubrids (6.7 km²). The great differences between rattlesnakes and colubrids in minimum area required to support more than one species imply that for islands in the Gulf of California relative extinction risks are higher for coexistence of multiple species of rattlesnakes and that competition within and between species of rattlesnakes is likely much more intense than it is within and between species of colubrids.
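
    A simplified sketch of the threshold estimation, assuming the essential step is a logistic model of occupancy against log island area, solved for the area where the predicted probability crosses 0.5. The study fit ordinal logistic regression over 83 islands with additional covariates (isolation, age); the sketch below collapses this to a single binary fit on invented data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented example data: island areas (km^2) and whether >=2 rattlesnake species co-occur.
area_km2 = np.array([0.2, 0.5, 1.0, 3.0, 8.0, 15.0, 40.0, 90.0, 150.0, 300.0])
two_or_more = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1, 1])

X = np.log10(area_km2).reshape(-1, 1)        # area enters on a log scale
model = LogisticRegression().fit(X, two_or_more)

# Minimum-area threshold: the area where P(>=2 species) = 0.5,
# i.e. where the linear predictor b0 + b1*log10(area) equals 0.
b0, b1 = model.intercept_[0], model.coef_[0][0]
threshold = 10 ** (-b0 / b1)
print(f"estimated minimum area for coexistence: {threshold:.1f} km^2")
```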

  6. Metadata Dictionary Database: A Proposed Tool for Academic Library Metadata Management

    Science.gov (United States)

    Southwick, Silvia B.; Lampert, Cory

    2011-01-01

    This article proposes a metadata dictionary (MDD) be used as a tool for metadata management. The MDD is a repository of critical data necessary for managing metadata to create "shareable" digital collections. An operational definition of metadata management is provided. The authors explore activities involved in metadata management in…

  7. The contributions of Donald Lee Johnson to understanding the Quaternary geologic and biogeographic history of the California Channel Islands

    Science.gov (United States)

    Muhs, Daniel R.

    2013-01-01

    Over a span of 50 years, native Californian Donald Lee Johnson made a number of memorable contributions to our understanding of the California Channel Islands. Among these are (1) recognizing that carbonate dunes, often cemented into eolianite and derived from offshore shelf sediments during lowered sea level, are markers of glacial periods on the Channel Islands; (2) identifying beach rock on the Channel Islands as the northernmost occurrence of this feature on the Pacific Coast of North America; (3) recognizing the role of human activities in historic landscape modification; (4) identifying both the biogenic and pedogenic origins of caliche “ghost forests” and laminar calcrete forms on the Channel Islands; (5) providing the first soil maps of several of the islands, showing diverse pathways of pedogenesis; (6) pointing out the importance of fire in Quaternary landscape history on the Channel Islands, based on detailed stratigraphic studies; and (7), perhaps his greatest contribution, clarifying the origin of Pleistocene pygmy mammoths on the Channel Islands, due not to imagined ancient land bridges but rather to the superb swimming abilities of proboscideans combined with lowered sea level, favorable paleowinds, and an attractive paleovegetation on the Channel Islands. Don was a classic natural historian in the great tradition of Charles Darwin and George Gaylord Simpson, his role models. Don’s work will remain important and useful for many years and is an inspiration to those researching the California Channel Islands today.

  8. Survival and natality rate observations of California sea lions at San Miguel Island, California conducted by Alaska Fisheries Science Center, National Marine Mammal Laboratory from 1987-09-20 to 2014-09-25 (NCEI Accession 0145167)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The dataset contains initial capture and marking data for California sea lion (Zalophus californianus) pups at San Miguel Island, California and subsequent...

  9. THE NEW ONLINE METADATA EDITOR FOR GENERATING STRUCTURED METADATA

    Energy Technology Data Exchange (ETDEWEB)

    Devarakonda, Ranjeet [ORNL; Shrestha, Biva [ORNL; Palanisamy, Giri [ORNL; Hook, Leslie A [ORNL; Killeffer, Terri S [ORNL; Boden, Thomas A [ORNL; Cook, Robert B [ORNL; Zolly, Lisa [United States Geological Service (USGS); Hutchison, Viv [United States Geological Service (USGS); Frame, Mike [United States Geological Service (USGS); Cialella, Alice [Brookhaven National Laboratory (BNL); Lazer, Kathy [Brookhaven National Laboratory (BNL)

    2014-01-01

    Nobody is better suited to describe data than the scientist who created it. This description of the data is called metadata. In general terms, metadata represents the who, what, when, where, why, and how of the dataset [1]. eXtensible Markup Language (XML) is the preferred output format for metadata, as it makes the metadata portable and, more importantly, suitable for system discoverability. The newly developed ORNL Metadata Editor (OME) is a Web-based tool that allows users to create and maintain XML files containing key information, or metadata, about their research. Metadata include information about the specific projects, parameters, time periods, and locations associated with the data. Such information helps put the research findings in context. In addition, the metadata produced using OME allow other researchers to find these data via metadata clearinghouses like Mercury [2][4]. OME is part of ORNL's Mercury software fleet [2][3]. It was jointly developed to support projects funded by the United States Geological Survey (USGS), U.S. Department of Energy (DOE), National Aeronautics and Space Administration (NASA), and National Oceanic and Atmospheric Administration (NOAA). OME's architecture provides a customizable interface to support project-specific requirements. Using this new architecture, the ORNL team developed OME instances for USGS's Core Science Analytics, Synthesis, and Libraries (CSAS&L), DOE's Next Generation Ecosystem Experiments (NGEE) and Atmospheric Radiation Measurement (ARM) Program, and the international Surface Ocean Carbon Dioxide ATlas (SOCAT). Researchers simply use the ORNL Metadata Editor to enter relevant metadata into a Web-based form. From the information on the form, the Metadata Editor can create an XML file on the server where the editor is installed or on the user's personal computer. Researchers can also use the ORNL Metadata Editor to modify existing XML metadata files. As an example, an NGEE Arctic scientist uses OME to register
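
    The abstract describes a web form whose fields are written out as an XML metadata file. Below is a minimal sketch of that step, assuming simple element names (project, parameter, start_date, site); the real OME output follows project-specific metadata schemas, and this is not its actual format.

```python
import xml.etree.ElementTree as ET

def form_to_xml(fields, outfile):
    """Write web-form metadata fields to a simple XML file (illustrative element names only)."""
    root = ET.Element("metadata")
    for name, value in fields.items():
        ET.SubElement(root, name).text = value
    ET.ElementTree(root).write(outfile, encoding="utf-8", xml_declaration=True)

form_to_xml(
    {
        "project": "NGEE Arctic",               # example values typed into the form
        "parameter": "soil temperature",
        "start_date": "2013-06-01",
        "site": "Barrow, Alaska",
    },
    "ngee_example_metadata.xml",
)
```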

  10. Evolution in Metadata Quality: Common Metadata Repository's Role in NASA Curation Efforts

    Science.gov (United States)

    Gilman, Jason; Shum, Dana; Baynes, Katie

    2016-01-01

    Metadata Quality is one of the chief drivers of discovery and use of NASA EOSDIS (Earth Observing System Data and Information System) data. Issues with metadata such as lack of completeness, inconsistency, and use of legacy terms directly hinder data use. As the central metadata repository for NASA Earth Science data, the Common Metadata Repository (CMR) has a responsibility to its users to ensure the quality of CMR search results. This poster covers how we use humanizers, a technique for dealing with the symptoms of metadata issues, as well as our plans for future metadata validation enhancements. The CMR currently indexes 35K collections and 300M granules.
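
    The poster's "humanizers" are described as a way of treating the symptoms of metadata issues, such as legacy terms, at search time. The sketch below shows the general idea as a simple substitution pass over collection keywords; the mapping table and field names are invented for illustration and are not CMR's actual humanizer rules.

```python
# Hypothetical humanizer rules: map legacy or inconsistent terms to preferred ones.
HUMANIZERS = {
    "platform": {"AQUA": "Aqua", "NOAA-n": "NOAA-N"},
    "instrument": {"MODIS/AQUA": "MODIS"},
}

def humanize(collection):
    """Return a copy of a collection record with legacy terms replaced for indexing."""
    fixed = dict(collection)
    for field, rules in HUMANIZERS.items():
        if field in fixed:
            fixed[field] = rules.get(fixed[field], fixed[field])
    return fixed

record = {"short_name": "EXAMPLE_L2", "platform": "AQUA", "instrument": "MODIS/AQUA"}
print(humanize(record))   # indexed form uses preferred terms; the stored metadata is unchanged
```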

  11. ATLAS Metadata Interface (AMI), a generic metadata framework

    CERN Document Server

    Fulachier, Jerome; The ATLAS collaboration

    2016-01-01

    The ATLAS Metadata Interface (AMI) is a mature application of more than 15 years of existence. Mainly used by the ATLAS experiment at CERN, it consists of a very generic tool ecosystem for metadata aggregation and cataloguing. We briefly describe the architecture, the main services and the benefits of using AMI in big collaborations, especially for high energy physics. We focus on the recent improvements, for instance: the lightweight clients (Python, Javascript, C++), the new smart task server system and the Web 2.0 AMI framework for simplifying the development of metadata-oriented web interfaces.

  12. Distributed metadata servers for cluster file systems using shared low latency persistent key-value metadata store

    Science.gov (United States)

    Bent, John M.; Faibish, Sorin; Pedone, Jr., James M.; Tzelnic, Percy; Ting, Dennis P. J.; Ionkov, Latchesar A.; Grider, Gary

    2017-12-26

    A cluster file system is provided having a plurality of distributed metadata servers with shared access to one or more shared low latency persistent key-value metadata stores. A metadata server comprises an abstract storage interface comprising a software interface module that communicates with at least one shared persistent key-value metadata store providing a key-value interface for persistent storage of key-value metadata. The software interface module provides the key-value metadata to the at least one shared persistent key-value metadata store in a key-value format. The shared persistent key-value metadata store is accessed by a plurality of metadata servers. A metadata request can be processed by a given metadata server independently of other metadata servers in the cluster file system. A distributed metadata storage environment is also disclosed that comprises a plurality of metadata servers having an abstract storage interface to at least one shared persistent key-value metadata store.
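
    A compact sketch of the abstract storage interface described above: a metadata server that exposes file-system-style calls but persists everything through a key-value interface to a shared store. The store here is a plain in-memory dict standing in for a low-latency persistent key-value service; all names are illustrative.

```python
import json

class KeyValueStore:
    """Stand-in for a shared, low-latency persistent key-value metadata store."""
    def __init__(self):
        self._kv = {}
    def put(self, key, value):
        self._kv[key] = value
    def get(self, key):
        return self._kv.get(key)

class MetadataServer:
    """One of several metadata servers sharing access to the same key-value store."""
    def __init__(self, store):
        self.store = store             # the abstract storage interface speaks key-value only

    def create(self, path, attrs):
        # Serialize the metadata record into the key-value format expected by the store.
        self.store.put(f"md:{path}", json.dumps(attrs))

    def stat(self, path):
        raw = self.store.get(f"md:{path}")
        return json.loads(raw) if raw else None

shared = KeyValueStore()
mds_a, mds_b = MetadataServer(shared), MetadataServer(shared)
mds_a.create("/proj/run1.h5", {"size": 1 << 20, "owner": "alice"})
print(mds_b.stat("/proj/run1.h5"))    # a request can be served by any server independently
```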

  13. Harvesting NASA's Common Metadata Repository

    Science.gov (United States)

    Shum, D.; Mitchell, A. E.; Durbin, C.; Norton, J.

    2017-12-01

    As part of NASA's Earth Observing System Data and Information System (EOSDIS), the Common Metadata Repository (CMR) stores metadata for over 30,000 datasets from both NASA and international providers along with over 300M granules. This metadata enables sub-second discovery and facilitates data access. While the CMR offers a robust temporal, spatial and keyword search functionality to the general public and international community, it is sometimes more desirable for international partners to harvest the CMR metadata and merge the CMR metadata into a partner's existing metadata repository. This poster will focus on best practices to follow when harvesting CMR metadata to ensure that any changes made to the CMR can also be updated in a partner's own repository. Additionally, since each partner has distinct metadata formats they are able to consume, the best practices will also include guidance on retrieving the metadata in the desired metadata format using CMR's Unified Metadata Model translation software.
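
    A minimal harvesting sketch in Python. The endpoint URL and paging parameters shown here are assumptions based on the publicly documented CMR search API and should be checked against current documentation; the poster's actual best practices (change tracking, format translation via the Unified Metadata Model) are not reproduced here.

```python
import requests

CMR_SEARCH = "https://cmr.earthdata.nasa.gov/search/collections.json"   # assumed endpoint

def harvest_collections(provider, page_size=100, max_pages=2):
    """Fetch collection metadata for one provider, page by page (illustrative only)."""
    records = []
    for page in range(1, max_pages + 1):
        resp = requests.get(
            CMR_SEARCH,
            params={"provider": provider, "page_size": page_size, "page_num": page},
            timeout=30,
        )
        resp.raise_for_status()
        entries = resp.json().get("feed", {}).get("entry", [])
        if not entries:
            break
        records.extend(entries)
    return records

if __name__ == "__main__":
    for item in harvest_collections("GHRC_DAAC")[:5]:
        print(item.get("id"), "-", item.get("title"))
```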

  14. ATLAS Metadata Interface (AMI), a generic metadata framework

    Science.gov (United States)

    Fulachier, J.; Odier, J.; Lambert, F.; ATLAS Collaboration

    2017-10-01

    The ATLAS Metadata Interface (AMI) is a mature application of more than 15 years of existence. Mainly used by the ATLAS experiment at CERN, it consists of a very generic tool ecosystem for metadata aggregation and cataloguing. We briefly describe the architecture, the main services and the benefits of using AMI in big collaborations, especially for high energy physics. We focus on the recent improvements, for instance: the lightweight clients (Python, JavaScript, C++), the new smart task server system and the Web 2.0 AMI framework for simplifying the development of metadata-oriented web interfaces.

  15. ATLAS Metadata Interface (AMI), a generic metadata framework

    CERN Document Server

    AUTHOR|(SzGeCERN)573735; The ATLAS collaboration; Odier, Jerome; Lambert, Fabian

    2017-01-01

    The ATLAS Metadata Interface (AMI) is a mature application of more than 15 years of existence. Mainly used by the ATLAS experiment at CERN, it consists of a very generic tool ecosystem for metadata aggregation and cataloguing. We briefly describe the architecture, the main services and the benefits of using AMI in big collaborations, especially for high energy physics. We focus on the recent improvements, for instance: the lightweight clients (Python, JavaScript, C++), the new smart task server system and the Web 2.0 AMI framework for simplifying the development of metadata-oriented web interfaces.

  16. USGIN ISO metadata profile

    Science.gov (United States)

    Richard, S. M.

    2011-12-01

    The USGIN project has drafted and is using a specification for use of ISO 19115/19/39 metadata, recommendations for simple metadata content, and a proposal for a URI scheme to identify resources using resolvable HTTP URIs (see http://lab.usgin.org/usgin-profiles). The principal target use case is a catalog in which resources can be registered and described by data providers for discovery by users. We are currently using the ESRI Geoportal (Open Source), with configuration files for the USGIN profile. The metadata offered by the catalog must provide sufficient content to guide search engines to locate requested resources, to describe the resource content, provenance, and quality so users can determine whether the resource will serve the intended usage, and finally to enable human users and software clients to obtain or access the resource. In order to achieve an operational federated catalog system, provisions in the ISO specification must be restricted and usage clarified to reduce the heterogeneity of 'standard' metadata and service implementations such that a single client can search against different catalogs, and the metadata returned by catalogs can be parsed reliably to locate required information. Usage of the complex ISO 19139 XML schema allows for a great deal of structured metadata content, but the heterogeneity in approaches to content encoding has hampered development of sophisticated client software that can take advantage of the rich metadata; the lack of such clients in turn reduces motivation for metadata producers to produce content-rich metadata. If the only significant use of the detailed, structured metadata is to format into text for people to read, then the detailed information could be put in free text elements and be just as useful. In order for complex metadata encoding and content to be useful, there must be clear and unambiguous conventions on the encoding that are utilized by the community that wishes to take advantage of advanced metadata
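
    To illustrate why clients need predictable encoding conventions, the sketch below pulls a title out of an ISO 19139-style record with Python. The gmd/gco namespace URIs are the standard ISO ones; the path assumes the common MD_DataIdentification/CI_Citation layout and will miss records that encode the title elsewhere, which is exactly the heterogeneity problem the abstract describes. The record content is invented.

```python
import xml.etree.ElementTree as ET

NS = {
    "gmd": "http://www.isotc211.org/2005/gmd",
    "gco": "http://www.isotc211.org/2005/gco",
}

# A minimal ISO 19139-style fragment (invented content) for the example.
DOC = """<gmd:MD_Metadata xmlns:gmd="http://www.isotc211.org/2005/gmd"
                          xmlns:gco="http://www.isotc211.org/2005/gco">
  <gmd:identificationInfo><gmd:MD_DataIdentification>
    <gmd:citation><gmd:CI_Citation>
      <gmd:title><gco:CharacterString>Example geothermal well dataset</gco:CharacterString></gmd:title>
    </gmd:CI_Citation></gmd:citation>
  </gmd:MD_DataIdentification></gmd:identificationInfo>
</gmd:MD_Metadata>"""

root = ET.fromstring(DOC)
title = root.find(
    "gmd:identificationInfo/gmd:MD_DataIdentification/gmd:citation/"
    "gmd:CI_Citation/gmd:title/gco:CharacterString",
    NS,
)
print(title.text if title is not None else "title not found in the expected location")
```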

  17. Metadata Realities for Cyberinfrastructure: Data Authors as Metadata Creators

    Science.gov (United States)

    Mayernik, Matthew Stephen

    2011-01-01

    As digital data creation technologies become more prevalent, data and metadata management are necessary to make data available, usable, sharable, and storable. Researchers in many scientific settings, however, have little experience or expertise in data and metadata management. In this dissertation, I explore the everyday data and metadata…

  18. The metadata manual a practical workbook

    CERN Document Server

    Lubas, Rebecca; Schneider, Ingrid

    2013-01-01

    Cultural heritage professionals have high levels of training in metadata. However, the institutions in which they practice often depend on support staff, volunteers, and students in order to function. With limited time and funding for training in metadata creation for digital collections, there are often many questions about metadata without a reliable, direct source for answers. The Metadata Manual provides such a resource, answering basic metadata questions that may appear, and exploring metadata from a beginner's perspective. This title covers metadata basics, XML basics, Dublin Core, VRA C

  19. Quaternary sea-level history and the origin of the northernmost coastal aeolianites in the Americas: Channel Islands National Park, California, USA

    Science.gov (United States)

    Muhs, Daniel; Pigati, Jeffrey S.; Schumann, R. Randall; Skipp, Gary L.; Porat, Naomi; DeVogel, Stephen B.

    2018-01-01

    Along most of the Pacific Coast of North America, sand dunes are dominantly silicate-rich. On the California Channel Islands, however, dunes are carbonate-rich, due to high productivity offshore and a lack of dilution by silicate minerals. Older sands on the Channel Islands contain enough carbonate to be cemented into aeolianite. Several generations of carbonate aeolianites are present on the California Channel Islands and represent the northernmost Quaternary coastal aeolianites on the Pacific Coast of North America. The oldest aeolianites on the islands may date to the early Pleistocene and thus far have only been found on Santa Cruz Island. Aeolianites with well-developed soils are found on both San Miguel Island and Santa Rosa Island and likely date to the middle Pleistocene. The youngest and best-dated aeolianites are located on San Miguel Island and Santa Rosa Island. These sediments were deposited during the late Pleistocene following the emergence of marine terraces that date to the last interglacial complex (~ 120,000 yr to ~ 80,000 yr). Based on radiocarbon and luminescence dating, the ages of these units correspond in time with marine isotope stages [MIS] 4, 3, and 2. Sea level was significantly lower than present during all three time periods. Reconstruction of insular paleogeography indicates that large areas to the north and northwest of the islands would have been exposed at these times, providing a ready source of carbonate-rich skeletal sands. These findings differ from a previously held concept that carbonate aeolianites are dominantly an interglacial phenomenon forming during high stands of sea. In contrast, our results are consistent with the findings of other investigators of the past decade who have reported evidence of glacial-age and interstadial-age aeolianites on coastlines of Australia and South Africa. They are also consistent with observations made by Darwin regarding the origin of aeolianites on the island of St. Helena, in the

  20. A Metadata-Rich File System

    Energy Technology Data Exchange (ETDEWEB)

    Ames, S; Gokhale, M B; Maltzahn, C

    2009-01-07

    Despite continual improvements in the performance and reliability of large scale file systems, the management of file system metadata has changed little in the past decade. The mismatch between the size and complexity of large scale data stores and their ability to organize and query their metadata has led to a de facto standard in which raw data is stored in traditional file systems, while related, application-specific metadata is stored in relational databases. This separation of data and metadata requires considerable effort to maintain consistency and can result in complex, slow, and inflexible system operation. To address these problems, we have developed the Quasar File System (QFS), a metadata-rich file system in which files, metadata, and file relationships are all first-class objects. In contrast to hierarchical file systems and relational databases, QFS defines a graph data model composed of files and their relationships. QFS includes Quasar, an XPATH-extended query language for searching the file system. Results from our QFS prototype show the effectiveness of this approach. Compared to the de facto standard, the QFS prototype shows superior ingest performance and comparable query performance on user metadata-intensive operations and superior performance on normal file metadata operations.
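
    A toy version of the graph data model described above, with files, metadata attributes, and typed relationships as first-class objects, plus a query that walks relationships, loosely in the spirit of Quasar's path queries. The structures and the query helper are invented for illustration; QFS itself uses an XPath-extended query language, not Python.

```python
from collections import defaultdict

class MetadataRichFS:
    """Toy graph of files: attributes on nodes, typed edges between files."""
    def __init__(self):
        self.attrs = defaultdict(dict)         # file -> metadata attributes
        self.edges = defaultdict(list)         # file -> [(relationship, target file)]

    def add_file(self, name, **metadata):
        self.attrs[name].update(metadata)

    def relate(self, src, relationship, dst):
        self.edges[src].append((relationship, dst))

    def follow(self, src, relationship):
        """Return files reachable from src over edges of the given relationship type."""
        return [dst for rel, dst in self.edges[src] if rel == relationship]

fs = MetadataRichFS()
fs.add_file("sim_run_007.h5", experiment="plasma", resolution="high")
fs.add_file("sim_run_007_plot.png", kind="figure")
fs.relate("sim_run_007.h5", "derived", "sim_run_007_plot.png")

# Query: which files were derived from the high-resolution run?
for name, meta in fs.attrs.items():
    if meta.get("resolution") == "high":
        print(name, "->", fs.follow(name, "derived"))
```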

  1. METADATA, ITS DESCRIPTION AND ACCESS POINTS, AND INDOMARC

    Directory of Open Access Journals (Sweden)

    Sulistiyo Basuki

    2012-07-01

    The term "metadata" began to appear frequently in the literature on database management systems (DBMS) in the 1980s. The term was used to describe the information needed to record the characteristics of the information held in a database. Many sources define the term metadata. Metadata can identify a source, indicate the location of a document, and provide the summary needed to make use of it. In general, three activities go into creating metadata as an information package: creating the description of the information package, encoding that description, and providing access to the description. This paper discusses the concept of metadata in the library context. It covers the definition of metadata; the functions of metadata; encoding standards; bibliographic records, surrogates, and metadata; the creation of surrogate record content; approaches to metadata formats; and metadata and metadata standards.

  2. Mercury Toolset for Spatiotemporal Metadata

    Science.gov (United States)

    Devarakonda, Ranjeet; Palanisamy, Giri; Green, James; Wilson, Bruce; Rhyne, B. Timothy; Lindsley, Chris

    2010-06-01

    Mercury (http://mercury.ornl.gov) is a set of tools for federated harvesting, searching, and retrieving metadata, particularly spatiotemporal metadata. Version 3.0 of the Mercury toolset provides orders of magnitude improvements in search speed, support for additional metadata formats, integration with Google Maps for spatial queries, facetted type search, support for RSS (Really Simple Syndication) delivery of search results, and enhanced customization to meet the needs of the multiple projects that use Mercury. It provides a single portal to very quickly search for data and information contained in disparate data management systems, each of which may use different metadata formats. Mercury harvests metadata and key data from contributing project servers distributed around the world and builds a centralized index. The search interfaces then allow the users to perform a variety of fielded, spatial, and temporal searches across these metadata sources. This centralized repository of metadata with distributed data sources provides extremely fast search results to the user, while allowing data providers to advertise the availability of their data and maintain complete control and ownership of that data. Mercury periodically (typically daily) harvests metadata sources through a collection of interfaces and re-indexes these metadata to provide extremely rapid search capabilities, even over collections with tens of millions of metadata records. A number of both graphical and application interfaces have been constructed within Mercury, to enable both human users and other computer programs to perform queries. Mercury was also designed to support multiple different projects, so that the particular fields that can be queried and used with search filters are easy to configure for each different project.

  3. Mercury Toolset for Spatiotemporal Metadata

    Science.gov (United States)

    Wilson, Bruce E.; Palanisamy, Giri; Devarakonda, Ranjeet; Rhyne, B. Timothy; Lindsley, Chris; Green, James

    2010-01-01

    Mercury (http://mercury.ornl.gov) is a set of tools for federated harvesting, searching, and retrieving metadata, particularly spatiotemporal metadata. Version 3.0 of the Mercury toolset provides orders of magnitude improvements in search speed, support for additional metadata formats, integration with Google Maps for spatial queries, facetted type search, support for RSS (Really Simple Syndication) delivery of search results, and enhanced customization to meet the needs of the multiple projects that use Mercury. It provides a single portal to very quickly search for data and information contained in disparate data management systems, each of which may use different metadata formats. Mercury harvests metadata and key data from contributing project servers distributed around the world and builds a centralized index. The search interfaces then allow the users to perform a variety of fielded, spatial, and temporal searches across these metadata sources. This centralized repository of metadata with distributed data sources provides extremely fast search results to the user, while allowing data providers to advertise the availability of their data and maintain complete control and ownership of that data. Mercury periodically (typically daily) harvests metadata sources through a collection of interfaces and re-indexes these metadata to provide extremely rapid search capabilities, even over collections with tens of millions of metadata records. A number of both graphical and application interfaces have been constructed within Mercury, to enable both human users and other computer programs to perform queries. Mercury was also designed to support multiple different projects, so that the particular fields that can be queried and used with search filters are easy to configure for each different project.
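
    The harvest-and-index workflow described in the two Mercury records above can be illustrated with a small, self-contained Python sketch: records from several hypothetical providers, each with its own metadata layout, are normalized into a central index that supports fielded and temporal queries. The provider names and field mappings are invented for illustration; this is not Mercury code.

        # Sketch of federated harvesting into a centralized index (not Mercury itself).
        # Each "provider" exposes records in its own metadata layout.
        providers = {
            "provider_a": [
                {"id": "A-1", "title": "Soil moisture 2009", "start": "2009-01-01",
                 "bbox": [-120, 33, -118, 35]},
            ],
            "provider_b": [
                {"identifier": "B-7", "name": "Kelp survey", "begin_date": "2010-06-01",
                 "west": -121, "south": 32, "east": -119, "north": 34},
            ],
        }

        def normalize(source, rec):
            """Map provider-specific fields onto a common index schema."""
            if source == "provider_a":
                return {"id": rec["id"], "title": rec["title"],
                        "start": rec["start"], "bbox": rec["bbox"]}
            return {"id": rec["identifier"], "title": rec["name"], "start": rec["begin_date"],
                    "bbox": [rec["west"], rec["south"], rec["east"], rec["north"]]}

        # Periodic harvest: rebuild the central index from all providers.
        index = [normalize(src, r) for src, recs in providers.items() for r in recs]

        # Fielded + temporal query against the central index.
        hits = [r for r in index if "kelp" in r["title"].lower() and r["start"] >= "2010-01-01"]
        print([r["id"] for r in hits])  # ['B-7']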

  4. The RBV metadata catalog

    Science.gov (United States)

    Andre, Francois; Fleury, Laurence; Gaillardet, Jerome; Nord, Guillaume

    2015-04-01

    RBV (Réseau des Bassins Versants) is a French initiative to consolidate the national efforts made by more than 15 elementary observatories funded by various research institutions (CNRS, INRA, IRD, IRSTEA, Universities) that study river and drainage basins. The RBV Metadata Catalogue aims at giving a unified vision of the work produced by every observatory to both the members of the RBV network and any external person interested in this domain of research. Another goal is to share this information with other existing metadata portals. Metadata management is heterogeneous among observatories, ranging from absence to mature harvestable catalogues. Here, we would like to explain the strategy used to design a state-of-the-art catalogue facing this situation. Main features are as follows: - Multiple input methods: metadata records in the catalogue can either be entered with the graphical user interface, harvested from an existing catalogue, or imported from an information system through simplified web services. - Hierarchical levels: metadata records may describe either an observatory, one of its experimental sites, or a single dataset produced by one instrument. - Multilingualism: metadata can be easily entered in several configurable languages. - Compliance with standards: the back-office part of the catalogue is based on a CSW metadata server (Geosource), which ensures ISO19115 compatibility and the ability to be harvested (globally or partially). Ongoing tasks focus on the use of SKOS thesauri and SensorML descriptions of the sensors. - Ergonomics: the user interface is built with the GWT Framework to offer a rich client application with fully ajaxified navigation. - Source code sharing: the work has led to the development of reusable components which can be used to quickly create new metadata forms in other GWT applications. You can visit the catalogue (http://portailrbv.sedoo.fr/) or contact us by email at rbv@sedoo.fr.
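
    Because the catalogue's back office is a CSW server, it can in principle be queried or harvested with any standard CSW client. The sketch below uses the OWSLib library against a placeholder endpoint; the real CSW address of the RBV catalogue is not given in the abstract, so the URL here is purely illustrative.

        # Sketch: querying an ISO 19115 / CSW catalogue with OWSLib.
        # The endpoint URL below is a placeholder, not the actual RBV service address.
        from owslib.csw import CatalogueServiceWeb

        csw = CatalogueServiceWeb("https://example.org/geonetwork/srv/eng/csw")  # hypothetical URL

        # Retrieve summaries of the first ten records in the catalogue.
        csw.getrecords2(esn="summary", maxrecords=10)

        for identifier, record in csw.records.items():
            print(identifier, "-", record.title)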

  5. Creating context for the experiment record. User-defined metadata: investigations into metadata usage in the LabTrove ELN.

    Science.gov (United States)

    Willoughby, Cerys; Bird, Colin L; Coles, Simon J; Frey, Jeremy G

    2014-12-22

    The drive toward more transparency in research, the growing willingness to make data openly available, and the reuse of data to maximize the return on research investment all increase the importance of being able to find information and make links to the underlying data. The use of metadata in Electronic Laboratory Notebooks (ELNs) to curate experiment data is an essential ingredient for facilitating discovery. The University of Southampton has developed a Web browser-based ELN that enables users to add their own metadata to notebook entries. A survey of these notebooks was completed to assess user behavior and patterns of metadata usage within ELNs, while user perceptions and expectations were gathered through interviews and user-testing activities within the community. The findings, drawn from the survey of metadata use in these notebooks together with feedback from the user community, indicate that while a few groups are comfortable with metadata and are able to design a metadata structure that works effectively, many users adopt a "minimum required" approach or make little attempt to use metadata at all, thereby endangering their ability to recover data in the future. To investigate whether the patterns of metadata use in LabTrove were unusual, a series of surveys was undertaken to investigate metadata usage in a variety of platforms supporting user-defined metadata. These surveys also provided the opportunity to investigate whether interface designs in these other environments might inform strategies for encouraging metadata creation and more effective use of metadata in LabTrove.

  6. Viewing and Editing Earth Science Metadata MOBE: Metadata Object Browser and Editor in Java

    Science.gov (United States)

    Chase, A.; Helly, J.

    2002-12-01

    Metadata is an important, yet often neglected, aspect of successful archival efforts. However, generating robust, useful metadata is often a time-consuming and tedious task. We have been approaching this problem from two directions: first by automating metadata creation, pulling from known sources of data, and in addition, as this paper details, by developing friendly software for human interaction with the metadata. MOBE and COBE (Metadata Object Browser and Editor, and Canonical Object Browser and Editor, respectively) are Java applications for editing and viewing metadata and digital objects. MOBE has already been designed and deployed, and is currently being integrated into other areas of the SIOExplorer project. COBE is in the design and development stage, being created with the same considerations in mind as those for MOBE. Metadata creation, viewing, data object creation, and data object viewing, when taken on a small scale, are all relatively simple tasks. Computer science, however, has an infamous reputation for transforming the simple into the complex. As a system scales upwards to become more robust, new features arise and additional functionality is added to the software being written to manage the system. The software that emerges from such an evolution, though powerful, is often complex and difficult to use. With MOBE the focus is on a tool that does a small number of tasks very well. The result has been an application that enables users to manipulate metadata in an intuitive and effective way. This allows for a tool that serves its purpose without introducing additional cognitive load onto the user, an end goal we continue to pursue.

  7. Multi-facetted Metadata - Describing datasets with different metadata schemas at the same time

    Science.gov (United States)

    Ulbricht, Damian; Klump, Jens; Bertelmann, Roland

    2013-04-01

    Inspired by the wish to re-use research data, a lot of work has been done to bring data systems in the earth sciences together. Discovery metadata is disseminated to data portals to allow building of customized indexes of catalogued dataset items. Data that were once acquired in the context of a scientific project are open for reappraisal and can now be used by scientists who were not part of the original research team. To make data re-use easier, measurement methods and measurement parameters must be documented in an application metadata schema and described in a written publication. Linking datasets to publications - as DataCite [1] does - again requires a specific metadata schema, and every new use context of the measured data may require yet another metadata schema sharing only a subset of information with the meta information already present. To cope with the problem of metadata schema diversity in our common data repository at GFZ Potsdam we established a solution to store file-based research data and describe these with an arbitrary number of metadata schemas. The core component of the data repository is an eSciDoc infrastructure that provides versioned container objects, called eSciDoc [2] "items". The eSciDoc content model allows assigning files to "items" and adding any number of metadata records to these "items". The eSciDoc items can be submitted, revised, and finally published, which makes the data and metadata available through the internet worldwide. GFZ Potsdam uses eSciDoc to support its scientific publishing workflow, including mechanisms for data review in peer review processes by providing temporary web links for external reviewers who do not have credentials to access the data. Based on the eSciDoc API, panMetaDocs [3] provides a web portal for data management in research projects. PanMetaDocs, which is based on panMetaWorks [4], is a PHP-based web application that allows data to be described with any XML-based schema. It uses the eSciDoc infrastructures
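
    The core idea here, one data object carrying several independent metadata records in different schemas, can be sketched without any eSciDoc-specific code. The illustrative Python fragment below attaches a discovery record and an application-specific record to the same versioned item; the class, field names, and XML snippets are invented and do not reflect the actual eSciDoc or panMetaDocs APIs.

        # Illustrative sketch of an item that carries files plus any number of
        # metadata records, each conforming to a different XML schema.
        from dataclasses import dataclass, field

        @dataclass
        class Item:
            files: list = field(default_factory=list)
            metadata: dict = field(default_factory=dict)   # schema name -> XML string
            version: int = 1

            def add_metadata(self, schema: str, xml: str) -> None:
                self.metadata[schema] = xml
                self.version += 1

        item = Item(files=["seismic_profile_01.sgy"])
        item.add_metadata("datacite", "<resource><titles><title>Profile 01</title></titles></resource>")
        item.add_metadata("iso19115", "<gmd:MD_Metadata>...</gmd:MD_Metadata>")

        # A portal can now pick whichever schema it understands.
        print(sorted(item.metadata))   # ['datacite', 'iso19115']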

  8. Discussion of "Fluvial system response to late Pleistocene-Holocene sea-level change on Santa Rosa Island, Channel Islands National Park, California" (Schumann et al., 2016. Geomorphology, 268: 322-340)

    Science.gov (United States)

    Pinter, Nicholas; Hardiman, Mark; Scott, Andrew C.; Anderson, R. Scott

    2018-01-01

    Schumann et al. (2016) presented a field assessment of late Pleistocene to Holocene fluvial sediments preserved in the valleys of Santa Rosa Island, California. This is a rigorous study, based on stratigraphic descriptions of 54 sections and numerous radiocarbon ages. The paper makes important contributions that we would like to highlight, but other parts of the paper rely upon overly simplistic interpretations that lead to misleading conclusions. In one case, a conclusion of the Schumann et al. paper has important management implications for Santa Rosa Island and similar locations, compelling us to discuss and qualify this conclusion.

  9. Tethys Acoustic Metadata Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Tethys database houses the metadata associated with the acoustic data collection efforts by the Passive Acoustic Group. These metadata include dates, locations...

  10. An integrated overview of metadata in ATLAS

    International Nuclear Information System (INIS)

    Gallas, E J; Malon, D; Hawkings, R J; Albrand, S; Torrence, E

    2010-01-01

    Metadata (data about data) arise in many contexts, from many diverse sources, and at many levels in ATLAS. Familiar examples include run-level, luminosity-block-level, and event-level metadata, and, related to processing and organization, dataset-level and file-level metadata, but these categories are neither exhaustive nor orthogonal. Some metadata are known a priori, in advance of data taking or simulation; other metadata are known only after processing, and occasionally, quite late (e.g., detector status or quality updates that may appear after initial reconstruction is complete). Metadata that may seem relevant only internally to the distributed computing infrastructure under ordinary conditions may become relevant to physics analysis under error conditions ('What can I discover about data I failed to process?'). This talk provides an overview of metadata and metadata handling in ATLAS, and describes ongoing work to deliver integrated metadata services in support of physics analysis.

  11. How libraries use publisher metadata

    Directory of Open Access Journals (Sweden)

    Steve Shadle

    2013-11-01

    Full Text Available With the proliferation of electronic publishing, libraries are increasingly relying on publisher-supplied metadata to meet user needs for discovery in library systems. However, many publisher/content provider staff creating metadata are unaware of the end-user environment and how libraries use their metadata. This article provides an overview of the three primary discovery systems that are used by academic libraries, with examples illustrating how publisher-supplied metadata directly feeds into these systems and is used to support end-user discovery and access. Commonly seen metadata problems are discussed, with recommendations suggested. Based on a series of presentations given in Autumn 2012 to the staff of a large publisher, this article uses the University of Washington Libraries systems and services as illustrative examples. Judging by the feedback received from these presentations, publishers (specifically staff not familiar with the big picture of metadata standards work) would benefit from a better understanding of the systems and services libraries provide using the data that is created and managed by publishers.

  12. Metadata Life Cycles, Use Cases and Hierarchies

    Directory of Open Access Journals (Sweden)

    Ted Habermann

    2018-05-01

    Full Text Available The historic view of metadata as “data about data” is expanding to include data about other items that must be created, used, and understood throughout the data and project life cycles. In this context, metadata might better be defined as the structured and standard part of documentation, and the metadata life cycle can be described as the metadata content that is required for documentation in each phase of the project and data life cycles. This incremental approach to metadata creation is similar to the spiral model used in software development. Each phase also has distinct users and specific questions to which they need answers. In many cases, the metadata life cycle involves hierarchies where latter phases have increased numbers of items. The relationships between metadata in different phases can be captured through structure in the metadata standard, or through conventions for identifiers. Metadata creation and management can be streamlined and simplified by re-using metadata across many records. Many of these ideas have been developed to various degrees in several Geoscience disciplines and are being used in metadata for documenting the integrated life cycle of environmental research in the Arctic, including projects, collection sites, and datasets.

  13. Active Marine Station Metadata

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Active Marine Station Metadata is a daily metadata report for active marine bouy and C-MAN (Coastal Marine Automated Network) platforms from the National Data...

  14. Critical Metadata for Spectroscopy Field Campaigns

    Directory of Open Access Journals (Sweden)

    Barbara A. Rasaiah

    2014-04-01

    Full Text Available A field spectroscopy metadata standard is defined as those data elements that explicitly document the spectroscopy dataset and field protocols, sampling strategies, instrument properties and environmental and logistical variables. Standards for field spectroscopy metadata affect the quality, completeness, reliability, and usability of datasets created in situ. Currently there is no standardized methodology for documentation of in situ spectroscopy data or metadata. This paper presents results of an international experiment comprising a web-based survey and expert panel evaluation that investigated critical metadata in field spectroscopy. The survey participants were a diverse group of scientists experienced in gathering spectroscopy data across a wide range of disciplines. Overall, respondents were in agreement about a core metadata set for generic campaign metadata, allowing a prioritized list of critical metadata elements to be proposed, including those relating to viewing geometry, location, general target and sampling properties, illumination, instrument properties, reference standards, calibration, hyperspectral signal properties, atmospheric conditions, and general project details. Consensus was greatest among individual expert groups in specific application domains. The results allow the identification of a core set of metadata fields that enforce long-term data storage and serve as a foundation for a metadata standard. This paper is part one in a series about the core elements of a robust and flexible field spectroscopy metadata standard.
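
    One way to make such a core metadata set operational is to encode the prioritized elements as a structured record that accompanies every campaign dataset. The sketch below groups a few of the element categories named above into a Python dataclass; the field names, units, and example values are illustrative only and are not the standard proposed in the paper.

        # Illustrative grouping of core field-spectroscopy metadata elements.
        # Field names and units are examples, not the proposed standard itself.
        from dataclasses import dataclass

        @dataclass
        class CampaignMetadata:
            project: str                 # general project details
            instrument: str              # instrument properties
            reference_standard: str      # calibration / reference panel used
            latitude: float              # location
            longitude: float
            view_zenith_deg: float       # viewing geometry
            solar_zenith_deg: float      # illumination
            target_description: str      # general target and sampling properties
            atmospheric_conditions: str  # e.g. "clear sky"

        record = CampaignMetadata(
            project="Example grassland campaign",
            instrument="field spectroradiometer (hypothetical entry)",
            reference_standard="white reference panel",
            latitude=-37.8, longitude=145.0,
            view_zenith_deg=0.0, solar_zenith_deg=32.5,
            target_description="dry grass, 1 m plot",
            atmospheric_conditions="clear sky",
        )
        print(record.project, "|", record.instrument)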

  15. A multi-decade time series of kelp forest community structure at San Nicolas Island, California

    Science.gov (United States)

    Lafferty, Kevin D.; Kenner, Michael C.; Estes, James A.; Tinker, M. Tim; Bodkin, James L.; Cowen, Robert K.; Harrold, Christopher; Novak, Mark; Rassweiler, Andrew; Reed, Daniel C.

    2013-01-01

    San Nicolas Island is surrounded by broad areas of shallow subtidal habitat, characterized by dynamic kelp forest communities that undergo dramatic and abrupt shifts in community composition. Although these reefs are fished, the physical isolation of the island means that they receive less impact from human activities than most reefs in Southern California, making San Nicolas an ideal place to evaluate alternative theories about the dynamics of these communities. Here we present monitoring data from seven sampling stations surrounding the island, including data on fish, invertebrate, and algal abundance. These data are unusual among subtidal monitoring data sets in that they combine relatively frequent sampling (twice per year) with an exceptionally long time series (since 1980). Other outstanding qualities of the data set are the high taxonomic resolution captured and the monitoring of permanent quadrats and swaths where the history of the community structure at specific locations has been recorded through time. Finally, the data span a period that includes two of the strongest ENSO events on record, a major shift in the Pacific decadal oscillation, and the reintroduction of sea otters to the island in 1987 after at least 150 years of absence. These events provide opportunities to evaluate the effects of bottom-up forcing, top-down control, and physical disturbance on shallow rocky reef communities.

  16. XML for catalogers and metadata librarians

    CERN Document Server

    Cole, Timothy W

    2013-01-01

    How are today's librarians to manage and describe the ever-expanding volumes of resources, in both digital and print formats? The use of XML in cataloging and metadata workflows can improve metadata quality, the consistency of cataloging workflows, and adherence to standards. This book is intended to enable current and future catalogers and metadata librarians to progress beyond a bare surface-level acquaintance with XML, thereby enabling them to integrate XML technologies more fully into their cataloging workflows. Building on the wealth of work on library descriptive practices, cataloging, and metadata, XML for Catalogers and Metadata Librarians explores the use of XML to serialize, process, share, and manage library catalog and metadata records. The authors' expert treatment of the topic is written to be accessible to those with little or no prior practical knowledge of or experience with how XML is used. Readers will gain an educated appreciation of the nuances of XML and grasp the benefit of more advanced ...
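
    As a small taste of the kind of XML processing such workflows involve, the sketch below parses a made-up Dublin Core description with Python's standard library and extracts the title and subject elements; the same pattern applies to MARCXML, MODS, or other schemas. The example record itself is invented and not taken from the book.

        # Minimal XML processing sketch: reading a (made-up) Dublin Core record.
        import xml.etree.ElementTree as ET

        DC = "http://purl.org/dc/elements/1.1/"
        record = """
        <metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
          <dc:title>Kelp forests of the Channel Islands</dc:title>
          <dc:subject>marine ecology</dc:subject>
          <dc:subject>California</dc:subject>
        </metadata>
        """

        root = ET.fromstring(record)
        title = root.findtext(f"{{{DC}}}title")
        subjects = [el.text for el in root.findall(f"{{{DC}}}subject")]
        print(title, subjects)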

  17. Security in a Replicated Metadata Catalogue

    CERN Document Server

    Koblitz, B

    2007-01-01

    The gLite-AMGA metadata catalogue has been developed by NA4 to provide simple relational metadata access for the EGEE user community. As advanced features, which will be the focus of this presentation, AMGA provides very fine-grained security, also in connection with the built-in support for replication and federation of metadata. AMGA is used extensively by the biomedical community to store medical image metadata and digital libraries, in HEP for logging and bookkeeping data, and in the climate community. The biomedical community intends to deploy a distributed metadata system for medical images consisting of various sites, which range from hospitals to computing centres. Only safe sharing of the highly sensitive metadata, as provided in AMGA, makes such a scenario possible. Other scenarios are digital libraries, which federate copyright-protected (meta-)data into a common catalogue. The biomedical and digital library deployments have used a centralized structure for some time. They now intend to decentralize ...

  18. Mdmap: A Tool for Metadata Collection and Matching

    Directory of Open Access Journals (Sweden)

    Rico Simke

    2014-10-01

    Full Text Available This paper describes a front-end for the semi-automatic collection, matching, and generation of bibliographic metadata obtained from different sources for use within a digitization architecture. The Library of a Billion Words project is building an infrastructure for digitizing text that requires high-quality bibliographic metadata, but currently only sparse metadata from digitized editions is available. The project’s approach is to collect metadata for each digitized item from as many sources as possible. An expert user can then use an intuitive front-end tool to choose matching metadata. The collected metadata are centrally displayed in an interactive grid view. The user can choose which metadata they want to assign to a certain edition, and export these data as MARCXML. This paper presents a new approach to bibliographic work and metadata correction. We try to achieve a high quality of the metadata by generating a large amount of metadata to choose from, as well as by giving librarians an intuitive tool to manage their data.

  19. The essential guide to metadata for books

    CERN Document Server

    Register, Renee

    2013-01-01

    In The Essential Guide to Metadata for Books, you will learn exactly what you need to know to effectively generate, handle and disseminate metadata for books and ebooks. This comprehensive but digestible document will explain the life-cycle of book metadata, industry standards, XML, ONIX and the essential elements of metadata. It will also show you how effective, well-organized metadata can improve your efforts to sell a book, especially when it comes to marketing, discoverability and converting at the point of sale. This information-packed document also includes a glossary of terms

  20. Metadata Effectiveness in Internet Discovery: An Analysis of Digital Collection Metadata Elements and Internet Search Engine Keywords

    Science.gov (United States)

    Yang, Le

    2016-01-01

    This study analyzed digital item metadata and keywords from Internet search engines to learn what metadata elements actually facilitate discovery of digital collections through Internet keyword searching and how significantly each metadata element affects the discovery of items in a digital repository. The study found that keywords from Internet…

  1. Association between mapped vegetation and Quaternary geology on Santa Rosa Island, California

    Science.gov (United States)

    Cronkite-Ratcliff, C.; Corbett, S.; Schmidt, K. M.

    2017-12-01

    Vegetation and surficial geology are closely connected through the interface generally referred to as the critical zone. Not only do they influence each other, but they also provide clues into the effects of climate, topography, and hydrology on the earth's surface. This presentation describes quantitative analyses of the association between the recently compiled, independently generated vegetation and geologic map units on Santa Rosa Island, part of the Channel Islands National Park in Southern California. Santa Rosa Island was heavily grazed by sheep and cattle ranching for over one hundred years prior to its acquisition by the National Park Service. During this period, the island experienced significant erosion and a reduction in the extent and diversity of native plant species. Understanding the relationship between geology and vegetation is necessary for monitoring the recovery of native plant species, enhancing the viability of restoration sites, and understanding hydrologic conditions favorable for plant growth. Differences in grain size distribution and soil depth between geologic units support different plant communities through their influence on soil moisture, while differences in unit age reflect different degrees of pedogenic maturity. We find that unsupervised machine learning methods provide more informative insight into vegetation-geology associations than traditional measures such as Cramer's V and Goodman and Kruskal's lambda. Correspondence analysis shows that unique vegetation-geology patterns associated with beach/dune, grassland, hillslope/colluvial, and fluvial/wetland environments can be discerned from the data. By combining geology and vegetation with topographic variables, mixture models can be used to partition the landscape into multiple representative types, which can then be compared with conceptual models of plant growth and succession over different landforms. Using this collection of methods, we show various ways that Quaternary geology
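
    For readers who want to reproduce the kind of categorical association measure mentioned above, Cramér's V can be computed from a vegetation-by-geology contingency table as sqrt(chi2 / (n * (k - 1))), where n is the total count and k is the smaller of the number of rows and columns. The sketch below uses SciPy and a made-up contingency table, not the Santa Rosa Island data.

        # Cramér's V for a (made-up) vegetation-class x geologic-unit contingency table.
        import numpy as np
        from scipy.stats import chi2_contingency

        # Rows: vegetation classes; columns: Quaternary map units (hypothetical counts).
        table = np.array([
            [120, 10,  5],   # grassland
            [ 15, 80, 20],   # coastal scrub
            [  5, 25, 60],   # dune/beach vegetation
        ])

        chi2, p, dof, expected = chi2_contingency(table)
        n = table.sum()
        k = min(table.shape) - 1
        cramers_v = np.sqrt(chi2 / (n * k))
        print(f"chi2={chi2:.1f}, p={p:.3g}, Cramer's V={cramers_v:.2f}")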

  2. Science friction: data, metadata, and collaboration.

    Science.gov (United States)

    Edwards, Paul N; Mayernik, Matthew S; Batcheller, Archer L; Bowker, Geoffrey C; Borgman, Christine L

    2011-10-01

    When scientists from two or more disciplines work together on related problems, they often face what we call 'science friction'. As science becomes more data-driven, collaborative, and interdisciplinary, demand increases for interoperability among data, tools, and services. Metadata--usually viewed simply as 'data about data', describing objects such as books, journal articles, or datasets--serve key roles in interoperability. Yet we find that metadata may be a source of friction between scientific collaborators, impeding data sharing. We propose an alternative view of metadata, focusing on its role in an ephemeral process of scientific communication, rather than as an enduring outcome or product. We report examples of highly useful, yet ad hoc, incomplete, loosely structured, and mutable, descriptions of data found in our ethnographic studies of several large projects in the environmental sciences. Based on this evidence, we argue that while metadata products can be powerful resources, usually they must be supplemented with metadata processes. Metadata-as-process suggests the very large role of the ad hoc, the incomplete, and the unfinished in everyday scientific work.

  3. Uncinariasis in northern fur seal and California sea lion pups from California.

    Science.gov (United States)

    Lyons, E T; DeLong, R L; Melin, S R; Tolliver, S C

    1997-10-01

    Northern fur seal (Callorhinus ursinus) (n = 25) and California sea lion (Zalophus californianus) (n = 53) pups, found dead on rookeries on San Miguel Island (California, USA), were examined for adult Uncinaria spp. Prevalence of these nematodes was 96% in fur seal pups and 100% in sea lion pups. Mean intensity of Uncinaria spp. per infected pup was 643 in fur seals and 1,284 in sea lions. Eggs of Uncinaria spp. from dead sea lion pups underwent embryonation in an incubator; development to the free-living third stage larva occurred within the egg. This study provided some specific information on hookworm infections in northern fur seal and California sea lion pups on San Miguel Island. High prevalence rate of Uncinaria spp. in both species of pinnipeds was documented and much higher numbers (2X) of hookworms were present in sea lion than fur seal pups.

  4. CMO: Cruise Metadata Organizer for JAMSTEC Research Cruises

    Science.gov (United States)

    Fukuda, K.; Saito, H.; Hanafusa, Y.; Vanroosebeke, A.; Kitayama, T.

    2011-12-01

    JAMSTEC's Data Research Center for Marine-Earth Sciences manages and distributes a wide variety of observational data and samples obtained from JAMSTEC research vessels and deep sea submersibles. Generally, metadata are essential to identify how and where data and samples were obtained. In JAMSTEC, cruise metadata include cruise information such as cruise ID, name of vessel and research theme, and diving information such as dive number, name of submersible and position of diving point. They are submitted by chief scientists of research cruises in the Microsoft Excel® spreadsheet format, and registered into a data management database to confirm receipt of observational data files, cruise summaries, and cruise reports. The cruise metadata are also published via "JAMSTEC Data Site for Research Cruises" within two months after the end of a cruise. Furthermore, these metadata are distributed with observational data, images and samples via several data and sample distribution websites after a publication moratorium period. However, there are two operational issues in the metadata publishing process. One is duplicated effort and asynchronous metadata across multiple distribution websites, caused by manual metadata entry into individual websites by administrators. The other is the differing data types and representations of metadata in each website. To solve those problems, we have developed a cruise metadata organizer (CMO) which allows cruise metadata to be connected from the data management database to several distribution websites. CMO is comprised of three components: an Extensible Markup Language (XML) database, Enterprise Application Integration (EAI) software, and a web-based interface. The XML database is used because of its flexibility for any change of metadata. Daily differential uptake of metadata from the data management database to the XML database is automatically processed via the EAI software. Some metadata are entered into the XML database using the web

  5. Optimising metadata workflows in a distributed information environment

    OpenAIRE

    Robertson, R. John; Barton, Jane

    2005-01-01

    The different purposes present within a distributed information environment create the potential for repositories to enhance their metadata by capitalising on the diversity of metadata available for any given object. This paper presents three conceptual reference models required to achieve this optimisation of metadata workflow: the ecology of repositories, the object lifecycle model, and the metadata lifecycle model. It suggests a methodology for developing the metadata lifecycle model, and ...

  6. Island Fox Veterinary And Pathology Services On San Clemente Island, California

    Science.gov (United States)

    2017-02-01

    2010), which led to four of the subspecies being listed as federally endangered (U.S. Fish and Wildlife Service 2004). The declines on the northern...the California Animal Health and Food Safety Laboratory System (CAHFS), at the University of California, Davis, to be necropsied. Necropsy reports... additional database cataloging all foxes submitted for necropsy for use in tracking both submissions and subsequent findings. IWS submits full databases

  7. Metadata in Scientific Dialects

    Science.gov (United States)

    Habermann, T.

    2011-12-01

    Discussions of standards in the scientific community have been compared to religious wars for many years. The only things scientists agree on in these battles are either "standards are not useful" or "everyone can benefit from using my standard". Instead of achieving the goal of facilitating interoperable communities, in many cases the standards have served to build yet another barrier between communities. Some important progress towards diminishing these obstacles has been made in the data layer with the merger of the NetCDF and HDF scientific data formats. The universal adoption of XML as the standard for representing metadata and the recent adoption of ISO metadata standards by many groups around the world suggests that similar convergence is underway in the metadata layer. At the same time, scientists and tools will likely need support for native tongues for some time. I will describe an approach that combines re-usable metadata "components" and restful web services that provide those components in many dialects. This approach uses advanced XML concepts of referencing and linking to construct complete records that include reusable components and builds on the ISO Standards as the "unabridged dictionary" that encompasses the content of many other dialects.

  8. Metadata Wizard: an easy-to-use tool for creating FGDC-CSDGM metadata for geospatial datasets in ESRI ArcGIS Desktop

    Science.gov (United States)

    Ignizio, Drew A.; O'Donnell, Michael S.; Talbert, Colin B.

    2014-01-01

    Creating compliant metadata for scientific data products is mandated for all federal Geographic Information Systems professionals and is a best practice for members of the geospatial data community. However, the complexity of the Federal Geographic Data Committee's Content Standards for Digital Geospatial Metadata, the limited availability of easy-to-use tools, and recent changes in the ESRI software environment continue to make metadata creation a challenge. Staff at the U.S. Geological Survey Fort Collins Science Center have developed a Python toolbox for ESRI ArcDesktop to facilitate a semi-automated workflow to create and update metadata records in ESRI's 10.x software. The U.S. Geological Survey Metadata Wizard tool automatically populates several metadata elements: the spatial reference, spatial extent, geospatial presentation format, vector feature count or raster column/row count, native system/processing environment, and the metadata creation date. Once the software auto-populates these elements, users can easily add attribute definitions and other relevant information in a simple Graphical User Interface. The tool, which offers a simple design free of esoteric metadata language, has the potential to save many government and non-government organizations a significant amount of time and cost by facilitating the development of metadata compliant with the Federal Geographic Data Committee's Content Standards for Digital Geospatial Metadata for ESRI software users. A working version of the tool is now available for ESRI ArcDesktop, versions 10.0, 10.1, and 10.2 (downloadable at http://www.sciencebase.gov/metadatawizard).

  9. Inheritance rules for Hierarchical Metadata Based on ISO 19115

    Science.gov (United States)

    Zabala, A.; Masó, J.; Pons, X.

    2012-04-01

    Mainly, ISO19115 has been used to describe metadata for datasets and services. Furthermore, the ISO19115 standard (as well as the new draft ISO19115-1) includes a conceptual model that allows metadata to be described at different levels of granularity, structured in hierarchical levels, both in aggregated resources such as series and datasets, and in more disaggregated resources such as types of entities (feature type), types of attributes (attribute type), entities (feature instances) and attributes (attribute instances). In theory, applying a complete metadata structure to all hierarchical levels of metadata, from the whole series down to individual feature attributes, is possible, but storing all metadata at all levels is completely impractical. An inheritance mechanism is needed to store each metadata and quality element at the optimum hierarchical level and to allow easy and efficient documentation of metadata, both in an Earth observation scenario such as multiband imagery from a multi-satellite mission, and in a complex vector topographical map that includes several feature types separated into layers (e.g. administrative limits, contour lines, building polygons, road lines, etc.). Moreover, because maps are traditionally split into tiles for handling at detailed scales, or because of satellite acquisition characteristics, each of the previous thematic layers (e.g. 1:5000 roads for a country) or bands (a Landsat-5 TM cover of the Earth) is tiled into several parts (sheets or scenes, respectively). According to the hierarchy in ISO 19115, the definition of general metadata can be supplemented by spatially specific metadata that, when required, either inherits or overrides the general case (G.1.3). Annex H of this standard states that only metadata exceptions are defined at lower levels, so it is not necessary to generate the full registry of metadata for each level but to link particular values to the general value that they inherit. Conceptually the metadata
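
    The inheritance rule sketched above (record only the exceptions at lower levels and inherit everything else) can be illustrated in a few lines of Python. The series/sheet/feature-type levels and element names below are invented for the example and do not reproduce the authors' implementation or the ISO encoding itself.

        # Sketch of hierarchical metadata inheritance: lower levels store only
        # exceptions and inherit everything else from their parent level.
        series = {"organisation": "Example Mapping Agency", "reference_system": "EPSG:25831",
                  "lineage": "Compiled from 1:5000 photogrammetry"}

        dataset_sheet_17 = {"parent": series, "extent": "sheet 17", "date": "2011-05-03"}

        feature_type_roads = {"parent": dataset_sheet_17,
                              "lineage": "Digitised from GPS field survey"}  # exception overriding the series lineage

        def resolve(level, element):
            """Walk up the hierarchy until the element is found (override beats inheritance)."""
            while level is not None:
                if element in level:
                    return level[element]
                level = level.get("parent")
            return None

        print(resolve(feature_type_roads, "lineage"))           # Digitised from GPS field survey
        print(resolve(feature_type_roads, "reference_system"))  # EPSG:25831 (inherited from the series)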

  10. The Machinic Temporality of Metadata

    Directory of Open Access Journals (Sweden)

    Claudio Celis

    2015-03-01

    Full Text Available In 1990 Deleuze introduced the hypothesis that disciplinary societies are gradually being replaced by a new logic of power: control. Accordingly, Matteo Pasquinelli has recently argued that we are moving towards societies of metadata, which correspond to a new stage of what Deleuze called control societies. Societies of metadata are characterised by the central role that meta-information acquires both as a source of surplus value and as an apparatus of social control. The aim of this article is to develop Pasquinelli’s thesis by examining the temporal scope of these emerging societies of metadata. In particular, this article employs Guattari’s distinction between human and machinic times. Through these two concepts, this article attempts to show how societies of metadata combine the two poles of capitalist power formations as identified by Deleuze and Guattari, i.e. social subjection and machinic enslavement. It begins by presenting the notion of metadata in order to identify some of the defining traits of contemporary capitalism. It then examines Berardi’s account of the temporality of the attention economy from the perspective of the asymmetric relation between cyber-time and human time. The third section challenges Berardi’s definition of the temporality of the attention economy by using Guattari’s notions of human and machinic times. Parts four and five fall back upon Deleuze and Guattari’s notions of machinic surplus labour and machinic enslavement, respectively. The concluding section tries to show that machinic and human times constitute two poles of contemporary power formations that articulate the temporal dimension of societies of metadata.

  11. The impact of lidar elevation uncertainty on mapping intertidal habitats on barrier islands

    Science.gov (United States)

    Enwright, Nicholas M.; Wang, Lei; Borchert, Sinéad M.; Day, Richard H.; Feher, Laura C.; Osland, Michael J.

    2018-01-01

    While airborne lidar data have revolutionized the spatial resolution that elevations can be realized, data limitations are often magnified in coastal settings. Researchers have found that airborne lidar can have a vertical error as high as 60 cm in densely vegetated intertidal areas. The uncertainty of digital elevation models is often left unaddressed; however, in low-relief environments, such as barrier islands, centimeter differences in elevation can affect exposure to physically demanding abiotic conditions, which greatly influence ecosystem structure and function. In this study, we used airborne lidar elevation data, in situ elevation observations, lidar metadata, and tide gauge information to delineate low-lying lands and the intertidal wetlands on Dauphin Island, a barrier island along the coast of Alabama, USA. We compared three different elevation error treatments, which included leaving error untreated and treatments that used Monte Carlo simulations to incorporate elevation vertical uncertainty using general information from lidar metadata and site-specific Real-Time Kinematic Global Position System data, respectively. To aid researchers in instances where limited information is available for error propagation, we conducted a sensitivity test to assess the effect of minor changes to error and bias. Treatment of error with site-specific observations produced the fewest omission errors, although the treatment using the lidar metadata had the most well-balanced results. The percent coverage of intertidal wetlands was increased by up to 80% when treating the vertical error of the digital elevation models. Based on the results from the sensitivity analysis, it could be reasonable to use error and positive bias values from literature for similar environments, conditions, and lidar acquisition characteristics in the event that collection of site-specific data is not feasible and information in the lidar metadata is insufficient. The methodology presented in
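
    A stripped-down version of the Monte Carlo treatment described above can be written in a few lines of NumPy: perturb the lidar DEM with an assumed vertical bias and random error, reclassify intertidal cells against the tidal datums on each iteration, and summarize the probability that each cell is intertidal. The error values, tidal datums, and synthetic DEM below are placeholders, not the study's actual inputs.

        # Monte Carlo sketch: propagating lidar vertical uncertainty into an
        # intertidal-habitat map. All numbers below are illustrative placeholders.
        import numpy as np

        rng = np.random.default_rng(0)

        dem = rng.uniform(-0.5, 1.5, size=(200, 200))   # synthetic barrier-island DEM (m)
        bias, rmse = 0.10, 0.15                         # assumed lidar bias and random error (m)
        mllw, mhhw = -0.2, 0.4                          # assumed tidal datums (m)

        n_iter = 500
        intertidal_count = np.zeros_like(dem)

        for _ in range(n_iter):
            # Remove the assumed positive bias, then add random vertical error per cell.
            realization = dem - bias + rng.normal(0.0, rmse, size=dem.shape)
            intertidal_count += (realization >= mllw) & (realization <= mhhw)

        prob_intertidal = intertidal_count / n_iter
        print("fraction of cells intertidal in >50% of runs:", (prob_intertidal > 0.5).mean())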

  12. Incorporating ISO Metadata Using HDF Product Designer

    Science.gov (United States)

    Jelenak, Aleksandar; Kozimor, John; Habermann, Ted

    2016-01-01

    The need to store increasing amounts of metadata of varying complexity in HDF5 files is rapidly outgrowing the capabilities of the Earth science metadata conventions currently in use. Until now, data producers have had little choice but to come up with ad hoc solutions to this challenge. Such solutions, in turn, pose a wide range of issues for data managers, distributors, and, ultimately, data users. The HDF Group is experimenting with a novel approach of using ISO 19115 metadata objects as a catch-all container for all the metadata that cannot be fitted into the current Earth science data conventions. This presentation will showcase how the HDF Product Designer software can be utilized to help data producers include various ISO metadata objects in their products.
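
    The general idea, carrying an ISO 19115 XML fragment inside the HDF5 product itself, can be sketched with the h5py library. The group and attribute names and the XML snippet below are illustrative assumptions and are not the convention produced by HDF Product Designer.

        # Sketch: embedding an ISO-style XML metadata snippet in an HDF5 file.
        # Group/attribute names and the XML text are illustrative only.
        import h5py

        iso_snippet = (
            '<gmd:MD_Metadata xmlns:gmd="http://www.isotc211.org/2005/gmd">'
            "<!-- title, extent, lineage, etc. would go here -->"
            "</gmd:MD_Metadata>"
        )

        with h5py.File("example_product.h5", "w") as f:
            grp = f.create_group("metadata")
            grp.attrs["iso19115"] = iso_snippet          # stored as a variable-length string
            f.create_dataset("measurement", data=[1.0, 2.0, 3.0])

        with h5py.File("example_product.h5", "r") as f:
            print(f["metadata"].attrs["iso19115"][:40], "...")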

  13. Evaluating the privacy properties of telephone metadata

    Science.gov (United States)

    Mayer, Jonathan; Mutchler, Patrick; Mitchell, John C.

    2016-01-01

    Since 2013, a stream of disclosures has prompted reconsideration of surveillance law and policy. One of the most controversial principles, both in the United States and abroad, is that communications metadata receives substantially less protection than communications content. Several nations currently collect telephone metadata in bulk, including on their own citizens. In this paper, we attempt to shed light on the privacy properties of telephone metadata. Using a crowdsourcing methodology, we demonstrate that telephone metadata is densely interconnected, can trivially be reidentified, and can be used to draw sensitive inferences. PMID:27185922

  14. Archive of Geosample Data and Information from the University of Southern California (USC) Department of Earth Sciences

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Metadata describing geological samples curated by Earth Sciences Department of the University of Southern California (USC) collected during the period from 1922 to...

  15. U.S. EPA Metadata Editor (EME)

    Data.gov (United States)

    U.S. Environmental Protection Agency — The EPA Metadata Editor (EME) allows users to create geospatial metadata that meets EPA's requirements. The tool has been developed as a desktop application that...

  16. Ecological change on California's Channel Islands from the Pleistocene to the Anthropocene

    Science.gov (United States)

    Rick, Torben C.; Sillett, T. Scott; Ghalambor, Cameron K.; Hofman, Courtney A.; Ralls, Katherine; Anderson, R. Scott; Boser, Christina L.; Braje, Todd J.; Cayan, Daniel R.; Chesser, R. Terry; Collins, Paul W.; Erlandson, Jon M.; Faulkner, Kate R.; Fleischer, Robert; Funk, W. Chris; Galipeau, Russell; Huston, Ann; King, Julie; Laughrin, Lyndal L.; Maldonado, Jesus; McEachern, Kathryn; Muhs, Daniel R.; Newsome, Seth D.; Reeder-Myers, Leslie; Still, Christopher; Morrison, Scott A.

    2014-01-01

    Historical ecology is becoming an important focus in conservation biology and offers a promising tool to help guide ecosystem management. Here, we integrate data from multiple disciplines to illuminate the past, present, and future of biodiversity on California's Channel Islands, an archipelago that has undergone a wide range of land-use and ecological changes. Our analysis spans approximately 20,000 years, from before human occupation and through Native American hunter–gatherers, commercial ranchers and fishers, the US military, and other land managers. We demonstrate how long-term, interdisciplinary research provides insight into conservation decisions, such as setting ecosystem restoration goals, preserving rare and endemic taxa, and reducing the impacts of climate change on natural and cultural resources. We illustrate the importance of historical perspectives for understanding modern patterns and ecological change and present an approach that can be applied generally in conservation management planning.

  17. Making Interoperability Easier with the NASA Metadata Management Tool

    Science.gov (United States)

    Shum, D.; Reese, M.; Pilone, D.; Mitchell, A. E.

    2016-12-01

    ISO 19115 has enabled interoperability amongst tools, yet many users find it hard to build ISO metadata for their collections because it can be large and overly flexible for their needs. The Metadata Management Tool (MMT), part of NASA's Earth Observing System Data and Information System (EOSDIS), offers users a modern, easy-to-use, browser-based tool to develop ISO-compliant metadata. Through a simplified UI experience, metadata curators can create and edit collections without any understanding of the complex ISO 19115 format, while still generating compliant metadata. The MMT is also able to assess the completeness of collection-level metadata by evaluating it against a variety of metadata standards. The tool provides users with clear guidance on how to change their metadata in order to improve its quality and compliance. It is based on NASA's Unified Metadata Model for Collections (UMM-C), a simpler metadata model that can be cleanly mapped to ISO 19115. This allows metadata authors and curators to meet ISO compliance requirements faster and more accurately. The MMT and UMM-C have been developed in an agile fashion, with recurring end-user tests and reviews to continually refine the tool, the model, and the ISO mappings. This process allows for continual improvement and evolution to meet the community's needs.

  18. Geospatial metadata retrieval from web services

    Directory of Open Access Journals (Sweden)

    Ivanildo Barbosa

    Full Text Available Nowadays, producers of geospatial data in either raster or vector formats are able to make them available on the World Wide Web by deploying web services that enable users to access and query those contents even without specific software for geoprocessing. Several providers around the world have deployed instances of WMS (Web Map Service), WFS (Web Feature Service) and WCS (Web Coverage Service), all of them specified by the Open Geospatial Consortium (OGC). In consequence, metadata about the available contents can be retrieved to be compared with similar offline datasets from other sources. This paper presents a brief summary and describes the matching process between the specifications for OGC web services (WMS, WFS and WCS) and the specifications for metadata required by ISO 19115 - adopted as a reference for several national metadata profiles, including the Brazilian one. This process focuses on retrieving metadata about the identification and data quality packages, as well as indicating the directions to retrieve metadata related to other packages. Therefore, users are able to assess whether the provided contents fit their purposes.
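
    In practice this retrieval amounts to issuing a GetCapabilities request and mapping the response onto ISO 19115 identification elements. The sketch below does this with the OWSLib library against a placeholder WMS endpoint; the URL and the mapping of which capabilities field feeds which ISO element are assumptions made for illustration, not the paper's matching tables.

        # Sketch: harvesting identification metadata from a WMS GetCapabilities response.
        # The service URL is a placeholder; the field mapping is illustrative.
        from owslib.wms import WebMapService

        wms = WebMapService("https://example.org/geoserver/wms", version="1.3.0")  # hypothetical endpoint

        # Fields that roughly feed ISO 19115 identification metadata.
        iso_identification = {
            "title": wms.identification.title,
            "abstract": wms.identification.abstract,
            "keywords": wms.identification.keywords,
            "layers": list(wms.contents),            # candidate dataset identifiers
        }
        print(iso_identification["title"])

        # Per-layer spatial extent (candidate for the ISO geographic bounding box).
        for name, layer in wms.contents.items():
            print(name, layer.boundingBoxWGS84)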

  19. Metadata and Service at the GFZ ISDC Portal

    Science.gov (United States)

    Ritschel, B.

    2008-05-01

    The online service portal of the GFZ Potsdam Information System and Data Center (ISDC) is an access point for all manner of geoscientific geodata, its corresponding metadata, scientific documentation and software tools. At present almost 2000 national and international users and user groups have the opportunity to request Earth science data from a portfolio of 275 different product types and more than 20 million single data files with a total volume of approximately 12 TByte. The majority of the data and information the portal currently offers to the public are global geomonitoring products such as satellite orbit and Earth gravity field data, as well as geomagnetic and atmospheric data. These products for Earth's changing system are provided via state-of-the-art retrieval techniques. The data product catalog system behind these techniques is based on the extensive usage of standardized metadata, which describe the different geoscientific product types and data products in a uniform way. Whereas all ISDC product types are specified by NASA's Directory Interchange Format (DIF), Version 9.0 Parent XML DIF metadata files, the individual data files are described by extended DIF metadata documents. Depending on the beginning of the scientific project, one part of the data files is described by extended DIF, Version 6 metadata documents and the other part is specified by data Child XML DIF metadata documents. Both the product-type-dependent Parent DIF metadata documents and the data-file-dependent Child DIF metadata documents are derived from a base-DIF.xsd XML schema file. The ISDC metadata philosophy defines a geoscientific product as a package consisting of mostly one, or sometimes more than one, data file plus one extended DIF metadata file. Because NASA's DIF metadata standard has been developed in order to specify a collection of data only, the extension of the DIF standard consists of new and specific attributes, which are necessary for

  20. The PDS4 Metadata Management System

    Science.gov (United States)

    Raugh, A. C.; Hughes, J. S.

    2018-04-01

    We present the key features of the Planetary Data System (PDS) PDS4 Information Model as an extendable metadata management system for planetary metadata related to data structure, analysis/interpretation, and provenance.

  1. Data, Metadata, and Ted

    OpenAIRE

    Borgman, Christine L.

    2014-01-01

    Ted Nelson coined the term “hypertext” and developed Xanadu in a universe parallel to the one in which librarians, archivists, and documentalists were creating metadata to establish cross-connections among the myriad topics of this world. When these universes collided, comets exploded as ontologies proliferated. Black holes were formed as data disappeared through lack of description. Today these universes coexist, each informing the other, if not always happily: the formal rules of metadata, ...

  2. Handbook of metadata, semantics and ontologies

    CERN Document Server

    Sicilia, Miguel-Angel

    2013-01-01

    Metadata research has emerged as a discipline cross-cutting many domains, focused on the provision of distributed descriptions (often called annotations) to Web resources or applications. Such associated descriptions are supposed to serve as a foundation for advanced services in many application areas, including search and location, personalization, federation of repositories and automated delivery of information. Indeed, the Semantic Web is in itself a concrete technological framework for ontology-based metadata. For example, Web-based social networking requires metadata describing people and

  3. Development of a Metadata Generator Application for Cultural Heritage Collections

    Directory of Open Access Journals (Sweden)

    Wimba Agra Wicesa

    2017-03-01

    Full Text Available Cultural heritage is an important asset used as a source of information in the study of history. Managing cultural heritage data is something that must be attended to in order to preserve the integrity of cultural heritage data in the future. Creating cultural heritage metadata is one step that can be taken to preserve the value of an artifact. By using the metadata concept, the information on each cultural heritage object becomes easy to read, manage, and retrieve even after it has been stored for a long time. In addition, by using the metadata concept, information about cultural heritage can be used by many systems. Cultural heritage metadata records are quite large, so building cultural heritage metadata takes a considerable amount of time. Moreover, human error can also hamper the process of building cultural heritage metadata. Generating cultural heritage metadata through the Metadata Generator application becomes faster and easier because it is performed automatically by the system. The application also reduces human error, making the generation process more efficient.

  4. Fire and vegetation history on Santa Rosa Island, Channel Islands, and long-term environmental change in southern California

    Science.gov (United States)

    Starratt, Scott W.; Pinter, N.; Anderson, Robert S.; Jass, R.B.

    2009-01-01

    The long-term history of vegetation and fire was investigated at two locations – Soledad Pond (275 m; from ca. 12 000 cal. a BP) and Abalone Rocks Marsh (0 m; from ca. 7000 cal. a BP) – on Santa Rosa Island, situated off the coast of southern California. A coastal conifer forest covered highlands of Santa Rosa during the last glacial, but by ca. 11 800 cal. a BP Pinus stands, coastal sage scrub and grassland replaced the forest as the climate warmed. The early Holocene became increasingly drier, particularly after ca. 9150 cal. a BP, as the pond dried frequently, and coastal sage scrub covered the nearby hillslopes. By ca. 6900 cal. a BP grasslands recovered at both sites. Pollen of wetland plants became prominent at Soledad Pond after ca. 4500 cal. a BP, and at Abalone Rocks Marsh after ca. 3465 cal. a BP. Diatoms suggest freshening of the Abalone Rocks Marsh somewhat later, probably by additional runoff from the highlands. Introduction of non-native species by ranchers occurred subsequent to AD 1850. Charcoal influx is high early in the record, but declines during the early Holocene when minimal biomass suggests extended drought. A general increase occurs after ca. 7000 cal. a BP, and especially after ca. 4500 cal. a BP. The Holocene pattern closely resembles population levels constructed from the archaeological record, and suggests a potential influence by humans on the fire regime of the islands, particularly during the late Holocene.

  5. The critical role of islands for waterbird breeding and foraging habitat in managed ponds of the South Bay Salt Pond Restoration Project, South San Francisco Bay, California

    Science.gov (United States)

    Ackerman, Joshua T.; Hartman, C. Alex; Herzog, Mark P.; Smith, Lacy M.; Moskal, Stacy M.; De La Cruz, Susan E. W.; Yee, Julie L.; Takekawa, John Y.

    2014-01-01

    The South Bay Salt Pond Restoration Project aims to restore 50–90 percent of former salt evaporation ponds into tidal marsh in South San Francisco Bay, California. However, large numbers of waterbirds use these ponds annually as nesting and foraging habitat. Islands within ponds are particularly important habitat for nesting, foraging, and roosting waterbirds. To maintain current waterbird populations, the South Bay Salt Pond Restoration Project plans to create new islands within former salt ponds in South San Francisco Bay. In a series of studies, we investigated pond and individual island attributes that are most beneficial to nesting, foraging, and roosting waterbirds.

  6. Evolving Metadata in NASA Earth Science Data Systems

    Science.gov (United States)

    Mitchell, A.; Cechini, M. F.; Walter, J.

    2011-12-01

    NASA's Earth Observing System (EOS) is a coordinated series of satellites for long term global observations. NASA's Earth Observing System Data and Information System (EOSDIS) is a petabyte-scale archive of environmental data that supports global climate change research by providing end-to-end services from EOS instrument data collection to science data processing to full access to EOS and other earth science data. On a daily basis, the EOSDIS ingests, processes, archives and distributes over 3 terabytes of data from NASA's Earth Science missions, representing over 3,500 data products spanning a range of science disciplines. EOSDIS is currently comprised of 12 discipline-specific data centers that are collocated with centers of science discipline expertise. Metadata is used in all aspects of NASA's Earth Science data lifecycle, from the initial measurement gathering to the accessing of data products. Missions use metadata in their science data products when describing information such as the instrument/sensor, operational plan, and geographic region. Acting as the curator of the data products, data centers employ metadata for preservation, access and manipulation of data. EOSDIS provides a centralized metadata repository called the Earth Observing System (EOS) ClearingHouse (ECHO) for data discovery and access via a service-oriented architecture (SOA) between data centers and science data users. ECHO receives inventory metadata from data centers, which generate metadata files that comply with the ECHO Metadata Model. NASA's Earth Science Data and Information System (ESDIS) Project established a Tiger Team to study and make recommendations regarding the adoption of the international metadata standard ISO 19115 in EOSDIS. The result was a technical report recommending an evolution of NASA data systems towards a consistent application of ISO 19115 and related standards, including the creation of a NASA-specific convention for core ISO 19115 elements. Part of

  7. The XML Metadata Editor of GFZ Data Services

    Science.gov (United States)

    Ulbricht, Damian; Elger, Kirsten; Tesei, Telemaco; Trippanera, Daniele

    2017-04-01

    Following the FAIR data principles, research data should be Findable, Accessible, Interoperable and Reusable. Publishing data under these principles requires assigning persistent identifiers to the data and generating rich machine-actionable metadata. To increase interoperability, metadata should include shared vocabularies and crosslink the newly published (meta)data and related material. However, structured metadata formats tend to be complex and are not intended to be generated by individual scientists. Software solutions are needed that support scientists in providing metadata describing their data. To facilitate the data publication activities of 'GFZ Data Services', we programmed an XML metadata editor that assists scientists in creating metadata in different schemata popular in the earth sciences (ISO19115, DIF, DataCite), while remaining usable by and understandable for scientists. Emphasis is placed on removing barriers: in particular, the editor is publicly available on the internet without registration [1], and scientists are not requested to provide information that may be generated automatically (e.g. the URL of a specific licence or the contact information of the metadata distributor). Metadata are stored in browser cookies and a copy can be saved to the local hard disk. To improve usability, form fields are translated into the scientific language, e.g. 'creators' of the DataCite schema are called 'authors'. To assist in filling in the form, we make use of drop-down menus for small vocabulary lists and offer a search facility for large thesauri. Explanations of form fields and definitions of vocabulary terms are provided in pop-up windows, and full documentation is available for download via the help menu. In addition, multiple geospatial references can be entered via an interactive mapping tool, which helps to minimize problems with different conventions for providing latitudes and longitudes. Currently, we are extending the metadata editor
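
    As a rough illustration of the kind of output such an editor produces, the sketch below assembles a minimal DataCite-style XML record in Python. This is not the GFZ editor's code; the element names follow the public DataCite kernel, but the DOI, author, and title values are invented placeholders.

      # Minimal sketch of serializing form input into a DataCite-style XML record.
      # Element names follow the DataCite kernel; all record content is illustrative.
      import xml.etree.ElementTree as ET

      def datacite_record(doi, authors, title, publisher, year):
          root = ET.Element("resource", xmlns="http://datacite.org/schema/kernel-4")
          ET.SubElement(root, "identifier", identifierType="DOI").text = doi
          creators = ET.SubElement(root, "creators")
          for name in authors:  # the form says "authors", the schema says "creators"
              creator = ET.SubElement(creators, "creator")
              ET.SubElement(creator, "creatorName").text = name
          titles = ET.SubElement(root, "titles")
          ET.SubElement(titles, "title").text = title
          ET.SubElement(root, "publisher").text = publisher
          ET.SubElement(root, "publicationYear").text = str(year)
          return ET.tostring(root, encoding="unicode")

      print(datacite_record("10.5880/EXAMPLE.2017.001", ["Doe, Jane"],
                            "Example dataset", "GFZ Data Services", 2017))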

  8. Improving Metadata Compliance for Earth Science Data Records

    Science.gov (United States)

    Armstrong, E. M.; Chang, O.; Foster, D.

    2014-12-01

    One of the recurring challenges of creating earth science data records is to ensure a consistent level of metadata compliance at the granule level, where important details of contents, provenance, producer, and data references are necessary to obtain a sufficient level of understanding. These details are important not just for individual data consumers but also for autonomous software systems. Two of the most popular metadata standards at the granule level are the Climate and Forecast (CF) Metadata Conventions and the Attribute Conventions for Dataset Discovery (ACDD). Many data producers have implemented one or both of these models, including the Group for High Resolution Sea Surface Temperature (GHRSST) for their global SST products and the Ocean Biology Processing Group for NASA ocean color and SST products. While both the CF and ACDD models contain various levels of metadata richness, the actual "required" attributes are quite small in number. Metadata at the granule level becomes much more useful when recommended or optional attributes are implemented that document spatial and temporal ranges, lineage and provenance, sources, keywords, and references, etc. In this presentation we report on a new open source tool to check the compliance of netCDF and HDF5 granules to the CF and ACDD metadata models. The tool, written in Python, was originally implemented to support metadata compliance for netCDF records as part of NOAA's Integrated Ocean Observing System. It outputs standardized scoring for metadata compliance for both CF and ACDD, produces an objective summary weight, and can be implemented for remote records via OPeNDAP calls. Originally a command-line tool, we have extended it to provide a user-friendly web interface. Reports on metadata testing are grouped in hierarchies that make it easier to track flaws and inconsistencies in the record. We have also extended it to support explicit metadata structures and semantic syntax for the GHRSST project that can be
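
    The heart of such a checker is simply comparing a granule's global attributes against a convention's attribute lists and scoring the result. The sketch below shows that idea with the netCDF4 Python library; it is not the compliance checker itself, and the required/recommended lists are abbreviated examples rather than the full ACDD specification.

      # Toy ACDD-style attribute check for a netCDF granule (abbreviated lists).
      from netCDF4 import Dataset

      REQUIRED = ["title", "summary", "keywords"]
      RECOMMENDED = ["time_coverage_start", "time_coverage_end",
                     "geospatial_lat_min", "geospatial_lat_max", "license"]

      def score_granule(path):
          with Dataset(path) as nc:
              attrs = set(nc.ncattrs())          # global attributes of the granule
          found_req = [a for a in REQUIRED if a in attrs]
          found_rec = [a for a in RECOMMENDED if a in attrs]
          score = (len(found_req) + len(found_rec)) / (len(REQUIRED) + len(RECOMMENDED))
          return {"missing_required": sorted(set(REQUIRED) - attrs),
                  "recommended_present": found_rec,
                  "score": round(score, 2)}

      # Example usage (path is a placeholder):
      # print(score_granule("sst_granule.nc"))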

  9. Handling multiple metadata streams regarding digital learning material

    NARCIS (Netherlands)

    Roes, J.B.M.; Vuuren, J. van; Verbeij, N.; Nijstad, H.

    2010-01-01

    This paper presents the outcome of a study performed in the Netherlands on handling multiple metadata streams regarding digital learning material. The paper describes the present metadata architecture in the Netherlands, the present suppliers and users of metadata and digital learning materials. It

  10. On the Origin of Metadata

    Directory of Open Access Journals (Sweden)

    Sam Coppens

    2012-12-01

    Full Text Available Metadata has been around and has evolved for centuries, albeit not recognized as such. Medieval manuscripts typically had illuminations at the start of each chapter, being both a kind of signature for the author writing the script and a pictorial chapter anchor for the illiterates of the time. Nowadays, there is so much fragmented information on the Internet that users sometimes fail to distinguish the real facts from some bended truth, let alone being able to interconnect different facts. Here, metadata can act both as a noise reducer enabling detailed recommendations to end-users and as a catalyst to interconnect related information. Over time, metadata thus not only has had different modes of information; furthermore, metadata's relation of information to meaning, i.e., "semantics", evolved. Darwin's evolutionary propositions, from "species have an unlimited reproductive capacity", over "natural selection", to "the cooperation of mutations leads to adaptation to the environment", show remarkable parallels to both metadata's different modes of information and to its relation of information to meaning over time. In this paper, we will show that the evolution of the use of (meta)data can be mapped to Darwin's nine evolutionary propositions. As mankind and its behavior are products of an evolutionary process, the evolutionary process of metadata with its different modes of information is on the verge of a new, semantic, era.

  11. Quick-Reaction Report on the Audit of Defense Base Realignment and Closure Budget Data for Naval Station Treasure Island, California

    Science.gov (United States)

    1994-05-19

    the audit of two projects: P-608T, Building Modifications, valued at...Island, California, to the Naval Training Center Great Lakes, Illinois. The audit also evaluated the implementation of the DoD Internal Management...related to the two projects in this report and is discussed in Report No. 94-109, Quick-Reaction Report on the Audit of Defense Base Realignment and Closure Budget Data for the Naval Training Center Great Lakes, Illinois, May 19,

  12. Late Quaternary sea-level history and the antiquity of mammoths (Mammuthus exilis and Mammuthus columbi), Channel Islands National Park, California, USA

    Science.gov (United States)

    Muhs, Daniel R.; Simmons, Kathleen R.; Groves, Lindsey T.; McGeehin, John P.; Schumann, R. Randall; Agenbroad, Larry D.

    2015-01-01

    Fossils of Columbian mammoths (Mammuthus columbi) and pygmy mammoths (Mammuthus exilis) have been reported from Channel Islands National Park, California. Most date to the last glacial period (Marine Isotope Stage [MIS] 2), but a tusk of M. exilis (or immature M. columbi) was found in the lowest marine terrace of Santa Rosa Island. Uranium-series dating of corals yielded ages from 83.8 ± 0.6 ka to 78.6 ± 0.5 ka, correlating the terrace with MIS 5.1, a time of relatively high sea level. Mammoths likely immigrated to the islands by swimming during the glacial periods MIS 6 (~ 150 ka) or MIS 8 (~ 250 ka), when sea level was low and the island–mainland distance was minimal, as during MIS 2. Earliest mammoth immigration to the islands likely occurred late enough in the Quaternary that uplift of the islands and the mainland decreased the swimming distance to a range that could be accomplished by mammoths. Results challenge the hypothesis that climate change, vegetation change, and decreased land area from sea-level rise were the causes of mammoth extinction at the Pleistocene/Holocene boundary on the Channel Islands. Pre-MIS 2 mammoth populations would have experienced similar or even more dramatic changes at the MIS 6/5.5 transition.

  13. Developing Cyberinfrastructure Tools and Services for Metadata Quality Evaluation

    Science.gov (United States)

    Mecum, B.; Gordon, S.; Habermann, T.; Jones, M. B.; Leinfelder, B.; Powers, L. A.; Slaughter, P.

    2016-12-01

    Metadata and data quality are at the core of reusable and reproducible science. While great progress has been made over the years, much of the metadata collected only addresses data discovery, covering concepts such as titles and keywords. Improving metadata beyond the discoverability plateau means documenting detailed concepts within the data such as sampling protocols, instrumentation used, and variables measured. Given that metadata commonly do not describe their data at this level, how might we improve the state of things? Giving scientists and data managers easy-to-use tools that evaluate metadata quality against community-driven recommendations is the key to producing high-quality metadata. To achieve this goal, we created a set of cyberinfrastructure tools and services that integrate with existing metadata and data curation workflows and that can be used to improve metadata and data quality across the sciences. These tools work across metadata dialects (e.g., ISO19115, FGDC, EML, etc.) and can be used to assess aspects of quality beyond what is internal to the metadata, such as the congruence between the metadata and the data it describes. The system makes use of a user-friendly mechanism for expressing a suite of checks as code in popular data science programming languages such as Python and R. This reduces the burden on scientists and data managers to learn yet another language. We demonstrated these services and tools in three ways. First, we evaluated a large corpus of datasets in the DataONE federation of data repositories against a metadata recommendation modeled after existing recommendations such as the LTER best practices and the Attribute Convention for Dataset Discovery (ACDD). Second, we showed how this service can be used to display metadata and data quality information to data producers during the data submission and metadata creation process, and to data consumers through data catalog search and access tools. Third, we showed how the centrally
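
    To make the "checks as code" idea concrete, here is a hypothetical Python sketch in which each check is a small function run against a metadata record. The check names, thresholds, and record fields are invented for illustration; they are not the actual DataONE or LTER recommendation.

      # Each check takes a metadata record (a plain dict here) and returns
      # (passed, message). A suite of such functions is what gets run against
      # every record in a corpus.
      def check_title_length(md):
          return len(md.get("title", "")) >= 20, "title should be descriptive (>= 20 chars)"

      def check_has_abstract(md):
          return bool(md.get("abstract")), "an abstract is required"

      def check_units_documented(md):
          return all("unit" in a for a in md.get("attributes", [])), \
                 "every measured variable should declare its unit"

      def run_suite(md, checks):
          return [(c.__name__, *c(md)) for c in checks]

      record = {"title": "Soil moisture at site X, 2015-2016",
                "abstract": "Hourly soil moisture measured with ...",
                "attributes": [{"name": "vwc", "unit": "m3/m3"}]}

      for name, passed, msg in run_suite(record, [check_title_length,
                                                  check_has_abstract,
                                                  check_units_documented]):
          print("PASS" if passed else "FAIL", name, "-", msg)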

  14. From CLARIN Component Metadata to Linked Open Data

    NARCIS (Netherlands)

    Durco, M.; Windhouwer, Menzo

    2014-01-01

    In the European CLARIN infrastructure a growing number of resources are described with Component Metadata. In this paper we describe a transformation to make this metadata available as linked data. After this first step it becomes possible to connect the CLARIN Component Metadata with other valuable

  15. Collection Metadata Solutions for Digital Library Applications

    Science.gov (United States)

    Hill, Linda L.; Janee, Greg; Dolin, Ron; Frew, James; Larsgaard, Mary

    1999-01-01

    Within a digital library, collections may range from an ad hoc set of objects that serve a temporary purpose to established library collections intended to persist through time. The objects in these collections vary widely, from library and data center holdings to pointers to real-world objects, such as geographic places, and the various metadata schemas that describe them. The key to integrated use of such a variety of collections in a digital library is collection metadata that represents the inherent and contextual characteristics of a collection. The Alexandria Digital Library (ADL) Project has designed and implemented collection metadata for several purposes: in XML form, the collection metadata "registers" the collection with the user interface client; in HTML form, it is used for user documentation; eventually, it will be used to describe the collection to network search agents; and it is used for internal collection management, including mapping the object metadata attributes to the common search parameters of the system.

  16. Study on high-level waste geological disposal metadata model

    International Nuclear Information System (INIS)

    Ding Xiaobin; Wang Changhong; Zhu Hehua; Li Xiaojun

    2008-01-01

    This paper expounds the concept of metadata and related research in China and abroad, and then explains why a study of the metadata model for high-level nuclear waste deep geological disposal was started. With reference to GML, the authors first set up DML under the framework of digital underground space engineering. Based on DML, a standardized metadata scheme to be employed in the high-level nuclear waste deep geological disposal project is presented. Then, a metadata model that makes use of the internet is put forward. With standardized data and CSW services, this model may solve the problem of sharing and exchanging data of different forms. A metadata editor was built to search and maintain metadata based on this model. (authors)

  17. Metadata to Support Data Warehouse Evolution

    Science.gov (United States)

    Solodovnikova, Darja

    The focus of this chapter is the metadata necessary to support data warehouse evolution. We present a data warehouse framework that is able to track the evolution process and adapt data warehouse schemata and data extraction, transformation, and loading (ETL) processes. We discuss a significant part of the framework, the metadata repository that stores information about the data warehouse, the logical and physical schemata, and their versions. We propose a physical implementation of a multiversion data warehouse in a relational DBMS. For each modification of a data warehouse schema, we outline the changes that need to be made to the repository metadata and in the database.

  18. Sediment data collected in 2010 from Cat Island, Mississippi

    Science.gov (United States)

    Buster, Noreen A.; Kelso, Kyle W.; Miselis, Jennifer L.; Kindinger, Jack G.

    2014-01-01

    Scientists from the U.S. Geological Survey, St. Petersburg Coastal and Marine Science Center, in collaboration with the U.S. Army Corps of Engineers, conducted geophysical and sedimentological surveys in 2010 around Cat Island, Mississippi, which is the westernmost island in the Mississippi-Alabama barrier island chain. The objective of the study was to understand the geologic evolution of Cat Island relative to other barrier islands in the northern Gulf of Mexico by identifying relationships between the geologic history, present day morphology, and sediment distribution. This data series serves as an archive of terrestrial and marine sediment vibracores collected August 4-6 and October 20-22, 2010, respectively. Geographic information system data products include marine and terrestrial core locations and 2007 shoreline data. Additional files include marine and terrestrial core description logs, core photos, results of sediment grain-size analyses, optically stimulated luminescence dating and carbon-14 dating locations and results, Field Activity Collection System logs, and formal Federal Geographic Data Committee metadata.

  19. Streamlining geospatial metadata in the Semantic Web

    Science.gov (United States)

    Fugazza, Cristiano; Pepe, Monica; Oggioni, Alessandro; Tagliolato, Paolo; Carrara, Paola

    2016-04-01

    In the geospatial realm, data annotation and discovery rely on a number of ad hoc formats and protocols. These have been created to enable domain-specific use cases for which generalized search is not feasible. Metadata are at the heart of the discovery process, and yet they are often neglected or encoded in formats that either are not aimed at efficient retrieval of resources or are plainly outdated. In particular, the quantum leap represented by the Linked Open Data (LOD) movement has not so far induced a consistent, interlinked baseline in the geospatial domain. In a nutshell, datasets, the scientific literature related to them, and ultimately the researchers behind these products are only loosely connected; the corresponding metadata are intelligible only to humans and duplicated on different systems, seldom consistently. Instead, our workflow for metadata management envisages i) editing via customizable web-based forms, ii) encoding of records in any XML application profile, iii) translation into RDF (involving the semantic lift of metadata records), and finally iv) storage of the metadata as RDF and back-translation into the original XML format with added semantics-aware features. Phase iii) hinges on relating resource metadata to RDF data structures that represent keywords from code lists and controlled vocabularies, toponyms, researchers, institutes, and virtually any description one can retrieve (or directly publish) in the LOD Cloud. In the context of a distributed Spatial Data Infrastructure (SDI) built on free and open-source software, we detail phases iii) and iv) of our workflow for the semantics-aware management of geospatial metadata.

  20. The Global Streamflow Indices and Metadata Archive (GSIM) - Part 1: The production of a daily streamflow archive and metadata

    Science.gov (United States)

    Do, Hong Xuan; Gudmundsson, Lukas; Leonard, Michael; Westra, Seth

    2018-04-01

    This is the first part of a two-paper series presenting the Global Streamflow Indices and Metadata archive (GSIM), a worldwide collection of metadata and indices derived from more than 35 000 daily streamflow time series. This paper focuses on the compilation of the daily streamflow time series based on 12 free-to-access streamflow databases (seven national databases and five international collections). It also describes the development of three metadata products (freely available at https://doi.pangaea.de/10.1594/PANGAEA.887477): (1) a GSIM catalogue collating basic metadata associated with each time series, (2) catchment boundaries for the contributing area of each gauge, and (3) catchment metadata extracted from 12 gridded global data products representing essential properties such as land cover type, soil type, and climate and topographic characteristics. The quality of the delineated catchment boundary is also made available and should be consulted in GSIM application. The second paper in the series then explores production and analysis of streamflow indices. Having collated an unprecedented number of stations and associated metadata, GSIM can be used to advance large-scale hydrological research and improve understanding of the global water cycle.
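
    As a hypothetical illustration of how such a catalogue of station metadata might be used, the pandas sketch below filters a toy catalogue by bounding box and record length. The column names are assumptions for illustration and are not the exact GSIM field names.

      import pandas as pd

      # Toy stand-in for a station catalogue; column names are invented.
      catalogue = pd.DataFrame({
          "gauge_id":            ["US_0001", "US_0002", "MX_0001"],
          "latitude":            [34.0, 45.2, 31.5],
          "longitude":           [-119.7, -122.5, -116.2],
          "record_length_years": [42, 18, 55],
      })

      # Keep long-record gauges inside a rough California bounding box.
      subset = catalogue[
          catalogue["latitude"].between(32.0, 42.0)
          & catalogue["longitude"].between(-125.0, -114.0)
          & (catalogue["record_length_years"] >= 30)
      ]
      print(subset)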

  1. Physically Based Modeling of Delta Island Consumptive Use: Fabian Tract and Staten Island, California

    Directory of Open Access Journals (Sweden)

    Lucas J. Siegfried

    2014-12-01

    Full Text Available doi: http://dx.doi.org/10.15447/sfews.2014v12iss4art2. Water use estimation is central to managing most water problems. To better understand water use in California's Sacramento–San Joaquin Delta, a collaborative, integrated approach was used to predict Delta island diversion, consumption, and return of water at a more detailed temporal and spatial resolution. Fabian Tract and Staten Island were selected for this pilot study based on available data and island accessibility. Historical diversion and return location data, water rights claims, LiDAR digital elevation model data, and Google Earth were used to predict island diversion and return locations, which were tested and improved through ground-truthing. Soil and land-use characteristics as well as weather data were incorporated with the Integrated Water Flow Model Demand Calculator to estimate water use and runoff returns from input agricultural lands. For modeling, the islands were divided into grid cells forming subregions, representing fields, levees, ditches, and roads. The subregions were joined hydrographically to form diversion and return watersheds related to return and diversion locations. Diversions and returns were limited by physical capacities. Differences between initial model results and measurements point to the importance of seepage into deeply subsided islands. The capabilities of the models presented far exceeded current knowledge of agricultural practices within the Delta, demonstrating the need for more data collection to enable improvements upon current Delta Island Consumptive Use estimates.

  2. Metadata Aided Run Selection at ATLAS

    CERN Document Server

    Buckingham, RM; The ATLAS collaboration; Tseng, JC-L; Viegas, F; Vinek, E

    2010-01-01

    Management of the large volume of data collected by any large scale scientific experiment requires the collection of coherent metadata quantities, which can be used by reconstruction or analysis programs and/or user interfaces, to pinpoint collections of data needed for specific purposes. In the ATLAS experiment at the LHC, we have collected metadata from systems storing non-event-wise data (Conditions) into a relational database. The Conditions metadata (COMA) database tables not only contain conditions known at the time of event recording, but also allow for the addition of conditions data collected as a result of later analysis of the data (such as improved measurements of beam conditions or assessments of data quality). A new web based interface called “runBrowser” makes these Conditions Metadata available as a Run based selection service. runBrowser, based on php and javascript, uses jQuery to present selection criteria and report results. It not only facilitates data selection by conditions at...

  3. Metadata aided run selection at ATLAS

    CERN Document Server

    Buckingham, RM; The ATLAS collaboration; Tseng, JC-L; Viegas, F; Vinek, E

    2011-01-01

    Management of the large volume of data collected by any large scale scientific experiment requires the collection of coherent metadata quantities, which can be used by reconstruction or analysis programs and/or user interfaces, to pinpoint collections of data needed for specific purposes. In the ATLAS experiment at the LHC, we have collected metadata from systems storing non-event-wise data (Conditions) into a relational database. The Conditions metadata (COMA) database tables not only contain conditions known at the time of event recording, but also allow for the addition of conditions data collected as a result of later analysis of the data (such as improved measurements of beam conditions or assessments of data quality). A new web based interface called “runBrowser” makes these Conditions Metadata available as a Run based selection service. runBrowser, based on php and javascript, uses jQuery to present selection criteria and report results. It not only facilitates data selection by conditions attrib...

  4. The Global Streamflow Indices and Metadata Archive (GSIM – Part 1: The production of a daily streamflow archive and metadata

    Directory of Open Access Journals (Sweden)

    H. X. Do

    2018-04-01

    Full Text Available This is the first part of a two-paper series presenting the Global Streamflow Indices and Metadata archive (GSIM), a worldwide collection of metadata and indices derived from more than 35 000 daily streamflow time series. This paper focuses on the compilation of the daily streamflow time series based on 12 free-to-access streamflow databases (seven national databases and five international collections). It also describes the development of three metadata products (freely available at https://doi.pangaea.de/10.1594/PANGAEA.887477): (1) a GSIM catalogue collating basic metadata associated with each time series, (2) catchment boundaries for the contributing area of each gauge, and (3) catchment metadata extracted from 12 gridded global data products representing essential properties such as land cover type, soil type, and climate and topographic characteristics. The quality of the delineated catchment boundary is also made available and should be consulted in GSIM application. The second paper in the series then explores production and analysis of streamflow indices. Having collated an unprecedented number of stations and associated metadata, GSIM can be used to advance large-scale hydrological research and improve understanding of the global water cycle.

  5. Prediction of Solar Eruptions Using Filament Metadata

    Science.gov (United States)

    Aggarwal, Ashna; Schanche, Nicole; Reeves, Katharine K.; Kempton, Dustin; Angryk, Rafal

    2018-05-01

    We perform a statistical analysis of erupting and non-erupting solar filaments to determine the properties related to the eruption potential. In order to perform this study, we correlate filament eruptions documented in the Heliophysics Event Knowledgebase (HEK) with HEK filaments that have been grouped together using a spatiotemporal tracking algorithm. The HEK provides metadata about each filament instance, including values for length, area, tilt, and chirality. We add additional metadata properties such as the distance from the nearest active region and the magnetic field decay index. We compare trends in the metadata from erupting and non-erupting filament tracks to discover which properties present signs of an eruption. We find that a change in filament length over time is the most important factor in discriminating between erupting and non-erupting filament tracks, with erupting tracks being more likely to have decreasing length. We attempt to find an ensemble of predictive filament metadata using a Random Forest Classifier approach, but find the probability of correctly predicting an eruption with the current metadata is only slightly better than chance.
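
    A hedged sketch of the classification step described above: train a Random Forest on per-track features and inspect feature importances. The feature names and the synthetic data below are placeholders; the study's actual features come from HEK filament metadata.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import accuracy_score

      rng = np.random.default_rng(0)
      n = 500
      # Columns stand in for track properties such as change in length, area,
      # distance to the nearest active region, and magnetic decay index.
      X = rng.normal(size=(n, 4))
      # Synthetic labels: tracks whose "length change" decreases tend to erupt.
      y = (X[:, 0] + 0.3 * rng.normal(size=n) < -0.5).astype(int)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
      clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
      print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
      print("feature importances:", clf.feature_importances_)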

  6. A web-based, dynamic metadata interface to MDSplus

    International Nuclear Information System (INIS)

    Gardner, Henry J.; Karia, Raju; Manduchi, Gabriele

    2008-01-01

    We introduce the concept of a Fusion Data Grid and discuss the management of metadata within such a Grid. We describe a prototype application which serves fusion data over the internet together with metadata information which can be flexibly created and modified over time. The application interfaces with the MDSplus data acquisition system and it has been designed to capture metadata which is generated by scientists from the post-processing of experimental data. The implementation of dynamic metadata tables using the Java programming language together with an object-relational mapping system, Hibernate, is described in the Appendix

  7. Creating metadata that work for digital libraries and Google

    OpenAIRE

    Dawson, Alan

    2004-01-01

    For many years metadata has been recognised as a significant component of the digital information environment. Substantial work has gone into creating complex metadata schemes for describing digital content. Yet increasingly Web search engines, and Google in particular, are the primary means of discovering and selecting digital resources, although they make little use of metadata. This article considers how digital libraries can gain more value from their metadata by adapting it for Google us...

  8. Technologies for metadata management in scientific a

    OpenAIRE

    Castro-Romero, Alexander; González-Sanabria, Juan S.; Ballesteros-Ricaurte, Javier A.

    2015-01-01

    The use of Semantic Web technologies has been increasing, and they are commonly applied in different ways. This article evaluates how these technologies can contribute to improving the indexing of articles in scientific journals. It begins with a conceptual review of metadata, then studies the most important technologies for the use of metadata on the Web and selects one of them to apply to the case study of scientific article indexing, in order to determine the metadata ...

  9. The role of metadata in managing large environmental science datasets. Proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Melton, R.B.; DeVaney, D.M. [eds.] [Pacific Northwest Lab., Richland, WA (United States); French, J. C. [Univ. of Virginia, (United States)

    1995-06-01

    The purpose of this workshop was to bring together computer science researchers and environmental sciences data management practitioners to consider the role of metadata in managing large environmental sciences datasets. The objectives included: establishing a common definition of metadata; identifying categories of metadata; defining problems in managing metadata; and defining problems related to linking metadata with primary data.

  10. Bathymetry and acoustic backscatter data collected in 2010 from Cat Island, Mississippi

    Science.gov (United States)

    Buster, Noreen A.; Pfeiffer, William R.; Miselis, Jennifer L.; Kindinger, Jack G.; Wiese, Dana S.; Reynolds, B.J.

    2012-01-01

    Scientists from the U.S. Geological Survey (USGS), St. Petersburg Coastal and Marine Science Center (SPCMSC), in collaboration with the U.S. Army Corps of Engineers (USACE), conducted geophysical and sedimentological surveys around Cat Island, the westernmost island in the Mississippi-Alabama barrier island chain (fig. 1). The objectives of the study were to understand the geologic evolution of Cat Island relative to other barrier islands in the northern Gulf of Mexico and to identify relationships between the geologic history, present day morphology, and sediment distribution. This report contains data from the bathymetry and side-scan sonar portion of the study collected during two geophysical cruises. Interferometric swath bathymetry and side-scan sonar data were collected aboard the RV G.K. Gilbert September 7-15, 2010. Single-beam bathymetry was collected in shallow water around the island (Field Activity Collection System (FACS) logs, and formal Federal Geographic Data Committee (FGDC) metadata.

  11. Efficient processing of MPEG-21 metadata in the binary domain

    Science.gov (United States)

    Timmerer, Christian; Frank, Thomas; Hellwagner, Hermann; Heuer, Jörg; Hutter, Andreas

    2005-10-01

    XML-based metadata is widely adopted across the different communities and plenty of commercial and open source tools for processing and transforming are available on the market. However, all of these tools have one thing in common: they operate on plain text encoded metadata which may become a burden in constrained and streaming environments, i.e., when metadata needs to be processed together with multimedia content on the fly. In this paper we present an efficient approach for transforming such kind of metadata which are encoded using MPEG's Binary Format for Metadata (BiM) without additional en-/decoding overheads, i.e., within the binary domain. Therefore, we have developed an event-based push parser for BiM encoded metadata which transforms the metadata by a limited set of processing instructions - based on traditional XML transformation techniques - operating on bit patterns instead of cost-intensive string comparisons.

  12. PATHOGENIC LEPTOSPIRA SEROVARS IN FREE-LIVING SEA LIONS IN THE GULF OF CALIFORNIA AND ALONG THE BAJA CALIFORNIA COAST OF MEXICO.

    Science.gov (United States)

    Avalos-Téllez, Rosalía; Carrillo-Casas, Erika M; Atilano-López, Daniel; Godínez-Reyes, Carlos R; Díaz-Aparicio, Efrén; Ramírez-Delgado, David; Ramírez-Echenique, María F; Leyva-Leyva, Margarita; Suzán, Gerardo; Suárez-Güemes, Francisco

    2016-04-28

    The California sea lion ( Zalophus californianus ), a permanent inhabitant of the Gulf of California in Mexico, is susceptible to pathogenic Leptospira spp. infection, which can result in hepatic and renal damage and may lead to renal failure and death. During summer 2013, we used the microscopic agglutination test (MAT) to investigate the prevalence of anti-Leptospira antibodies in blood of clinically healthy sea lion pups from seven rookery islands on the Pacific Coast of Baja California (Pacific Ocean) and in the Gulf of California. We also used PCR to examine blood for Leptospira DNA. Isolation of Leptospira in liquid media was unsuccessful. We found higher antibody prevalence in sea lions from the rookery islands in the gulf than in those from the Pacific Coast. Antibodies against 11 serovars were identified in the Gulf of California population; the most frequent reactions were against serovars Bataviae (90%), Pyrogenes (86%), Wolffi (86%), Celledoni (71%), and Pomona (65%). In the Pacific Ocean population, MAT was positive against eight serovars, where Wolffi (88%), Pomona (75%), and Bataviae (70%) were the most frequent. Serum samples agglutinated with more than one Leptospira serovar. The maximum titer was 3,200. Each island had a different serology profile, and islands combined showed a distinct profile for each region. We detected pathogenic Leptospira DNA in 63% of blood samples, but we found no saprophytic Leptospira. Positive PCR results were obtained in blood samples with high and low MAT titers. Together, these two methods enhance the diagnosis and interpretation of sea lion leptospirosis. Our results may be related to human activities or the presence of other reservoirs with which sea lions interact, and they may also be related to sea lion stranding.

  13. Improving Scientific Metadata Interoperability And Data Discoverability using OAI-PMH

    Science.gov (United States)

    Devarakonda, Ranjeet; Palanisamy, Giri; Green, James M.; Wilson, Bruce E.

    2010-12-01

    While general-purpose search engines (such as Google or Bing) are useful for finding many things on the Internet, they are often of limited usefulness for locating Earth Science data relevant (for example) to a specific spatiotemporal extent. By contrast, tools that search repositories of structured metadata can locate relevant datasets with fairly high precision, but the search is limited to that particular repository. Federated searches (such as Z39.50) have been used, but can be slow and the comprehensiveness can be limited by downtime in any search partner. An alternative approach to improve comprehensiveness is for a repository to harvest metadata from other repositories, possibly with limits based on subject matter or access permissions. Searches through harvested metadata can be extremely responsive, and the search tool can be customized with semantic augmentation appropriate to the community of practice being served. However, there are a number of different protocols for harvesting metadata, with some challenges for ensuring that updates are propagated and for collaborations with repositories using differing metadata standards. The Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) is a standard that is seeing increased use as a means for exchanging structured metadata. OAI-PMH implementations must support Dublin Core as a metadata standard, with other metadata formats as optional. We have developed tools which enable our structured search tool (Mercury; http://mercury.ornl.gov) to consume metadata from OAI-PMH services in any of the metadata formats we support (Dublin Core, Darwin Core, FGDC CSDGM, GCMD DIF, EML, and ISO 19115/19137). We are also making ORNL DAAC metadata available through OAI-PMH for other metadata tools to utilize, such as the NASA Global Change Master Directory (GCMD). This paper describes Mercury capabilities with multiple metadata formats, in general, and, more specifically, the results of our OAI-PMH implementations and
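
    For readers unfamiliar with the protocol, the sketch below shows a minimal OAI-PMH ListRecords request for Dublin Core metadata in Python. The endpoint URL is a placeholder, and resumption tokens (needed for harvesting large repositories in batches) are deliberately omitted.

      import requests
      import xml.etree.ElementTree as ET

      NS = {"oai": "http://www.openarchives.org/OAI/2.0/",
            "dc": "http://purl.org/dc/elements/1.1/"}

      def list_titles(base_url):
          # One ListRecords call; real harvesters loop over resumptionTokens.
          resp = requests.get(base_url,
                              params={"verb": "ListRecords", "metadataPrefix": "oai_dc"},
                              timeout=30)
          resp.raise_for_status()
          root = ET.fromstring(resp.content)
          for rec in root.findall(".//oai:record", NS):
              title = rec.find(".//dc:title", NS)
              if title is not None:
                  print(title.text)

      # list_titles("https://example.org/oai")   # placeholder endpoint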

  14. Nine endangered taxa, one recovering ecosystem: Identifying common ground for recovery on Santa Cruz Island, California

    Science.gov (United States)

    McEachern, A. Kathryn; Wilken, Dieter H.

    2011-01-01

    It is not uncommon to have several rare and listed taxa occupying habitats in one landscape or management area where conservation amounts to defense against the possibility of further loss. It is uncommon and extremely exciting, however, to have several listed taxa occupying one island that is managed cooperatively for conservation and recovery. On Santa Cruz Island, the largest of the northern California island group in the Santa Barbara Channel, we have a golden opportunity to marry ecological knowledge and institutional "good will" in a field test of holistic rare plant conservation. Here, the last feral livestock have been removed, active weed control is underway, and management is focused on understanding and demonstrating system response to conservation management. Yet funding limitations still exist and we need to plan the most fiscally conservative and marketable approach to rare plant restoration. We still experience the tension between desirable quick results and the ecological pace of system recovery. Therefore, our research has focused on identifying fundamental constraints on species recovery at individual, demographic, habitat, and ecosystem levels, and then developing suites of actions that might be taken across taxa and landscapes. At the same time, we seek a performance middle ground that balances an institutional need for quick demonstration of hands-on positive results with a contrasting approach that allows ecosystem recovery to facilitate species recovery in the long term. We find that constraints vary across breeding systems, life-histories, and island locations. We take a hybrid approach in which we identify several actions that we can take now to enhance population size or habitat occupancy for some taxa by active restoration, while allowing others to recover at the pace of ecosystem change. We make our recommendations on the basis of data we have collected over the last decade, so that management is firmly grounded in ecological observation.

  15. Mining Building Metadata by Data Stream Comparison

    DEFF Research Database (Denmark)

    Holmegaard, Emil; Kjærgaard, Mikkel Baun

    2016-01-01

    Building instrumentation systems have different ways to annotate sensor and actuation points. This makes it difficult to create intuitive queries for retrieving data streams from points. Another problem is the amount of insufficient or missing metadata. We introduce Metafier, a tool for extracting metadata by comparing data streams. Metafier enables a semi-automatic labeling of metadata for building instrumentation. Metafier annotates points with metadata by comparing the data from a set of validated points with unvalidated points. Metafier has three different algorithms to compare points based on their data. We have evaluated Metafier with points and data from one building located in Denmark. We have evaluated Metafier with 903 points, and the overall accuracy, with only 3 known examples, was 94.71%. Furthermore, we found that using DTW for mining makes it possible to handle data streams with only slightly similar patterns.
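
    The comparison at the heart of such an approach can be illustrated with a textbook dynamic time warping (DTW) distance: two streams that follow the same pattern slightly shifted in time still come out close, so an unvalidated point can be labeled like the validated point it most resembles. The sketch below is a generic DTW implementation, not Metafier's code.

      import numpy as np

      def dtw_distance(a, b):
          # Classic O(n*m) dynamic-programming DTW with absolute-difference cost.
          n, m = len(a), len(b)
          cost = np.full((n + 1, m + 1), np.inf)
          cost[0, 0] = 0.0
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  d = abs(a[i - 1] - b[j - 1])
                  cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
          return cost[n, m]

      t = np.linspace(0, 2 * np.pi, 100)
      validated   = np.sin(t)         # stream from a point with known metadata
      unvalidated = np.sin(t - 0.4)   # similar pattern, shifted in time
      print("DTW distance:", round(float(dtw_distance(validated, unvalidated)), 3))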

  16. openPDS: protecting the privacy of metadata through SafeAnswers.

    Directory of Open Access Journals (Sweden)

    Yves-Alexandre de Montjoye

    Full Text Available The rise of smartphones and web services made possible the large-scale collection of personal metadata. Information about individuals' location, phone call logs, or web searches is collected and used intensively by organizations and big data researchers. Metadata has however yet to realize its full potential. Privacy and legal concerns, as well as the lack of technical solutions for personal metadata management, are preventing metadata from being shared and reconciled under the control of the individual. This lack of access and control is furthermore fueling growing concerns, as it prevents individuals from understanding and managing the risks associated with the collection and use of their data. Our contribution is two-fold: (1) we describe openPDS, a personal metadata management framework that allows individuals to collect, store, and give fine-grained access to their metadata to third parties. It has been implemented in two field studies; (2) we introduce and analyze SafeAnswers, a new and practical way of protecting the privacy of metadata at an individual level. SafeAnswers turns a hard anonymization problem into a more tractable security one. It allows services to ask questions whose answers are calculated against the metadata instead of trying to anonymize individuals' metadata. The dimensionality of the data shared with the services is reduced from high-dimensional metadata to low-dimensional answers that are less likely to be re-identifiable and to contain sensitive information. These answers can then be directly shared individually or in aggregate. openPDS and SafeAnswers provide a new way of dynamically protecting personal metadata, thereby supporting the creation of smart data-driven services and data science research.
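
    A toy Python illustration of the SafeAnswers idea (not the openPDS code): the raw, high-dimensional metadata stay inside the personal data store, and the service only receives the low-dimensional answer computed against them. The call log and the question below are invented.

      # Raw metadata held inside the personal data store (hypothetical call log).
      call_log = [
          {"number": "+1555000001", "duration_s": 120, "hour": 9},
          {"number": "+1555000002", "duration_s": 30,  "hour": 22},
          {"number": "+1555000001", "duration_s": 300, "hour": 10},
      ]

      def safe_answer_active_at_night(log):
          """Question a service may ask: 'is this user mostly active at night?'
          Only a single boolean leaves the data store, never the log itself."""
          night_calls = sum(1 for c in log if c["hour"] >= 21 or c["hour"] < 6)
          return night_calls > len(log) / 2

      print(safe_answer_active_at_night(call_log))   # -> False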

  17. Field Surveys of Rare Plants on Santa Cruz Island, California, 2003-2006: Historical Records and Current Distributions

    Science.gov (United States)

    McEachern, A. Kathryn; Chess, Katherine A.; Niessen, Ken

    2010-01-01

    Santa Cruz Island is the largest of the northern Channel Islands located off the coast of California. It is owned and managed as a conservation reserve by The Nature Conservancy and the Channel Islands National Park. The island is home to nine plant taxa listed in 1997 as threatened or endangered under the federal Endangered Species Act, because of declines related to nearly 150 years of ranching on the island. Feral livestock were removed from the island as a major conservation step, which was part of a program completed in early 2007 with the eradication of pigs and turkeys. For the first time in more than a century, the rare plants of Santa Cruz Island have a chance to recover in the wild. This study provides survey information and living plant materials needed for recovery management of the listed taxa. We developed a database containing information about historical collections of the nine taxa and used it to plan a survey strategy. Our objectives were to relocate as many of the previously known populations as possible, with emphasis on documenting sites not visited in several decades, sites that were poorly documented in the historical record, and sites spanning the range of environmental conditions inhabited by the taxa. From 2003 through 2006, we searched for and found 39 populations of the taxa, indicating that nearly 80 percent of the populations known earlier in the 1900s still existed. Most populations are small and isolated, occupying native-dominated habitat patches in a highly fragmented and invaded landscape; they are still at risk of declining through population losses. Most are not expanding beyond the edges of their habitat patches. However, most taxa appeared to have good seed production and a range of size classes in populations, indicating a good capacity for plant recruitment and population growth in these restricted sites. For these taxa, seed collection and outplanting might be a good strategy to increase numbers of populations for species

  18. Handling Metadata in a Neurophysiology Laboratory

    Directory of Open Access Journals (Sweden)

    Lyuba Zehl

    2016-07-01

    Full Text Available To date, non-reproducibility of neurophysiological research is a matter of intense discussion in the scientific community. A crucial component to enhance reproducibility is to comprehensively collect and store metadata, that is, all information about the experiment, the data, and the applied preprocessing steps on the data, such that they can be accessed and shared in a consistent and simple manner. However, the complexity of experiments, the highly specialized analysis workflows and a lack of knowledge on how to make use of supporting software tools often overburden researchers to perform such a detailed documentation. For this reason, the collected metadata are often incomplete, incomprehensible for outsiders or ambiguous. Based on our research experience in dealing with diverse datasets, we here provide conceptual and technical guidance to overcome the challenges associated with the collection, organization, and storage of metadata in a neurophysiology laboratory. Through the concrete example of managing the metadata of a complex experiment that yields multi-channel recordings from monkeys performing a behavioral motor task, we practically demonstrate the implementation of these approaches and solutions with the intention that they may be generalized to a specific project at hand. Moreover, we detail five use cases that demonstrate the resulting benefits of constructing a well-organized metadata collection when processing or analyzing the recorded data, in particular when these are shared between laboratories in a modern scientific collaboration. Finally, we suggest an adaptable workflow to accumulate, structure and store metadata from different sources using, by way of example, the odML metadata framework.
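
    The hierarchical section/property organisation described above can be sketched with plain Python structures. The paper's example uses the odML framework; the sketch below deliberately avoids pinning a specific odML API and only mirrors the idea of named sections holding typed properties, with all values invented.

      # Invented metadata for a hypothetical multi-channel recording experiment,
      # organised as sections (keys) of properties (nested dicts).
      experiment_metadata = {
          "Subject":       {"species": "Macaca mulatta", "identifier": "monkey_L"},
          "Recording":     {"n_channels": 96, "sampling_rate_hz": 30000,
                            "array_location": "motor cortex"},
          "Task":          {"name": "reach-to-grasp", "n_trials": 480},
          "Preprocessing": {"filter": "butterworth", "highpass_hz": 250},
      }

      def find(meta, key):
          """Walk all sections and return (section, value) pairs for a given key."""
          return [(sec, props[key]) for sec, props in meta.items() if key in props]

      print(find(experiment_metadata, "sampling_rate_hz"))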

  19. Metadata aided run selection at ATLAS

    International Nuclear Information System (INIS)

    Buckingham, R M; Gallas, E J; Tseng, J C-L; Viegas, F; Vinek, E

    2011-01-01

    Management of the large volume of data collected by any large scale scientific experiment requires the collection of coherent metadata quantities, which can be used by reconstruction or analysis programs and/or user interfaces, to pinpoint collections of data needed for specific purposes. In the ATLAS experiment at the LHC, we have collected metadata from systems storing non-event-wise data (Conditions) into a relational database. The Conditions metadata (COMA) database tables not only contain conditions known at the time of event recording, but also allow for the addition of conditions data collected as a result of later analysis of the data (such as improved measurements of beam conditions or assessments of data quality). A new web based interface called 'runBrowser' makes these Conditions Metadata available as a Run based selection service. runBrowser, based on PHP and JavaScript, uses jQuery to present selection criteria and report results. It not only facilitates data selection by conditions attributes, but also gives the user information at each stage about the relationship between the conditions chosen and the remaining conditions criteria available. When a set of COMA selections are complete, runBrowser produces a human readable report as well as an XML file in a standardized ATLAS format. This XML can be saved for later use or refinement in a future runBrowser session, shared with physics/detector groups, or used as input to ELSSI (event level Metadata browser) or other ATLAS run or event processing services.

  20. NAIP National Metadata

    Data.gov (United States)

    Farm Service Agency, Department of Agriculture — The NAIP National Metadata Map contains USGS Quarter Quad and NAIP Seamline boundaries for every year NAIP imagery has been collected. Clicking on the map also makes...

  1. Nearshore coastal bathymetry data collected in 2016 from West Ship Island to Horn Island, Gulf Islands National Seashore, Mississippi

    Science.gov (United States)

    DeWitt, Nancy T.; Stalk, Chelsea A.; Fredericks, Jake J.; Flocks, James G.; Kelso, Kyle W.; Farmer, Andrew S.; Tuten, Thomas M.; Buster, Noreen A.

    2018-04-13

    The U.S. Geological Survey (USGS) St. Petersburg Coastal and Marine Science Center, in cooperation with the U.S. Army Corps of Engineers, Mobile District, conducted bathymetric surveys of the nearshore waters surrounding Ship and Horn Islands, Gulf Islands National Seashore, Mississippi. The objective of this study was to establish base-level elevation conditions around West Ship, East Ship, and Horn Islands and their associated active littoral system prior to restoration activities. These activities include the closure of Camille Cut and the placement of sediment in the littoral zone of East Ship Island. These surveys can be compared with future surveys to monitor sediment migration patterns post-restoration and can also be measured against historic bathymetric datasets to further our understanding of island evolution. The USGS collected 667 line-kilometers (km) of single-beam bathymetry data and 844 line-km of interferometric swath bathymetry data in July 2016 under Field Activity Number 2016-347-FA. Data are provided in three datums: (1) the International Terrestrial Reference Frame of 2000 (ellipsoid height); (2) the North American Datum of 1983 (NAD83) CORS96 realization and the North American Vertical Datum of 1988 with respect to the GEOID12B model (orthometric height); and (3) NAD83 (CORS96) and Mean Lower Low Water (tidal datum). Data products, including x,y,z point datasets, trackline shapefiles, digital and handwritten Field Activity Collection Systems logs, 50-meter digital elevation model, and formal Federal Geographic Data Committee metadata, are available for download.

  2. An emergent theory of digital library metadata enrich then filter

    CERN Document Server

    Stevens, Brett

    2015-01-01

    An Emergent Theory of Digital Library Metadata is a reaction to the current digital library landscape that is being challenged with growing online collections and changing user expectations. The theory provides the conceptual underpinnings for a new approach which moves away from expert defined standardised metadata to a user driven approach with users as metadata co-creators. Moving away from definitive, authoritative, metadata to a system that reflects the diversity of users’ terminologies, it changes the current focus on metadata simplicity and efficiency to one of metadata enriching, which is a continuous and evolving process of data linking. From predefined description to information conceptualised, contextualised and filtered at the point of delivery. By presenting this shift, this book provides a coherent structure in which future technological developments can be considered.

  3. Design and Implementation of a Metadata-rich File System

    Energy Technology Data Exchange (ETDEWEB)

    Ames, S; Gokhale, M B; Maltzahn, C

    2010-01-19

    Despite continual improvements in the performance and reliability of large scale file systems, the management of user-defined file system metadata has changed little in the past decade. The mismatch between the size and complexity of large scale data stores and their ability to organize and query their metadata has led to a de facto standard in which raw data is stored in traditional file systems, while related, application-specific metadata is stored in relational databases. This separation of data and semantic metadata requires considerable effort to maintain consistency and can result in complex, slow, and inflexible system operation. To address these problems, we have developed the Quasar File System (QFS), a metadata-rich file system in which files, user-defined attributes, and file relationships are all first class objects. In contrast to hierarchical file systems and relational databases, QFS defines a graph data model composed of files and their relationships. QFS incorporates Quasar, an XPATH-extended query language for searching the file system. Results from our QFS prototype show the effectiveness of this approach. Compared to the de facto standard, the QFS prototype shows superior ingest performance and comparable query performance on user metadata-intensive operations and superior performance on normal file metadata operations.
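
    As a rough analogy for the graph data model the paper describes (not QFS or its Quasar query language), the Python sketch below stores files with user-defined attributes plus first-class relationships between them, and answers a provenance-style query that a flat directory tree cannot.

      # Files with user-defined attributes ...
      files = {
          "raw_001.dat":   {"type": "raw",     "instrument": "sensorA"},
          "calib_001.dat": {"type": "derived", "algorithm": "calibrate_v2"},
          "plot_001.png":  {"type": "figure"},
      }
      # ... and (source, relation, target) edges between files.
      links = [
          ("calib_001.dat", "derived_from",  "raw_001.dat"),
          ("plot_001.png",  "rendered_from", "calib_001.dat"),
      ]

      def provenance(name):
          """Follow outgoing links back to the inputs a file was produced from."""
          chain, frontier = [name], [name]
          while frontier:
              node = frontier.pop()
              for src, _rel, dst in links:
                  if src == node:
                      chain.append(dst)
                      frontier.append(dst)
          return chain

      for f in provenance("plot_001.png"):
          print(f, files[f])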

  4. Improving Access to NASA Earth Science Data through Collaborative Metadata Curation

    Science.gov (United States)

    Sisco, A. W.; Bugbee, K.; Shum, D.; Baynes, K.; Dixon, V.; Ramachandran, R.

    2017-12-01

    The NASA-developed Common Metadata Repository (CMR) is a high-performance metadata system that currently catalogs over 375 million Earth science metadata records. It serves as the authoritative metadata management system of NASA's Earth Observing System Data and Information System (EOSDIS), enabling NASA Earth science data to be discovered and accessed by a worldwide user community. The size of the EOSDIS data archive is steadily increasing, and the ability to manage and query this archive depends on the input of high quality metadata to the CMR. Metadata that does not provide adequate descriptive information diminishes the CMR's ability to effectively find and serve data to users. To address this issue, an innovative and collaborative review process is underway to systematically improve the completeness, consistency, and accuracy of metadata for approximately 7,000 data sets archived by NASA's twelve EOSDIS data centers, or Distributed Active Archive Centers (DAACs). The process involves automated and manual metadata assessment of both collection and granule records by a team of Earth science data specialists at NASA Marshall Space Flight Center. The team communicates results to DAAC personnel, who then make revisions and reingest improved metadata into the CMR. Implementation of this process relies on a network of interdisciplinary collaborators leveraging a variety of communication platforms and long-range planning strategies. Curating metadata at this scale and resolving metadata issues through community consensus improves the CMR's ability to serve current and future users and also introduces best practices for stewarding the next generation of Earth Observing System data. This presentation will detail the metadata curation process, its outcomes thus far, and also share the status of ongoing curation activities.
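
    As a small, hedged example of how such a catalog can be queried programmatically, the sketch below calls the CMR's public collection search endpoint as documented by Earthdata; the endpoint, parameters, and response layout shown here are assumptions to verify against the current CMR API documentation.

      import requests

      def search_collections(keyword, page_size=5):
          # Assumed public CMR search endpoint and parameters; check the
          # Earthdata CMR API documentation before relying on them.
          url = "https://cmr.earthdata.nasa.gov/search/collections.json"
          resp = requests.get(url, params={"keyword": keyword, "page_size": page_size},
                              timeout=30)
          resp.raise_for_status()
          for entry in resp.json().get("feed", {}).get("entry", []):
              print(entry.get("short_name"), "-", entry.get("title"))

      # search_collections("sea surface temperature")   # example query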

  5. Body size structure, biometric relationships and density of Chiton albolineatus (Mollusca: Polyplacophora) on the intertidal rocky zone of three islands of Mazatlan Bay, SE of the Gulf of California

    OpenAIRE

    Flores-Campaña, Luis Miguel; Arzola-González, Juan Francisco; de León-Herrera, Ramón

    2012-01-01

    Populations of the polyplacophoran mollusk Chiton albolineatus were studied at 6 sites with different wave exposure of the rocky shores of 3 islands of Mazatlan Bay (southeastern side of the Gulf of California). This chiton species is endemic to the Mexican Pacific coast. Chitons were sampled on wave-exposed and wave-protected sites in the intertidal zone of these islands from January to December 2008 to determine its demographic patterns based on density and body size. Length (L), breadth (B...

  6. ASDC Collaborations and Processes to Ensure Quality Metadata and Consistent Data Availability

    Science.gov (United States)

    Trapasso, T. J.

    2017-12-01

    With the introduction of new tools, faster computing, and less expensive storage, increased volumes of data are expected to be managed with existing or fewer resources. Metadata management is becoming a heightened challenge as data volumes increase, resulting in more metadata records that need to be curated for each product. To address metadata availability and completeness, NASA ESDIS has taken significant strides with the creation of the Unified Metadata Model (UMM) and the Common Metadata Repository (CMR). The UMM helps address hurdles posed by the increasing number of metadata dialects, and the CMR provides a primary repository for metadata so that required metadata fields can be served through a growing number of tools and services. However, metadata quality remains an issue, as metadata is not always intuitive to the end user. In response to these challenges, the NASA Atmospheric Science Data Center (ASDC) created the Collaboratory for quAlity Metadata Preservation (CAMP) and defined the Product Lifecycle Process (PLP) to work congruently. CAMP is unique in that it provides science team members with a UI to directly supply metadata that is complete, compliant, and accurate for their data products. This replaces back-and-forth communication that often results in misinterpreted metadata. Upon review by ASDC staff, metadata is submitted to the CMR for broader distribution through Earthdata. Further, approval of science team metadata in CAMP automatically triggers the ASDC PLP workflow to ensure appropriate services are applied throughout the product lifecycle. This presentation will review the design elements of CAMP and PLP as well as demonstrate interfaces to each. It will show the benefits that CAMP and PLP provide to the ASDC, benefits that could potentially extend to additional NASA Earth Science Data and Information System (ESDIS) Distributed Active Archive Centers (DAACs).

  7. Metadata Authoring with Versatility and Extensibility

    Science.gov (United States)

    Pollack, Janine; Olsen, Lola

    2004-01-01

    NASA's Global Change Master Directory (GCMD) assists the scientific community in the discovery of and linkage to Earth science data sets and related services. The GCMD holds over 13,800 data set descriptions in Directory Interchange Format (DIF) and 700 data service descriptions in Service Entry Resource Format (SERF), encompassing the disciplines of geology, hydrology, oceanography, meteorology, and ecology. Data descriptions also contain geographic coverage information and direct links to the data, thus allowing researchers to discover data pertaining to a geographic location of interest, then quickly acquire those data. The GCMD strives to be the preferred data locator for world-wide directory-level metadata. In this vein, scientists and data providers must have access to intuitive and efficient metadata authoring tools. Existing GCMD tools are attracting widespread usage; however, a need for tools that are portable, customizable and versatile still exists. With tool usage directly influencing metadata population, it has become apparent that new tools are needed to fill these voids. As a result, the GCMD has released a new authoring tool allowing for both web-based and stand-alone authoring of descriptions. Furthermore, this tool incorporates the ability to plug-and-play the metadata format of choice, offering users options of DIF, SERF, FGDC, ISO or any other defined standard. Allowing data holders to work with their preferred format, as well as the option of a stand-alone application or web-based environment, docBUILDER will assist the scientific community in efficiently creating quality data and services metadata.

  8. Making the Case for Embedded Metadata in Digital Images

    DEFF Research Database (Denmark)

    Smith, Kari R.; Saunders, Sarah; Kejser, U.B.

    2014-01-01

    This paper discusses the standards, methods, use cases, and opportunities for using embedded metadata in digital images. We explain past and current work on developing specifications and standards for embedding metadata of different types, and the practicalities of data exchange in heritage institutions and the culture sector. Our examples and findings support the case for embedded metadata in digital images and the opportunities for such use more broadly in non-heritage sectors as well. We encourage the adoption of embedded metadata by digital image content creators and curators as well as those developing software and hardware that support the creation or re-use of digital images. We conclude that the usability of born-digital images as well as physical objects that are digitized can be extended and the files preserved more readily with embedded metadata.

  9. Interpreting the ASTM 'content standard for digital geospatial metadata'

    Science.gov (United States)

    Nebert, Douglas D.

    1996-01-01

    ASTM and the Federal Geographic Data Committee have developed a content standard for spatial metadata to facilitate documentation, discovery, and retrieval of digital spatial data using vendor-independent terminology. Spatial metadata elements are identifiable quality and content characteristics of a data set that can be tied to a geographic location or area. Several Office of Management and Budget Circulars and initiatives have been issued that specify improved cataloguing of and accessibility to federal data holdings. An Executive Order further requires the use of the metadata content standard to document digital spatial data sets. Collection and reporting of spatial metadata for field investigations performed for the federal government is an anticipated requirement. This paper provides an overview of the draft spatial metadata content standard and a description of how the standard could be applied to investigations collecting spatially-referenced field data.

  11. The role of domoic acid in abortion and premature parturition of California sea lions (Zalophus californianus) on San Miguel Island, California.

    Science.gov (United States)

    Goldstein, Tracey; Zabka, Tanja S; Delong, Robert L; Wheeler, Elizabeth A; Ylitalo, Gina; Bargu, Sibel; Silver, Mary; Leighfield, Tod; Van Dolah, Frances; Langlois, Gregg; Sidor, Inga; Dunn, J Lawrence; Gulland, Frances M D

    2009-01-01

    Domoic acid is a glutaminergic neurotoxin produced by marine algae such as Pseudo-nitzschia australis. California sea lions (Zalophus californianus) ingest the toxin when foraging on planktivorous fish. Adult females comprise 60% of stranded animals admitted for rehabilitation due to acute domoic acid toxicosis and commonly suffer from reproductive failure, including abortions and premature live births. Domoic acid has been shown to cross the placenta exposing the fetus to the toxin. To determine whether domoic acid was playing a role in reproductive failure in sea lion rookeries, 67 aborted and live-born premature pups were sampled on San Miguel Island in 2005 and 2006 to investigate the causes for reproductive failure. Analyses included domoic acid, contaminant and infectious disease testing, and histologic examination. Pseudo-nitzschia spp. were present both in the environment and in sea lion feces, and domoic acid was detected in the sea lion feces and in 17% of pup samples tested. Histopathologic findings included systemic and localized inflammation and bacterial infections of amniotic origin, placental abruption, and brain edema. The primary lesion in five animals with measurable domoic acid concentrations was brain edema, a common finding and, in some cases, the only lesion observed in aborted premature pups born to domoic acid-intoxicated females in rehabilitation. Blubber organochlorine concentrations were lower than those measured previously in premature sea lion pups collected in the 1970s. While the etiology of abortion and premature parturition was varied in this study, these results suggest that domoic acid contributes to reproductive failure on California sea lion rookeries.

  12. A Novel Architecture of Metadata Management System Based on Intelligent Cache

    Institute of Scientific and Technical Information of China (English)

    SONG Baoyan; ZHAO Hongwei; WANG Yan; GAO Nan; XU Jin

    2006-01-01

    This paper introduces a novel architecture for a metadata management system based on an intelligent cache, called the Metadata Intelligent Cache Controller (MICC). By using an intelligent cache to control the metadata system, MICC can deal with different scenarios, such as splitting and merging queries into sub-queries for metadata sets available locally, in order to reduce the access time of remote queries. Applications can find partial results in the local cache, while the remaining portion of the metadata is fetched from remote locations. Using the existing metadata, MICC can not only enhance the fault tolerance and load balancing of the system effectively, but also improve the efficiency of access while ensuring access quality.
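
    As a rough illustration of the splitting idea (a Python sketch under assumed interfaces, not the MICC code), a query for a set of metadata keys can be answered partially from a local cache while only the missing keys are fetched from a remote store:

        # Sketch: answer a metadata query partly from a local cache, partly remotely.
        local_cache = {"file1.size": "10MB", "file1.owner": "alice"}
        remote_store = {"file1.checksum": "9af3c1", "file1.created": "2006-01-01"}

        def fetch_remote(keys):
            # stand-in for a (slower) remote metadata service
            return {k: remote_store[k] for k in keys if k in remote_store}

        def query(keys):
            hits = {k: local_cache[k] for k in keys if k in local_cache}   # local sub-query
            missing = [k for k in keys if k not in hits]                   # remote sub-query
            hits.update(fetch_remote(missing))
            local_cache.update(hits)                                       # refresh the cache
            return hits

        print(query(["file1.size", "file1.checksum"]))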

  13. Leveraging Metadata to Create Better Web Services

    Science.gov (United States)

    Mitchell, Erik

    2012-01-01

    Libraries have been increasingly concerned with data creation, management, and publication. This increase is partly driven by shifting metadata standards in libraries and partly by the growth of data and metadata repositories being managed by libraries. In order to manage these data sets, libraries are looking for new preservation and discovery…

  14. A Metadata Schema for Geospatial Resource Discovery Use Cases

    Directory of Open Access Journals (Sweden)

    Darren Hardy

    2014-07-01

    Full Text Available We introduce a metadata schema that focuses on GIS discovery use cases for patrons in a research library setting. Text search, faceted refinement, and spatial search and relevancy are among GeoBlacklight's primary use cases for federated geospatial holdings. The schema supports a variety of GIS data types and enables contextual, collection-oriented discovery applications as well as traditional portal applications. One key limitation of GIS resource discovery is the general lack of normative metadata practices, which has led to a proliferation of metadata schemas and duplicate records. The ISO 19115/19139 and FGDC standards specify metadata formats, but are intricate, lengthy, and not focused on discovery. Moreover, they require sophisticated authoring environments and cataloging expertise. Geographic metadata standards target preservation and quality measure use cases, but they do not provide for simple inter-institutional sharing of metadata for discovery use cases. To this end, our schema reuses elements from Dublin Core and GeoRSS to leverage their normative semantics, community best practices, open-source software implementations, and extensive examples already deployed in discovery contexts such as web search and mapping. Finally, we discuss a Solr implementation of the schema using a "geo" extension to MODS.
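
    A minimal sketch of what such a discovery-oriented record might look like follows (Python; the field names echo the GeoBlacklight/Dublin Core/GeoRSS style described above, but the exact keys and values here are illustrative assumptions rather than the published schema):

        import json

        # Illustrative discovery record in a GeoBlacklight-like schema (keys are assumptions).
        record = {
            "dc_title_s": "Hydrography, Santa Barbara County, California",
            "dc_description_s": "Streams and water bodies digitized from 1:24,000 quadrangles.",
            "dct_provenance_s": "Example University Library",
            "dc_rights_s": "Public",
            "layer_geom_type_s": "Line",
            "solr_geom": "ENVELOPE(-120.7, -119.4, 35.1, 34.3)",   # W, E, N, S bounding box
            "dct_temporal_sm": ["2009"],
        }
        print(json.dumps(record, indent=2))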

  15. Managing ebook metadata in academic libraries taming the tiger

    CERN Document Server

    Frederick, Donna E

    2016-01-01

    Managing ebook Metadata in Academic Libraries: Taming the Tiger tackles the topic of ebooks in academic libraries, a trend that has been welcomed by students, faculty, researchers, and library staff. However, at the same time, the reality of acquiring ebooks, making them discoverable, and managing them presents library staff with many new challenges. Traditional methods of cataloging and managing library resources are no longer relevant where the purchasing of ebooks in packages and demand driven acquisitions are the predominant models for acquiring new content. Most academic libraries have a complex metadata environment wherein multiple systems draw upon the same metadata for different purposes. This complexity makes the need for standards-based interoperable metadata more important than ever. In addition to complexity, the nature of the metadata environment itself typically varies slightly from library to library making it difficult to recommend a single set of practices and procedures which would be releva...

  16. International Metadata Initiatives: Lessons in Bibliographic Control.

    Science.gov (United States)

    Caplan, Priscilla

    This paper looks at a subset of metadata schemes, including the Text Encoding Initiative (TEI) header, the Encoded Archival Description (EAD), the Dublin Core Metadata Element Set (DCMES), and the Visual Resources Association (VRA) Core Categories for visual resources. It examines why they developed as they did, major point of difference from…

  17. Building a Disciplinary Metadata Standards Directory

    Directory of Open Access Journals (Sweden)

    Alexander Ball

    2014-07-01

    Full Text Available The Research Data Alliance (RDA) Metadata Standards Directory Working Group (MSDWG) is building a directory of descriptive, discipline-specific metadata standards. The purpose of the directory is to promote the discovery, access and use of such standards, thereby improving the state of research data interoperability and reducing duplicative standards development work. This work builds upon the UK Digital Curation Centre's Disciplinary Metadata Catalogue, a resource created with much the same aim in mind. The first stage of the MSDWG's work was to update and extend the information contained in the catalogue. In the current, second stage, a new platform is being developed in order to extend the functionality of the directory beyond that of the catalogue, and to make it easier to maintain and sustain. Future work will include making the directory more amenable to use by automated tools.

  18. Treating metadata as annotations: separating the content markup from the content

    Directory of Open Access Journals (Sweden)

    Fredrik Paulsson

    2007-11-01

    Full Text Available The use of digital learning resources creates an increasing need for semantic metadata, describing the whole resource, as well as parts of resources. Traditionally, schemas such as Text Encoding Initiative (TEI have been used to add semantic markup for parts of resources. This is not sufficient for use in a ”metadata ecology”, where metadata is distributed, coherent to different Application Profiles, and added by different actors. A new methodology, where metadata is “pointed in” as annotations, using XPointers, and RDF is proposed. A suggestion for how such infrastructure can be implemented, using existing open standards for metadata, and for the web is presented. We argue that such methodology and infrastructure is necessary to realize the decentralized metadata infrastructure needed for a “metadata ecology".

  19. A Generic Metadata Editor Supporting System Using Drupal CMS

    Science.gov (United States)

    Pan, J.; Banks, N. G.; Leggott, M.

    2011-12-01

    Metadata handling is a key factor in preserving and reusing scientific data. In recent years, standardized structural metadata has become widely used in Geoscience communities. However, there exist many different standards in Geosciences, such as the current version of the Federal Geographic Data Committee's Content Standard for Digital Geospatial Metadata (FGDC CSDGM), the Ecological Metadata Language (EML), the Geography Markup Language (GML), and the emerging ISO 19115 and related standards. In addition, there are many different subsets within the Geoscience subdomain, such as the Biological Profile of the FGDC CSDGM, or for geopolitical regions, such as the European Profile or the North American Profile in the ISO standards. It is therefore desirable to have a software foundation to support metadata creation and editing for multiple standards and profiles, without reinventing the wheel. We have developed a software module as a generic, flexible software system to do just that: to facilitate support for multiple metadata standards and profiles. The software consists of a set of modules for the Drupal Content Management System (CMS), with minimal dependencies on other Drupal modules. There are two steps in using the system's metadata functions. First, an administrator can use the system to design a user form, based on an XML schema and its instances. The form definition is named and stored in the Drupal database as an XML blob. Second, users in an editor role can then use the persisted XML definition to render an actual metadata entry form, for creating or editing a metadata record. Behind the scenes, the form definition XML is transformed into a PHP array, which is then rendered via the Drupal Form API. When the form is submitted, the posted values are used to modify a metadata record. Drupal hooks can be used to perform custom processing on the metadata record before and after submission. It is trivial to store the metadata record as an actual XML file

  20. Macrobenthic community response to copper in Shelter Island Yacht Basin, San Diego Bay, California.

    Science.gov (United States)

    Neira, Carlos; Mendoza, Guillermo; Levin, Lisa A; Zirino, Alberto; Delgadillo-Hinojosa, Francisco; Porrachia, Magali; Deheyn, Dimitri D

    2011-04-01

    We examined Cu contamination effects on macrobenthic communities and Cu concentration in invertebrates within Shelter Island Yacht Basin, San Diego Bay, California. Results indicate that at some sites, Cu in sediment has exceeded a threshold for "self defense" mechanisms and highlight the potential negative impacts on benthic faunal communities where Cu accumulates and persists in sediments. At sites with elevated Cu levels in sediment, macrobenthic communities were not only less diverse but also their total biomass and body size (individual biomass) were reduced compared to sites with lower Cu. Cu concentration in tissue varied between species and within the same species, reflecting differing abilities to "regulate" their body load. The spatial complexity of Cu effects in a small marina such as SIYB emphasizes that sediment-quality criteria based solely on laboratory experiments should be used with caution, as they do not necessarily reflect the condition at the community and ecosystem levels. Copyright © 2011 Elsevier Ltd. All rights reserved.

  1. Distributed metadata in a high performance computing environment

    Science.gov (United States)

    Bent, John M.; Faibish, Sorin; Zhang, Zhenhua; Liu, Xuezhao; Tang, Haiying

    2017-07-11

    A computer-executable method, system, and computer program product for managing metadata in a distributed storage system, wherein the distributed storage system includes one or more burst buffers enabled to operate with a distributed key-value store, the computer-executable method, system, and computer program product comprising receiving a request for metadata associated with a block of data stored in a first burst buffer of the one or more burst buffers in the distributed storage system, wherein the metadata is associated with a key-value, determining which of the one or more burst buffers stores the requested metadata, and upon determination that a first burst buffer of the one or more burst buffers stores the requested metadata, locating the key-value in a portion of the distributed key-value store accessible from the first burst buffer.
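
    The flow described above can be paraphrased in a few lines of Python (purely a sketch under assumed interfaces, not the patented implementation): hash a data block's metadata key to decide which burst buffer's slice of the key-value store is responsible, then look the metadata up there.

        import hashlib

        # Sketch: locate metadata for a data block across several burst buffers.
        burst_buffers = [
            {"kv": {}},    # burst buffer 0's slice of the distributed key-value store
            {"kv": {}},    # burst buffer 1's slice
        ]

        def owner(key):
            """Pick the burst buffer responsible for a metadata key (hashing stand-in)."""
            h = int(hashlib.md5(key.encode()).hexdigest(), 16)
            return h % len(burst_buffers)

        def put_metadata(key, value):
            burst_buffers[owner(key)]["kv"][key] = value

        def get_metadata(key):
            return burst_buffers[owner(key)]["kv"].get(key)

        put_metadata("block:42", {"offset": 0, "length": 1048576, "checksum": "crc32:abc123"})
        print(get_metadata("block:42"))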

  2. Metadata Design in the New PDS4 Standards - Something for Everybody

    Science.gov (United States)

    Raugh, Anne C.; Hughes, John S.

    2015-11-01

    The Planetary Data System (PDS) archives, supports, and distributes data of diverse targets, from diverse sources, to diverse users. One of the core problems addressed by the PDS4 data standard redesign was that of metadata - how to accommodate the increasingly sophisticated demands of search interfaces, analytical software, and observational documentation into label standards without imposing limits and constraints that would impinge on the quality or quantity of metadata that any particular observer or team could supply. And yet, as an archive, PDS must have detailed documentation for the metadata in the labels it supports, or the institutional knowledge encoded into those attributes will be lost - putting the data at risk. The PDS4 metadata solution is based on a three-step approach. First, it is built on two key ISO standards: ISO 11179 "Information Technology - Metadata Registries", which provides a common framework and vocabulary for defining metadata attributes; and ISO 14721 "Space Data and Information Transfer Systems - Open Archival Information System (OAIS) Reference Model", which provides the framework for the information architecture that enforces the object-oriented paradigm for metadata modeling. Second, PDS has defined a hierarchical system that allows it to divide its metadata universe into namespaces ("data dictionaries", conceptually), and more importantly to delegate stewardship for a single namespace to a local authority. This means that a mission can develop its own data model with a high degree of autonomy and effectively extend the PDS model to accommodate its own metadata needs within the common ISO 11179 framework. Finally, within a single namespace - even the core PDS namespace - existing metadata structures can be extended and new structures added to the model as new needs are identified. This poster illustrates the PDS4 approach to metadata management and highlights the expected return on the development investment for PDS, users and data

  3. Metabolonote: A wiki-based database for managing hierarchical metadata of metabolome analyses

    Directory of Open Access Journals (Sweden)

    Takeshi eAra

    2015-04-01

    Full Text Available Metabolomics—technology for comprehensive detection of small molecules in an organism—lags behind the other omics in terms of publication and dissemination of experimental data. Among the reasons for this are the difficulty of precisely recording information about complicated analytical experiments (metadata), the existence of various databases with their own metadata descriptions, and the low reusability of the published data, resulting in submitters (the researchers who generate the data) being insufficiently motivated. To tackle these issues, we developed Metabolonote, a Semantic MediaWiki-based database designed specifically for managing metabolomic metadata. We also defined a metadata and data description format, called TogoMD, with an ID system that is required for unique access to each level of the tree-structured metadata such as study purpose, sample, analytical method, and data analysis. Separation of the management of metadata from that of data and permission to attach related information to the metadata provide advantages for submitters, readers, and database developers. The metadata are enriched with information such as links to comparable data, thereby functioning as a hub of related data resources. They also enhance not only readers' understanding and use of data, but also submitters' motivation to publish the data. The metadata are computationally shared among other systems via APIs, which facilitates the construction of novel databases by database developers. A permission system that allows publication of immature metadata and feedback from readers also helps submitters to improve their metadata. Hence, this aspect of Metabolonote, as a metadata preparation tool, is complementary to high-quality and persistent data repositories such as MetaboLights. A total of 808 metadata records for analyzed data obtained from 35 biological species are currently published. Metabolonote and related tools are available free of cost at http://metabolonote.kazusa.or.jp/.

  4. Forensic devices for activism: Metadata tracking and public proof

    Directory of Open Access Journals (Sweden)

    Lonneke van der Velden

    2015-10-01

    Full Text Available The central topic of this paper is a mobile phone application, ‘InformaCam’, which turns metadata from a surveillance risk into a method for the production of public proof. InformaCam allows one to manage and delete metadata from images and videos in order to diminish surveillance risks related to online tracking. Furthermore, it structures and stores the metadata in such a way that the documentary material becomes better accommodated to evidentiary settings, if needed. In this paper I propose that InformaCam should be interpreted as a ‘forensic device’. By using the conceptualization of forensics and work on socio-technical devices, the paper discusses how InformaCam, through a range of interventions, rearranges metadata into a technology of evidence. InformaCam explicitly recognizes mobile phones as context-aware, uses their sensors, and structures metadata in order to facilitate data analysis after images are captured. Through these modifications it invents a form of ‘sensory data forensics’. By treating data in this particular way, surveillance resistance does more than seek awareness. It becomes engaged with investigatory practices. Considering the extent to which states conduct metadata surveillance, the project can be seen as a timely response to the unequal distribution of power over data.

  5. Metadata Creation, Management and Search System for your Scientific Data

    Science.gov (United States)

    Devarakonda, R.; Palanisamy, G.

    2012-12-01

    Mercury is a set of tools for creating, searching, and retrieving biogeochemical metadata. The Mercury toolset provides orders of magnitude improvements in search speed, support for any metadata format, integration with Google Maps for spatial queries, multi-faceted search, search suggestions, support for RSS (Really Simple Syndication) delivery of search results, and enhanced customization to meet the needs of the multiple projects that use Mercury. Mercury's metadata editor provides an easy way to create metadata, and Mercury's search interface provides a single portal to search for data and information contained in disparate data management systems, each of which may use any metadata format including FGDC, ISO-19115, Dublin-Core, Darwin-Core, DIF, ECHO, and EML. Mercury harvests metadata and key data from contributing project servers distributed around the world and builds a centralized index. The search interfaces then allow the users to perform a variety of fielded, spatial, and temporal searches across these metadata sources. This centralized repository of metadata with distributed data sources provides extremely fast search results to the user, while allowing data providers to advertise the availability of their data and maintain complete control and ownership of that data. Mercury is being used by more than 14 different projects across 4 federal agencies. It was originally developed for NASA, with continuing development funded by NASA, USGS, and DOE for a consortium of projects. Mercury won NASA's Earth Science Data Systems Software Reuse Award in 2008. References: R. Devarakonda, G. Palanisamy, B.E. Wilson, and J.M. Green, "Mercury: reusable metadata management data discovery and access system", Earth Science Informatics, vol. 3, no. 1, pp. 87-94, May 2010. R. Devarakonda, G. Palanisamy, J.M. Green, B.E. Wilson, "Data sharing and retrieval using OAI-PMH", Earth Science Informatics DOI: 10.1007/s12145-010-0073-0, (2010);
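
    For illustration, harvesting Dublin Core records over OAI-PMH, as Mercury-style harvesters do, can be sketched in a few lines of Python using the standard library. The endpoint below is a placeholder, not an actual Mercury data source, and the sketch is not the Mercury code itself.

        from urllib.parse import urlencode
        from urllib.request import urlopen
        import xml.etree.ElementTree as ET

        # Sketch: harvest Dublin Core metadata records from an OAI-PMH provider.
        endpoint = "https://example.org/oai"                    # placeholder endpoint
        params = {"verb": "ListRecords", "metadataPrefix": "oai_dc", "from": "2010-01-01"}

        with urlopen(endpoint + "?" + urlencode(params)) as response:
            tree = ET.parse(response)

        ns = {"oai": "http://www.openarchives.org/OAI/2.0/",
              "dc": "http://purl.org/dc/elements/1.1/"}
        for record in tree.findall(".//oai:record", ns):
            title = record.find(".//dc:title", ns)
            print(title.text if title is not None else "(no title)")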

  6. Survey data and metadata modelling using document-oriented NoSQL

    Science.gov (United States)

    Rahmatuti Maghfiroh, Lutfi; Gusti Bagus Baskara Nugraha, I.

    2018-03-01

    Survey data collected from year to year are subject to metadata changes, yet the data need to be stored in an integrated way so that statistical results can be obtained faster and more easily. A data warehouse (DW) can be used to address this need. However, variables change in every period in ways that cannot be accommodated by a traditional DW, which cannot handle variable change via Slowly Changing Dimensions (SCD). Previous research handled the change of variables in a DW and managed metadata by using a multiversion DW (MVDW) designed on a relational model. Other studies have found that a non-relational model in a NoSQL database offers faster read times than the relational model. Therefore, we propose managing metadata changes by using NoSQL. This study proposes a DW model to manage change and algorithms to retrieve data with metadata changes. Evaluation of the proposed model and algorithms shows that a database with the proposed design can retrieve data with metadata changes properly. This paper contributes to comprehensive data analysis with metadata changes (especially survey data) in integrated storage.
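
    A document-oriented sketch of the idea follows (plain Python dictionaries standing in for NoSQL documents; the field names are invented for illustration): each survey period carries its own metadata version, and retrieval maps a requested concept through whichever metadata version applies to that period.

        # Sketch: survey data whose variable names change between periods,
        # stored with per-period metadata documents (NoSQL-style).
        metadata_versions = {
            2016: {"income": "monthly_income"},            # variable name used in 2016
            2017: {"income": "household_income_monthly"},  # renamed in 2017
        }
        survey_data = [
            {"year": 2016, "monthly_income": 350},
            {"year": 2017, "household_income_monthly": 410},
        ]

        def get_variable(records, concept):
            """Retrieve one logical variable across periods despite metadata changes."""
            out = []
            for rec in records:
                field = metadata_versions[rec["year"]][concept]
                out.append((rec["year"], rec[field]))
            return out

        print(get_variable(survey_data, "income"))   # [(2016, 350), (2017, 410)]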

  7. Using Metadata to Build Geographic Information Sharing Environment on Internet

    Directory of Open Access Journals (Sweden)

    Chih-hong Sun

    1999-12-01

    Full Text Available The Internet provides a convenient environment to share geographic information. Web GIS (Geographic Information System) even provides users with a direct-access environment to geographic databases through the Internet. However, the complexity of geographic data makes it difficult for users to understand the real content and the limitations of geographic information. In some cases, users may misuse the geographic data and make wrong decisions. Meanwhile, geographic data are distributed across various government agencies, academic institutes, and private organizations, which makes it even more difficult for users to fully understand the content of these complex data. To overcome these difficulties, this research uses metadata as a guiding mechanism for users to fully understand the content and the limitations of geographic data. We introduce three metadata standards commonly used for geographic data and metadata authoring tools available in the US. We also review the current development of geographic metadata standards in Taiwan. Two metadata authoring tools are developed in this research, which will enable users to build their own geographic metadata easily. [Article content in Chinese]

  8. Development of health information search engine based on metadata and ontology.

    Science.gov (United States)

    Song, Tae-Min; Park, Hyeoun-Ae; Jin, Dal-Lae

    2014-04-01

    The aim of the study was to develop a metadata and ontology-based health information search engine ensuring semantic interoperability to collect and provide health information using different application programs. Health information metadata ontology was developed using a distributed semantic Web content publishing model based on vocabularies used to index the contents generated by the information producers as well as those used to search the contents by the users. Vocabulary for health information ontology was mapped to the Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT), and a list of about 1,500 terms was proposed. The metadata schema used in this study was developed by adding an element describing the target audience to the Dublin Core Metadata Element Set. A metadata schema and an ontology ensuring interoperability of health information available on the internet were developed. The metadata and ontology-based health information search engine developed in this study produced a better search result compared to existing search engines. Health information search engine based on metadata and ontology will provide reliable health information to both information producer and information consumers.

  9. An Interactive, Web-Based Approach to Metadata Authoring

    Science.gov (United States)

    Pollack, Janine; Wharton, Stephen W. (Technical Monitor)

    2001-01-01

    NASA's Global Change Master Directory (GCMD) serves a growing number of users by assisting the scientific community in the discovery of and linkage to Earth science data sets and related services. The GCMD holds over 8000 data set descriptions in Directory Interchange Format (DIF) and 200 data service descriptions in Service Entry Resource Format (SERF), encompassing the disciplines of geology, hydrology, oceanography, meteorology, and ecology. Data descriptions also contain geographic coverage information, thus allowing researchers to discover data pertaining to a particular geographic location, as well as subject of interest. The GCMD strives to be the preeminent data locator for world-wide directory level metadata. In this vein, scientists and data providers must have access to intuitive and efficient metadata authoring tools. Existing GCMD tools are not currently attracting widespread usage. With usage being the prime indicator of utility, it has become apparent that current tools must be improved. As a result, the GCMD has released a new suite of web-based authoring tools that enable a user to create new data and service entries, as well as modify existing data entries. With these tools, a more interactive approach to metadata authoring is taken, as they feature a visual "checklist" of data/service fields that automatically update when a field is completed. In this way, the user can quickly gauge which of the required and optional fields have not been populated. With the release of these tools, the Earth science community will be further assisted in efficiently creating quality data and services metadata. Keywords: metadata, Earth science, metadata authoring tools
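
    The "visual checklist" behaviour can be approximated in a few lines (a Python sketch; the DIF-style field names are only a plausible subset chosen for illustration, not the authoritative list): as fields are filled in, the completeness of the required and optional sets is recomputed.

        # Sketch: recompute a metadata-entry "checklist" as fields are completed.
        REQUIRED = {"Entry_Title", "Summary", "Spatial_Coverage", "Temporal_Coverage"}
        OPTIONAL = {"Data_Center", "Related_URL"}

        entry = {"Entry_Title": "Sea Surface Temperature, Pacific",
                 "Summary": "Weekly SST composites."}

        def checklist(entry):
            done = {k for k, v in entry.items() if v}
            return {
                "required_missing": sorted(REQUIRED - done),
                "optional_missing": sorted(OPTIONAL - done),
                "complete": REQUIRED <= done,
            }

        print(checklist(entry))
        # {'required_missing': ['Spatial_Coverage', 'Temporal_Coverage'], ...}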

  10. Collaborative Metadata Curation in Support of NASA Earth Science Data Stewardship

    Science.gov (United States)

    Sisco, Adam W.; Bugbee, Kaylin; le Roux, Jeanne; Staton, Patrick; Freitag, Brian; Dixon, Valerie

    2018-01-01

    A growing collection of NASA Earth science data is archived and distributed by EOSDIS's 12 Distributed Active Archive Centers (DAACs). Each collection and granule is described by a metadata record housed in the Common Metadata Repository (CMR). Multiple metadata standards are in use, and core elements of each are mapped to and from a common model – the Unified Metadata Model (UMM). This work is carried out by the Analysis and Review of CMR (ARC) team.

  11. Modernization of the Caltech/USGS Southern California Seismic Network

    Science.gov (United States)

    Bhadha, R.; Devora, A.; Hauksson, E.; Johnson, D.; Thomas, V.; Watkins, M.; Yip, R.; Yu, E.; Given, D.; Cone, G.; Koesterer, C.

    2009-12-01

    The USGS/ANSS/ARRA program is providing Government Furnished Equipment (GFE), and two year funding for upgrading the Caltech/USGS Southern California Seismic Network (SCSN). The SCSN is the modern digital ground motion seismic network in southern California that monitors seismicity and provides real-time earthquake information products such as rapid notifications, moment tensors, and ShakeMap. The SCSN has evolved through the years and now consists of several well-integrated components such as Short-Period analog, TERRAscope, digital stations, and real-time strong motion stations, or about 300 stations. In addition, the SCSN records data from about 100 stations provided by partner networks. To strengthen the ability of SCSN to meet the ANSS performance standards, we will install GFE and carry out the following upgrades and improvements of the various components of the SCSN: 1) Upgrade of dataloggers at seven TERRAscope stations; 2) Upgrade of dataloggers at 131 digital stations and upgrade broadband sensors at 25 stations; 3) Upgrade of SCSN metadata capabilities; 4) Upgrade of telemetry capabilities for both seismic and GPS data; and 5) Upgrade balers at stations with existing Q330 dataloggers. These upgrades will enable the SCSN to meet the ANSS Performance Standards more consistently than before. The new equipment will improve station uptimes and reduce maintenance costs. The new equipment will also provide improved waveform data quality and consequently superior data products. The data gaps due to various outages will be minimized, and ‘late’ data will be readily available through retrieval from on-site storage. Compared to the outdated equipment, the new equipment will speed up data delivery by about 10 sec, which is fast enough for earthquake early warning applications. The new equipment also has about a factor of ten lower consumption of power. We will also upgrade the SCSN data acquisition and data center facilities, which will improve the SCSN

  12. Characterization of Urban Heat and Exacerbation: Development of a Heat Island Index for California

    Directory of Open Access Journals (Sweden)

    Haider Taha

    2017-08-01

    Full Text Available To further evaluate the factors influencing public heat and air-quality health, a characterization of how urban areas affect the thermal environment, particularly in terms of the air temperature, is necessary. To assist public health agencies in ranking urban areas in terms of heat stress and developing mitigation plans or allocating various resources, this study characterized urban heat in California and quantified an urban heat island index (UHII) at the census-tract level (~1 km2). Multi-scale atmospheric modeling was carried out and a practical UHII definition was developed. The UHII was diagnosed with different metrics and its spatial patterns were characterized for small, large, urban-climate archipelago, inland, and coastal areas. It was found that within each region, wide ranges of urban heat and UHII exist. At the lower end of the scale (in smaller urban areas), the UHII reaches up to 20 degree-hours per day (DH/day; °C.hr/day), whereas at the higher end (in larger areas), it reaches up to 125 DH/day or greater. The average largest temperature difference (urban heat island) within each region ranges from 0.5–1.0 °C in smaller areas to up to 5 °C or more at the higher end, such as in urban-climate archipelagos. Furthermore, urban heat is exacerbated during warmer weather and that, in turn, can worsen the health impacts of heat events presently and in the future, for which it is expected that both the frequency and duration of heat waves will increase.
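
    The degree-hours metric is straightforward to reproduce: sum the urban-minus-reference temperature differences over the hours of a day. The short Python illustration below uses made-up temperature values and assumes that only positive differences accumulate, which is one plausible reading of the metric, not the study's exact definition.

        # Sketch: urban heat island index as degree-hours per day (DH/day, i.e. °C.hr/day).
        urban_temps     = [18, 18, 19, 21, 24, 27, 29, 30, 29, 26, 22, 20]  # 2-hourly, °C (example)
        reference_temps = [17, 17, 18, 19, 22, 24, 26, 27, 27, 24, 21, 19]
        hours_per_sample = 2.0

        dh_per_day = sum(
            max(u - r, 0.0) * hours_per_sample
            for u, r in zip(urban_temps, reference_temps)
        )
        print(f"UHII = {dh_per_day:.1f} DH/day")   # 44.0 DH/day for this example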

  13. Changes in vegetation and biological soil crust communities on sand dunes stabilizing after a century of grazing on San Miguel Island, Channel Island National Park, California

    Science.gov (United States)

    Zellman, Kristine L.

    2014-01-01

    San Miguel Island is the westernmost of the California Channel Islands and one of the windiest areas on the west coast of North America. The majority of the island is covered by coastal sand dunes, which were stripped of vegetation and subsequently mobilized due to droughts and sheep ranching during the late 19th century and early 20th century. Since the removal of grazing animals, vegetation and biological soil crusts have once again stabilized many of the island's dunes. In this study, historical aerial photographs and field surveys were used to develop a chronosequence of the pattern of change in vegetation communities and biological soil crust levels of development (LOD) along a gradient of dune stabilization. Historical aerial photographs from 1929, 1954, 1977, and 2009 were georeferenced and used to delineate changes in vegetation canopy cover and active (unvegetated) dune extent among 5 historical periods (pre-1929, 1929–1954, 1954–1977, 1977–2009, and 2009–2011). During fieldwork, vegetation and biological soil crust communities were mapped along transects distributed throughout San Miguel Island's central dune field on land forms that had stabilized during the 5 time periods of interest. Analyses in a geographic information system (GIS) quantified the pattern of changes that vegetation and biological soil crust communities have exhibited on the San Miguel Island dunes over the past 80 years. Results revealed that a continuing increase in total vegetation cover and a complex pattern of change in vegetation communities have taken place on the San Miguel Island dunes since the removal of grazing animals. The highly specialized native vascular vegetation (sea rocket, dunedelion, beach-bur, and locoweed) are the pioneer stabilizers of the dunes. This pioneer community is replaced in later stages by communities that are dominated by native shrubs (coastal goldenbush, silver lupine, coyote-brush, and giant coreopsis), with apparently overlapping or

  14. EPA Metadata Style Guide Keywords and EPA Organization Names

    Science.gov (United States)

    The following keywords and EPA organization names listed below, along with EPA’s Metadata Style Guide, are intended to provide suggestions and guidance to assist with the standardization of metadata records.

  15. AFSC/NMML/CCEP: Natality rates of California sea lions at San Miguel Island, California during 1987-2008

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The National Marine Mammal Laboratories' California Current Ecosystem Program (AFSC/NOAA) initiated a long-term marking program of California sea lions (Zalophus...

  16. ATLAS Metadata Task Force

    Energy Technology Data Exchange (ETDEWEB)

    ATLAS Collaboration; Costanzo, D.; Cranshaw, J.; Gadomski, S.; Jezequel, S.; Klimentov, A.; Lehmann Miotto, G.; Malon, D.; Mornacchi, G.; Nemethy, P.; Pauly, T.; von der Schmitt, H.; Barberis, D.; Gianotti, F.; Hinchliffe, I.; Mapelli, L.; Quarrie, D.; Stapnes, S.

    2007-04-04

    This document provides an overview of the metadata that are needed to characterize ATLAS event data at different levels (a complete run, data streams within a run, luminosity blocks within a run, individual events).

  17. Dyniqx: a novel meta-search engine for metadata based cross search

    OpenAIRE

    Zhu, Jianhan; Song, Dawei; Eisenstadt, Marc; Barladeanu, Cristi; Rüger, Stefan

    2008-01-01

    The effect of metadata in collection fusion has not been sufficiently studied. In response to this, we present a novel meta-search engine called Dyniqx for metadata-based cross search. Dyniqx exploits the availability of metadata in academic search services such as PubMed and Google Scholar for fusing search results from heterogeneous search engines. In addition, metadata from these search engines are used for generating dynamic query controls such as sliders and tick boxes, which are ...

  18. A Shared Infrastructure for Federated Search Across Distributed Scientific Metadata Catalogs

    Science.gov (United States)

    Reed, S. A.; Truslove, I.; Billingsley, B. W.; Grauch, A.; Harper, D.; Kovarik, J.; Lopez, L.; Liu, M.; Brandt, M.

    2013-12-01

    The vast amount of science metadata can be overwhelming and highly complex. Comprehensive analysis and sharing of metadata is difficult since institutions often publish to their own repositories. There are many disjoint standards used for publishing scientific data, making it difficult to discover and share information from different sources. Services that publish metadata catalogs often have different protocols, formats, and semantics. The research community is limited by the exclusivity of separate metadata catalogs and thus it is desirable to have federated search interfaces capable of unified search queries across multiple sources. Aggregation of metadata catalogs also enables users to critique metadata more rigorously. With these motivations in mind, the National Snow and Ice Data Center (NSIDC) and Advanced Cooperative Arctic Data and Information Service (ACADIS) implemented two search interfaces for the community. Both the NSIDC Search and ACADIS Arctic Data Explorer (ADE) use a common infrastructure, which keeps maintenance costs low. The search clients are designed to make OpenSearch requests against Solr, an Open Source search platform. Solr applies indexes to specific fields of the metadata, which in this instance optimizes queries containing keywords, spatial bounds and temporal ranges. NSIDC metadata is reused by both search interfaces but the ADE also brokers additional sources. Users can quickly find relevant metadata with minimal effort, which ultimately lowers costs for research. This presentation will highlight the reuse of data and code between NSIDC and ACADIS, discuss challenges and milestones for each project, and will identify creation and use of Open Source libraries.
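
    For illustration, the kind of keyword-plus-space-plus-time query routed to Solr might be assembled as follows (a Python sketch; the endpoint, core name, and field names are placeholders, not the actual NSIDC/ADE configuration):

        from urllib.parse import urlencode

        # Sketch: build a Solr query combining keywords, a bounding box, and a time range.
        params = {
            "q": "sea ice extent",
            "fq": [
                "spatial_bbox:[60.0,-180.0 TO 90.0,180.0]",        # placeholder spatial filter
                "temporal_start:[2010-01-01T00:00:00Z TO *]",      # placeholder temporal filter
            ],
            "rows": 25,
            "wt": "json",
        }
        query_string = urlencode(params, doseq=True)
        print("http://localhost:8983/solr/metadata/select?" + query_string)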

  19. Enriching The Metadata On CDS

    CERN Document Server

    Chhibber, Nalin

    2014-01-01

    This project report revolves around the open-source software package called Invenio. It provides the tools for managing digital assets in a repository and drives the CERN Document Server. The primary objective is to enhance the existing metadata in CDS with data from other libraries. An implicit part of this task is to manage disambiguation (within incoming data), remove multiple entries, and handle replication between new and existing records. All such elements and their corresponding changes are integrated within Invenio to make the upgraded metadata available on the CDS. The latter part of the report discusses some changes related to the Invenio code base itself.

  20. The Theory and Implementation for Metadata in Digital Library/Museum

    Directory of Open Access Journals (Sweden)

    Hsueh-hua Chen

    1998-12-01

    Full Text Available Digital Libraries and Museums (DL/M) have become one of the important research issues of Library and Information Science as well as other related fields. This paper describes the basic concepts of DL/M and briefly introduces the development of the Taiwan Digital Museum Project. Based on the features of various collections, we discuss how to maintain, manage, and exchange metadata, especially from the viewpoint of users. We propose a draft metadata scheme, MICI (Metadata Interchange for Chinese Information), developed by the ROSS (Resources Organization and Searching Specification) team. Finally, current problems and the future development of metadata are touched upon. [Article content in Chinese]

  1. Power Hardware-in-the-Loop-Based Anti-Islanding Evaluation and Demonstration

    Energy Technology Data Exchange (ETDEWEB)

    Schoder, Karl [Florida State Univ., Tallahassee, FL (United States). Center for Advanced Power Systems (CAPS); Langston, James [Florida State Univ., Tallahassee, FL (United States). Center for Advanced Power Systems (CAPS); Hauer, John [Florida State Univ., Tallahassee, FL (United States). Center for Advanced Power Systems (CAPS); Bogdan, Ferenc [Florida State Univ., Tallahassee, FL (United States). Center for Advanced Power Systems (CAPS); Steurer, Michael [Florida State Univ., Tallahassee, FL (United States). Center for Advanced Power Systems (CAPS); Mather, Barry [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2015-10-01

    The National Renewable Energy Laboratory (NREL) teamed with Southern California Edison (SCE), Clean Power Research (CPR), Quanta Technology (QT), and Electrical Distribution Design (EDD) to conduct a U.S. Department of Energy (DOE) and California Public Utility Commission (CPUC) California Solar Initiative (CSI)-funded research project investigating the impacts of integrating high-penetration levels of photovoltaics (PV) onto the California distribution grid. One topic researched in the context of high-penetration PV integration onto the distribution system is the ability of PV inverters to (1) detect islanding conditions (i.e., when the distribution system to which the PV inverter is connected becomes disconnected from the utility power connection) and (2) disconnect from the islanded system within the time specified in the performance specifications outlined in IEEE Standard 1547. This condition may cause damage to other connected equipment due to insufficient power quality (e.g., over-and under-voltages) and may also be a safety hazard to personnel that may be working on feeder sections to restore service. NREL teamed with the Florida State University (FSU) Center for Advanced Power Systems (CAPS) to investigate a new way of testing PV inverters for IEEE Standard 1547 unintentional islanding performance specifications using power hardware-in-loop (PHIL) laboratory testing techniques.

  2. DEVELOPMENT OF A METADATA MANAGEMENT SYSTEM FOR AN INTERDISCIPLINARY RESEARCH PROJECT

    Directory of Open Access Journals (Sweden)

    C. Curdt

    2012-07-01

    Full Text Available In every interdisciplinary, long-term research project it is essential to manage and archive all heterogeneous research data produced by the project participants during the project funding period. This has to include sustainable storage, description with metadata, easy and secure provision, backup, and visualisation of all data. To ensure the accurate description of all project data with corresponding metadata, the design and implementation of a metadata management system is a significant duty. Thus, the sustainable use and search of all research results during and after the end of the project is particularly dependent on the implementation of a metadata management system. Therefore, this paper will describe the practical experiences gained during the development of a scientific research data management system (called the TR32DB), including the corresponding metadata management system, for the multidisciplinary research project Transregional Collaborative Research Centre 32 (CRC/TR32) 'Patterns in Soil-Vegetation-Atmosphere Systems'. The entire system was developed according to the requirements of the funding agency, the user and project requirements, as well as according to recent standards and principles. The TR32DB is basically a combination of data storage, database, and web-interface. The metadata management system was designed, realized, and implemented to describe and access all project data via accurate metadata. Since the quantity and sort of descriptive metadata depends on the kind of data, a user-friendly multi-level approach was chosen to cover these requirements. Thus, the self-developed CRC/TR32 metadata framework was designed. It is a combination of general, CRC/TR32 specific, as well as data type specific properties.

  3. Metadata Exporter for Scientific Photography Management

    Science.gov (United States)

    Staudigel, D.; English, B.; Delaney, R.; Staudigel, H.; Koppers, A.; Hart, S.

    2005-12-01

    Photographs have become an increasingly important medium, especially with the advent of digital cameras. It has become inexpensive to take photographs and quickly post them on a website. However informative photos may be, they still need to be displayed in a convenient way, and be cataloged in such a manner that makes them easily locatable. Managing the great number of photographs that digital cameras allow and creating a format for efficient dissemination of the information related to the photos is a tedious task. Products such as Apple's iPhoto have greatly eased the task of managing photographs. However, they often have limitations. Un-customizable metadata fields and poor metadata extraction tools limit their scientific usefulness. A solution to this persistent problem is a customizable metadata exporter. On the ALIA expedition, we successfully managed the thousands of digital photos we took. We did this with iPhoto and a version of the exporter that is now available to the public under the name "CustomHTMLExport" (http://www.versiontracker.com/dyn/moreinfo/macosx/27777), currently undergoing formal beta testing. This software allows the use of customized metadata fields (including description, time, date, GPS data, etc.), which is exported along with the photo. It can also produce webpages with this data straight from iPhoto, in a much more flexible way than is already allowed. With this tool it becomes very easy to manage and distribute scientific photos.

  4. Languages for Metadata

    NARCIS (Netherlands)

    Brussee, R.; Veenstra, M.; Blanken, Henk; de Vries, A.P.; Blok, H.E.; Feng, L.

    2007-01-01

    The term meta originates from the Greek word μετά, meaning after. The word Metaphysics is the title of Aristotle's book coming after his book on nature called Physics. This has given meta the modern connotation of a nature of a higher order or of a more fundamental kind [1]. Literally, metadata is

  5. An Assessment of the Evolving Common Metadata Repository Standards for Airborne Field Campaigns

    Science.gov (United States)

    Northup, E. A.; Chen, G.; Early, A. B.; Beach, A. L., III; Walter, J.; Conover, H.

    2016-12-01

    The NASA Earth Venture Program has led to a dramatic increase in airborne observations, requiring updated data management practices with clearly defined data standards and protocols for metadata. While the current data management practices demonstrate some success in serving airborne science team data user needs, existing metadata models and standards such as NASA's Unified Metadata Model (UMM) for Collections (UMM-C) present challenges with respect to accommodating certain features of airborne science metadata. UMM is the model implemented in the Common Metadata Repository (CMR), which catalogs all metadata records for NASA's Earth Observing System Data and Information System (EOSDIS). One example of these challenges is with representation of spatial and temporal metadata. In addition, many airborne missions target a particular geophysical event, such as a developing hurricane. In such cases, metadata about the event is also important for understanding the data. While coverage of satellite missions is highly predictable based on orbit characteristics, airborne missions feature complicated flight patterns where measurements can be spatially and temporally discontinuous. Therefore, existing metadata models will need to be expanded for airborne measurements and sampling strategies. An Airborne Metadata Working Group was established under the auspices of NASA's Earth Science Data Systems Working Group (ESDSWG) to identify specific features of airborne metadata that can not be currently represented in the UMM and to develop new recommendations. The group includes representation from airborne data users and providers. This presentation will discuss the challenges and recommendations in an effort to demonstrate how airborne metadata curation/management can be improved to streamline data ingest and discoverability to a broader user community.

  6. Metadata capture in an electronic notebook: How to make it as simple as possible?

    Directory of Open Access Journals (Sweden)

    Menzel, Julia

    2015-09-01

    Full Text Available In the last few years, electronic laboratory notebooks (ELNs) have become popular. ELNs offer the great possibility of capturing metadata automatically. Due to the high documentation effort, metadata documentation is neglected in science. To close the gap between good data documentation and the high documentation effort for scientists, a first user-friendly solution to capture metadata in an easy way was developed. First, different protocols for the Western Blot were collected within the Collaborative Research Center 1002 and analyzed. Together with existing metadata standards identified in a literature search, a first version of the metadata scheme was developed. Second, the metadata scheme was customized for future users, including the implementation of default values for automated metadata documentation. Twelve protocols for the Western Blot were used to construct one standard protocol with ten different experimental steps. Three already existing metadata standards were used as models to construct the first version of the metadata scheme, consisting of 133 data fields in ten experimental steps. Through a revision with future users, the final metadata scheme was shortened to 90 items in three experimental steps. Using individualized default values, 51.1% of the metadata can be captured with preset values in the ELN. This lowers the data documentation effort. At the same time, researchers could benefit by providing standardized metadata for data sharing and re-use.
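
    The default-value idea can be pictured with a short sketch (Python; the field names are invented for illustration and do not reproduce the published 90-item scheme): a per-user set of defaults pre-fills the metadata record, so only the remaining fields need manual entry.

        # Sketch: pre-fill an experiment metadata record from user-specific defaults.
        user_defaults = {
            "antibody_supplier": "Vendor X",
            "blocking_buffer": "5% milk in TBST",
            "transfer_time_min": 90,
        }
        required_fields = ["sample_id", "primary_antibody", "antibody_supplier",
                           "blocking_buffer", "transfer_time_min"]

        def new_record(manual_entries):
            record = dict(user_defaults)      # captured automatically as preset values
            record.update(manual_entries)     # only these must be typed by the scientist
            missing = [f for f in required_fields if f not in record]
            return record, missing

        record, missing = new_record({"sample_id": "WB-2015-031",
                                      "primary_antibody": "anti-GAPDH"})
        print(record)
        print("still missing:", missing)      # -> []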

  7. Architecture and evolution of an Early Permian carbonate complex on a tectonically active island in east-central California

    Science.gov (United States)

    Stevens, Calvin H.; Magginetti, Robert T.; Stone, Paul

    2015-01-01

    The newly named Upland Valley Limestone represents a carbonate complex that developed on and adjacent to a tectonically active island in east-central California during a brief interval of Early Permian (late Artinskian) time. This lithologically unique, relatively thin limestone unit lies within a thick sequence of predominantly siliciclastic rocks and is characterized by its high concentration of crinoidal debris, pronounced lateral changes in thickness and lithofacies, and a largely endemic fusulinid fauna. Most outcrops represent a carbonate platform and debris derived from it and shed downslope, but another group of outcrops represents one or possibly more isolated carbonate buildups that developed offshore from the platform. Tectonic activity in the area occurred before, probably during, and after deposition of this short-lived carbonate complex.

  8. Metadata Schema Used in OCLC Sampled Web Pages

    Directory of Open Access Journals (Sweden)

    Fei Yu

    2005-12-01

    The tremendous growth of Web resources has made information organization and retrieval more and more difficult. As one approach to this problem, metadata schemas have been developed to characterize Web resources. However, many questions have been raised about the use of metadata schemas, such as: which metadata schemas have been used on the Web? How did they describe Web-accessible information? What is the distribution of these metadata schemas among Web pages? Do certain schemas dominate the others? To address these issues, this study analyzed 16,383 Web pages with meta tags extracted from 200,000 OCLC sampled Web pages in 2000. It found that only 8.19% of the Web pages used meta tags; description tags, keyword tags, and Dublin Core tags were the only three schemas used in the Web pages. This article revealed the use of meta tags in terms of their function distribution, syntax characteristics, granularity of the Web pages, and the length distribution and word number distribution of both description and keywords tags.
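    For readers unfamiliar with meta tags, the following self-contained Python sketch shows the kind of extraction such a study relies on: it collects meta name/content pairs from an HTML page and flags names that look like Dublin Core elements. The sample HTML is invented.

    ```python
    from html.parser import HTMLParser

    class MetaTagExtractor(HTMLParser):
        """Collect <meta name="..." content="..."> pairs from an HTML page."""
        def __init__(self):
            super().__init__()
            self.tags = {}

        def handle_starttag(self, tag, attrs):
            if tag == "meta":
                attrs = dict(attrs)
                name = (attrs.get("name") or "").lower()
                if name:
                    self.tags.setdefault(name, []).append(attrs.get("content", ""))

    html = """<html><head>
    <meta name="description" content="Example page about metadata schemas">
    <meta name="keywords" content="metadata, Dublin Core, OCLC">
    <meta name="DC.title" content="Metadata Schema Used in OCLC Sampled Web Pages">
    </head><body></body></html>"""

    parser = MetaTagExtractor()
    parser.feed(html)
    print(parser.tags)
    # A crude schema guess: names starting with "dc." suggest Dublin Core usage.
    print([name for name in parser.tags if name.startswith("dc.")])
    ```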

  9. Describing Geospatial Assets in the Web of Data: A Metadata Management Scenario

    Directory of Open Access Journals (Sweden)

    Cristiano Fugazza

    2016-12-01

    Metadata management is an essential enabling factor for geospatial assets because discovery, retrieval, and actual usage of the latter are tightly bound to the quality of these descriptions. Unfortunately, the multi-faceted landscape of metadata formats, requirements, and conventions makes it difficult to identify editing tools that can be easily tailored to the specificities of a given project, workgroup, and Community of Practice. Our solution is a template-driven metadata editing tool that can be customised to any XML-based schema. Its output is constituted by standards-compliant metadata records that also have a semantics-aware counterpart eliciting novel exploitation techniques. Moreover, external data sources can easily be plugged in to provide autocompletion functionalities on the basis of the data structures made available on the Web of Data. Besides presenting the essentials on customisation of the editor by means of two use cases, we extend the methodology to the whole life cycle of geospatial metadata. We demonstrate the novel capabilities enabled by RDF-based metadata representation with respect to traditional metadata management in the geospatial domain.
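    A hedged sketch of what a "semantics-aware counterpart" of a metadata record can look like, using the third-party rdflib package; the vocabulary choices and identifiers below are illustrative assumptions, not the schema produced by the editor described in the abstract.

    ```python
    from rdflib import Graph, Literal, URIRef, Namespace
    from rdflib.namespace import DCTERMS, RDF

    # Vocabulary choices are illustrative; a real profile would follow the
    # project's own schema and whatever ISO/GeoDCAT mappings it adopts.
    DCAT = Namespace("http://www.w3.org/ns/dcat#")

    g = Graph()
    g.bind("dcterms", DCTERMS)
    g.bind("dcat", DCAT)

    dataset = URIRef("http://example.org/dataset/coastal-dem")   # hypothetical identifier
    g.add((dataset, RDF.type, DCAT.Dataset))
    g.add((dataset, DCTERMS.title, Literal("Coastal DEM (example)")))
    g.add((dataset, DCTERMS.creator, Literal("Example data provider")))
    g.add((dataset, DCTERMS.spatial, Literal("POLYGON ((...))")))   # placeholder geometry

    print(g.serialize(format="turtle"))
    ```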

  10. Hydrologic Effects and Biogeographic Impacts of Coastal Fog, Channel Islands, California

    Science.gov (United States)

    Fischer, D. T.; Still, C. J.; Williams, A. P.

    2006-12-01

    Fog has long been recognized as an important component of the hydrological cycle in many ecosystems, including coastal desert fog belts, tropical cloud forests, and montane areas worldwide. Fog drip can be a major source of water, particularly during the dry season, and there is evidence in some ecosystems of direct fogwater uptake by foliar absorption. Fog and low clouds can also increase availability of water by reducing evaporative water losses. In the California Channel Islands, fog and low stratus clouds dramatically affect the water budget of coastal vegetation, particularly during the long summer drought. This work focuses on a population of Bishop pine (Pinus muricata D. Don) on Santa Cruz Island. This is the southernmost large stand of this species, and tree growth and survival appear to be strongly limited by water availability. We have used parallel measurement and modeling approaches to quantify the importance of fogwater inputs and persistent cloud cover to Bishop pine growth. We have modeled drought stress over the last century based on local climate records, calibrated against a dense network of 12 weather stations on a 7 km coastal-inland elevation gradient. Water availability is highly variable year to year, with episodic droughts that are associated with widespread tree mortality. Frequent cloud cover near the coast reduces evapotranspiration relative to the inland site (on the order of 25%), thereby delaying the onset of, and moderating the severity of, the annual summer drought. Substantial summer fog drip at higher elevations provides additional water inputs that also reduce drought severity. Beyond the theoretical availability of extra water from fog drip, tree ring analysis and xylem water isotopic data suggest that significant amounts of fog water are actually taken up by these trees. Stand boundaries appear to be driven by spatial patterns of mortality related to water availability and frequency of severe drought. These results suggest that

  11. Performance and Economics of a Wind-Diesel Hybrid Energy System: Naval Air Landing Field, San Clemente Island, California; TOPICAL

    International Nuclear Information System (INIS)

    McKenna, Ed; Olsen, Timothy

    1999-01-01

    This report provides an overview of the wind resource, economics and operation of the recently installed wind turbines in conjunction with diesel power for the Naval Air Landing Field (NALF), San Clemente Island (SCI), California Project. The primary goal of the SCI wind power system is to operate with the existing diesel power plant and provide equivalent or better power quality and system reliability than the existing diesel system. The wind system is also intended to reduce, as far as possible, the use of diesel fuel and the inherent generation of nitrogen-oxide emissions and other pollutants. The first two NM 225/30 225 kW wind turbines were installed and started shake-down operations on February 5, 1998. This report describes the initial operational data gathered from February 1998 through January 1999, as well as the SCI wind resource and initial cost of energy provided by the wind turbines on SCI. In support of this objective, several years of data on the wind resources of San Clemente Island were collected and compared to historical data. The wind resource data were used as input to economic and feasibility studies for a wind-diesel hybrid installation for SCI

  12. Towards Precise Metadata-set for Discovering 3D Geospatial Models in Geo-portals

    Science.gov (United States)

    Zamyadi, A.; Pouliot, J.; Bédard, Y.

    2013-09-01

    Accessing 3D geospatial models, eventually at no cost and for unrestricted use, is certainly an important issue as they become popular among participatory communities, consultants, and officials. Various geo-portals, mainly established for 2D resources, have tried to provide access to existing 3D resources such as digital elevation models, LIDAR, or classic topographic data. Describing the content of data, metadata is a key component of data discovery in geo-portals. An inventory of seven online geo-portals and commercial catalogues shows that the metadata referring to 3D information is very different from one geo-portal to another, as well as for similar 3D resources in the same geo-portal. The inventory considered 971 data resources affiliated with elevation. 51% of them were from three geo-portals running at Canadian federal and municipal levels whose metadata resources did not consider 3D models by any definition. Regarding the remaining 49%, which refer to 3D models, different definitions of terms and metadata were found, resulting in confusion and misinterpretation. The overall assessment of these geo-portals clearly shows that the provided metadata do not integrate specific and common information about 3D geospatial models. Accordingly, the main objective of this research is to improve 3D geospatial model discovery in geo-portals by adding a specific metadata-set. Based on the knowledge and current practices on 3D modeling, and 3D data acquisition and management, a set of metadata is proposed to increase its suitability for 3D geospatial models. This metadata-set enables the definition of genuine classes, fields, and code-lists for a 3D metadata profile. The main structure of the proposal contains 21 metadata classes. These classes are classified in three packages as General and Complementary on contextual and structural information, and Availability on the transition from storage to delivery format. The proposed metadata set is compared with Canadian Geospatial

  13. California sea otter (Enhydra lutris nereis) census results, Spring 2017

    Science.gov (United States)

    Tinker, M. Tim; Hatfield, Brian B.

    2017-09-29

    The 2017 census of southern sea otters (Enhydra lutris nereis) was conducted between late April and early July along the mainland coast of central California and in April at San Nicolas Island in southern California. The 3-year average of combined counts from the mainland range and San Nicolas Island was 3,186, down by 86 sea otters from the previous year. This is the second year that the official index has exceeded 3,090, the Endangered Species Act delisting threshold identified in the U.S. Fish and Wildlife Service’s Southern Sea Otter Recovery Plan (the threshold would need to be exceeded for 3 consecutive years before delisting consideration). The 5-year average trend in abundance, including both the mainland range and San Nicolas Island populations, remains positive at 2.3 percent per year. Continuing lack of growth in the range peripheries likely explains the cessation of range expansion.

  14. A document centric metadata registration tool constructing earth environmental data infrastructure

    Science.gov (United States)

    Ichino, M.; Kinutani, H.; Ono, M.; Shimizu, T.; Yoshikawa, M.; Masuda, K.; Fukuda, K.; Kawamoto, H.

    2009-12-01

    DIAS (Data Integration and Analysis System) is one of the GEOSS activities in Japan. It is also a leading part of the GEOSS task with the same name defined in the GEOSS Ten Year Implementation Plan. The main mission of DIAS is to construct a data infrastructure that can effectively integrate earth environmental data such as observation data, numerical model outputs, and socio-economic data provided from the fields of climate, water cycle, ecosystem, ocean, biodiversity and agriculture. Some of DIAS's data products are available at http://www.jamstec.go.jp/e/medid/dias. Most earth environmental data commonly have spatial and temporal attributes such as the covering geographic scope or the created date. The metadata standards including these common attributes are published by the ISO (International Organization for Standardization) geographic information technical committee (TC211) as ISO 19115:2003 and ISO 19139:2007. Accordingly, DIAS metadata is developed based on the ISO/TC211 metadata standards. From the viewpoint of data users, metadata is useful not only for data retrieval and analysis but also for interoperability and information sharing among experts, beginners, and nonprofessionals. On the other hand, from the viewpoint of data providers, two problems were pointed out after discussions. One is that data providers prefer to minimize the additional tasks and time spent creating metadata. The other is that data providers want to manage and publish documents that explain their data sets more comprehensively. To solve these problems, we have been developing a document-centric metadata registration tool. The features of our tool are that the generated documents are available instantly and there is no extra cost for data providers to generate metadata. The tool is developed as a Web application, so data providers need nothing more than a web browser. The interface of the tool
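    To make the "no extra cost" idea concrete, here is a minimal sketch (standard-library Python, assuming document-style inputs a provider writes anyway) that emits a small ISO 19139-flavoured fragment; it is a simplified subset for illustration only, not a complete, valid ISO record and not the DIAS tool itself.

    ```python
    import xml.etree.ElementTree as ET

    GMD = "http://www.isotc211.org/2005/gmd"
    GCO = "http://www.isotc211.org/2005/gco"
    ET.register_namespace("gmd", GMD)
    ET.register_namespace("gco", GCO)

    def char_string(parent, gmd_name, text):
        """Append <gmd:name><gco:CharacterString>text</...></...> to parent."""
        el = ET.SubElement(parent, f"{{{GMD}}}{gmd_name}")
        cs = ET.SubElement(el, f"{{{GCO}}}CharacterString")
        cs.text = text
        return el

    # Document-style inputs a data provider might write anyway.
    doc = {"title": "Example ocean dataset", "abstract": "Illustrative abstract text."}

    record = ET.Element(f"{{{GMD}}}MD_Metadata")
    ident = ET.SubElement(
        ET.SubElement(record, f"{{{GMD}}}identificationInfo"),
        f"{{{GMD}}}MD_DataIdentification",
    )
    citation = ET.SubElement(ET.SubElement(ident, f"{{{GMD}}}citation"),
                             f"{{{GMD}}}CI_Citation")
    char_string(citation, "title", doc["title"])
    char_string(ident, "abstract", doc["abstract"])

    print(ET.tostring(record, encoding="unicode"))
    ```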

  15. Automated Metadata Extraction

    Science.gov (United States)

    2008-06-01

    Excerpted fragments from the report: files purchased from the iTunes Music Store include metadata such as the name and email address of the purchaser, the year, and the album. Other formats discussed include MP3 and AAC music files, the Tagged Image File Format, the Moving Picture Experts Group (MPEG) set of standards for music encoding, and the Open Document Format (ODF), an open, license-free, and clearly documented file format.

  16. Learning Object Metadata in a Web-Based Learning Environment

    NARCIS (Netherlands)

    Avgeriou, Paris; Koutoumanos, Anastasios; Retalis, Symeon; Papaspyrou, Nikolaos

    2000-01-01

    The plethora and variance of learning resources embedded in modern web-based learning environments require a mechanism to enable their structured administration. This goal can be achieved by defining metadata on them and constructing a system that manages the metadata in the context of the learning

  17. Automating the Extraction of Metadata from Archaeological Data Using iRods Rules

    Directory of Open Access Journals (Sweden)

    David Walling

    2011-10-01

    The Texas Advanced Computing Center and the Institute for Classical Archaeology at the University of Texas at Austin developed a method that uses iRods rules and a Jython script to automate the extraction of metadata from digital archaeological data. The first step was to create a record-keeping system to classify the data. The record-keeping system employs file and directory hierarchy naming conventions designed specifically to maintain the relationship between the data objects and map the archaeological documentation process. The metadata implicit in the record-keeping system is automatically extracted upon ingest, combined with additional sources of metadata, and stored alongside the data in the iRods preservation environment. This method enables a more organized workflow for the researchers, helps them archive their data close to the moment of data creation, and avoids error-prone manual metadata input. We describe the types of metadata extracted and provide technical details of the extraction process and storage of the data and metadata.
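    The iRods rules and Jython script themselves are not reproduced here; the following simplified Python sketch only illustrates the underlying idea of deriving metadata from a file and directory naming convention. The convention, project names, and pattern are hypothetical.

    ```python
    import re
    from pathlib import PurePosixPath

    # Hypothetical convention: <project>/<trench>/<locus>/<artefact-id>_<doc-type>.<ext>
    # The real project's conventions and iRods rules differ; this only sketches the idea.
    PATTERN = re.compile(r"(?P<artefact>[A-Z]{2}\d{4})_(?P<doctype>photo|drawing|notes)\.\w+$")

    def metadata_from_path(path):
        """Derive descriptive metadata implicit in the record-keeping hierarchy."""
        p = PurePosixPath(path)
        meta = {"project": p.parts[0], "trench": p.parts[1], "locus": p.parts[2]}
        m = PATTERN.match(p.name)
        if m:
            meta.update(m.groupdict())
        return meta

    print(metadata_from_path("exampleproject/T23/L105/AR0042_photo.tif"))
    # {'project': 'exampleproject', 'trench': 'T23', 'locus': 'L105',
    #  'artefact': 'AR0042', 'doctype': 'photo'}
    ```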

  18. OlyMPUS - The Ontology-based Metadata Portal for Unified Semantics

    Science.gov (United States)

    Huffer, E.; Gleason, J. L.

    2015-12-01

    The Ontology-based Metadata Portal for Unified Semantics (OlyMPUS), funded by the NASA Earth Science Technology Office Advanced Information Systems Technology program, is an end-to-end system designed to support data consumers and data providers, enabling the latter to register their data sets and provision them with the semantically rich metadata that drives the Ontology-Driven Interactive Search Environment for Earth Sciences (ODISEES). OlyMPUS leverages the semantics and reasoning capabilities of ODISEES to provide data producers with a semi-automated interface for producing the semantically rich metadata needed to support ODISEES' data discovery and access services. It integrates the ODISEES metadata search system with multiple NASA data delivery tools to enable data consumers to create customized data sets for download to their computers, or for NASA Advanced Supercomputing (NAS) facility registered users, directly to NAS storage resources for access by applications running on NAS supercomputers. A core function of NASA's Earth Science Division is research and analysis that uses the full spectrum of data products available in NASA archives. Scientists need to perform complex analyses that identify correlations and non-obvious relationships across all types of Earth System phenomena. Comprehensive analytics are hindered, however, by the fact that many Earth science data products are disparate and hard to synthesize. Variations in how data are collected, processed, gridded, and stored, create challenges for data interoperability and synthesis, which are exacerbated by the sheer volume of available data. Robust, semantically rich metadata can support tools for data discovery and facilitate machine-to-machine transactions with services such as data subsetting, regridding, and reformatting. Such capabilities are critical to enabling the research activities integral to NASA's strategic plans. However, as metadata requirements increase and competing standards emerge

  19. Metadata Quality in Institutional Repositories May be Improved by Addressing Staffing Issues

    Directory of Open Access Journals (Sweden)

    Elizabeth Stovold

    2016-09-01

    A Review of: Moulaison, S. H., & Dykas, F. (2016). High-quality metadata and repository staffing: Perceptions of United States–based OpenDOAR participants. Cataloging & Classification Quarterly, 54(2), 101-116. http://dx.doi.org/10.1080/01639374.2015.1116480 Objective – To investigate the quality of institutional repository metadata, metadata practices, and identify barriers to quality. Design – Survey questionnaire. Setting – The OpenDOAR online registry of worldwide repositories. Subjects – A random sample of 50 from 358 administrators of institutional repositories in the United States of America listed in the OpenDOAR registry. Methods – The authors surveyed a random sample of administrators of American institutional repositories included in the OpenDOAR registry. The survey was distributed electronically. Recipients were asked to forward the email if they felt someone else was better suited to respond. There were questions about the demographics of the repository, the metadata creation environment, metadata quality, standards and practices, and obstacles to quality. Results were analyzed in Excel, and qualitative responses were coded by two researchers together. Main results – There was a 42% (n=21) response rate to the section on metadata quality, a 40% (n=20) response rate to the metadata creation section, and 40% (n=20) to the section on obstacles to quality. The majority of respondents rated their metadata quality as average (65%, n=13) or above average (30%, n=5). No one rated the quality as high or poor, while 10% (n=2) rated the quality as below average. The survey found that the majority of descriptive metadata was created by professional (84%, n=16) or paraprofessional (53%, n=10) library staff. Professional staff were commonly involved in creating administrative metadata, reviewing the metadata, and selecting standards and documentation. Department heads and advisory committees were also involved in standards and documentation

  20. Studies of Big Data metadata segmentation between relational and non-relational databases

    Science.gov (United States)

    Golosova, M. V.; Grigorieva, M. A.; Klimentov, A. A.; Ryabinkin, E. A.; Dimitrov, G.; Potekhin, M.

    2015-12-01

    In recent years, the concept of Big Data has become well established in IT. Systems managing large data volumes produce metadata that describe data and workflows. These metadata are used to obtain information about the current system state and for statistical and trend analysis of the processes these systems drive. Over time, the amount of stored metadata can grow dramatically. In this article we present our studies to demonstrate how metadata storage scalability and performance can be improved by using a hybrid RDBMS/NoSQL architecture.

  1. Studies of Big Data metadata segmentation between relational and non-relational databases

    CERN Document Server

    Golosova, M V; Klimentov, A A; Ryabinkin, E A; Dimitrov, G; Potekhin, M

    2015-01-01

    In recent years, the concept of Big Data has become well established in IT. Systems managing large data volumes produce metadata that describe data and workflows. These metadata are used to obtain information about the current system state and for statistical and trend analysis of the processes these systems drive. Over time, the amount of stored metadata can grow dramatically. In this article we present our studies to demonstrate how metadata storage scalability and performance can be improved by using a hybrid RDBMS/NoSQL architecture.

  2. Chemical composition of volatiles from Opuntia littoralis, Opuntia ficus-indica, and Opuntia prolifera growing on Catalina Island, California.

    Science.gov (United States)

    Wright, Cynthia R; Setzer, William N

    2014-01-01

    The essential oils from the cladodes of Opuntia littoralis, Opuntia ficus-indica and Opuntia prolifera growing wild on Santa Catalina Island, California, were obtained by hydrodistillation and analysed by gas chromatography-mass spectrometry (GC-MS). Terpenoids were the dominant class of volatiles in O. littoralis, with the two main components being the furanoid forms of cis-linalool oxide (10.8%) and trans-linalool oxide (8.8%). Fatty acid-derived compounds dominated the essential oil of O. ficus-indica with linoleic acid (22.3%), palmitic acid (12.7%), lauric acid (10.5%) and myristic acid (4.2%) as major fatty acids. O. prolifera oil was composed of 46.6% alkanes and the primary hydrocarbon component was heptadecane (19.2%). Sixteen compounds were common to all the three Opuntia species.

  3. Integrated Array/Metadata Analytics

    Science.gov (United States)

    Misev, Dimitar; Baumann, Peter

    2015-04-01

    Data comes in various forms and types, and integration usually presents a problem that is often simply ignored or solved with ad-hoc solutions. Multidimensional arrays are a ubiquitous data type that we find at the core of virtually all science and engineering domains, as sensor, model, image, and statistics data. Naturally, arrays are richly described by and intertwined with additional metadata (alphanumeric relational data, XML, JSON, etc.). Database systems, however, a fundamental building block of what we call "Big Data", lack adequate support for modelling and expressing these array data/metadata relationships. Array analytics is hence quite primitive, or non-existent altogether, in modern relational DBMSs. Recognizing this, we extended SQL with a new SQL/MDA part, seamlessly integrating multidimensional array analytics into the standard database query language. We demonstrate the benefits of SQL/MDA with real-world examples executed in ASQLDB, an open-source mediator system based on HSQLDB and rasdaman, that already implements SQL/MDA.
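    Since SQL/MDA itself is not shown in the abstract, the sketch below instead illustrates the status quo the authors criticize: an ad-hoc integration in which the array is stored as an opaque blob next to its metadata (Python with sqlite3 and NumPy), so SQL can query the metadata but not the array contents.

    ```python
    import io
    import json
    import sqlite3
    import numpy as np

    # Ad-hoc array/metadata integration: the array is opaque to SQL, so queries
    # can only touch the metadata columns, never the array values themselves.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE scene (id TEXT, acquired TEXT, meta TEXT, pixels BLOB)")

    array = np.random.rand(4, 4).astype(np.float32)
    buf = io.BytesIO()
    np.save(buf, array)
    conn.execute(
        "INSERT INTO scene VALUES (?, ?, ?, ?)",
        ("scene-001", "2015-04-01", json.dumps({"sensor": "example"}), buf.getvalue()),
    )

    # Any array-level analytics (subsetting, aggregation, ...) must happen outside SQL.
    row = conn.execute("SELECT meta, pixels FROM scene WHERE acquired >= '2015-01-01'").fetchone()
    meta, pixels = json.loads(row[0]), np.load(io.BytesIO(row[1]))
    print(meta, pixels.mean())
    ```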

  4. Metadata and Ontologies in Learning Resources Design

    Science.gov (United States)

    Vidal C., Christian; Segura Navarrete, Alejandra; Menéndez D., Víctor; Zapata Gonzalez, Alfredo; Prieto M., Manuel

    Resource design and development requires knowledge about educational goals, instructional context, and information about learners' characteristics, among others. An important source of this knowledge is metadata. However, metadata by themselves do not provide all the information needed for resource design. Here we argue the need to use different data and knowledge models to improve understanding of the complex processes related to e-learning resources and their management. This paper presents the use of semantic web technologies, such as ontologies, to support the search and selection of resources used in design. Classification is done, based on instructional criteria derived from a knowledge acquisition process, using information provided by the IEEE LOM metadata standard. The knowledge obtained is represented in an ontology using OWL and SWRL. In this work we give evidence of the implementation of a Learning Object Classifier based on an ontology. We demonstrate that the use of ontologies can support the design activities in e-learning.

  5. A case for user-generated sensor metadata

    Science.gov (United States)

    Nüst, Daniel

    2015-04-01

    Cheap and easy-to-use sensing technology and new developments in ICT towards a global network of sensors and actuators promise previously unthought-of changes for our understanding of the environment. Large professional as well as amateur sensor networks exist, and they are used for specific yet diverse applications across domains such as hydrology, meteorology or early warning systems. However, the impact this "abundance of sensors" has had so far is somewhat disappointing. There is a gap between (community-driven) sensor networks that could provide very useful data and the users of the data. In our presentation, we argue this is due to a lack of metadata which allows determining the fitness for use of a dataset. Syntactic and semantic interoperability for sensor webs have made great progress and continue to be an active field of research, yet they often are quite complex, which is of course due to the complexity of the problem at hand. But still, we see that the most generic information for determining fitness for use is a dataset's provenance, because it allows users to make up their own minds independently from existing classification schemes for data quality. In this work we make the case that curated user-contributed metadata has the potential to improve this situation. This especially applies to scenarios in which an observed property is applicable in different domains, and to set-ups where the understanding of metadata concepts and (meta-)data quality differs between data provider and user. On the one hand, a citizen does not understand the ISO provenance metadata. On the other hand, a researcher might find issues in publicly accessible time series published by citizens, which the latter might not be aware of or care about. Because users will have to determine fitness for use for each application on their own anyway, we suggest an online collaboration platform for user-generated metadata based on an extremely simplified data model. In the most basic fashion
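    The abstract does not spell out the "extremely simplified data model"; the following Python sketch is one guess at what such a model could look like, with invented field names, pairing a dataset reference with free-text, user-contributed provenance notes.

    ```python
    from dataclasses import dataclass, field
    from datetime import datetime

    # Invented, deliberately minimal model: a dataset reference plus a list of
    # free-text provenance notes contributed by citizens and researchers alike.
    @dataclass
    class ProvenanceNote:
        author: str
        created: datetime
        text: str          # e.g. "sensor relocated", "step change confirmed in QC"

    @dataclass
    class SensorDatasetEntry:
        dataset_uri: str
        observed_property: str
        notes: list = field(default_factory=list)

    entry = SensorDatasetEntry(
        dataset_uri="http://example.org/timeseries/42",   # hypothetical dataset
        observed_property="water level",
    )
    entry.notes.append(ProvenanceNote("citizen_42", datetime(2015, 4, 1),
                                      "gauge cleaned, offset may change"))
    entry.notes.append(ProvenanceNote("researcher_7", datetime(2015, 4, 3),
                                      "step change confirmed in quality control"))
    for note in entry.notes:
        print(note.author, note.text)
    ```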

  6. FSA 2002 Digital Orthophoto Metadata

    Data.gov (United States)

    Minnesota Department of Natural Resources — Metadata for the 2002 FSA Color Orthophotos Layer. Each orthophoto is represented by a Quarter 24k Quad tile polygon. The polygon attributes contain the quarter-quad...

  7. ONEMercury: Towards Automatic Annotation of Earth Science Metadata

    Science.gov (United States)

    Tuarob, S.; Pouchard, L. C.; Noy, N.; Horsburgh, J. S.; Palanisamy, G.

    2012-12-01

    Earth sciences have become more data-intensive, requiring access to heterogeneous data collected from multiple places, times, and thematic scales. For example, research on climate change may involve exploring and analyzing observational data such as the migration of animals and temperature shifts across the earth, as well as various model-observation inter-comparison studies. Recently, DataONE, a federated data network built to facilitate access to and preservation of environmental and ecological data, has been established. ONEMercury has recently been implemented as part of the DataONE project to serve as a portal for discovering and accessing environmental and observational data across the globe. ONEMercury harvests metadata from the data hosted by multiple data repositories and makes it searchable via a common search interface built upon cutting-edge search engine technology, allowing users to interact with the system, intelligently filter the search results on the fly, and fetch the data from distributed data sources. Linking data from heterogeneous sources always has a cost. A problem that ONEMercury faces is the different levels of annotation in the harvested metadata records. Poorly annotated records tend to be missed during the search process as they lack meaningful keywords. Furthermore, such records would not be compatible with the advanced search functionality offered by ONEMercury, as the interface requires that a metadata record be semantically annotated. The explosion of the number of metadata records harvested from an increasing number of data repositories makes it impossible to annotate the harvested records manually, underscoring the need for a tool capable of automatically annotating poorly curated metadata records. In this paper, we propose a topic-model (TM) based approach for automatic metadata annotation. Our approach mines topics in the set of well annotated records and suggests keywords for poorly annotated records based on topic similarity. We utilize the
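    A toy illustration of the topic-model idea (not the authors' implementation), using scikit-learn's LDA: topics are learned from well-annotated records, and the top terms of a poorly annotated record's dominant topic are suggested as candidate keywords. The corpus and parameters are placeholders.

    ```python
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation
    import numpy as np

    # Toy stand-ins for well-annotated abstracts harvested from repositories.
    corpus = [
        "soil moisture observations from in situ sensors across watersheds",
        "soil carbon flux measurements and eddy covariance towers",
        "ocean temperature and salinity profiles from autonomous floats",
        "sea surface temperature satellite retrievals for the north atlantic",
    ]

    vectorizer = CountVectorizer(stop_words="english")
    X = vectorizer.fit_transform(corpus)
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

    # A poorly annotated record: infer its topic mix, then suggest the top words
    # of its dominant topic as candidate keywords.
    poor_record = ["temperature profiles in the ocean"]
    topic_mix = lda.transform(vectorizer.transform(poor_record))[0]
    dominant = int(np.argmax(topic_mix))
    terms = np.array(vectorizer.get_feature_names_out())
    top_words = terms[np.argsort(lda.components_[dominant])[::-1][:5]]
    print("suggested keywords:", list(top_words))
    ```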

  8. Metadata as a means for correspondence on digital media

    NARCIS (Netherlands)

    Stouffs, R.; Kooistra, J.; Tuncer, B.

    2004-01-01

    Metadata derive their action from their association to data and from the relationship they maintain with this data. An interpretation of this action is that the metadata lays claim to the data collection to which it is associated, where the claim is successful if the data collection gains quality as

  9. The sexual practices of Asian and Pacific Islander high school students.

    Science.gov (United States)

    Schuster, M A; Bell, R M; Nakajima, G A; Kanouse, D E

    1998-10-01

    To describe the sexual behaviors, beliefs, and attitudes of Asian and Pacific Islander California high school students and to compare them to other racial/ethnic groups. Data were collected from an anonymous self-administered survey of 2026 ninth to 12th graders in a Los Angeles County school district; 186 of the respondents described themselves as Asian and Pacific Islander. The survey was conducted in April 1992. A higher percentage of Asian and Pacific Islander adolescents (73%) compared with African-American (28%, p masturbation of or by a partner, fellatio with ejaculation, cunnilingus, and anal intercourse. Few students in any group reported homosexual genital sexual activities. Asians and Pacific Islanders who had had vaginal intercourse were more likely than most other groups to have used a condom at first vaginal intercourse, but Asians and Pacific Islanders had not used condoms more consistently over the prior year. Asians and Pacific Islanders were more likely to expect parental disapproval if they had vaginal intercourse and less likely to think that their peers had had vaginal intercourse. Asian and Pacific Islander high school students in one California school district appear to be at lower sexual risk than other racial/ethnic groups. However, a large minority are engaging in activities that can transmit disease and lead to unwanted pregnancy. Therefore, current efforts to develop culturally sensitive clinical and community-based approaches to sexual risk prevention should include Asians and Pacific Islanders.

  10. Finding Atmospheric Composition (AC) Metadata

    Science.gov (United States)

    Strub, Richard F..; Falke, Stefan; Fiakowski, Ed; Kempler, Steve; Lynnes, Chris; Goussev, Oleg

    2015-01-01

    The Atmospheric Composition Portal (ACP) is an aggregator and curator of information related to remotely sensed atmospheric composition data and analysis. It uses existing tools and technologies and, where needed, enhances those capabilities to provide interoperable access, tools, and contextual guidance for scientists and value-adding organizations using remotely sensed atmospheric composition data. The initial focus is on Essential Climate Variables identified by the Global Climate Observing System: CH4, CO, CO2, NO2, O3, SO2, and aerosols. This poster addresses our efforts in building the ACP Data Table, an interface to help discover and understand remotely sensed data that are related to atmospheric composition science and applications. We harvested the GCMD, CWIC, and GEOSS metadata catalogs using machine-to-machine technologies (OpenSearch, Web Services). We also manually investigated the plethora of CEOS data provider portals and other catalogs where that data might be aggregated. This poster presents our experience of the excellence, variety, and challenges we encountered. Conclusions: (1) The significant benefits that the major catalogs provide are their machine-to-machine tools, like OpenSearch and Web Services, rather than any GUI usability improvements, due to the large amount of data in their catalogs. (2) There is a trend at the large catalogs towards simulating small data provider portals through advanced services. (3) Populating metadata catalogs using ISO 19115 is too complex for users to do in a consistent way, difficult to parse visually or with XML libraries, and too complex for Java XML binders like Castor. (4) The ability to search for IDs first and then for data (GCMD and ECHO) is better for machine-to-machine operations rather than the timeouts experienced when returning the entire metadata entry at once. (5) Metadata harvest and export activities between the major catalogs have led to a significant amount of duplication. (This is currently being addressed.) (6) Most (if not all

  11. 33 CFR 334.1160 - San Pablo Bay, Calif.; target practice area, Mare Island Naval Shipyard, Vallejo.

    Science.gov (United States)

    2010-07-01

    ... practice area, Mare Island Naval Shipyard, Vallejo. 334.1160 Section 334.1160 Navigation and Navigable... REGULATIONS § 334.1160 San Pablo Bay, Calif.; target practice area, Mare Island Naval Shipyard, Vallejo. (a..., Mare Island Naval Shipyard, Vallejo, California, will conduct target practice in the area at intervals...

  12. Separation of metadata and pixel data to speed DICOM tag morphing.

    Science.gov (United States)

    Ismail, Mahmoud; Philbin, James

    2013-01-01

    The DICOM information model combines pixel data and metadata in a single DICOM object. It is not possible to access the metadata separately from the pixel data. There are use cases where only the metadata is accessed. The current DICOM object format increases the running time of those use cases. Tag morphing is one of those use cases. Tag morphing includes deletion, insertion, or manipulation of one or more of the metadata attributes. It is typically used for order reconciliation on study acquisition, or to localize the issuer of patient ID (IPID) and the patient ID attributes when data from one domain is transferred to a different domain. In this work, we propose using Multi-Series DICOM (MSD) objects, which separate metadata from pixel data and remove duplicate attributes, to reduce the time required for tag morphing. The time required to update a set of study attributes in each format is compared. The results show that the MSD format significantly reduces the time required for tag morphing.
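    For orientation, the sketch below shows the kind of tag morphing being benchmarked, using the third-party pydicom package on a conventional single-object DICOM file; the MSD format proposed in the paper is not what pydicom reads or writes, and the file path and attribute choices are placeholders.

    ```python
    import pydicom
    from pydicom.uid import generate_uid

    # Illustrative tag morphing on a conventional single-object DICOM file.
    ds = pydicom.dcmread("input.dcm")           # path is a placeholder

    # De-identification-style edits: manipulate, delete, and insert attributes.
    ds.PatientName = "ANONYMOUS"
    ds.PatientID = "000000"
    if "IssuerOfPatientID" in ds:
        del ds.IssuerOfPatientID                 # delete an attribute
    ds.DeidentificationMethod = "basic profile (illustrative)"   # insert an attribute
    ds.SOPInstanceUID = generate_uid()           # keep object identifiers consistent

    ds.save_as("output.dcm")
    ```

    Note that with the conventional format the whole object, bulk pixel data included, is read and rewritten even though only the metadata changed, which is precisely the overhead the metadata/pixel-data separation is meant to avoid.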

  13. Mercury- Distributed Metadata Management, Data Discovery and Access System

    Science.gov (United States)

    Palanisamy, Giri; Wilson, Bruce E.; Devarakonda, Ranjeet; Green, James M.

    2007-12-01

    Mercury is a federated metadata harvesting, search and retrieval tool based on both open source and ORNL-developed software. It was originally developed for NASA, and the Mercury development consortium now includes funding from NASA, USGS, and DOE. Mercury supports various metadata standards including XML, Z39.50, FGDC, Dublin-Core, Darwin-Core, EML, and ISO-19115 (under development). Mercury provides a single portal to information contained in disparate data management systems. It collects metadata and key data from contributing project servers distributed around the world and builds a centralized index. The Mercury search interfaces then allow the users to perform simple, fielded, spatial and temporal searches across these metadata sources. This centralized repository of metadata with distributed data sources provides extremely fast search results to the user, while allowing data providers to advertise the availability of their data and maintain complete control and ownership of that data. Mercury supports various projects including: ORNL DAAC, NBII, DADDI, LBA, NARSTO, CDIAC, OCEAN, I3N, IAI, ESIP and ARM. The new Mercury system is based on a Service Oriented Architecture and supports various services such as Thesaurus Service, Gazetteer Web Service and UDDI Directory Services. This system also provides various search services including: RSS, Geo-RSS, OpenSearch, Web Services and Portlets. Other features include: Filtering and dynamic sorting of search results, book-markable search results, save, retrieve, and modify search criteria.

  14. Cytometry metadata in XML

    Science.gov (United States)

    Leif, Robert C.; Leif, Stephanie H.

    2016-04-01

    Introduction: The International Society for Advancement of Cytometry (ISAC) has created a standard for the Minimum Information about a Flow Cytometry Experiment (MIFlowCyt 1.0). CytometryML will serve as a common metadata standard for flow and image cytometry (digital microscopy). Methods: The MIFlowCyt data-types were created, as is the rest of CytometryML, in the XML Schema Definition Language (XSD1.1). The datatypes are primarily based on the Flow Cytometry and the Digital Imaging and Communications in Medicine (DICOM) standards. A small section of the code was formatted with standard HTML formatting elements (p, h1, h2, etc.). Results: 1) The part of MIFlowCyt that describes the Experimental Overview, including the specimen and substantial parts of several other major elements, has been implemented as CytometryML XML schemas (www.cytometryml.org). 2) The feasibility of using MIFlowCyt to provide the combination of an overview, table of contents, and/or an index of a scientific paper or a report has been demonstrated. Previously, a sample electronic publication, EPUB, was created that could contain both MIFlowCyt metadata as well as the binary data. Conclusions: The use of CytometryML technology together with XHTML5 and CSS permits the metadata to be directly formatted and, together with the binary data, to be stored in an EPUB container. This will facilitate formatting, data-mining, presentation, data verification, and inclusion in structured research, clinical, and regulatory documents, as well as demonstrate a publication's adherence to the MIFlowCyt standard and promote interoperability, and should also result in the textual and numeric data being published using web technology without any change in composition.

  15. Prediabetes in California: Nearly Half of California Adults on Path to Diabetes.

    Science.gov (United States)

    Babey, Susan H; Wolstein, Joelle; Diamant, Allison L; Goldstein, Harold

    2016-03-01

    In California, more than 13 million adults (46 percent of all adults in the state) are estimated to have prediabetes or undiagnosed diabetes. An additional 2.5 million adults have diagnosed diabetes. Altogether, 15.5 million adults (55 percent of all California adults) have prediabetes or diabetes. Although rates of prediabetes increase with age, rates are also high among young adults, with one-third of those ages 18-39 having prediabetes. In addition, rates of prediabetes are disproportionately high among young adults of color, with more than one-third of Latino, Pacific Islander, American Indian, African-American, and multiracial Californians ages 18-39 estimated to have prediabetes. Policy efforts should focus on reducing the burden of prediabetes and diabetes through support for prevention and treatment.

  16. OAI-PMH repositories : quality issues regarding metadata and protocol compliance, tutorial 1

    CERN Multimedia

    CERN. Geneva; Cole, Tim

    2005-01-01

    This tutorial will provide an overview of emerging guidelines and best practices for OAI data providers and how they relate to expectations and needs of service providers. The audience should already be familiar with OAI protocol basics and have at least some experience with either data provider or service provider implementations. The speakers will present both protocol compliance best practices and general recommendations for creating and disseminating high-quality "shareable metadata". Protocol best practices discussion will include coverage of OAI identifiers, date-stamps, deleted records, sets, resumption tokens, about containers, branding, error conditions, HTTP server issues, and repository lifecycle issues. Discussion of what makes for good, shareable metadata will cover topics including character encoding, namespace and XML schema issues, metadata crosswalk issues, support of multiple metadata formats, general metadata authoring recommendations, specific recommendations for use of Dublin Core elemen...
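    As background for the protocol topics listed above, here is a minimal OAI-PMH harvesting loop in standard-library Python that requests oai_dc records, skips deleted records, and follows resumption tokens; the endpoint URL is a placeholder.

    ```python
    import urllib.request
    import xml.etree.ElementTree as ET

    OAI = "{http://www.openarchives.org/OAI/2.0/}"
    DC = "{http://purl.org/dc/elements/1.1/}"
    BASE = "https://example.org/oai"    # placeholder endpoint

    def list_records(base_url):
        """Harvest oai_dc records, following resumption tokens until exhausted."""
        params = "?verb=ListRecords&metadataPrefix=oai_dc"
        while params:
            with urllib.request.urlopen(base_url + params) as resp:
                root = ET.fromstring(resp.read())
            for rec in root.iter(OAI + "record"):
                header = rec.find(OAI + "header")
                if header.get("status") == "deleted":
                    continue    # best practice: respect deleted-record markers
                identifier = header.findtext(OAI + "identifier")
                titles = [t.text for t in rec.iter(DC + "title")]
                yield identifier, titles
            token = root.findtext(f".//{OAI}resumptionToken")
            params = f"?verb=ListRecords&resumptionToken={token}" if token else None

    # for oai_id, titles in list_records(BASE):
    #     print(oai_id, titles)
    ```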

  17. Web Services and Data Enhancements at the Northern California Earthquake Data Center

    Science.gov (United States)

    Neuhauser, D. S.; Zuzlewski, S.; Lombard, P. N.; Allen, R. M.

    2013-12-01

    The Northern California Earthquake Data Center (NCEDC) provides data archive and distribution services for seismological and geophysical data sets that encompass northern California. The NCEDC is enhancing its ability to deliver rapid information through Web Services. NCEDC Web Services use well-established web server and client protocols and REST software architecture to allow users to easily make queries using web browsers or simple program interfaces and to receive the requested data in real-time rather than through batch or email-based requests. Data are returned to the user in the appropriate format such as XML, RESP, simple text, or MiniSEED depending on the service and selected output format. The NCEDC offers the following web services that are compliant with the International Federation of Digital Seismograph Networks (FDSN) web services specifications: (1) fdsn-dataselect: time series data delivered in MiniSEED format, (2) fdsn-station: station and channel metadata and time series availability delivered in StationXML format, (3) fdsn-event: earthquake event information delivered in QuakeML format. In addition, the NCEDC offers the following IRIS-compatible web services: (1) sacpz: provide channel gains, poles, and zeros in SAC format, (2) resp: provide channel response information in RESP format, (3) dataless: provide station and channel metadata in Dataless SEED format. The NCEDC is also developing a web service to deliver time series from pre-assembled event waveform gathers. The NCEDC has waveform gathers for ~750,000 northern and central California events from 1984 to the present, many of which were created by the USGS NCSN prior to the establishment of the joint NCSS (Northern California Seismic System). We are currently adding waveforms to these older event gathers with time series from the UCB networks and other networks with waveforms archived at the NCEDC, and ensuring that the waveforms for each channel in the event gathers have the highest
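    A small sketch of querying an FDSN-compliant station service for StationXML metadata with standard-library Python; the service root below is assumed from the description and should be checked against current NCEDC documentation, and the network code is just an example.

    ```python
    import urllib.parse
    import urllib.request
    import xml.etree.ElementTree as ET

    # Assumed service root; verify against the NCEDC documentation.
    BASE = "https://service.ncedc.org/fdsnws/station/1/query"

    params = urllib.parse.urlencode({
        "network": "BK",           # example network code
        "level": "channel",
        "starttime": "2013-01-01",
        "format": "xml",           # StationXML
    })

    with urllib.request.urlopen(f"{BASE}?{params}") as resp:
        tree = ET.fromstring(resp.read())

    NS = "{http://www.fdsn.org/xml/station/1}"
    for net in tree.iter(NS + "Network"):
        for sta in net.iter(NS + "Station"):
            print(net.get("code"), sta.get("code"),
                  sta.findtext(NS + "Site/" + NS + "Name"))
    ```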

  18. Provenance metadata gathering and cataloguing of EFIT++ code execution

    Energy Technology Data Exchange (ETDEWEB)

    Lupelli, I., E-mail: ivan.lupelli@ccfe.ac.uk [CCFE, Culham Science Centre, Abingdon, Oxon OX14 3DB (United Kingdom); Muir, D.G.; Appel, L.; Akers, R.; Carr, M. [CCFE, Culham Science Centre, Abingdon, Oxon OX14 3DB (United Kingdom); Abreu, P. [Instituto de Plasmas e Fusão Nuclear, Instituto Superior Técnico, Universidade de Lisboa, 1049-001 Lisboa (Portugal)

    2015-10-15

    Highlights: • An approach for automatic gathering of provenance metadata has been presented. • A provenance metadata catalogue has been created. • The overhead in the code runtime is less than 10%. • The metadata/data size ratio is about ∼20%. • A visualization interface based on Gephi, has been presented. - Abstract: Journal publications, as the final product of research activity, are the result of an extensive complex modeling and data analysis effort. It is of paramount importance, therefore, to capture the origins and derivation of the published data in order to achieve high levels of scientific reproducibility, transparency, internal and external data reuse and dissemination. The consequence of the modern research paradigm is that high performance computing and data management systems, together with metadata cataloguing, have become crucial elements within the nuclear fusion scientific data lifecycle. This paper describes an approach to the task of automatically gathering and cataloguing provenance metadata, currently under development and testing at Culham Center for Fusion Energy. The approach is being applied to a machine-agnostic code that calculates the axisymmetric equilibrium force balance in tokamaks, EFIT++, as a proof of principle test. The proposed approach avoids any code instrumentation or modification. It is based on the observation and monitoring of input preparation, workflow and code execution, system calls, log file data collection and interaction with the version control system. Pre-processing, post-processing, and data export and storage are monitored during the code runtime. Input data signals are captured using a data distribution platform called IDAM. The final objective of the catalogue is to create a complete description of the modeling activity, including user comments, and the relationship between data output, the main experimental database and the execution environment. For an intershot or post-pulse analysis (∼1000

  19. Provenance metadata gathering and cataloguing of EFIT++ code execution

    International Nuclear Information System (INIS)

    Lupelli, I.; Muir, D.G.; Appel, L.; Akers, R.; Carr, M.; Abreu, P.

    2015-01-01

    Highlights: • An approach for automatic gathering of provenance metadata has been presented. • A provenance metadata catalogue has been created. • The overhead in the code runtime is less than 10%. • The metadata/data size ratio is about ∼20%. • A visualization interface based on Gephi, has been presented. - Abstract: Journal publications, as the final product of research activity, are the result of an extensive complex modeling and data analysis effort. It is of paramount importance, therefore, to capture the origins and derivation of the published data in order to achieve high levels of scientific reproducibility, transparency, internal and external data reuse and dissemination. The consequence of the modern research paradigm is that high performance computing and data management systems, together with metadata cataloguing, have become crucial elements within the nuclear fusion scientific data lifecycle. This paper describes an approach to the task of automatically gathering and cataloguing provenance metadata, currently under development and testing at Culham Center for Fusion Energy. The approach is being applied to a machine-agnostic code that calculates the axisymmetric equilibrium force balance in tokamaks, EFIT++, as a proof of principle test. The proposed approach avoids any code instrumentation or modification. It is based on the observation and monitoring of input preparation, workflow and code execution, system calls, log file data collection and interaction with the version control system. Pre-processing, post-processing, and data export and storage are monitored during the code runtime. Input data signals are captured using a data distribution platform called IDAM. The final objective of the catalogue is to create a complete description of the modeling activity, including user comments, and the relationship between data output, the main experimental database and the execution environment. For an intershot or post-pulse analysis (∼1000

  20. Fast processing of digital imaging and communications in medicine (DICOM) metadata using multiseries DICOM format.

    Science.gov (United States)

    Ismail, Mahmoud; Philbin, James

    2015-04-01

    The digital imaging and communications in medicine (DICOM) information model combines pixel data and its metadata in a single object. There are user scenarios that only need metadata manipulation, such as deidentification and study migration. Most picture archiving and communication systems use a database to store and update the metadata rather than updating the raw DICOM files themselves. The multiseries DICOM (MSD) format separates metadata from pixel data and eliminates duplicate attributes. This work promotes storing DICOM studies in MSD format to reduce the metadata processing time. A set of experiments is performed that update the metadata of a set of DICOM studies for deidentification and migration. The studies are stored in both the traditional single-frame DICOM (SFD) format and the MSD format. The results show that it is faster to update studies' metadata in MSD format than in SFD format because the bulk data is separated in MSD and is not retrieved from the storage system. In addition, it is space efficient to store the deidentified studies in MSD format as it shares the same bulk data object with the original study. In summary, separation of metadata from pixel data using the MSD format provides fast metadata access and speeds up applications that process only the metadata.

  1. The Use of Metadata Visualisation to Assist Information Retrieval

    Science.gov (United States)

    2007-10-01

    Excerpted fragments from the report: music files carry metadata tags in a format called ID3, typically containing the artist, the song title, the album title, the track length, and the genre of music. Any of these pieces of information can be used to quickly search for and locate specific tracks, to provide more information about the entire music collection, or to find similar or diverse tracks within the collection.
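    Since the fragments above concern ID3, the following self-contained Python sketch reads the fixed-layout ID3v1 tag (the last 128 bytes of an MP3 file); note that ID3v1 does not carry track length, and many files use the richer ID3v2 format instead, which this sketch does not parse.

    ```python
    def read_id3v1(path):
        """Read an ID3v1 tag (the last 128 bytes of an MP3 file), if present."""
        with open(path, "rb") as f:
            f.seek(0, 2)                  # jump to end of file
            if f.tell() < 128:
                return None
            f.seek(-128, 2)
            block = f.read(128)
        if block[:3] != b"TAG":
            return None                   # no ID3v1 tag (the file may use ID3v2 instead)
        text = lambda b: b.split(b"\x00", 1)[0].decode("latin-1", "replace").strip()
        return {
            "title":   text(block[3:33]),
            "artist":  text(block[33:63]),
            "album":   text(block[63:93]),
            "year":    text(block[93:97]),
            "comment": text(block[97:127]),
            "genre":   block[127],        # index into the ID3v1 genre list
        }

    # print(read_id3v1("song.mp3"))       # path is a placeholder
    ```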

  2. Chirp subbottom profile data collected in 2015 from the northern Chandeleur Islands, Louisiana

    Science.gov (United States)

    Forde, Arnell S.; DeWitt, Nancy T.; Fredericks, Jake J.; Miselis, Jennifer L.

    2018-01-30

    As part of the Barrier Island Evolution Research project, scientists from the U.S. Geological Survey (USGS) St. Petersburg Coastal and Marine Science Center conducted a nearshore geophysical survey around the northern Chandeleur Islands, Louisiana, in September 2015. The objective of the project is to improve the understanding of barrier island geomorphic evolution, particularly storm-related depositional and erosional processes that shape the islands over annual to interannual time scales (1–5 years). Collecting geophysical data can help researchers identify relations between the geologic history of the islands and their present day morphology and sediment distribution. High-resolution geophysical data collected along this rapidly changing barrier island system can provide a unique time-series dataset to further the analyses and geomorphological interpretations of this and other coastal systems, improving our understanding of coastal response and evolution over medium-term time scales (months to years). Subbottom profile data were collected in September 2015 offshore of the northern Chandeleur Islands, during USGS Field Activity Number 2015-331-FA. Data products, including raw digital chirp subbottom data, processed subbottom profile images, survey trackline map, navigation files, geographic information system data files and formal Federal Geographic Data Committee metadata, and Field Activity Collection System and operation logs are available for download.

  3. The sedimentological characteristics and geochronology of the marshes of Dauphin Island, Alabama

    Science.gov (United States)

    Ellis, Alisha M.; Smith, Christopher G.; Marot, Marci E.

    2018-03-22

    In August 2015, scientists from the U.S. Geological Survey, St. Petersburg Coastal and Marine Science Center collected 11 push cores from the marshes of Dauphin Island and Little Dauphin Island, Alabama. Sample site environments included high marshes, low salt marshes, and salt flats, and varied in distance from the shoreline. The sampling efforts were part of a larger study to assess the feasibility and sustainability of proposed restoration efforts for Dauphin Island, Alabama, and to identify trends in shoreline erosion and accretion. The data presented in this publication can provide a basis for assessing organic and inorganic sediment accumulation rates and temporal changes in accumulation rates over multiple decades at multiple locations across the island. This study was funded by the National Fish and Wildlife Foundation, via the Gulf Environmental Benefit Fund. This report serves as an archive for the sedimentological and geochemical data derived from the marsh cores. Downloadable data are available and include Microsoft Excel spreadsheets (.xlsx), comma-separated values (.csv) text files, JPEG files, and formal Federal Geographic Data Committee metadata in a U.S. Geological Survey data release.

  4. Linked data for libraries, archives and museums how to clean, link and publish your metadata

    CERN Document Server

    Hooland, Seth van

    2014-01-01

    This highly practical handbook teaches you how to unlock the value of your existing metadata through cleaning, reconciliation, enrichment and linking and how to streamline the process of new metadata creation. Libraries, archives and museums are facing up to the challenge of providing access to fast growing collections whilst managing cuts to budgets. Key to this is the creation, linking and publishing of good quality metadata as Linked Data that will allow their collections to be discovered, accessed and disseminated in a sustainable manner. Metadata experts Seth van Hooland and Ruben Verborgh introduce the key concepts of metadata standards and Linked Data and how they can be practically applied to existing metadata, giving readers the tools and understanding to achieve maximum results with limited re...

  5. Normalized Metadata Generation for Human Retrieval Using Multiple Video Surveillance Cameras

    Directory of Open Access Journals (Sweden)

    Jaehoon Jung

    2016-06-01

    Since it is impossible for surveillance personnel to keep monitoring videos from a multiple camera-based surveillance system, an efficient technique is needed to help recognize important situations by retrieving the metadata of an object-of-interest. In a multiple camera-based surveillance system, an object detected in a camera has a different shape in another camera, which is a critical issue of wide-range, real-time surveillance systems. In order to address the problem, this paper presents an object retrieval method by extracting the normalized metadata of an object-of-interest from multiple, heterogeneous cameras. The proposed metadata generation algorithm consists of three steps: (i) generation of a three-dimensional (3D) human model; (ii) human object-based automatic scene calibration; and (iii) metadata generation. More specifically, an appropriately generated 3D human model provides the foot-to-head direction information that is used as the input of the automatic calibration of each camera. The normalized object information is used to retrieve an object-of-interest in a wide-range, multiple-camera surveillance system in the form of metadata. Experimental results show that the 3D human model matches the ground truth, and automatic calibration-based normalization of metadata enables a successful retrieval and tracking of a human object in the multiple-camera video surveillance system.

  6. Shared Geospatial Metadata Repository for Ontario University Libraries: Collaborative Approaches

    Science.gov (United States)

    Forward, Erin; Leahey, Amber; Trimble, Leanne

    2015-01-01

    Successfully providing access to special collections of digital geospatial data in academic libraries relies upon complete and accurate metadata. Creating and maintaining metadata using specialized standards is a formidable challenge for libraries. The Ontario Council of University Libraries' Scholars GeoPortal project, which created a shared…

  7. Statistical Data Processing with R – Metadata Driven Approach

    Directory of Open Access Journals (Sweden)

    Rudi SELJAK

    2016-06-01

    In recent years, the Statistical Office of the Republic of Slovenia has put a lot of effort into re-designing its statistical process. We replaced the classical stove-pipe oriented production system with general software solutions based on the metadata-driven approach. This means that one general program code, which is parametrized with process metadata, is used for data processing for a particular survey. Currently, the general program code is entirely based on SAS macros, but in the future we would like to explore how successfully the statistical software R can be used for this approach. The paper describes the metadata-driven principle for data validation, the generic software solution, and the main issues connected with the use of the statistical software R for this approach.
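    Language aside (the office weighs SAS against R; the sketch below uses Python purely for illustration), the metadata-driven principle can be shown in a few lines: a single generic routine whose behaviour comes entirely from rule metadata. The variables and rules are invented.

    ```python
    # A minimal sketch of the metadata-driven idea: one generic routine whose
    # behaviour is parameterized entirely by process metadata, here simple
    # validation rules. Variable names and rules are invented for illustration.
    RULES = [
        {"variable": "turnover", "check": "non_negative"},
        {"variable": "employees", "check": "in_range", "min": 0, "max": 10000},
        {"variable": "region", "check": "in_set", "values": {"east", "west", "central"}},
    ]

    CHECKS = {
        "non_negative": lambda v, r: v >= 0,
        "in_range":     lambda v, r: r["min"] <= v <= r["max"],
        "in_set":       lambda v, r: v in r["values"],
    }

    def validate(record, rules=RULES):
        """Generic validator: the survey-specific logic lives in the metadata."""
        errors = []
        for rule in rules:
            value = record.get(rule["variable"])
            if value is None or not CHECKS[rule["check"]](value, rule):
                errors.append(rule["variable"])
        return errors

    print(validate({"turnover": -5, "employees": 120, "region": "north"}))
    # ['turnover', 'region']
    ```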

  8. Urban Heat Island and Park Cool Island Intensities in the Coastal City of Aracaju, North-Eastern Brazil

    Directory of Open Access Journals (Sweden)

    Max Anjos

    2017-08-01

    Full Text Available In this study, an evaluation of the Urban Heat Island (UHI) and Park Cool Island (PCI) intensities in Aracaju, North-Eastern Brazil, was performed. The basis of our evaluation is a 2-year dataset from the urban climatological network installed with the principles and concepts defined for urban areas related to climatic scales, siting and exposure, urban morphology, and metadata. The current findings update UHI intensities in Aracaju, refuting the trend registered in previous studies. On average, the UHI was more intense in the cool season (1.3 °C) than in the hot season (0.5 °C), which was caused by wind speed decrease. In relation to the PCI, mitigation of high air temperatures of 1.5–2 °C on average was registered in the city. However, the urban park is not always cooler than the surrounding built environment. Consistent long-term monitoring in the cities is very important to provide more accurate climatic information about the UHI and PCI to be applied in urban planning properly, e.g., to provide pleasant thermal comfort in urban spaces.

  9. 33 CFR 334.1100 - San Pablo Bay, Carquinez Strait, and Mare Island Strait in vicinity of U.S. Naval Shipyard, Mare...

    Science.gov (United States)

    2010-07-01

    ... part of the Navy Yard, Mare Island, south of the causeway between the City of Vallejo and Mare Island... Commander, Mare Island Naval Shipyard, Vallejo, California, shall navigate, anchor or moor in this area. [26...

  10. A Comparative Study on Metadata Scheme of Chinese and American Open Data Platforms

    Directory of Open Access Journals (Sweden)

    Yang Sinan

    2018-01-01

    Full Text Available [Purpose/significance] Open government data is conducive to the rational development and utilization of data resources. It can encourage social innovation and promote economic development. Furthermore, to ensure effective utilization and social increment of open government data, high-quality metadata schemes are necessary. [Method/process] Firstly, this paper analyzed the related research on open government data at home and abroad. Then, it investigated the open metadata schemes of several major Chinese local government data platforms, and compared them with the metadata standard for American open government data. [Result/conclusion] This paper reveals several shortcomings of Chinese local government open data that limit its usefulness: different governments use different metadata schemes, dataset descriptions are too simple for further utilization, and datasets are usually presented in HTML web-page format with low machine readability. Therefore, the Chinese government should develop a standardized metadata scheme by drawing on mature and effective international metadata standards, to meet the social need for high-quality, high-value data.

  11. EUDAT B2FIND : A Cross-Discipline Metadata Service and Discovery Portal

    Science.gov (United States)

    Widmann, Heinrich; Thiemann, Hannes

    2016-04-01

    The European Data Infrastructure (EUDAT) project aims at a pan-European environment that supports a variety of research communities and individuals in managing the rising tide of scientific data with advanced data management technologies. This led to the establishment of the community-driven Collaborative Data Infrastructure that implements common data services and storage resources to tackle the basic requirements and the specific challenges of international and interdisciplinary research data management. The metadata service B2FIND plays a central role in this context by providing a simple and user-friendly discovery portal to find research data collections stored in EUDAT data centers or in other repositories. For this we store the diverse metadata collected from heterogeneous sources in a comprehensive joint metadata catalogue and make them searchable in an open data portal. The implemented metadata ingestion workflow consists of three steps. First the metadata records - provided either by various research communities or via other EUDAT services - are harvested. Afterwards the raw metadata records are converted and mapped to unified key-value dictionaries as specified by the B2FIND schema. The semantic mapping of the non-uniform, community-specific metadata to homogeneous structured datasets is hereby the most subtle and challenging task. To assure and improve the quality of the metadata, this mapping process is accompanied by • iterative and intense exchange with the community representatives, • usage of controlled vocabularies and community specific ontologies and • formal and semantic validation. Finally the mapped and checked records are uploaded as datasets to the catalogue, which is based on the open source data portal software CKAN. CKAN provides a rich RESTful JSON API and uses SOLR for dataset indexing that enables users to query and search in the catalogue. The homogenization of the community specific data models and vocabularies enables not
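
    A hedged sketch of the harvest–map–upload idea described above: community-specific records are mapped to unified key-value dictionaries before being loaded into the joint catalogue. The community names, source field names and target keys below stand in for the B2FIND schema and are assumptions for illustration, not the project's actual mapping tables.

    COMMUNITY_MAPS = {
        "clarin": {"dc:title": "title", "dc:creator": "creator", "dc:subject": "keywords"},
        "enes":   {"variable_long_name": "title", "institute_id": "creator", "experiment": "keywords"},
    }

    def map_record(raw, community):
        """Convert one harvested record into the unified key-value dictionary."""
        field_map = COMMUNITY_MAPS[community]
        mapped = {target: raw.get(source) for source, target in field_map.items()}
        mapped["community"] = community
        return mapped

    print(map_record({"dc:title": "Corpus X", "dc:creator": "Institute Y"}, "clarin"))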

  12. FSA 2003-2004 Digital Orthophoto Metadata

    Data.gov (United States)

    Minnesota Department of Natural Resources — Metadata for the 2003-2004 FSA Color Orthophotos Layer. Each orthophoto is represented by a Quarter 24k Quad tile polygon. The polygon attributes contain the...

  13. USGS Digital Orthophoto Quad (DOQ) Metadata

    Data.gov (United States)

    Minnesota Department of Natural Resources — Metadata for the USGS DOQ Orthophoto Layer. Each orthophoto is represented by a Quarter 24k Quad tile polygon. The polygon attributes contain the quarter-quad tile...

  14. DESIGN AND PRACTICE ON METADATA SERVICE SYSTEM OF SURVEYING AND MAPPING RESULTS BASED ON GEONETWORK

    Directory of Open Access Journals (Sweden)

    Z. Zha

    2012-08-01

    Full Text Available Based on analysis and research of current geographic information sharing and metadata services, we design, develop and deploy a distributed metadata service system based on GeoNetwork covering more than 30 nodes in provincial units of China. By identifying the advantages of GeoNetwork, we design a distributed metadata service system of national surveying and mapping results. It consists of 31 network nodes, a central node and a portal. Network nodes are the direct source of system metadata and are distributed around the country. Each network node maintains a metadata service system, responsible for metadata uploading and management. The central node harvests metadata from network nodes using the OGC CSW 2.0.2 standard interface. The portal shows all metadata in the central node, provides users with a variety of methods and interfaces for metadata search and querying, and also provides management capabilities for connecting the central node and the network nodes together. There are defects in GeoNetwork too. Accordingly, we made improvements and optimizations for large-volume metadata uploading, synchronization and concurrent access. For metadata uploading and synchronization, by carefully analyzing the database and index operation logs, we successfully avoided the performance bottlenecks, and with a batch operation and dynamic memory management solution, data throughput and system performance are significantly improved. For concurrent access, through a request coding and results cache solution, query performance is greatly improved. To smoothly respond to huge numbers of concurrent requests, a web cluster solution is deployed. This paper also gives an experimental analysis and compares the system performance before and after improvement and optimization. The design and practical results have been applied in the national metadata service system of surveying and mapping results. It proved that the improved GeoNetwork service architecture can effectively adapt to
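
    An illustrative sketch of the harvesting interface mentioned above: a CSW 2.0.2 GetRecords request such as the central node could issue against a network node, here built with plain HTTP rather than GeoNetwork's own harvester. The endpoint URL is a placeholder and the parameter set is a minimal key-value-pair request, not the system's actual configuration.

    import requests
    import xml.etree.ElementTree as ET

    endpoint = "http://example-node/geonetwork/srv/eng/csw"   # placeholder GeoNetwork CSW endpoint
    params = {
        "service": "CSW", "version": "2.0.2", "request": "GetRecords",
        "typeNames": "csw:Record", "elementSetName": "summary",
        "resultType": "results", "maxRecords": "50", "startPosition": "1",
    }
    response = requests.get(endpoint, params=params, timeout=30)
    root = ET.fromstring(response.content)
    ns = {"csw": "http://www.opengis.net/cat/csw/2.0.2", "dc": "http://purl.org/dc/elements/1.1/"}
    for rec in root.iterfind(".//csw:SummaryRecord", ns):
        title = rec.findtext("dc:title", default="(untitled)", namespaces=ns)
        print(title)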

  15. Extraction of CT dose information from DICOM metadata: automated Matlab-based approach.

    Science.gov (United States)

    Dave, Jaydev K; Gingold, Eric L

    2013-01-01

    The purpose of this study was to extract exposure parameters and dose-relevant indexes of CT examinations from information embedded in DICOM metadata. DICOM dose report files were identified and retrieved from a PACS. An automated software program was used to extract exposure-relevant information from the structured elements in the DICOM metadata of these files. Extracting information from DICOM metadata eliminated potential errors inherent in techniques based on optical character recognition, yielding 100% accuracy.
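
    The study's own program was MATLAB-based; the following Python sketch with pydicom only illustrates the general idea of reading dose-relevant values out of a Radiation Dose Structured Report. The concept names searched for and the file path are examples, since the exact content of the dose report depends on the scanner.

    import pydicom

    def walk_content(items, wanted=("Mean CTDIvol", "DLP")):
        """Recursively scan an SR ContentSequence for named numeric measurements."""
        found = {}
        for item in items:
            name = ""
            if "ConceptNameCodeSequence" in item:
                name = item.ConceptNameCodeSequence[0].CodeMeaning
            if name in wanted and "MeasuredValueSequence" in item:
                found[name] = float(item.MeasuredValueSequence[0].NumericValue)
            if "ContentSequence" in item:
                found.update(walk_content(item.ContentSequence, wanted))
        return found

    ds = pydicom.dcmread("dose_report.dcm")          # placeholder path to a dose report object
    print(walk_content(ds.ContentSequence))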

  16. MMI's Metadata and Vocabulary Solutions: 10 Years and Growing

    Science.gov (United States)

    Graybeal, J.; Gayanilo, F.; Rueda-Velasquez, C. A.

    2014-12-01

    The Marine Metadata Interoperability project (http://marinemetadata.org) held its public opening at AGU's 2004 Fall Meeting. For 10 years since that debut, the MMI guidance and vocabulary sites have served over 100,000 visitors, with 525 community members and continuous Steering Committee leadership. Originally funded by the National Science Foundation, over the years multiple organizations have supported the MMI mission: "Our goal is to support collaborative research in the marine science domain, by simplifying the incredibly complex world of metadata into specific, straightforward guidance. MMI encourages scientists and data managers at all levels to apply good metadata practices from the start of a project, by providing the best guidance and resources for data management, and developing advanced metadata tools and services needed by the community." Now hosted by the Harte Research Institute at Texas A&M University at Corpus Christi, MMI continues to provide guidance and services to the community, and is planning for marine science and technology needs for the next 10 years. In this presentation we will highlight our major accomplishments, describe our recent achievements and imminent goals, and propose a vision for improving marine data interoperability for the next 10 years, including Ontology Registry and Repository (http://mmisw.org/orr) advancements and applications (http://mmisw.org/cfsn).

  17. Advancements in Large-Scale Data/Metadata Management for Scientific Data.

    Science.gov (United States)

    Guntupally, K.; Devarakonda, R.; Palanisamy, G.; Frame, M. T.

    2017-12-01

    Scientific data often comes with complex and diverse metadata which are critical for data discovery and for users. The Online Metadata Editor (OME) tool, which was developed by an Oak Ridge National Laboratory team, effectively manages diverse scientific datasets across several federal data centers, such as DOE's Atmospheric Radiation Measurement (ARM) Data Center and USGS's Core Science Analytics, Synthesis, and Libraries (CSAS&L) project. This presentation will focus mainly on recent developments and future strategies for refining the OME tool within these centers. The ARM OME is a standards-based tool (https://www.archive.arm.gov/armome) that allows scientists to create and maintain metadata about their data products. The tool has been improved with new workflows that help metadata coordinators and submitting investigators to submit and review their data more efficiently. The ARM Data Center's newly upgraded Data Discovery Tool (http://www.archive.arm.gov/discovery) uses rich metadata generated by the OME to enable search and discovery of thousands of datasets, while also providing a citation generator and modern order-delivery techniques like Globus (using GridFTP), Dropbox and THREDDS. The Data Discovery Tool also supports incremental indexing, which allows users to find new data as and when they are added. The USGS CSAS&L search catalog employs a custom version of the OME (https://www1.usgs.gov/csas/ome), which has been upgraded with high-level Federal Geographic Data Committee (FGDC) validations and the ability to reserve and mint Digital Object Identifiers (DOIs). The USGS's Science Data Catalog (SDC) (https://data.usgs.gov/datacatalog) allows users to discover a myriad of science data holdings through a web portal. Recent major upgrades to the SDC and ARM Data Discovery Tool include improved harvesting performance and migration using new search software, such as Apache Solr 6.0 for serving up data/metadata to scientific communities. Our presentation will highlight

  18. ncISO Facilitating Metadata and Scientific Data Discovery

    Science.gov (United States)

    Neufeld, D.; Habermann, T.

    2011-12-01

    Increasing the usability and availability of climate and oceanographic datasets for environmental research requires improved metadata and tools to rapidly locate and access relevant information for an area of interest. Because of the distributed nature of most environmental geospatial data, a common approach is to use catalog services that support queries on metadata harvested from remote map and data services. A key component to effectively using these catalog services is the availability of high quality metadata associated with the underlying data sets. In this presentation, we examine the use of ncISO and Geoportal as open source tools that can be used to document and facilitate access to ocean and climate data available from Thematic Realtime Environmental Distributed Data Services (THREDDS) data services. Many atmospheric and oceanographic spatial data sets are stored in the Network Common Data Format (netCDF) and served through the Unidata THREDDS Data Server (TDS). NetCDF and THREDDS are becoming increasingly accepted in both the scientific and geographic research communities as demonstrated by the recent adoption of netCDF as an Open Geospatial Consortium (OGC) standard. One important source for ocean and atmospheric based data sets is NOAA's Unified Access Framework (UAF) which serves over 3000 gridded data sets from across NOAA and NOAA-affiliated partners. Due to the large number of datasets, browsing the data holdings to locate data is impractical. Working with Unidata, we have created a new service for the TDS called "ncISO", which allows automatic generation of ISO 19115-2 metadata from attributes and variables in TDS datasets. The ncISO metadata records can be harvested by catalog services such as ESSI-labs GI-Cat catalog service, and ESRI's Geoportal which supports query through a number of services, including OpenSearch and Catalog Services for the Web (CSW). ESRI's Geoportal Server provides a number of user friendly search capabilities for end users

  19. Fast processing of digital imaging and communications in medicine (DICOM) metadata using multiseries DICOM format

    OpenAIRE

    Ismail, Mahmoud; Philbin, James

    2015-01-01

    The digital imaging and communications in medicine (DICOM) information model combines pixel data and its metadata in a single object. There are user scenarios that only need metadata manipulation, such as deidentification and study migration. Most picture archiving and communication systems use a database to store and update the metadata rather than updating the raw DICOM files themselves. The multiseries DICOM (MSD) format separates metadata from pixel data and eliminates duplicate attributes...

  20. Improving Earth Science Metadata: Modernizing ncISO

    Science.gov (United States)

    O'Brien, K.; Schweitzer, R.; Neufeld, D.; Burger, E. F.; Signell, R. P.; Arms, S. C.; Wilcox, K.

    2016-12-01

    ncISO is a package of tools developed at NOAA's National Center for Environmental Information (NCEI) that facilitates the generation of ISO 19115-2 metadata from NetCDF data sources. The tool currently exists in two iterations: a command line utility and a web-accessible service within the THREDDS Data Server (TDS). Several projects, including NOAA's Unified Access Framework (UAF), depend upon ncISO to generate the ISO-compliant metadata from their data holdings and use the resulting information to populate discovery tools such as NCEI's ESRI Geoportal and NOAA's data.noaa.gov CKAN system. In addition to generating ISO 19115-2 metadata, the tool calculates a rubric score based on how well the dataset follows the Attribute Conventions for Dataset Discovery (ACDD). The result of this rubric calculation, along with information about what has been included and what is missing, is displayed in an HTML document generated by the ncISO software package. Recently ncISO has fallen behind in terms of supporting updates to conventions such as updates to the ACDD. With the blessing of the original programmer, NOAA's UAF has been working to modernize the ncISO software base. In addition to upgrading ncISO to utilize version 1.3 of the ACDD, we have been working with partners at Unidata and IOOS to unify the tool's code base. In essence, we are merging the command line capabilities into the same software that will now be used by the TDS service, allowing easier updates when conventions such as ACDD are updated in the future. In this presentation, we will discuss the work the UAF project has done to support updated conventions within ncISO, as well as describe how the updated tool is helping to improve metadata throughout the earth and ocean sciences.
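
    An illustrative sketch of the rubric idea only, not ncISO's actual scoring: score a netCDF file by the fraction of ACDD global attributes it carries. The attribute list is a small subset of ACDD 1.3 and the file name is a placeholder.

    from netCDF4 import Dataset

    ACDD_RECOMMENDED = [
        "title", "summary", "keywords", "Conventions", "id", "naming_authority",
        "creator_name", "creator_email", "license", "time_coverage_start", "time_coverage_end",
    ]

    def acdd_score(path):
        """Return the fraction of recommended ACDD attributes present, plus the missing ones."""
        with Dataset(path) as nc:
            present = [a for a in ACDD_RECOMMENDED if a in nc.ncattrs()]
        missing = sorted(set(ACDD_RECOMMENDED) - set(present))
        return len(present) / len(ACDD_RECOMMENDED), missing

    score, missing = acdd_score("example.nc")   # placeholder file name
    print(f"ACDD completeness: {score:.0%}; missing: {missing}")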

  1. Improvements to the Ontology-based Metadata Portal for Unified Semantics (OlyMPUS)

    Science.gov (United States)

    Linsinbigler, M. A.; Gleason, J. L.; Huffer, E.

    2016-12-01

    The Ontology-based Metadata Portal for Unified Semantics (OlyMPUS), funded by the NASA Earth Science Technology Office Advanced Information Systems Technology program, is an end-to-end system designed to support Earth Science data consumers and data providers, enabling the latter to register data sets and provision them with the semantically rich metadata that drives the Ontology-Driven Interactive Search Environment for Earth Sciences (ODISEES). OlyMPUS complements the ODISEES' data discovery system with an intelligent tool to enable data producers to auto-generate semantically enhanced metadata and upload it to the metadata repository that drives ODISEES. Like ODISEES, the OlyMPUS metadata provisioning tool leverages robust semantics, a NoSQL database and query engine, an automated reasoning engine that performs first- and second-order deductive inferencing, and uses a controlled vocabulary to support data interoperability and automated analytics. The ODISEES data discovery portal leverages this metadata to provide a seamless data discovery and access experience for data consumers who are interested in comparing and contrasting the multiple Earth science data products available across NASA data centers. Olympus will support scientists' services and tools for performing complex analyses and identifying correlations and non-obvious relationships across all types of Earth System phenomena using the full spectrum of NASA Earth Science data available. By providing an intelligent discovery portal that supplies users - both human users and machines - with detailed information about data products, their contents and their structure, ODISEES will reduce the level of effort required to identify and prepare large volumes of data for analysis. This poster will explain how OlyMPUS leverages deductive reasoning and other technologies to create an integrated environment for generating and exploiting semantically rich metadata.

  2. Predicting age groups of Twitter users based on language and metadata features.

    Directory of Open Access Journals (Sweden)

    Antonio A Morgan-Lopez

    Full Text Available Health organizations are increasingly using social media, such as Twitter, to disseminate health messages to target audiences. Determining the extent to which the target audience (e.g., age groups) was reached is critical to evaluating the impact of social media education campaigns. The main objective of this study was to examine the separate and joint predictive validity of linguistic and metadata features in predicting the age of Twitter users. We created a labeled dataset of Twitter users across different age groups (youth, young adults, adults) by collecting publicly available birthday announcement tweets using the Twitter Search application programming interface. We manually reviewed results and, for each age-labeled handle, collected the 200 most recent publicly available tweets and user handles' metadata. The labeled data were split into training and test datasets. We created separate models to examine the predictive validity of language features only, metadata features only, language and metadata features, and words/phrases from another age-validated dataset. We estimated accuracy, precision, recall, and F1 metrics for each model. An L1-regularized logistic regression model was conducted for each age group, and predicted probabilities between the training and test sets were compared for each age group. Cohen's d effect sizes were calculated to examine the relative importance of significant features. Models containing both Tweet language features and metadata features performed the best (74% precision, 74% recall, 74% F1) while the model containing only Twitter metadata features was least accurate (58% precision, 60% recall, and 57% F1 score). Top predictive features included use of terms such as "school" for youth and "college" for young adults. Overall, it was more challenging to predict older adults accurately. These results suggest that examining linguistic and Twitter metadata features to predict youth and young adult Twitter users may
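
    A toy sketch of the kind of model described above, not the authors' code or data: an L1-regularized logistic regression over tweet text, trained one-vs-rest per age group with scikit-learn. The example texts, labels and parameters are invented for illustration; the real study also used account metadata features such as those from the user handles.

    from sklearn.pipeline import make_pipeline
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import classification_report

    texts = ["school was so long today", "college midterms are brutal",
             "my kids start school tomorrow", "late night studying at college"]
    labels = ["youth", "young_adult", "adult", "young_adult"]

    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2), min_df=1),
        LogisticRegression(penalty="l1", solver="liblinear", C=1.0),  # L1-regularized, one-vs-rest
    )
    X_train, X_test, y_train, y_test = train_test_split(texts, labels, test_size=0.5, random_state=0)
    model.fit(X_train, y_train)
    print(classification_report(y_test, model.predict(X_test), zero_division=0))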

  3. 2016 Summer California Current Ecosystem CPS Survey (RL1606, EK60)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The cruise sampled the California Current Ecosystem from San Diego, CA to Vancouver Island, BC, CA. Multi-frequency (18-, 38-, 70-, 120-, 200-, and 333-) General...

  4. 2016 Summer California Current Ecosystem CPS Survey (RL1606, EK80)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The cruise sampled the California Current Ecosystem from San Diego, CA to Vancouver Island, BC, CA. Multi-frequency (18-, 38-, 70-, 120-, 200-, and 333-) General...

  5. Document Classification in Support of Automated Metadata Extraction from Heterogeneous Collections

    Science.gov (United States)

    Flynn, Paul K.

    2014-01-01

    A number of federal agencies, universities, laboratories, and companies are placing their documents online and making them searchable via metadata fields such as author, title, and publishing organization. To enable this, every document in the collection must be catalogued using the metadata fields. Though time consuming, the task of identifying…

  6. An Assistant for Loading Learning Object Metadata: An Ontology Based Approach

    Science.gov (United States)

    Casali, Ana; Deco, Claudia; Romano, Agustín; Tomé, Guillermo

    2013-01-01

    In the last years, the development of different Repositories of Learning Objects has been increased. Users can retrieve these resources for reuse and personalization through searches in web repositories. The importance of high quality metadata is key for a successful retrieval. Learning Objects are described with metadata usually in the standard…

  7. A metadata schema for data objects in clinical research.

    Science.gov (United States)

    Canham, Steve; Ohmann, Christian

    2016-11-24

    A large number of stakeholders have accepted the need for greater transparency in clinical research and, in the context of various initiatives and systems, have developed a diverse and expanding number of repositories for storing the data and documents created by clinical studies (collectively known as data objects). To make the best use of such resources, we assert that it is also necessary for stakeholders to agree and deploy a simple, consistent metadata scheme. The relevant data objects and their likely storage are described, and the requirements for metadata to support data sharing in clinical research are identified. Issues concerning persistent identifiers, for both studies and data objects, are explored. A scheme is proposed that is based on the DataCite standard, with extensions to cover the needs of clinical researchers, specifically to provide (a) study identification data, including links to clinical trial registries; (b) data object characteristics and identifiers; and (c) data covering location, ownership and access to the data object. The components of the metadata scheme are described. The metadata schema is proposed as a natural extension of a widely agreed standard to fill a gap not tackled by other standards related to clinical research (e.g., Clinical Data Interchange Standards Consortium, Biomedical Research Integrated Domain Group). The proposal could be integrated with, but is not dependent on, other moves to better structure data in clinical research.
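
    A hedged sketch of the kind of record the proposed scheme implies: DataCite-style core fields plus the clinical extensions for (a) study identification with registry links, (b) data object characteristics, and (c) location, ownership and access. The field names and values below are illustrative assumptions, not the published schema.

    data_object_record = {
        # DataCite-like core
        "identifier": {"value": "10.1234/example-doi", "type": "DOI"},
        "title": "Statistical analysis plan, trial XYZ",
        "creators": ["XYZ Trial Steering Committee"],
        "publicationYear": 2016,
        "resourceType": "Study document",
        # (a) study identification data, including registry links
        "study": {"title": "Trial XYZ",
                  "registry_ids": [{"registry": "ClinicalTrials.gov", "id": "NCT00000000"}]},
        # (b) data object characteristics and identifiers
        "object": {"class": "document", "format": "PDF", "deidentified": True},
        # (c) location, ownership and access
        "access": {"repository": "Example repository",
                   "url": "https://repo.example.org/xyz/sap.pdf",
                   "ownership": "Sponsor", "access_type": "restricted"},
    }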

  8. Metadata-Driven SOA-Based Application for Facilitation of Real-Time Data Warehousing

    Science.gov (United States)

    Pintar, Damir; Vranić, Mihaela; Skočir, Zoran

    Service-oriented architecture (SOA) has already been widely recognized as an effective paradigm for achieving integration of diverse information systems. SOA-based applications can cross boundaries of platforms, operating systems and proprietary data standards, commonly through the usage of Web Services technology. On the other side, metadata is also commonly referred to as a potential integration tool, given the fact that standardized metadata objects can provide useful information about the specifics of unknown information systems with which one wishes to communicate, using an approach commonly called "model-based integration". This paper presents the result of research regarding possible synergy between those two integration facilitators. This is accomplished with a vertical example of a metadata-driven SOA-based business process that provides ETL (Extraction, Transformation and Loading) and metadata services to a data warehousing system in need of real-time ETL support.

  9. Ontology-based Metadata Portal for Unified Semantics

    Data.gov (United States)

    National Aeronautics and Space Administration — The Ontology-based Metadata Portal for Unified Semantics (OlyMPUS) will extend the prototype Ontology-Driven Interactive Search Environment for Earth Sciences...

  10. Urban Heat Islands and Their Mitigation vs. Local Impacts of Climate Change

    Science.gov (United States)

    Taha, H.

    2007-12-01

    Urban heat islands and their mitigation take on added significance, both negative and positive, when viewed from a climate-change perspective. In negative terms, urban heat islands can act as local exacerbating factors, or magnifying lenses, to the effects of regional and large-scale climate perturbations and change. They can locally impact meteorology, energy/electricity generation and use, thermal environment (comfort and heat waves), emissions of air pollutants, photochemistry, and air quality. In positive terms, on the other hand, mitigation of urban heat islands (via urban surface modifications and control of man-made heat, for example) can potentially have a beneficial effect of mitigating the local negative impacts of climate change. In addition, mitigation of urban heat islands can, in itself, contribute to preventing regional and global climate change, even if modestly, by helping reduce CO2 emissions from power plants and other sources as a result of decreased energy use for cooling (both direct and indirect) and reducing the rates of meteorology-dependent emissions of air pollutants. This presentation will highlight aspects and characteristics of heat islands, their mitigation, their modeling and quantification techniques, and recent advances in meso-urban modeling of California (funded by the California Energy Commission). In particular, the presentation will focus on results from quantitative, modeling-based analyses of the potential benefits of heat island mitigation in 1) reducing point- and area-source emissions of CO2, NOx, and VOC as a result of reduced cooling energy demand and ambient/surface temperatures, 2) reducing evaporative and fugitive hydrocarbon emissions as a result of lowered temperatures, 3) reducing biogenic hydrocarbon emissions from existing vegetative cover, 4) slowing the rates of tropospheric/ground-level ozone formation and/or accumulation in the urban boundary layer, and 5) helping improve air quality. Quantitative estimates

  11. Mercury, monomethyl mercury, and dissolved organic carbon concentrations in surface water entering and exiting constructed wetlands treated with metal-based coagulants, Twitchell Island, California

    Science.gov (United States)

    Stumpner, Elizabeth B.; Kraus, Tamara E.C.; Fleck, Jacob A.; Hansen, Angela M.; Bachand, Sandra M.; Horwath, William R.; DeWild, John F.; Krabbenhoft, David P.; Bachand, Philip A.M.

    2015-09-02

    Coagulation with metal-based salts is a practice commonly employed by drinking-water utilities to decrease particle and dissolved organic carbon concentrations in water. In addition to decreasing dissolved organic carbon concentrations, the effectiveness of iron- and aluminum-based coagulants for decreasing dissolved concentrations both of inorganic and monomethyl mercury in water was demonstrated in laboratory studies that used agricultural drainage water from the Sacramento–San Joaquin Delta of California. To test the effectiveness of this approach at the field scale, nine 15-by-40‑meter wetland cells were constructed on Twitchell Island that received untreated water from island drainage canals (control) or drainage water treated with polyaluminum chloride or ferric sulfate coagulants. Surface-water samples were collected approximately monthly during November 2012–September 2013 from the inlets and outlets of the wetland cells and then analyzed by the U.S. Geological Survey for total concentrations of mercury and monomethyl mercury in filtered (less than 0.3 micrometers) and suspended-particulate fractions and for concentrations of dissolved organic carbon.

  12. Southwest Regional Climate Hub and California Subsidiary Hub assessment of climate change vulnerability and adaptation and mitigation strategies

    Science.gov (United States)

    Emile Elias; Caiti Steele; Kris Havstad; Kerri Steenwerth; Jeanne Chambers; Helena Deswood; Amber Kerr; Albert Rango; Mark Schwartz; Peter Stine; Rachel Steele

    2015-01-01

    This report is a joint effort of the Southwest Regional Climate Hub and the California Subsidiary Hub (Sub Hub). The Southwest Regional Climate Hub covers Arizona, California, Hawai‘i and the U.S. affiliated Pacific Islands, Nevada, New Mexico, and Utah and contains vast areas of western rangeland, forests, and high-value specialty crops (Figure 1). The California Sub...

  13. Radiological dose and metadata management

    International Nuclear Information System (INIS)

    Walz, M.; Madsack, B.; Kolodziej, M.

    2016-01-01

    This article describes the features of management systems currently available in Germany for extraction, registration and evaluation of metadata from radiological examinations, particularly in the digital imaging and communications in medicine (DICOM) environment. In addition, the probable relevant developments in this area concerning radiation protection legislation, terminology, standardization and information technology are presented. (orig.) [de

  14. Meta-Data Objects as the Basis for System Evolution

    CERN Document Server

    Estrella, Florida; Tóth, N; Kovács, Z; Le Goff, J M; Clatchey, Richard Mc; Toth, Norbert; Kovacs, Zsolt; Goff, Jean-Marie Le

    2001-01-01

    One of the main factors driving object-oriented software development in the Web- age is the need for systems to evolve as user requirements change. A crucial factor in the creation of adaptable systems dealing with changing requirements is the suitability of the underlying technology in allowing the evolution of the system. A reflective system utilizes an open architecture where implicit system aspects are reified to become explicit first-class (meta-data) objects. These implicit system aspects are often fundamental structures which are inaccessible and immutable, and their reification as meta-data objects can serve as the basis for changes and extensions to the system, making it self- describing. To address the evolvability issue, this paper proposes a reflective architecture based on two orthogonal abstractions - model abstraction and information abstraction. In this architecture the modeling abstractions allow for the separation of the description meta-data from the system aspects they represent so that th...

  15. Marine subsidies of island communities in the Gulf of California: evidence from stable carbon and nitrogen isotopes

    International Nuclear Information System (INIS)

    Anderson, W.B.; Polis, G.A.

    1998-01-01

    Coastal sites support larger (2 to > 100 x) populations of many consumers than inland sites on islands in the Gulf of California. Previous data suggested that subsidies of energy and nutrients from the ocean allowed large coastal populations. Stable carbon and nitrogen isotopes are frequently used to analyse diet composition of organisms: they are particularly useful to distinguish between diet sources with distinct isotopic signatures, such as marine and terrestrial diets. We analyzed the ¹³C and ¹⁵N concentrations of coastal versus inland spiders and scorpions to test the hypothesis that coastal individuals exhibited more strongly marine-based diets than inland individuals. Coastal spiders and scorpions were significantly more enriched in ¹³C and ¹⁵N than inland spiders and scorpions, suggesting that the coastal individuals consumed more marine-based foods than their inland counterparts. These patterns existed in both drought years and wet El Niño years. However, the marine influence was stronger in drought years when terrestrial productivity was nearly non-existent, than in wet years when terrestrial productivity increased by an order of magnitude. (au)

  16. Batch metadata assignment to archival photograph collections using facial recognition software

    Directory of Open Access Journals (Sweden)

    Kyle Banerjee

    2013-07-01

    Full Text Available Useful metadata is essential to giving individual images meaning and value within the context of a greater image collection, as well as making them more discoverable. However, often little information is available about the photos themselves, so adding consistent metadata to large collections of digital and digitized photographs is a time consuming process requiring highly experienced staff. By using facial recognition software, staff can identify individuals more quickly and reliably. Knowledge of individuals in photos helps staff determine when and where photos were taken and also improves understanding of the subject matter. This article demonstrates simple techniques for using facial recognition software and command line tools to assign, modify, and read metadata for large archival photograph collections.
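
    An illustrative sketch of this workflow (not the article's scripts), assuming the open-source face_recognition package: known individuals are matched against each photo, and the results are written to a CSV sidecar for later batch assignment rather than embedded directly, to keep the example tool-agnostic. All paths and names are placeholders.

    import csv, glob
    import face_recognition

    # Reference encodings for known individuals (reference images are placeholders).
    known = {name: face_recognition.face_encodings(face_recognition.load_image_file(p))[0]
             for name, p in [("Jane Doe", "refs/jane.jpg"), ("John Smith", "refs/john.jpg")]}

    rows = []
    for photo in glob.glob("collection/*.jpg"):
        image = face_recognition.load_image_file(photo)
        for encoding in face_recognition.face_encodings(image):
            matches = face_recognition.compare_faces(list(known.values()), encoding, tolerance=0.6)
            names = [n for n, hit in zip(known, matches) if hit]
            rows.append({"file": photo, "identified": "; ".join(names) or "unknown"})

    with open("photo_subjects.csv", "w", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=["file", "identified"])
        writer.writeheader()
        writer.writerows(rows)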

  17. phosphorus retention data and metadata

    Science.gov (United States)

    Phosphorus retention in wetlands data and metadata. This dataset is associated with the following publication: Lane, C., and B. Autrey. Phosphorus retention of forested and emergent marsh depressional wetlands in differing land uses in Florida, USA. Wetlands Ecology and Management. Springer Science and Business Media B.V. (formerly Kluwer Academic Publishers B.V.), Germany, 24(1): 45-60, (2016).

  18. NCPP's Use of Standard Metadata to Promote Open and Transparent Climate Modeling

    Science.gov (United States)

    Treshansky, A.; Barsugli, J. J.; Guentchev, G.; Rood, R. B.; DeLuca, C.

    2012-12-01

    The National Climate Predictions and Projections (NCPP) Platform is developing comprehensive regional and local information about the evolving climate to inform decision making and adaptation planning. This includes both creating and providing tools to create metadata about the models and processes used to create its derived data products. NCPP is using the Common Information Model (CIM), an ontology developed by a broad set of international partners in climate research, as its metadata language. This use of a standard ensures interoperability within the climate community as well as permitting access to the ecosystem of tools and services emerging alongside the CIM. The CIM itself is divided into a general-purpose (UML & XML) schema which structures metadata documents, and a project or community-specific (XML) Controlled Vocabulary (CV) which constrains the content of metadata documents. NCPP has already modified the CIM Schema to accommodate downscaling models, simulations, and experiments. NCPP is currently developing a CV for use by the downscaling community. Incorporating downscaling into the CIM will lead to several benefits: easy access to the existing CIM Documents describing CMIP5 models and simulations that are being downscaled, access to software tools that have been developed in order to search, manipulate, and visualize CIM metadata, and coordination with national and international efforts such as ES-DOC that are working to make climate model descriptions and datasets interoperable. Providing detailed metadata descriptions which include the full provenance of derived data products will contribute to making that data (and the models and processes which generated that data) more open and transparent to the user community.

  19. Data catalog project—A browsable, searchable, metadata system

    International Nuclear Information System (INIS)

    Stillerman, Joshua; Fredian, Thomas; Greenwald, Martin; Manduchi, Gabriele

    2016-01-01

    Modern experiments are typically conducted by large, extended groups, where researchers rely on other team members to produce much of the data they use. The experiments record very large numbers of measurements that can be difficult for users to find, access and understand. We are developing a system for users to annotate their data products with structured metadata, providing data consumers with a discoverable, browsable data index. Machine understandable metadata captures the underlying semantics of the recorded data, which can then be consumed by both programs, and interactively by users. Collaborators can use these metadata to select and understand recorded measurements. The data catalog project is a data dictionary and index which enables users to record general descriptive metadata, use cases and rendering information as well as providing them a transparent data access mechanism (URI). Users describe their diagnostic including references, text descriptions, units, labels, example data instances, author contact information and data access URIs. The list of possible attribute labels is extensible, but limiting the vocabulary of names increases the utility of the system. The data catalog is focused on the data products and complements process-based systems like the Metadata Ontology Provenance project [Greenwald, 2012; Schissel, 2015]. This system can be coupled with MDSplus to provide a simple platform for data driven display and analysis programs. Sites which use MDSplus can describe tree branches, and if desired create ‘processed data trees’ with homogeneous node structures for measurements. Sites not currently using MDSplus can either use the database to reference local data stores, or construct an MDSplus tree whose leaves reference the local data store. A data catalog system can provide a useful roadmap of data acquired from experiments or simulations making it easier for researchers to find and access important data and understand the meaning of the

  20. Data catalog project—A browsable, searchable, metadata system

    Energy Technology Data Exchange (ETDEWEB)

    Stillerman, Joshua, E-mail: jas@psfc.mit.edu [MIT Plasma Science and Fusion Center, Cambridge, MA (United States); Fredian, Thomas; Greenwald, Martin [MIT Plasma Science and Fusion Center, Cambridge, MA (United States); Manduchi, Gabriele [Consorzio RFX, Euratom-ENEA Association, Corso Stati Uniti 4, Padova 35127 (Italy)

    2016-11-15

    Modern experiments are typically conducted by large, extended groups, where researchers rely on other team members to produce much of the data they use. The experiments record very large numbers of measurements that can be difficult for users to find, access and understand. We are developing a system for users to annotate their data products with structured metadata, providing data consumers with a discoverable, browsable data index. Machine understandable metadata captures the underlying semantics of the recorded data, which can then be consumed by both programs, and interactively by users. Collaborators can use these metadata to select and understand recorded measurements. The data catalog project is a data dictionary and index which enables users to record general descriptive metadata, use cases and rendering information as well as providing them a transparent data access mechanism (URI). Users describe their diagnostic including references, text descriptions, units, labels, example data instances, author contact information and data access URIs. The list of possible attribute labels is extensible, but limiting the vocabulary of names increases the utility of the system. The data catalog is focused on the data products and complements process-based systems like the Metadata Ontology Provenance project [Greenwald, 2012; Schissel, 2015]. This system can be coupled with MDSplus to provide a simple platform for data driven display and analysis programs. Sites which use MDSplus can describe tree branches, and if desired create ‘processed data trees’ with homogeneous node structures for measurements. Sites not currently using MDSplus can either use the database to reference local data stores, or construct an MDSplus tree whose leaves reference the local data store. A data catalog system can provide a useful roadmap of data acquired from experiments or simulations making it easier for researchers to find and access important data and understand the meaning of the
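
    A minimal sketch of the annotation idea described in the two records above: a small catalog entry carrying descriptive metadata plus a data-access URI, with an extensible set of attribute labels. The attribute names, the dict-based storage and the MDSplus-style URI are illustrative assumptions, not the project's implementation.

    catalog = {}

    def register(name, **attributes):
        """Record descriptive metadata and an access URI for one data product."""
        entry = {"name": name}
        entry.update(attributes)          # e.g. units, labels, description, contact, uri
        catalog[name] = entry
        return entry

    register(
        "magnetics.ip",
        description="Plasma current measured by the Rogowski coil",
        units="A",
        label="Ip",
        contact="diagnostician@example.org",
        uri="mdsplus://server/tree/shot/magnetics.ip",   # placeholder data access URI
    )
    print(catalog["magnetics.ip"]["uri"])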

  1. Current and future plans for wind energy development on San Clemente Island, California

    Energy Technology Data Exchange (ETDEWEB)

    Hurley, P.J.F. [RLA Consulting, Inc., Bothell, WA (United States); Cable, S.B. [Naval Facilities Engineering Service Center, Port Hueneme, CA (United States)

    1996-12-31

    The Navy is considering possible ways to maximize the use of wind energy technology for power supply to their auxiliary landing field and other facilities on San Clemente Island. A summary of their past analysis and future considerations is presented. An analysis was performed regarding the technical and economic feasibility of installing and operating a sea-water pumped hydro/wind energy system to provide for all of the island's electric power needs. Follow-on work to the feasibility study includes wind resource monitoring as well as procurement and preliminary design activities for a first-phase wind-diesel installation. Future plans include the consideration of alternative siting arrangements and the introduction of on-island fresh water production. 3 refs., 4 figs.

  2. Title, Description, and Subject are the Most Important Metadata Fields for Keyword Discoverability

    Directory of Open Access Journals (Sweden)

    Laura Costello

    2016-09-01

    Full Text Available A Review of: Yang, L. (2016). Metadata effectiveness in internet discovery: An analysis of digital collection metadata elements and internet search engine keywords. College & Research Libraries, 77(1), 7-19. http://doi.org/10.5860/crl.77.1.7 Objective – To determine which metadata elements best facilitate discovery of digital collections. Design – Case study. Setting – A public research university serving over 32,000 graduate and undergraduate students in the Southwestern United States of America. Subjects – A sample of 22,559 keyword searches leading to the institution’s digital repository between August 1, 2013, and July 31, 2014. Methods – The author used Google Analytics to analyze 73,341 visits to the institution’s digital repository. He determined that 22,559 of these visits were due to keyword searches. Using Random Integer Generator, the author identified a random sample of 378 keyword searches. The author then matched the keywords with the Dublin Core and VRA Core metadata elements on the landing page in the digital repository to determine which metadata field had drawn the keyword searcher to that particular page. Many of these keywords matched to more than one metadata field, so the author also analyzed the metadata elements that generated unique keyword hits and those fields that were frequently matched together. Main Results – Title was the most matched metadata field with 279 matched keywords from searches. Description and Subject were also significant fields with 208 and 79 matches respectively. Slightly more than half of the results, 195 keywords, matched the institutional repository in one field only. Both Title and Description had significant match rates both independently and in conjunction with other elements, but Subject keywords were the sole match in only three of the sampled cases. Conclusion – The Dublin Core elements of Title, Description, and Subject were the most frequently matched fields in keyword

  3. New Tools to Document and Manage Data/Metadata: Example NGEE Arctic and ARM

    Science.gov (United States)

    Crow, M. C.; Devarakonda, R.; Killeffer, T.; Hook, L.; Boden, T.; Wullschleger, S.

    2017-12-01

    Tools used for documenting, archiving, cataloging, and searching data are critical pieces of informatics. This poster describes tools being used in several projects at Oak Ridge National Laboratory (ORNL), with a focus on the U.S. Department of Energy's Next Generation Ecosystem Experiment in the Arctic (NGEE Arctic) and Atmospheric Radiation Measurements (ARM) project, and their usage at different stages of the data lifecycle. The Online Metadata Editor (OME) is used for the documentation and archival stages while a Data Search tool supports indexing, cataloging, and searching. The NGEE Arctic OME Tool [1] provides a method by which researchers can upload their data and provide original metadata with each upload while adhering to standard metadata formats. The tool is built upon a Java SPRING framework to parse user input into, and from, XML output. Many aspects of the tool require use of a relational database including encrypted user-login, auto-fill functionality for predefined sites and plots, and file reference storage and sorting. The Data Search Tool conveniently displays each data record in a thumbnail containing the title, source, and date range, and features a quick view of the metadata associated with that record, as well as a direct link to the data. The search box incorporates autocomplete capabilities for search terms and sorted keyword filters are available on the side of the page, including a map for geo-searching. These tools are supported by the Mercury [2] consortium (funded by DOE, NASA, USGS, and ARM) and developed and managed at Oak Ridge National Laboratory. Mercury is a set of tools for collecting, searching, and retrieving metadata and data. Mercury collects metadata from contributing project servers, then indexes the metadata to make it searchable using Apache Solr, and provides access to retrieve it from the web page. Metadata standards that Mercury supports include: XML, Z39.50, FGDC, Dublin-Core, Darwin-Core, EML, and ISO-19115.
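
    A sketch of the indexing step mentioned above: pushing one harvested metadata record to an Apache Solr core via its JSON update API. The host, core name and field names are placeholders, not the actual Mercury/ARM configuration.

    import requests

    doc = {
        "id": "ngee-arctic-0001",
        "title": "Soil temperature, Barrow site, 2016",
        "source": "NGEE Arctic",
        "start_date": "2016-01-01T00:00:00Z",
        "keywords": ["soil temperature", "permafrost"],
    }
    resp = requests.post(
        "http://localhost:8983/solr/metadata/update?commit=true",   # placeholder Solr core URL
        json=[doc],
        timeout=30,
    )
    resp.raise_for_status()
    print("indexed", doc["id"])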

  4. System for Earth Sample Registration SESAR: Services for IGSN Registration and Sample Metadata Management

    Science.gov (United States)

    Chan, S.; Lehnert, K. A.; Coleman, R. J.

    2011-12-01

    SESAR, the System for Earth Sample Registration, is an online registry for physical samples collected for Earth and environmental studies. SESAR generates and administers the International Geo Sample Number IGSN, a unique identifier for samples that is dramatically advancing interoperability amongst information systems for sample-based data. SESAR was developed to provide the complete range of registry services, including definition of IGSN syntax and metadata profiles, registration and validation of name spaces requested by users, tools for users to submit and manage sample metadata, validation of submitted metadata, generation and validation of the unique identifiers, archiving of sample metadata, and public or private access to the sample metadata catalog. With the development of SESAR v3, we placed particular emphasis on creating enhanced tools that make metadata submission easier and more efficient for users, and that provide superior functionality for users to manage metadata of their samples in their private workspace MySESAR. For example, SESAR v3 includes a module where users can generate custom spreadsheet templates to enter metadata for their samples, then upload these templates online for sample registration. Once the content of the template is uploaded, it is displayed online in an editable grid format. Validation rules are executed in real-time on the grid data to ensure data integrity. Other new features of SESAR v3 include the capability to transfer ownership of samples to other SESAR users, the ability to upload and store images and other files in a sample metadata profile, and the tracking of changes to sample metadata profiles. In the next version of SESAR (v3.5), we will further improve the discovery, sharing, registration of samples. For example, we are developing a more comprehensive suite of web services that will allow discovery and registration access to SESAR from external systems. Both batch and individual registrations will be possible
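
    An illustrative sketch of the template-validation step described above: check each row of a user-supplied spreadsheet for required sample metadata and plausible coordinates before registration. The required field names and the CSV layout are assumptions for illustration, not the actual SESAR template or validation rules.

    import csv

    REQUIRED = ["sample_name", "material", "latitude", "longitude", "collection_method"]

    def validate_template(path):
        """Return a list of human-readable problems found in the spreadsheet."""
        problems = []
        with open(path, newline="") as fh:
            for lineno, row in enumerate(csv.DictReader(fh), start=2):
                for field in REQUIRED:
                    if not (row.get(field) or "").strip():
                        problems.append(f"row {lineno}: missing {field}")
                try:
                    lat, lon = float(row["latitude"]), float(row["longitude"])
                    if not (-90 <= lat <= 90 and -180 <= lon <= 180):
                        problems.append(f"row {lineno}: coordinates out of range")
                except (KeyError, ValueError):
                    pass  # absent or non-numeric coordinates are skipped here
        return problems

    print(validate_template("samples.csv"))   # placeholder file name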

  5. 33 CFR 334.1170 - San Pablo Bay, Calif.; gunnery range, Naval Inshore Operations Training Center, Mare Island...

    Science.gov (United States)

    2010-07-01

    ... range, Naval Inshore Operations Training Center, Mare Island, Vallejo. 334.1170 Section 334.1170... Operations Training Center, Mare Island, Vallejo. (a) The Danger Zone. A sector in San Pablo Bay delineated..., Vallejo, California, will conduct gunnery practice in the area during the period April 1 through September...

  6. Leveraging Metadata to Create Interactive Images... Today!

    Science.gov (United States)

    Hurt, Robert L.; Squires, G. K.; Llamas, J.; Rosenthal, C.; Brinkworth, C.; Fay, J.

    2011-01-01

    The image gallery for NASA's Spitzer Space Telescope has been newly rebuilt to fully support the Astronomy Visualization Metadata (AVM) standard to create a new user experience both on the website and in other applications. We encapsulate all the key descriptive information for a public image, including color representations and astronomical and sky coordinates and make it accessible in a user-friendly form on the website, but also embed the same metadata within the image files themselves. Thus, images downloaded from the site will carry with them all their descriptive information. Real-world benefits include display of general metadata when such images are imported into image editing software (e.g. Photoshop) or image catalog software (e.g. iPhoto). More advanced support in Microsoft's WorldWide Telescope can open a tagged image after it has been downloaded and display it in its correct sky position, allowing comparison with observations from other observatories. An increasing number of software developers are implementing AVM support in applications and an online image archive for tagged images is under development at the Spitzer Science Center. Tagging images following the AVM offers ever-increasing benefits to public-friendly imagery in all its standard forms (JPEG, TIFF, PNG). The AVM standard is one part of the Virtual Astronomy Multimedia Project (VAMP); http://www.communicatingastronomy.org
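
    A sketch of reading embedded descriptive metadata back out of a tagged image: AVM tags travel inside the XMP packet of a JPEG/TIFF/PNG file, which can be located as an XML block in the raw bytes. The tag names checked below are examples of the kinds of fields AVM carries, not an authoritative list, and the file name is a placeholder.

    import re

    def extract_xmp(path):
        """Return the embedded XMP packet of an image file as text, or None."""
        data = open(path, "rb").read()
        match = re.search(rb"<x:xmpmeta.*?</x:xmpmeta>", data, re.DOTALL)
        return match.group(0).decode("utf-8", errors="replace") if match else None

    xmp = extract_xmp("spitzer_image.jpg")          # placeholder file name
    if xmp:
        for tag in ("avm:ID", "avm:Spatial.ReferenceValue", "dc:description"):
            print(tag, "present" if tag in xmp else "absent")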

  7. Comparative Study of Metadata Elements Used in the Website of Central Library of Universities Subordinate to the Ministry of Science, Research and Technology with the Dublin Core Metadata Elements

    Directory of Open Access Journals (Sweden)

    Kobra Babaei

    2012-03-01

    Full Text Available This research was carried out to study the use of metadata elements on the websites of the central libraries of universities subordinate to the Ministry of Science, Research and Technology, and to compare it with the Dublin Core standard elements. The study was a comparative survey in which 40 academic library websites were examined using the Internet Explorer browser. The HTML pages of these websites were viewed through the View Source menu, and the metadata elements of each website were extracted and entered in a checklist. The data were then analyzed using descriptive statistics (frequency, percentage and mean). Research findings showed that the reviewed websites did not use any Dublin Core metadata elements and that general metadata markup was used in the design of all websites. In terms of the number of metadata elements used, the central libraries of Ferdowsi University of Mashhad and of Iran Science and Industries ranked first with 57%, Shahid Beheshti University ranked second with 49%, and the International University of Imam Khomeini ranked third with 40%. The priorities of the web designers were also determined: content of the resource ranked first, attention to the physical appearance of the resource second, and ownership of the resource third.

  8. Dealing with metadata quality: the legacy of digital library efforts

    OpenAIRE

    Tani, Alice; Candela, Leonardo; Castelli, Donatella

    2013-01-01

    In this work, we elaborate on the meaning of metadata quality by surveying efforts and experiences matured in the digital library domain. In particular, an overview of the frameworks developed to characterize such a multi-faceted concept is presented. Moreover, the most common quality-related problems affecting metadata both during the creation and the aggregation phase are discussed together with the approaches, technologies and tools developed to mitigate them. This survey on digital librar...

  9. Making Information Visible, Accessible, and Understandable: Meta-Data and Registries

    Science.gov (United States)

    2007-07-01

    the data created, the length of play time, album name, and the genre. Without resource metadata, portable digital music players would not be so... notion of a catalog card in a library. An example of metadata is the description of a music file specifying the creator, the artist that performed the song... describe structure and formatting which are critical to interoperability and the management of databases. Going back to the portable music player example

  10. SM4AM: A Semantic Metamodel for Analytical Metadata

    DEFF Research Database (Denmark)

    Varga, Jovan; Romero, Oscar; Pedersen, Torben Bach

    2014-01-01

    Next generation BI systems emerge as platforms where traditional BI tools meet semi-structured and unstructured data coming from the Web. In these settings, the user-centric orientation represents a key characteristic for the acceptance and wide usage by numerous and diverse end users in their data....... We present SM4AM, a Semantic Metamodel for Analytical Metadata created as an RDF formalization of the Analytical Metadata artifacts needed for user assistance exploitation purposes in next generation BI systems. We consider the Linked Data initiative and its relevance for user assistance...

  11. Archive of digital chirp subbottom profile data collected during USGS cruise 11BIM01 Offshore of the Chandeleur Islands, Louisiana, June 2011

    Science.gov (United States)

    Forde, Arnell S.; Dadisman, Shawn V.; Miselis, Jennifer L.; Flocks, James G.; Wiese, Dana S.

    2013-01-01

    From June 3 to 13, 2011, the U.S. Geological Survey conducted a geophysical survey to investigate the geologic controls on barrier island framework and long-term sediment transport along the oil spill mitigation sand berm constructed at the north end and just offshore of the Chandeleur Islands, LA. This effort is part of a broader USGS study, which seeks to better understand barrier island evolution over medium time scales (months to years). This report serves as an archive of unprocessed digital chirp subbottom data, trackline maps, navigation files, Geographic Information System (GIS) files, Field Activity Collection System (FACS) logs, and formal Federal Geographic Data Committee (FGDC) metadata. Gained (showing a relative increase in signal amplitude) digital images of the seismic profiles are also provided.

  12. A Metadata Standard for Hydroinformatic Data Conforming to International Standards

    Science.gov (United States)

    Notay, Vikram; Carstens, Georg; Lehfeldt, Rainer

    2017-04-01

    The affordable availability of computing power and digital storage has been a boon for the scientific community. The hydroinformatics community has also benefitted from the so-called digital revolution, which has enabled the tackling of more and more complex physical phenomena using hydroinformatic models, instruments, sensors, etc. With models getting more and more complex, computational domains getting larger and the resolution of computational grids and measurement data getting finer, a large amount of data is generated and consumed in any hydroinformatics related project. The ubiquitous availability of internet also contributes to this phenomenon with data being collected through sensor networks connected to telecommunications networks and the internet long before the term Internet of Things existed. Although generally good, this exponential increase in the number of available datasets gives rise to the need to describe this data in a standardised way to not only be able to get a quick overview about the data but to also facilitate interoperability of data from different sources. The Federal Waterways Engineering and Research Institute (BAW) is a federal authority of the German Federal Ministry of Transport and Digital Infrastructure. BAW acts as a consultant for the safe and efficient operation of the German waterways. As part of its consultation role, BAW operates a number of physical and numerical models for sections of inland and marine waterways. In order to uniformly describe the data produced and consumed by these models throughout BAW and to ensure interoperability with other federal and state institutes on the one hand and with EU countries on the other, a metadata profile for hydroinformatic data has been developed at BAW. The metadata profile is composed in its entirety using the ISO 19115 international standard for metadata related to geographic information. Due to the widespread use of the ISO 19115 standard in the existing geodata infrastructure

  13. Structural Metadata Research in the Ears Program

    National Research Council Canada - National Science Library

    Liu, Yang; Shriberg, Elizabeth; Stolcke, Andreas; Peskin, Barbara; Ang, Jeremy; Hillard, Dustin; Ostendorf, Mari; Tomalin, Marcus; Woodland, Phil; Harper, Mary

    2005-01-01

    Both human and automatic processing of speech require recognition of more than just words. In this paper we provide a brief overview of research on structural metadata extraction in the DARPA EARS rich transcription program...

  14. The Earthscope USArray Array Network Facility (ANF): Metadata, Network and Data Monitoring, Quality Assurance During the Second Year of Operations

    Science.gov (United States)

    Eakins, J. A.; Vernon, F. L.; Martynov, V.; Newman, R. L.; Cox, T. A.; Lindquist, K. L.; Hindley, A.; Foley, S.

    2005-12-01

    The Array Network Facility (ANF) for the Earthscope USArray Transportable Array seismic network is responsible for: the delivery of all Transportable Array stations (400 at full deployment) and telemetered Flexible Array stations (up to 200) to the IRIS Data Management Center; station command and control; verification and distribution of metadata; providing useful remotely accessible world wide web interfaces for personnel at the Array Operations Facility (AOF) to access state of health information; and quality control for all data. To meet these goals, we use the Antelope software package to facilitate data collection and transfer, generation and merging of the metadata, real-time monitoring of dataloggers, generation of station noise spectra, and analyst review of individual events. Recently, an Antelope extension to the PHP scripting language has been implemented which facilitates the dynamic presentation of the real-time data to local web pages. Metadata transfers have been simplified by the use of orb transfer technologies at the ANF and receiver end points. Web services are being investigated as a means to make a potentially complicated set of operations easy to follow and reproduce for each newly installed or decommissioned station. As part of the quality control process, daily analyst review has highlighted areas where neither the regional network bulletins nor the USGS global bulletin have published solutions. Currently four regional networks (Anza, BDSN, SCSN, and UNR) contribute data to the Transportable Array with additional contributors expected. The first 100 stations (42 new Earthscope stations) were operational by September 2005 with all but one of the California stations installed. By year's end, weather permitting, the total number of stations deployed is expected to be around 145. Visit http://anf.ucsd.edu for more information on the project and current status.

  15. Embedding Metadata and Other Semantics in Word Processing Documents

    Directory of Open Access Journals (Sweden)

    Peter Sefton

    2009-10-01

    Full Text Available This paper describes a technique for embedding document metadata, and potentially other semantic references, inline in word processing documents, which the authors have implemented with the help of a software development team. Several assumptions underlie the approach: it must be available across computing platforms and work with both Microsoft Word (because of its user base) and OpenOffice.org (because of its free availability). Further, the application needs to be acceptable to and usable by users, so the initial implementation covers only a small number of features, which will only be extended after user testing. Within these constraints the system provides a mechanism for encoding not only simple metadata, but also for inferring hierarchical relationships between metadata elements from a 'flat' word processing file. The paper includes links to open source code implementing the techniques as part of a broader suite of tools for academic writing. This addresses tools and software, semantic web and data curation, and integrating curation into research workflows, and will provide a platform for integrating work on ontologies, vocabularies and folksonomies into word processing tools.
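
    The "flat to hierarchical" inference mentioned above can be illustrated with a short sketch: metadata stored inline as flat dotted names is folded back into a tree. The key names below are invented for illustration and are not the fields used by the authors' tools.

    def nest(flat):
        """Infer a hierarchy from flat, dotted metadata names."""
        tree = {}
        for key, value in flat.items():
            node = tree
            parts = key.split(".")
            for part in parts[:-1]:
                node = node.setdefault(part, {})
            node[parts[-1]] = value
        return tree

    # Flat name/value pairs as they might be stored inline in a document.
    flat_metadata = {
        "dc.title": "Embedding Metadata in Word Processing Documents",
        "dc.creator.name": "A. Author",
        "dc.creator.affiliation": "Example University",
        "dc.date": "2009-10-01",
    }
    print(nest(flat_metadata))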

  16. Virtual Environments for Visualizing Structural Health Monitoring Sensor Networks, Data, and Metadata.

    Science.gov (United States)

    Napolitano, Rebecca; Blyth, Anna; Glisic, Branko

    2018-01-16

    Visualization of sensor networks, data, and metadata is becoming one of the most pivotal aspects of the structural health monitoring (SHM) process. Without the ability to communicate efficiently and effectively between disparate groups working on a project, an SHM system can be underused, misunderstood, or even abandoned. For this reason, this work seeks to evaluate visualization techniques in the field, identify flaws in current practices, and devise a new method for visualizing and accessing SHM data and metadata in 3D. More precisely, the work presented here reflects a method and digital workflow for integrating SHM sensor networks, data, and metadata into a virtual reality environment by combining spherical imaging and informational modeling. Both intuitive and interactive, this method fosters communication on a project, enabling diverse practitioners of SHM to efficiently consult and use the sensor networks, data, and metadata. The method is presented through its implementation on a case study, Streicker Bridge on the Princeton University campus. To illustrate the efficiency of the new method, the time and data file size were compared to other potential methods used for visualizing and accessing SHM sensor networks, data, and metadata in 3D. Additionally, feedback from civil engineering students familiar with SHM is used for validation. Recommendations on how different groups working together on an SHM project can create an SHM virtual environment and convey data to the proper audiences are also included.

  17. Towards an Interoperable Field Spectroscopy Metadata Standard with Extended Support for Marine Specific Applications

    Directory of Open Access Journals (Sweden)

    Barbara A. Rasaiah

    2015-11-01

    Full Text Available This paper presents an approach to developing robust metadata standards for specific applications that serves to ensure a high level of reliability and interoperability for a spectroscopy dataset. The challenges of designing a metadata standard that meets the unique requirements of specific user communities are examined, including in situ measurement of reflectance underwater, using coral as a case in point. Metadata schema mappings from seven existing metadata standards demonstrate that they consistently fail to meet the needs of field spectroscopy scientists for general and specific applications (μ = 22%, σ = 32% conformance with the core metadata requirements, and μ = 19%, σ = 18% for the special case of a benthic (e.g., coral) reflectance metadataset). Issues such as field measurement methods, instrument calibration, and data representativeness for marine field spectroscopy campaigns are investigated within the context of submerged benthic measurements. The implications of semantics and syntax for a robust and flexible metadata standard are also considered. A hybrid standard that serves as a “best of breed”, incorporating useful modules and parameters within the standards, is proposed. This paper is Part 3 in a series of papers in this journal, examining the issues central to a metadata standard for field spectroscopy datasets. The results presented in this paper are an important step towards field spectroscopy metadata standards that address the specific needs of field spectroscopy data stakeholders while facilitating dataset documentation, quality assurance, discoverability and data exchange within large-scale information sharing platforms.

  18. Diet patterns of island foxes on San Nicolas Island relative to feral cat removal

    Science.gov (United States)

    Cypher, Brian L.; Kelly, Erica C.; Ferrara, Francesca J.; Drost, Charles A.; Westall, Tory L.; Hudgens, Brian

    2017-01-01

    Island foxes (Urocyon littoralis) are a species of conservation concern that occur on six of the Channel Islands off the coast of southern California. We analysed island fox diet on San Nicolas Island during 2006–12 to assess the influence of the removal of feral cats (Felis catus) on the food use by foxes. Our objective was to determine whether fox diet patterns shifted in response to the cat removal conducted during 2009–10, thus indicating that cats were competing with foxes for food items. We also examined the influence of annual precipitation patterns and fox abundance on fox diet. On the basis of an analysis of 1975 fox scats, use of vertebrate prey – deer mice (Peromyscus maniculatus), birds, and lizards – increased significantly during and after the complete removal of cats (n = 66) from the island. Deer mouse abundance increased markedly during and after cat removal and use of mice by foxes was significantly related to mouse abundance. The increase in mice and shift in item use by the foxes was consistent with a reduction in exploitative competition associated with the cat removal. However, fox abundance declined markedly coincident with the removal of cats and deer mouse abundance was negatively related to fox numbers. Also, annual precipitation increased markedly during and after cat removal and deer mouse abundance closely tracked precipitation. Thus, our results indicate that other confounding factors, particularly precipitation, may have had a greater influence on fox diet patterns.

  19. Testing Metadata Existence of Web Map Services

    Directory of Open Access Journals (Sweden)

    Jan Růžička

    2011-05-01

    Full Text Available For a general user it is quite common to use data sources available on the WWW. Almost all GIS software allows the use of data sources available via the Web Map Service (an ISO/OGC standard interface). The opportunity to use different sources and combine them brings a lot of problems that have been discussed many times at conferences and in journal papers. One of the problems is the non-existence of metadata for published sources. The question was: were the discussions effective? The article is partly based on a comparison of the metadata situation between the years 2007 and 2010. The second part of the article focuses only on the situation in 2010. The paper is written in the context of research on intelligent map systems that can be used for automatic or semi-automatic map creation or map evaluation.
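
    A minimal sketch of the kind of automated test the article implies: request a WMS GetCapabilities document and report, per layer, whether a MetadataURL element is present. The endpoint URL is a placeholder, and only the Python standard library is used.

    import urllib.request
    import xml.etree.ElementTree as ET

    WMS_NS = "http://www.opengis.net/wms"  # WMS 1.3.0 capabilities namespace

    def check_layer_metadata(endpoint):
        url = endpoint + "?service=WMS&request=GetCapabilities&version=1.3.0"
        with urllib.request.urlopen(url, timeout=30) as response:
            tree = ET.parse(response)
        report = {}
        for layer in tree.iter(f"{{{WMS_NS}}}Layer"):
            name = layer.find(f"{{{WMS_NS}}}Name")
            if name is not None:
                report[name.text] = layer.find(f"{{{WMS_NS}}}MetadataURL") is not None
        return report

    # Placeholder endpoint -- substitute any WMS server to run the check.
    for layer, ok in check_layer_metadata("https://example.org/wms").items():
        print(f"{layer}: metadata {'present' if ok else 'MISSING'}")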

  20. Decision Making for Pap Testing among Pacific Islander Women

    Science.gov (United States)

    Weiss, Jie W.; Mouttapa, Michele; Sablan-Santos, Lola; DeGuzman Lacsamana, Jasmine; Quitugua, Lourdes; Park Tanjasiri, Sora

    2016-01-01

    This study employed a Multi-Attribute Utility (MAU) model to examine the Pap test decision-making process among Pacific Islanders (PI) residing in Southern California. A total of 585 PI women were recruited through social networks from Samoan and Tongan churches, and Chamorro family clans. A questionnaire assessed Pap test knowledge, beliefs and…

  1. Geo-Enrichment and Semantic Enhancement of Metadata Sets to Augment Discovery in Geoportals

    Directory of Open Access Journals (Sweden)

    Bernhard Vockner

    2014-03-01

    Full Text Available Geoportals are established to function as the main gateways to find, evaluate, and start “using” geographic information. Still, current geoportal implementations face problems in optimizing the discovery process due to semantic heterogeneity issues, which lead to low recall and low precision in text-based searches. Therefore, we propose an enhanced semantic discovery approach that supports multilingualism and information domain context. We present a workflow that enriches existing structured metadata with synonyms, toponyms, and translated terms derived from user-defined keywords based on multilingual thesauri and ontologies. To make the results easier to understand, we also provide automated translation capabilities for the resource metadata to help the user grasp the thematic content of the descriptive metadata, even if it has been documented in a language the user is not familiar with. In addition, to enable text-based spatial filtering, we add location name keywords to metadata sets. These are derived from the existing bounding box and are intended to tweak discovery scores when performing single-text-line queries. To improve the user’s search experience, we tailor faceted search strategies, presenting an enhanced query interface for geo-metadata discovery that transparently leverages the underlying thesauri and ontologies.
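
    A toy sketch of the enrichment step described above: keywords gain synonyms and translations from a thesaurus, and toponyms whose footprint intersects the record's bounding box are appended as extra location keywords. The thesaurus, gazetteer entries and record are invented stand-ins for the multilingual thesauri, ontologies and toponym services the authors use.

    SYNONYMS = {
        "precipitation": ["rainfall", "Niederschlag"],
        "land cover": ["land use", "Bodenbedeckung"],
    }
    # (name, (min_lon, min_lat, max_lon, max_lat)) -- invented gazetteer entries.
    GAZETTEER = [
        ("Salzburg", (12.9, 47.3, 13.2, 47.9)),
        ("Innsbruck", (11.3, 47.2, 11.5, 47.3)),
    ]

    def bbox_intersects(a, b):
        return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

    def enrich(record):
        enriched = dict(record)
        keywords = list(record["keywords"])
        # Add synonyms and translated terms for each user-defined keyword.
        for kw in record["keywords"]:
            keywords.extend(SYNONYMS.get(kw.lower(), []))
        # Add toponyms whose footprint intersects the record's bounding box.
        keywords.extend(name for name, box in GAZETTEER
                        if bbox_intersects(record["bbox"], box))
        enriched["keywords"] = sorted(set(keywords))
        return enriched

    record = {"title": "Precipitation grid", "keywords": ["precipitation"],
              "bbox": (12.0, 47.0, 14.0, 48.0)}
    print(enrich(record)["keywords"])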

  2. Building capacity for HIV/AIDS prevention among Asian Pacific Islander organizations: the experience of a culturally appropriate capacity-building program in Southern California.

    Science.gov (United States)

    Takahashi, Lois M; Candelario, Jury; Young, Tim; Mediano, Elizabeth

    2007-01-01

    This article has two goals: (1) to outline a conceptual model for culturally appropriate HIV prevention capacity building; (2) to present the experiences from a 3-year program provided by Asian Pacific AIDS Intervention Team to Asian Pacific Islander (API) organizations in southern California. The participating organizations were of two types: lesbian, gay, bisexual, transgender, and questioning (LGBTQ) social organizations and social service agencies not targeting LGBTQ. These organizations were selected for participation because of their commitment to HIV/AIDS issues in API communities. An organizational survey and staff observations were used to explore changes in capacity. The organizations were mostly small, targeted diverse populations, served a large geographic area (southern California as a region), and were knowledgeable about HIV. Organizations became more viable (more capacity in human resources, financial, external relations, and strategic management), but also more unstable (large growth in paid staff and board members), and showed more capacity in HIV knowledge environments (especially less stigma and more sensitivity to diverse populations). The results suggest that capacity can expand over a short period of time, but as capacity increases, organizational viability/stability and HIV knowledge environments change, meaning that different types of technical assistance would be needed for sustainability.

  3. Assessing marine microbial induced corrosion at Santa Catalina Island, California

    Directory of Open Access Journals (Sweden)

    Gustavo Antonio Ramírez

    2016-10-01

    Full Text Available High iron and eutrophic conditions are reported as environmental factors leading to accelerated low-water corrosion, an enhanced form of near-shore microbial-induced corrosion. To explore this hypothesis, we deployed flow-through colonization systems in laboratory-based aquarium tanks under a continuous flow of surface seawater from Santa Catalina Island, California, USA, for periods of two and six months. Substrates consisted of mild steel – a major constituent of maritime infrastructure – and the naturally occurring iron sulfide mineral pyrite. Four conditions were tested: free-venting high-flux conditions; a stagnant condition; an active flow-through condition with seawater slowly pumped over the substrates; and an enrichment condition where the slow pumping of seawater was supplemented with nutrient rich medium. Electron microscopy analyses of the two-month high flux incubations document coating of substrates with twisted stalks, resembling iron oxyhydroxide bioprecipitates made by marine neutrophilic Fe-oxidizing bacteria. Six-month incubations exhibit increased biofilm and substrate corrosion in the active flow and nutrient enriched conditions relative to the stagnant condition. A scarcity of twisted stalks was observed for all six month slow-flow conditions compared to the high-flux condition, which may be attributable to oxygen concentrations in the slow-flux conditions being prohibitively low for sustained growth of stalk-producing bacteria. All substrates developed microbial communities reflective of the original seawater input, as based on 16S rRNA gene sequencing. Deltaproteobacteria sequences increased in relative abundance in the active flow and nutrient enrichment conditions, whereas Gammaproteobacteria sequences were relatively more abundant in the stagnant condition. These results indicate that (i) high-flux incubations with higher oxygen availability favor the development of biofilms with twisted stalks resembling those of

  4. ORGANIZATION OF DIGITAL RESOURCES IN REPEC THROUGH REDIF METADATA

    Directory of Open Access Journals (Sweden)

    Salvador Enrique Vazquez Moctezuma

    2018-06-01

    Full Text Available Introduction: The disciplinary repository RePEc (Research Papers in Economics) provides access to a wide range of preprints, journal articles, books, book chapters and software in the economic and administrative sciences. This repository aggregates bibliographic records produced by different universities, institutes, editors and authors that work collaboratively following shared norms of documentary organization. Objective: In this paper we identify and analyze how RePEc works, including the organization of its files, which is characterized by the use of the Guildford protocol and ReDIF (Research Documentation Information Format) metadata templates for the documentary description. Methodology: Part of this research was studied theoretically in the literature; another part was carried out by observing a series of features visible on the RePEc website and in the archives of a journal that collaborates in this repository. Results: The repository is a decentralized collaborative project and it also provides several services derived from the metadata analysis. Conclusions: We conclude that the ReDIF templates and the Guildford communication protocol are key elements for organizing records in RePEc, and there is a similarity with the Dublin Core metadata
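
    For readers unfamiliar with ReDIF, the sketch below parses a minimal paper-style template into field/value lists. The template content and handle are invented, and real ReDIF has more field types and richer values than this toy parser handles.

    REDIF_TEMPLATE = """\
    Template-Type: ReDIF-Paper 1.0
    Author-Name: Jane Doe
    Author-Name: John Smith
    Title: Organization of digital resources through ReDIF metadata
    Creation-Date: 2018-06
    Handle: RePEc:aaa:journl:v:1:y:2018:p:1-10
    """

    def parse_redif(text):
        """Parse 'Field: value' lines; repeated fields accumulate into lists."""
        record = {}
        for line in text.splitlines():
            if not line.strip():
                continue
            field, _, value = line.partition(":")
            record.setdefault(field.strip(), []).append(value.strip())
        return record

    record = parse_redif(REDIF_TEMPLATE)
    print(record["Author-Name"])   # ['Jane Doe', 'John Smith']
    print(record["Handle"][0])     # 'RePEc:aaa:journl:v:1:y:2018:p:1-10'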

  5. Stop the Bleeding: the Development of a Tool to Streamline NASA Earth Science Metadata Curation Efforts

    Science.gov (United States)

    le Roux, J.; Baker, A.; Caltagirone, S.; Bugbee, K.

    2017-12-01

    The Common Metadata Repository (CMR) is a high-performance, high-quality repository for Earth science metadata records, and serves as the primary way to search NASA's growing 17.5 petabytes of Earth science data holdings. Released in 2015, CMR has the capability to support several different metadata standards already being utilized by NASA's combined network of Earth science data providers, or Distributed Active Archive Centers (DAACs). The Analysis and Review of CMR (ARC) Team located at Marshall Space Flight Center is working to improve the quality of records already in CMR with the goal of making records optimal for search and discovery. This effort entails a combination of automated and manual review, where each NASA record in CMR is checked for completeness, accuracy, and consistency. This effort is highly collaborative in nature, requiring communication and transparency of findings amongst NASA personnel, DAACs, the CMR team and other metadata curation teams. Through the evolution of this project it has become apparent that there is a need to document and report findings, as well as track metadata improvements in a more efficient manner. The ARC team has collaborated with Element 84 in order to develop a metadata curation tool to meet these needs. In this presentation, we will provide an overview of this metadata curation tool and its current capabilities. Challenges and future plans for the tool will also be discussed.

  6. Development of an open metadata schema for prospective clinical research (openPCR) in China.

    Science.gov (United States)

    Xu, W; Guan, Z; Sun, J; Wang, Z; Geng, Y

    2014-01-01

    In China, deployment of electronic data capture (EDC) and clinical data management system (CDMS) for clinical research (CR) is in its very early stage, and about 90% of clinical studies collected and submitted clinical data manually. This work aims to build an open metadata schema for Prospective Clinical Research (openPCR) in China based on openEHR archetypes, in order to help Chinese researchers easily create specific data entry templates for registration, study design and clinical data collection. Singapore Framework for Dublin Core Application Profiles (DCAP) is used to develop openPCR and four steps such as defining the core functional requirements and deducing the core metadata items, developing archetype models, defining metadata terms and creating archetype records, and finally developing implementation syntax are followed. The core functional requirements are divided into three categories: requirements for research registration, requirements for trial design, and requirements for case report form (CRF). 74 metadata items are identified and their Chinese authority names are created. The minimum metadata set of openPCR includes 3 documents, 6 sections, 26 top level data groups, 32 lower data groups and 74 data elements. The top level container in openPCR is composed of public document, internal document and clinical document archetypes. A hierarchical structure of openPCR is established according to Data Structure of Electronic Health Record Architecture and Data Standard of China (Chinese EHR Standard). Metadata attributes are grouped into six parts: identification, definition, representation, relation, usage guides, and administration. OpenPCR is an open metadata schema based on research registration standards, standards of the Clinical Data Interchange Standards Consortium (CDISC) and Chinese healthcare related standards, and is to be publicly available throughout China. It considers future integration of EHR and CR by adopting data structure and data
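
    A compact sketch of the document / section / data group / data element hierarchy the schema defines; the class names and the sample CRF content are illustrative, not the actual openPCR archetypes.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class DataElement:
        name: str
        data_type: str = "text"

    @dataclass
    class DataGroup:
        name: str
        elements: List[DataElement] = field(default_factory=list)
        subgroups: List["DataGroup"] = field(default_factory=list)

    @dataclass
    class Section:
        name: str
        groups: List[DataGroup] = field(default_factory=list)

    @dataclass
    class Document:
        name: str
        sections: List[Section] = field(default_factory=list)

    crf = Document("Case Report Form", [
        Section("Demographics", [
            DataGroup("Subject", [DataElement("Subject ID"),
                                  DataElement("Date of Birth", "date")]),
        ]),
    ])
    print(crf.sections[0].groups[0].elements[0].name)  # Subject ID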

  7. Ground penetrating radar and differential global positioning system data collected in April 2016 from Fire Island, New York

    Science.gov (United States)

    Forde, Arnell S.; Bernier, Julie C.; Miselis, Jennifer L.

    2018-02-22

    Researchers from the U.S. Geological Survey (USGS) conducted a long-term coastal morphologic-change study at Fire Island, New York, prior to and after Hurricane Sandy impacted the area in October 2012. The Fire Island Coastal Change project objectives include understanding the morphologic evolution of the barrier island system on a variety of time scales (months to centuries) and resolving storm-related impacts, post-storm beach response, and recovery. In April 2016, scientists from the USGS St. Petersburg Coastal and Marine Science Center conducted geophysical and sediment sampling surveys on Fire Island to characterize and quantify spatial variability in the subaerial geology with the goal of subsequently integrating onshore geology with other surf zone and nearshore datasets.  This report, along with the associated USGS data release, serves as an archive of ground penetrating radar (GPR) and post-processed differential global positioning system (DGPS) data collected from beach and back-barrier environments on Fire Island, April 6–13, 2016 (USGS Field Activity Number 2016-322-FA). Data products, including unprocessed GPR trace data, processed DGPS data, elevation-corrected subsurface profile images, geographic information system files, and accompanying Federal Geographic Data Committee metadata are available for download.

  8. The Benefits and Future of Standards: Metadata and Beyond

    Science.gov (United States)

    Stracke, Christian M.

    This article discusses the benefits and future of standards and presents the generic multi-dimensional Reference Model. First the importance and the tasks of interoperability as well as quality development and their relationship are analyzed. Especially in e-Learning their connection and interdependence is evident: Interoperability is one basic requirement for quality development. In this paper, it is shown how standards and specifications are supporting these crucial issues. The upcoming ISO metadata standard MLR (Metadata for Learning Resource) will be introduced and used as example for identifying the requirements and needs for future standardization. In conclusion a vision of the challenges and potentials for e-Learning standardization is outlined.

  9. Crowd-sourced BMS point matching and metadata maintenance with Babel

    DEFF Research Database (Denmark)

    Fürst, Jonathan; Chen, Kaifei; Katz, Randy H.

    2016-01-01

    Cyber-physical applications, deployed on top of Building Management Systems (BMS), promise energy saving and comfort improvement in non-residential buildings. Such applications are so far mainly deployed as research prototypes. The main roadblock to widespread adoption is the low quality of BMS... systems. Such applications access sensors and actuators through BMS metadata in the form of point labels. The naming of labels is however often inconsistent and incomplete. To tackle this problem, we introduce Babel, a crowd-sourced approach to the creation and maintenance of BMS metadata. In our system...

  10. Deploying the ATLAS Metadata Interface (AMI) on the cloud with Jenkins

    Science.gov (United States)

    Lambert, F.; Odier, J.; Fulachier, J.; ATLAS Collaboration

    2017-10-01

    The ATLAS Metadata Interface (AMI) is a mature application of more than 15 years of existence. Mainly used by the ATLAS experiment at CERN, it consists of a very generic tool ecosystem for metadata aggregation and cataloguing. AMI is used by the ATLAS production system, therefore the service must guarantee a high level of availability. We describe our monitoring and administration systems, and the Jenkins-based strategy used to dynamically test and deploy cloud OpenStack nodes on demand.

  11. Deploying the ATLAS Metadata Interface (AMI) on the cloud with Jenkins.

    CERN Document Server

    AUTHOR|(SzGeCERN)637120; The ATLAS collaboration; Odier, Jerome; Fulachier, Jerome

    2017-01-01

    The ATLAS Metadata Interface (AMI) is a mature application of more than 15 years of existence. Mainly used by the ATLAS experiment at CERN, it consists of a very generic tool ecosystem for metadata aggregation and cataloguing. AMI is used by the ATLAS production system, therefore the service must guarantee a high level of availability. We describe our monitoring and administration systems, and the Jenkins-based strategy used to dynamically test and deploy cloud OpenStack nodes on demand.

  12. Inconsistencies between Academic E-Book Platforms: A Comparison of Metadata and Search Results

    Science.gov (United States)

    Wiersma, Gabrielle; Tovstiadi, Esta

    2017-01-01

    This article presents the results of a study of academic e-books that compared the metadata and search results from major academic e-book platforms. The authors collected data and performed a series of test searches designed to produce the same result regardless of platform. Testing, however, revealed metadata-related errors and significant…

  13. DESIGN OF A METADATA SYSTEM FOR A DATA WAREHOUSE WITH A REVENUE TRACKING CASE STUDY AT PT. TELKOM DIVRE V JAWA TIMUR

    Directory of Open Access Journals (Sweden)

    Yudhi Purwananto

    2004-07-01

    Full Text Available A data warehouse is a company's data store, fed from various systems, that can be used for purposes such as analysis and reporting. At PT Telkom Divre V East Java a data warehouse called the Regional Database has been built. The Regional Database requires an important data warehouse component, namely metadata. A simple definition of metadata is "data about data". In this research a metadata system is designed, with Revenue Tracking as a case study, as the analysis and reporting component of the Regional Database. Metadata is essential for managing and providing information about the data warehouse. The processes within the data warehouse and the components related to it must be mutually integrated in order to realize the data warehouse characteristics of being subject-oriented, integrated, time-variant, and non-volatile. Metadata must therefore also be able to exchange information between the components of the data warehouse. Web services are used as this exchange mechanism; they communicate using XML technology and the HTTP protocol. With web services, every component
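
    A small sketch of the exchange idea described above: warehouse metadata serialized to XML so that components can pass it between services over HTTP. The element names and the revenue-tracking table description are invented for illustration.

    import xml.etree.ElementTree as ET

    def metadata_to_xml(table):
        """Serialize a table's descriptive metadata to an XML string."""
        root = ET.Element("metadata")
        tbl = ET.SubElement(root, "table", name=table["name"], subject=table["subject"])
        for col in table["columns"]:
            ET.SubElement(tbl, "column", name=col["name"], type=col["type"],
                          source=col["source"])
        return ET.tostring(root, encoding="unicode")

    revenue = {
        "name": "fact_revenue_tracking",
        "subject": "revenue",
        "columns": [
            {"name": "period", "type": "date", "source": "billing_system"},
            {"name": "amount", "type": "decimal", "source": "billing_system"},
        ],
    }
    print(metadata_to_xml(revenue))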

  14. The Value of Data and Metadata Standardization for Interoperability in Giovanni

    Science.gov (United States)

    Smit, C.; Hegde, M.; Strub, R. F.; Bryant, K.; Li, A.; Petrenko, M.

    2017-12-01

    Giovanni (https://giovanni.gsfc.nasa.gov/giovanni/) is a data exploration and visualization tool at the NASA Goddard Earth Sciences Data Information Services Center (GES DISC). It has been around in one form or another for more than 15 years. Giovanni calculates simple statistics and produces 22 different visualizations for more than 1600 geophysical parameters from more than 90 satellite and model products. Giovanni relies on external data format standards to ensure interoperability, including the NetCDF CF Metadata Conventions. Unfortunately, these standards were insufficient to make Giovanni's internal data representation truly simple to use. Finding and working with dimensions can be convoluted with the CF Conventions. Furthermore, the CF Conventions are silent on machine-friendly descriptive metadata such as the parameter's source product and product version. In order to simplify analyzing disparate earth science data parameters in a unified way, we developed Giovanni's internal standard. First, the format standardizes parameter dimensions and variables so they can be easily found. Second, the format adds all the machine-friendly metadata Giovanni needs to present our parameters to users in a consistent and clear manner. At a glance, users can grasp all the pertinent information about parameters both during parameter selection and after visualization. This poster gives examples of how our metadata and data standards, both external and internal, have both simplified our code base and improved our users' experiences.
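
    A minimal sketch, assuming the netCDF4 Python package, of pairing CF-style variable metadata with the extra machine-friendly descriptive attributes (source product, version) the abstract argues for; the variable, product and attribute names are invented, not Giovanni's internal standard.

    import numpy as np
    from netCDF4 import Dataset

    ds = Dataset("parameter_example.nc", "w", format="NETCDF4")
    ds.createDimension("time", None)
    ds.createDimension("lat", 4)
    ds.createDimension("lon", 4)

    time = ds.createVariable("time", "f8", ("time",))
    time.units = "seconds since 1970-01-01 00:00:00"
    lat = ds.createVariable("lat", "f4", ("lat",))
    lat.units = "degrees_north"
    lon = ds.createVariable("lon", "f4", ("lon",))
    lon.units = "degrees_east"

    aod = ds.createVariable("aerosol_optical_depth", "f4", ("time", "lat", "lon"))
    aod.long_name = "Aerosol Optical Depth 550 nm"    # CF-style description
    aod.product_short_name = "EXAMPLE_PRODUCT"        # descriptive metadata for the tool
    aod.product_version = "006"

    lat[:] = np.linspace(-60, 60, 4)
    lon[:] = np.linspace(-180, 180, 4, endpoint=False)
    ds.close()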

  15. Improving the accessibility and re-use of environmental models through provision of model metadata - a scoping study

    Science.gov (United States)

    Riddick, Andrew; Hughes, Andrew; Harpham, Quillon; Royse, Katherine; Singh, Anubha

    2014-05-01

    There has been an increasing interest both from academic and commercial organisations over recent years in developing hydrologic and other environmental models in response to some of the major challenges facing the environment, for example environmental change and its effects and ensuring water resource security. This has resulted in a significant investment in modelling by many organisations both in terms of financial resources and intellectual capital. To capitalise on the effort on producing models, then it is necessary for the models to be both discoverable and appropriately described. If this is not undertaken then the effort in producing the models will be wasted. However, whilst there are some recognised metadata standards relating to datasets these may not completely address the needs of modellers regarding input data for example. Also there appears to be a lack of metadata schemes configured to encourage the discovery and re-use of the models themselves. The lack of an established standard for model metadata is considered to be a factor inhibiting the more widespread use of environmental models particularly the use of linked model compositions which fuse together hydrologic models with models from other environmental disciplines. This poster presents the results of a Natural Environment Research Council (NERC) funded scoping study to understand the requirements of modellers and other end users for metadata about data and models. A user consultation exercise using an on-line questionnaire has been undertaken to capture the views of a wide spectrum of stakeholders on how they are currently managing metadata for modelling. This has provided a strong confirmation of our original supposition that there is a lack of systems and facilities to capture metadata about models. A number of specific gaps in current provision for data and model metadata were also identified, including a need for a standard means to record detailed information about the modelling

  16. Flexible Authoring of Metadata for Learning : Assembling forms from a declarative data and view model

    OpenAIRE

    Enoksson, Fredrik

    2011-01-01

    With the vast amount of information in various formats that is produced today it becomes necessary for consumers of this information to be able to judge if it is relevant for them. One way to enable that is to provide information about each piece of information, i.e. provide metadata. When metadata is to be edited by a human being, a metadata editor needs to be provided. This thesis describes the design and practical use of a configuration mechanism for metadata editors called annotation profiles...

  17. Mitochondrial Analysis of the Most Basal Canid Reveals Deep Divergence between Eastern and Western North American Gray Foxes (Urocyon spp.) and Ancient Roots in Pleistocene California.

    Science.gov (United States)

    Goddard, Natalie S; Statham, Mark J; Sacks, Benjamin N

    2015-01-01

    Pleistocene aridification in central North America caused many temperate forest-associated vertebrates to split into eastern and western lineages. Such divisions can be cryptic when Holocene expansions have closed the gaps between once-disjunct ranges or when local morphological variation obscures deeper regional divergences. We investigated such cryptic divergence in the gray fox (Urocyon cinereoargenteus), the most basal extant canid in the world. We also investigated the phylogeography of this species and its diminutive relative, the island fox (U. littoralis), in California. The California Floristic Province was a significant source of Pleistocene diversification for a wide range of taxa and, we hypothesized, for the gray fox as well. Alternatively, gray foxes in California potentially reflected a recent Holocene expansion from further south. We sequenced mitochondrial DNA from 169 gray foxes from the southeastern and southwestern United States and 11 island foxes from three of the Channel Islands. We estimated a 1.3% sequence divergence in the cytochrome b gene between eastern and western foxes and used coalescent simulations to date the divergence to approximately 500,000 years before present (YBP), which is comparable to that between recognized sister species within the Canidae. Gray fox samples collected from throughout California exhibited high haplotype diversity, phylogeographic structure, and genetic signatures of a late-Holocene population decline. Bayesian skyline analysis also indicated an earlier population increase dating to the early Wisconsin glaciation (~70,000 YBP) and a root height extending back to the previous interglacial (~100,000 YBP). Together these findings support California's role as a long-term Pleistocene refugium for western Urocyon. Lastly, based both on our results and re-interpretation of those of another study, we conclude that island foxes of the Channel Islands trace their origins to at least 3 distinct female founders from

  18. USGS 24k Digital Raster Graphic (DRG) Metadata

    Data.gov (United States)

    Minnesota Department of Natural Resources — Metadata for the scanned USGS 24k Topograpic Map Series (also known as 24k Digital Raster Graphic). Each scanned map is represented by a polygon in the layer and the...

  19. Evaluation of Semi-Automatic Metadata Generation Tools: A Survey of the Current State of the Art

    Directory of Open Access Journals (Sweden)

    Jung-ran Park

    2015-09-01

    Full Text Available Assessment of the current landscape of semi-automatic metadata generation tools is particularly important considering the rapid development of digital repositories and the recent explosion of big data. Utilization of (semi)automatic metadata generation is critical in addressing these environmental changes and may be unavoidable in the future considering the costly and complex operation of manual metadata creation. To address such needs, this study examines the range of semi-automatic metadata generation tools (n=39) while providing an analysis of their techniques, features, and functions. The study focuses on open-source tools that can be readily utilized in libraries and other memory institutions. The challenges and current barriers to implementation of these tools were identified. The greatest area of difficulty lies in the fact that the piecemeal development of most semi-automatic generation tools only addresses part of the issue of semi-automatic metadata generation, providing solutions for one or a few metadata elements but not the full range of elements. This indicates that significant local efforts will be required to integrate the various tools into a coherent working whole. Suggestions toward such efforts are presented for future developments that may assist information professionals with the incorporation of semi-automatic tools within their daily workflows.
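
    As one narrow illustration of what such a tool does for a single element (subject keywords), the sketch below proposes candidate keywords by simple term frequency; the stop-word list is truncated and the approach is far cruder than the surveyed tools.

    import re
    from collections import Counter

    STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "for", "is", "on", "with"}

    def suggest_keywords(text, n=5):
        """Return the n most frequent non-stopword terms as candidate keywords."""
        words = re.findall(r"[a-z]+", text.lower())
        counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 3)
        return [word for word, _ in counts.most_common(n)]

    sample = ("Metadata generation tools create metadata for digital repositories. "
              "Automatic metadata extraction reduces the cost of manual metadata creation.")
    print(suggest_keywords(sample))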

  20. A renaissance in library metadata? The importance of community collaboration in a digital world

    Directory of Open Access Journals (Sweden)

    Sarah Bull

    2016-07-01

    Full Text Available This article summarizes a presentation given by Sarah Bull as part of the Association of Learned and Professional Society Publishers (ALPSP seminar ‘Setting the Standard’ in November 2015. Representing the library community at the wide-ranging seminar, Sarah was tasked with making the topic of library metadata an engaging and informative one for a largely publisher audience. With help from co-author Amanda Quimby, this article is an attempt to achieve the same aim! It covers the importance of library metadata and standards in the supply chain and also reflects on the role of the community in successful standards development and maintenance. Special emphasis is given to the importance of quality in e-book metadata and the need for publisher and library collaboration to improve discovery, usage and the student experience. The article details the University of Birmingham experience of e-book metadata from a workflow perspective to highlight the complex integration issues which remain between content procurement and discovery.

  1. The 2010 Southern California Ocean Bottom Seismometer Deployment

    Science.gov (United States)

    Booth, C. M.; Kohler, M. D.; Weeraratne, D. S.

    2010-12-01

    Subduction, mid-ocean ridge spreading, and transpressional deformation are all processes that played important roles in the evolution of the diffuse Pacific-North America plate boundary offshore Southern California. Existing seismic data for the boundary typically end at the coastline due to the fact that onshore data collection is easier and more feasible. As a result, current models for plate boundary deformation and mantle flow lack data from nearly half the plate boundary offshore. In August 2010, twenty-four broadband and ten short period ocean bottom seismometers (OBS) were deployed on a research cruise as part of a year-long passive OBS experiment off the coast of Southern California. The Asthenospheric and Lithospheric Broadband Architecture from the California Offshore Region Experiment (ALBACORE) will study local seismicity, and crustal and upper mantle seismic structure. Studies using onshore data have shown a high velocity anomaly that exists in the region of convergence under the Transverse Ranges. The Transverse Ranges belong to a large crustal block that experienced clockwise rotation of at least ninety degrees. Geologic studies indicate that the entire Channel Islands on the western end belongs to the region of convergence and have been a part of this rotation. In anticipation of OBS data analysis, a hypothetical velocity model is being developed for the crust and uppermost mantle for the region under the Channel Islands. P-wave arrival times are predicted by propagating teleseismic waves through the model. Different possible P-wave arrival patterns are explored by varying the lithospheric thickness. The long-term goal for developing this model will be to compare it with the actual OBS travel-time residual data to assess the best-fitting model. In preparation for the ALBACORE cruise, existing gravity data near the Channel Island region were examined for correlations with geologic features. Gravity data collected during the ALBACORE cruise will help

  2. Building a semantic web-based metadata repository for facilitating detailed clinical modeling in cancer genome studies.

    Science.gov (United States)

    Sharma, Deepak K; Solbrig, Harold R; Tao, Cui; Weng, Chunhua; Chute, Christopher G; Jiang, Guoqian

    2017-06-05

    Detailed Clinical Models (DCMs) have been regarded as the basis for retaining computable meaning when data are exchanged between heterogeneous computer systems. To better support clinical cancer data capturing and reporting, there is an emerging need to develop informatics solutions for standards-based clinical models in cancer study domains. The objective of the study is to develop and evaluate a cancer genome study metadata management system that serves as a key infrastructure in supporting clinical information modeling in cancer genome study domains. We leveraged a Semantic Web-based metadata repository enhanced with both ISO11179 metadata standard and Clinical Information Modeling Initiative (CIMI) Reference Model. We used the common data elements (CDEs) defined in The Cancer Genome Atlas (TCGA) data dictionary, and extracted the metadata of the CDEs using the NCI Cancer Data Standards Repository (caDSR) CDE dataset rendered in the Resource Description Framework (RDF). The ITEM/ITEM_GROUP pattern defined in the latest CIMI Reference Model is used to represent reusable model elements (mini-Archetypes). We produced a metadata repository with 38 clinical cancer genome study domains, comprising a rich collection of mini-Archetype pattern instances. We performed a case study of the domain "clinical pharmaceutical" in the TCGA data dictionary and demonstrated enriched data elements in the metadata repository are very useful in support of building detailed clinical models. Our informatics approach leveraging Semantic Web technologies provides an effective way to build a CIMI-compliant metadata repository that would facilitate the detailed clinical modeling to support use cases beyond TCGA in clinical cancer study domains.
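
    A minimal sketch, assuming the rdflib package, of representing a single common data element as RDF in the spirit of the repository described above; the namespace, identifiers and property names are invented rather than the actual ISO 11179/CIMI vocabulary.

    from rdflib import Graph, Literal, Namespace, RDF

    EX = Namespace("http://example.org/mdr/")   # hypothetical repository namespace

    g = Graph()
    g.bind("ex", EX)

    cde = EX["cde/0001"]
    g.add((cde, RDF.type, EX.DataElement))
    g.add((cde, EX.preferredName, Literal("Days to Death")))
    g.add((cde, EX.definition, Literal("Number of days between diagnosis and death.")))
    g.add((cde, EX.valueDomainDatatype, Literal("integer")))
    g.add((cde, EX.partOfItemGroup, EX["itemgroup/clinical_followup"]))

    print(g.serialize(format="turtle"))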

  3. iLOG: A Framework for Automatic Annotation of Learning Objects with Empirical Usage Metadata

    Science.gov (United States)

    Miller, L. D.; Soh, Leen-Kiat; Samal, Ashok; Nugent, Gwen

    2012-01-01

    Learning objects (LOs) are digital or non-digital entities used for learning, education or training commonly stored in repositories searchable by their associated metadata. Unfortunately, based on the current standards, such metadata is often missing or incorrectly entered making search difficult or impossible. In this paper, we investigate…

  4. Archive of Side Scan Sonar and Swath Bathymetry Data collected during USGS Cruise 10CCT02 Offshore of Petit Bois Island Including Petit Bois Pass, Gulf Islands National Seashore, Mississippi, March 2010

    Science.gov (United States)

    Pfeiffer, William R.; Flocks, James G.; DeWitt, Nancy T.; Forde, Arnell S.; Kelso, Kyle; Thompson, Phillip R.; Wiese, Dana S.

    2011-01-01

    In March of 2010, the U.S. Geological Survey (USGS) conducted geophysical surveys offshore of Petit Bois Island, Mississippi, and Dauphin Island, Alabama (fig. 1). These efforts were part of the USGS Gulf of Mexico Science Coordination partnership with the U.S. Army Corps of Engineers (USACE) to assist the Mississippi Coastal Improvements Program (MsCIP) and the Northern Gulf of Mexico (NGOM) Ecosystem Change and Hazards Susceptibility Project by mapping the shallow geologic stratigraphic framework of the Mississippi Barrier Island Complex. These geophysical surveys will provide the data necessary for scientists to define, interpret, and provide baseline bathymetry and seafloor habitat for this area and to aid scientists in predicting future geomorphological changes of the islands with respect to climate change, storm impact, and sea-level rise. Furthermore, these data will provide information for barrier island restoration, particularly in Camille Cut, and protection for the historical Fort Massachusetts on Ship Island, Mississippi. For more information please refer to http://ngom.usgs.gov/gomsc/mscip/index.html. This report serves as an archive of the processed swath bathymetry and side scan sonar data (SSS). Data products herein include gridded and interpolated surfaces, seabed backscatter images, and ASCII x,y,z data products for both swath bathymetry and side scan sonar imagery. Additional files include trackline maps, navigation files, GIS files, Field Activity Collection System (FACS) logs, and formal FGDC metadata. Scanned images of the handwritten and digital FACS logs are also provided as PDF files. Refer to the Acronyms page for expansion of acronyms and abbreviations used in this report.

  5. A metadata catalog for organization and systemization of fusion simulation data

    International Nuclear Information System (INIS)

    Greenwald, M.; Fredian, T.; Schissel, D.; Stillerman, J.

    2012-01-01

    Highlights: ► We find that modeling and simulation data need better systemization. ► Workflow, data provenance and relations among data items need to be captured. ► We have begun a design for a simulation metadata catalog that meets these needs. ► The catalog design also supports creation of science notebooks for simulation. - Abstract: Careful management of data and associated metadata is a critical part of any scientific enterprise. Unfortunately, most current fusion simulation efforts lack systematic, project-wide organization of their data. This paper describes an approach to managing simulation data through creation of a comprehensive metadata catalog, currently under development. The catalog is intended to document all past and current simulation activities (including data provenance); to enable global data location and to facilitate data access, analysis and visualization through uniform provision of metadata. The catalog will capture workflow, holding entries for each simulation activity including, at least, data importing and staging, data pre-processing and input preparation, code execution, data storage, post-processing and exporting. The overall aim is that between the catalog and the main data archive, the system would hold a complete and accessible description of the data, all of its attributes and the processes used to generate the data. The catalog will describe data collections, including those representing simulation workflows as well as any other useful groupings. Finally it would be populated with user supplied comments to explain the motivation and results of any activity documented by the catalog.
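
    A toy sketch of one workflow entry such a catalog might hold, capturing the activity type, inputs, outputs and parameters needed for provenance; the field names are illustrative, not the catalog schema under development.

    from dataclasses import dataclass, field
    from datetime import datetime
    from typing import Dict, List

    @dataclass
    class ActivityRecord:
        activity: str                 # e.g. "input preparation", "code execution"
        code: str
        inputs: List[str]
        outputs: List[str]
        parameters: Dict[str, str] = field(default_factory=dict)
        started: datetime = field(default_factory=datetime.now)
        comment: str = ""

    run = ActivityRecord(
        activity="code execution",
        code="transport-solver v2.1",
        inputs=["shot_12345/profiles.h5"],
        outputs=["shot_12345/run_007/output.h5"],
        parameters={"grid": "128x64", "collision_model": "full"},
        comment="baseline case for scan over collisionality",
    )
    print(run.activity, "->", run.outputs[0])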

  6. Archive of digital Chirp subbottom profile data collected during USGS cruise 08CCT01, Mississippi Gulf Islands, July 2008

    Science.gov (United States)

    Forde, Arnell S.; Dadisman, Shawn V.; Flocks, James G.; Worley, Charles R.

    2011-01-01

    In July of 2008, the U.S. Geological Survey (USGS) conducted geophysical surveys to investigate the geologic controls on island framework from Ship Island to Horn Island, Mississippi, for the Northern Gulf of Mexico (NGOM) Ecosystem Change and Hazard Susceptibility project. Funding was provided through the Geologic Framework and Holocene Coastal Evolution of the Mississippi-Alabama Region Subtask (http://ngom.er.usgs.gov/task2_2/index.php); this project is also part of a broader USGS study on Coastal Change and Transport (CCT). This report serves as an archive of unprocessed digital Chirp seismic reflection data, trackline maps, navigation files, Geographic Information System (GIS) files, Field Activity Collection System (FACS) logs, observer's logbook, and formal Federal Geographic Data Committee (FGDC) metadata. Gained (a relative increase in signal amplitude) digital images of the seismic profiles are also provided. Refer to the Acronyms page for expansion of acronyms and abbreviations used in this report.

  7. Mapping process and age of Quaternary deposits on Santa Rosa Island, Channel Islands National Park, California

    Science.gov (United States)

    Schmidt, K. M.; Minor, S. A.; Bedford, D.

    2016-12-01

    Employing a geomorphic process-age classification scheme, we mapped the Quaternary surficial geology of Santa Rosa Island (SRI) within the Channel Islands National Park. This detailed (1:12,000 scale) map represents upland erosional transport processes and alluvial, fluvial, eolian, beach, marine terrace, mass wasting, and mixed depositional processes. Mapping was motivated through an agreement with the National Park Service and is intended to aid natural resource assessments, including post-grazing disturbance recovery and identification of mass wasting and tectonic hazards. We obtained numerous detailed geologic field observations, fossils for faunal identification as age control, and materials for numeric dating. This GPS-located field information provides ground truth for delineating map units and faults using GIS-based datasets: high-resolution (sub-meter) aerial imagery, LiDAR-based DEMs and derivative raster products. Mapped geologic units denote surface processes and Quaternary faults constrain deformation kinematics and rates, which inform models of landscape change. Significant findings include: 1) Flights of older Pleistocene (>120 ka) and possibly Pliocene marine terraces were identified beneath younger alluvial and eolian deposits at elevations as much as 275 m above modern sea level. Such elevated terraces suggest that SRI was a smaller, more submerged island in the late Neogene and (or) early Pleistocene prior to tectonic uplift. 2) Structural and geomorphic observations made along the potentially seismogenic SRI fault indicate a protracted slip history during the late Neogene and Quaternary involving early normal slip, later strike slip, and recent reverse slip. These changes in slip mode explain a marked contrast in island physiography across the fault. 3) Many of the steeper slopes are dramatically stripped of regolith, with exposed bedrock and deeply incised gullies, presumably due to effects related to past grazing practices. 4) Surface water presence is

  8. Data Bookkeeping Service 3 - Providing event metadata in CMS

    CERN Document Server

    Giffels, Manuel; Riley, Daniel

    2014-01-01

    The Data Bookkeeping Service 3 provides a catalog of event metadata for Monte Carlo and recorded data of the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) at CERN, Geneva. It comprises all necessary information for tracking datasets, their processing history and associations between runs, files and datasets, on a large scale of about 200,000 datasets and more than 40 million files, which adds up to around 700 GB of metadata. The DBS is an essential part of the CMS Data Management and Workload Management (DMWM) systems; all kinds of data processing, such as Monte Carlo production and processing of recorded event data, as well as physics analyses done by the users, rely heavily on the information stored in DBS.
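
    A highly simplified sketch of the dataset/file/run bookkeeping relationships described above, using an in-memory SQLite database; the schema and sample names are invented and much smaller than the real DBS data model.

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
    CREATE TABLE datasets (id INTEGER PRIMARY KEY, name TEXT UNIQUE,
                           parent_id INTEGER REFERENCES datasets(id));
    CREATE TABLE files (id INTEGER PRIMARY KEY, lfn TEXT UNIQUE,
                        dataset_id INTEGER REFERENCES datasets(id), events INTEGER);
    CREATE TABLE file_runs (file_id INTEGER REFERENCES files(id), run INTEGER);
    """)

    con.execute("INSERT INTO datasets (name) VALUES (?)", ("/Example/Run2018A-v1/RAW",))
    ds_id = con.execute("SELECT id FROM datasets").fetchone()[0]
    con.execute("INSERT INTO files (lfn, dataset_id, events) VALUES (?, ?, ?)",
                ("/store/data/example_0001.root", ds_id, 25000))
    file_id = con.execute("SELECT id FROM files").fetchone()[0]
    con.execute("INSERT INTO file_runs VALUES (?, ?)", (file_id, 316995))

    # Dataset-level summary: file count and total events per dataset.
    print(con.execute("""SELECT d.name, COUNT(f.id), SUM(f.events)
                         FROM datasets d JOIN files f ON f.dataset_id = d.id
                         GROUP BY d.name""").fetchall())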

  9. Data Bookkeeping Service 3 - Providing Event Metadata in CMS

    Energy Technology Data Exchange (ETDEWEB)

    Giffels, Manuel [CERN; Guo, Y. [Fermilab; Riley, Daniel [Cornell U.

    2014-01-01

    The Data Bookkeeping Service 3 provides a catalog of event metadata for Monte Carlo and recorded data of the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) at CERN, Geneva. It comprises all necessary information for tracking datasets, their processing history and associations between runs, files and datasets, on a large scale of about 200,000 datasets and more than 40 million files, which adds up to around 700 GB of metadata. The DBS is an essential part of the CMS Data Management and Workload Management (DMWM) systems [1]; all kinds of data processing, such as Monte Carlo production and processing of recorded event data, as well as physics analyses done by the users, rely heavily on the information stored in DBS.

  10. Evolution of Web Services in EOSDIS: Search and Order Metadata Registry (ECHO)

    Science.gov (United States)

    Mitchell, Andrew; Ramapriyan, Hampapuram; Lowe, Dawn

    2009-01-01

    During 2005 through 2008, NASA defined and implemented a major evolutionary change in its Earth Observing System Data and Information System (EOSDIS) to modernize its capabilities. This implementation was based on a vision for 2015 developed during 2005. The EOSDIS 2015 Vision emphasizes increased end-to-end data system efficiency and operability; increased data usability; improved support for end users; and decreased operations costs. One key feature of the Evolution plan was achieving higher operational maturity (ingest, reconciliation, search and order, performance, error handling) for NASA's Earth Observing System Clearinghouse (ECHO). The ECHO system is an operational metadata registry through which the scientific community can easily discover and exchange NASA's Earth science data and services. ECHO contains metadata for 2,726 data collections comprising over 87 million individual data granules and 34 million browse images, drawn from NASA's EOSDIS Data Centers and the United States Geological Survey's Landsat Project holdings. ECHO is a middleware component based on a Service Oriented Architecture (SOA). The system comprises a set of infrastructure services that enable the fundamental SOA functions: publish, discover, and access Earth science resources. It also provides additional services such as user management, data access control, and order management. The ECHO system has a data registry and a services registry. The data registry enables organizations to publish EOS and other Earth-science related data holdings to a common metadata model. These holdings are described through metadata in terms of datasets (types of data) and granules (specific data items of those types). ECHO also supports browse images, which provide a visual representation of the data. The published metadata can be mapped to and from existing standards (e.g., FGDC, ISO 19115). With ECHO, users can find the metadata stored in the data registry and then access the data either

  11. Deploying the ATLAS Metadata Interface (AMI) on the cloud with Jenkins

    CERN Document Server

    Lambert, Fabian; The ATLAS collaboration

    2016-01-01

    The ATLAS Metadata Interface (AMI) is a mature application with more than 15 years of existence. Mainly used by the ATLAS experiment at CERN, it consists of a very generic tool ecosystem for metadata aggregation and cataloguing. AMI is used by the ATLAS production system, therefore the service must guarantee a high level of availability. We describe our monitoring system and the Jenkins-based strategy used to dynamically test and deploy cloud OpenStack nodes on demand. Moreover, we describe how to switch to a distant replica in case of downtime.

  12. Asymmetric Programming: A Highly Reliable Metadata Allocation Strategy for MLC NAND Flash Memory-Based Sensor Systems

    Science.gov (United States)

    Huang, Min; Liu, Zhaoqing; Qiao, Liyan

    2014-01-01

    While the NAND flash memory is widely used as the storage medium in modern sensor systems, the aggressive shrinking of process geometry and an increase in the number of bits stored in each memory cell will inevitably degrade the reliability of NAND flash memory. In particular, it's critical to enhance metadata reliability, which occupies only a small portion of the storage space, but maintains the critical information of the file system and the address translations of the storage system. Metadata damage will cause the system to crash or a large amount of data to be lost. This paper presents Asymmetric Programming, a highly reliable metadata allocation strategy for MLC NAND flash memory storage systems. Our technique exploits for the first time the property of the multi-page architecture of MLC NAND flash memory to improve the reliability of metadata. The basic idea is to keep metadata in most significant bit (MSB) pages which are more reliable than least significant bit (LSB) pages. Thus, we can achieve relatively low bit error rates for metadata. Based on this idea, we propose two strategies to optimize address mapping and garbage collection. We have implemented Asymmetric Programming on a real hardware platform. The experimental results show that Asymmetric Programming can achieve a reduction in the number of page errors of up to 99.05% with the baseline error correction scheme. PMID:25310473
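
    The core allocation idea, routing metadata writes to the more reliable MSB pages and ordinary data to LSB pages, can be sketched as below. The paired-page layout (even index = LSB, odd = MSB) and the fallback policy are illustrative assumptions, not the paper's implementation or any particular flash chip's layout.

```python
# Sketch of MSB-first metadata allocation for an MLC block.
# Assumed pairing: page i and i+1 share a cell; odd pages are treated as MSB.
from collections import deque


class MLCBlock:
    def __init__(self, pages_per_block: int = 128):
        self.free_msb = deque(i for i in range(pages_per_block) if i % 2 == 1)
        self.free_lsb = deque(i for i in range(pages_per_block) if i % 2 == 0)
        self.pages = {}

    def program(self, payload: bytes, is_metadata: bool) -> int:
        """Write the payload and return the physical page index used."""
        if is_metadata:
            if not self.free_msb:
                raise RuntimeError("no MSB pages left; trigger garbage collection")
            page = self.free_msb.popleft()
        else:
            # Ordinary data falls back to MSB pages only when LSB pages run out.
            page = self.free_lsb.popleft() if self.free_lsb else self.free_msb.popleft()
        self.pages[page] = payload
        return page


block = MLCBlock()
meta_page = block.program(b"mapping-table-chunk", is_metadata=True)   # lands on an MSB page
data_page = block.program(b"sensor-readings", is_metadata=False)      # lands on an LSB page
```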

  13. Asymmetric Programming: A Highly Reliable Metadata Allocation Strategy for MLC NAND Flash Memory-Based Sensor Systems

    Directory of Open Access Journals (Sweden)

    Min Huang

    2014-10-01

    Full Text Available While the NAND flash memory is widely used as the storage medium in modern sensor systems, the aggressive shrinking of process geometry and an increase in the number of bits stored in each memory cell will inevitably degrade the reliability of NAND flash memory. In particular, it’s critical to enhance metadata reliability, which occupies only a small portion of the storage space, but maintains the critical information of the file system and the address translations of the storage system. Metadata damage will cause the system to crash or a large amount of data to be lost. This paper presents Asymmetric Programming, a highly reliable metadata allocation strategy for MLC NAND flash memory storage systems. Our technique exploits for the first time the property of the multi-page architecture of MLC NAND flash memory to improve the reliability of metadata. The basic idea is to keep metadata in most significant bit (MSB) pages, which are more reliable than least significant bit (LSB) pages. Thus, we can achieve relatively low bit error rates for metadata. Based on this idea, we propose two strategies to optimize address mapping and garbage collection. We have implemented Asymmetric Programming on a real hardware platform. The experimental results show that Asymmetric Programming can achieve a reduction in the number of page errors of up to 99.05% with the baseline error correction scheme.

  14. AFSC/NMML/CCEP: Food Habits of Pinnipeds at San Miguel Island, California

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The National Marine Mammal Laboratories' California Current Ecosystem Program (AFSC/NOAA) collects fecal samples to examine the diet of pinnipeds, including...

  15. California State Waters Map Series--Offshore of Ventura, California

    Science.gov (United States)

    Johnson, Samuel Y.; Dartnell, Peter; Cochrane, Guy R.; Golden, Nadine E.; Phillips, Eleyne L.; Ritchie, Andrew C.; Kvitek, Rikk G.; Greene, H. Gary; Krigsman, Lisa M.; Endris, Charles A.; Seitz, Gordon G.; Gutierrez, Carlos I.; Sliter, Ray W.; Erdey, Mercedes D.; Wong, Florence L.; Yoklavich, Mary M.; Draut, Amy E.; Hart, Patrick E.; Johnson, Samuel Y.; Cochran, Susan A.

    2013-01-01

    In 2007, the California Ocean Protection Council initiated the California Seafloor Mapping Program (CSMP), designed to create a comprehensive seafloor map of high-resolution bathymetry, marine benthic habitats, and geology within the 3-nautical-mile limit of California’s State Waters. The CSMP approach is to create highly detailed seafloor maps through collection, integration, interpretation, and visualization of swath sonar data, acoustic backscatter, seafloor video, seafloor photography, high-resolution seismic-reflection profiles, and bottom-sediment sampling data. The map products display seafloor morphology and character, identify potential marine benthic habitats, and illustrate both the surficial seafloor geology and shallow (to about 100 m) subsurface geology. The Offshore of Ventura map area lies within the Santa Barbara Channel region of the Southern California Bight. This geologically complex region forms a major biogeographic transition zone, separating the cold-temperate Oregonian province north of Point Conception from the warm-temperate California province to the south. The map area is in the Ventura Basin, in the southern part of the Western Transverse Ranges geologic province, which is north of the California Continental Borderland. Significant clockwise rotation—at least 90°—since the early Miocene has been proposed for the Western Transverse Ranges, and the region is presently undergoing north-south shortening. The city of Ventura is the major cultural center in the map area. The Ventura River cuts through Ventura, draining the Santa Ynez Mountains and the coastal hills north of Ventura. Northwest of Ventura, the coastal zone is a narrow strip containing highway and railway transportation corridors and a few small residential clusters. Rincon Island, an island constructed for oil and gas production, lies offshore of Punta Gorda. Southeast of Ventura, the coastal zone consists of the mouth and broad, alluvial plains of the Santa Clara River

  16. Scalable Metadata Management for a Large Multi-Source Seismic Data Repository

    Energy Technology Data Exchange (ETDEWEB)

    Gaylord, J. M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Dodge, D. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Magana-Zook, S. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Barno, J. G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Knapp, D. R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-04-11

    In this work, we implemented the key metadata management components of a scalable seismic data ingestion framework to address limitations in our existing system, and to position it for anticipated growth in volume and complexity. We began the effort with an assessment of open source data flow tools from the Hadoop ecosystem. We then began the construction of a layered architecture that is specifically designed to address many of the scalability and data quality issues we experience with our current pipeline. This included implementing basic functionality in each of the layers, such as establishing a data lake, designing a unified metadata schema, tracking provenance, and calculating data quality metrics. Our original intent was to test and validate the new ingestion framework with data from a large-scale field deployment in a temporary network. This delivered somewhat unsatisfying results, since the new system immediately identified fatal flaws in the data relatively early in the pipeline. Although this is a correct result it did not allow us to sufficiently exercise the whole framework. We then widened our scope to process all available metadata from over a dozen online seismic data sources to further test the implementation and validate the design. This experiment also uncovered a higher than expected frequency of certain types of metadata issues that challenged us to further tune our data management strategy to handle them. Our result from this project is a greatly improved understanding of real world data issues, a validated design, and prototype implementations of major components of an eventual production framework. This successfully forms the basis of future development for the Geophysical Monitoring Program data pipeline, which is a critical asset supporting multiple programs. It also positions us very well to deliver valuable metadata management expertise to our sponsors, and has already resulted in an NNSA Office of Defense Nuclear Nonproliferation

  17. INSPIRE: Managing Metadata in a Global Digital Library for High-Energy Physics

    CERN Document Server

    Martin Montull, Javier

    2011-01-01

    Four leading laboratories in the High-Energy Physics (HEP) field are collaborating to roll out the next-generation scientific information portal: INSPIRE. The goal of this project is to replace the popular 40-year-old SPIRES database. INSPIRE already provides access to about 1 million records and includes services such as fulltext search, automatic keyword assignment, ingestion and automatic display of LaTeX, citation analysis, automatic author disambiguation, metadata harvesting, extraction of figures from fulltext and search in figure captions. In order to achieve high quality metadata, both automatic processing and manual curation are needed. The different tools available in the system use modern web technologies to provide the curators with maximum efficiency, while dealing with the MARC standard format. The project is under heavy development in order to provide new features including semantic analysis, crowdsourcing of metadata curation, user tagging, recommender systems, integration of OAIS standards a...

  18. Parallel file system with metadata distributed across partitioned key-value store c

    Science.gov (United States)

    Bent, John M.; Faibish, Sorin; Grider, Gary; Torres, Aaron

    2017-09-19

    Improved techniques are provided for storing metadata associated with a plurality of sub-files associated with a single shared file in a parallel file system. The shared file is generated by a plurality of applications executing on a plurality of compute nodes. A compute node implements a Parallel Log Structured File System (PLFS) library to store at least one portion of the shared file generated by an application executing on the compute node and metadata for the at least one portion of the shared file on one or more object storage servers. The compute node is also configured to implement a partitioned data store for storing a partition of the metadata for the shared file, wherein the partitioned data store communicates with partitioned data stores on other compute nodes using a message passing interface. The partitioned data store can be implemented, for example, using Multidimensional Data Hashing Indexing Middleware (MDHIM).
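
    A sketch of the partitioning idea, hashing per-sub-file index entries across compute-node partitions, follows. The key format and the partition function are illustrative assumptions; MDHIM's real interface and the PLFS index format are not reproduced here.

```python
# Partition shared-file metadata (logical offset -> sub-file location) by key
# hash. In the real system each partition lives on a different compute node
# and is reached via message passing (MPI); here the partitions are local dicts.
import hashlib
from typing import Tuple


class PartitionedIndex:
    def __init__(self, num_partitions: int):
        self.num_partitions = num_partitions
        self.partitions = [dict() for _ in range(num_partitions)]

    def _owner(self, key: str) -> int:
        digest = hashlib.md5(key.encode()).hexdigest()
        return int(digest, 16) % self.num_partitions

    def put(self, shared_file: str, logical_offset: int,
            sub_file: str, physical_offset: int, length: int) -> None:
        key = f"{shared_file}:{logical_offset}"
        self.partitions[self._owner(key)][key] = (sub_file, physical_offset, length)

    def lookup(self, shared_file: str, logical_offset: int) -> Tuple[str, int, int]:
        key = f"{shared_file}:{logical_offset}"
        return self.partitions[self._owner(key)][key]


index = PartitionedIndex(num_partitions=4)
index.put("/scratch/shared.out", 0, "subfile.rank17", 4096, 1024)
print(index.lookup("/scratch/shared.out", 0))
```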

  19. Standardizing metadata and taxonomic identification in metabarcoding studies

    NARCIS (Netherlands)

    Tedersoo, Leho; Ramirez, Kelly; Nilsson, R; Kaljuvee, Aivi; Koljalg, Urmas; Abarenkov, Kessy

    2015-01-01

    High-throughput sequencing-based metabarcoding studies produce vast amounts of ecological data, but a lack of consensus on standardization of metadata and how to refer to the species recovered severely hampers reanalysis and comparisons among studies. Here we propose an automated workflow covering

  20. Integrating Semantic Information in Metadata Descriptions for a Geoscience-wide Resource Inventory.

    Science.gov (United States)

    Zaslavsky, I.; Richard, S. M.; Gupta, A.; Valentine, D.; Whitenack, T.; Ozyurt, I. B.; Grethe, J. S.; Schachne, A.

    2016-12-01

    Integrating semantic information into legacy metadata catalogs is a challenging issue and so far has been mostly done on a limited scale. We present experience of CINERGI (Community Inventory of Earthcube Resources for Geoscience Interoperability), an NSF Earthcube Building Block project, in creating a large cross-disciplinary catalog of geoscience information resources to enable cross-domain discovery. The project developed a pipeline for automatically augmenting resource metadata, in particular generating keywords that describe metadata documents harvested from multiple geoscience information repositories or contributed by geoscientists through various channels including surveys and domain resource inventories. The pipeline examines available metadata descriptions using text parsing, vocabulary management and semantic annotation and graph navigation services of GeoSciGraph. GeoSciGraph, in turn, relies on a large cross-domain ontology of geoscience terms, which bridges several independently developed ontologies or taxonomies including SWEET, ENVO, YAGO, GeoSciML, GCMD, SWO, and CHEBI. The ontology content enables automatic extraction of keywords reflecting science domains, equipment used, geospatial features, measured properties, methods, processes, etc. We specifically focus on issues of cross-domain geoscience ontology creation, resolving several types of semantic conflicts among component ontologies or vocabularies, and constructing and managing facets for improved data discovery and navigation. The ontology and keyword generation rules are iteratively improved as pipeline results are presented to data managers for selective manual curation via a CINERGI Annotator user interface. We present lessons learned from applying CINERGI metadata augmentation pipeline to a number of federal agency and academic data registries, in the context of several use cases that require data discovery and integration across multiple earth science data catalogs of varying quality
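
    A toy sketch of the keyword-augmentation step, scanning free-text metadata for controlled-vocabulary terms and attaching them as faceted keywords, is shown below. The tiny vocabulary is an invented placeholder; the actual pipeline resolves terms against GeoSciGraph and a much larger cross-domain ontology.

```python
# Vocabulary-based keyword augmentation of a metadata record (toy version).
import re

VOCAB = {  # term -> facet (illustrative entries only)
    "seismometer": "equipment",
    "salinity": "measured property",
    "estuary": "geospatial feature",
    "lidar": "method",
}


def augment_keywords(abstract: str) -> list:
    keywords = []
    for term, facet in VOCAB.items():
        if re.search(rf"\b{re.escape(term)}\b", abstract, flags=re.IGNORECASE):
            keywords.append({"keyword": term, "facet": facet})
    return keywords


record = "Continuous salinity and temperature profiles collected near the estuary mouth."
print(augment_keywords(record))
# [{'keyword': 'salinity', 'facet': 'measured property'},
#  {'keyword': 'estuary', 'facet': 'geospatial feature'}]
```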

  1. Revision of IRIS/IDA Seismic Station Metadata

    Science.gov (United States)

    Xu, W.; Davis, P.; Auerbach, D.; Klimczak, E.

    2017-12-01

    Trustworthy data quality assurance has always been one of the goals of seismic network operators and data management centers. This task is considerably complex and evolving due to the huge quantities as well as the rapidly changing characteristics and complexities of seismic data. Published metadata usually reflect instrument response characteristics and their accuracies, which include zero frequency sensitivity for both seismometer and data logger as well as other, frequency-dependent elements. In this work, we are mainly focused on studying the variation of seismometer sensitivity with time for IRIS/IDA seismic recording systems, with a goal of improving the metadata accuracy for the history of the network. There are several ways to measure the accuracy of seismometer sensitivity for the seismic stations in service. An effective practice recently developed is to collocate a reference seismometer in proximity to verify the in-situ sensors' calibration. For those stations with a secondary broadband seismometer, IRIS' MUSTANG metric computation system introduced a transfer function metric to reflect two sensors' gain ratios in the microseism frequency band. In addition, a simulation approach based on M2 tidal measurements has been proposed and proven to be effective. In this work, we compare and analyze the results from three different methods, and conclude that the collocated-sensor method is the most stable and reliable, with the minimum uncertainties in all cases. However, for epochs without both the collocated sensor and secondary seismometer, we rely on the analysis results from the tide method. For the data since 1992 on IDA stations, we computed over 600 revised seismometer sensitivities for all the IRIS/IDA network calibration epochs. Hopefully further revision procedures will help to guarantee that the data is accurately reflected by the metadata of these stations.
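
    A sketch of the collocated-sensor comparison is given below: the relative gain of two instruments recording the same ground motion is estimated from the ratio of their amplitude spectra averaged over the microseism band. The band limits, sampling rate, and windowing are illustrative assumptions, not the exact MUSTANG transfer-function metric.

```python
# Estimate the gain ratio of two collocated traces in the microseism band.
import numpy as np


def gain_ratio(trace_a: np.ndarray, trace_b: np.ndarray, fs: float,
               band: tuple = (0.05, 0.2)) -> float:
    """Ratio of mean spectral amplitudes of trace_a to trace_b within `band` (Hz)."""
    window = np.hanning(len(trace_a))
    spec_a = np.abs(np.fft.rfft(trace_a * window))
    spec_b = np.abs(np.fft.rfft(trace_b * window))
    freqs = np.fft.rfftfreq(len(trace_a), d=1.0 / fs)
    sel = (freqs >= band[0]) & (freqs <= band[1])
    return float(spec_a[sel].mean() / spec_b[sel].mean())


# Synthetic check: trace_b is the same signal scaled by 0.95, so the recovered
# ratio should be close to 1 / 0.95.
fs = 20.0
t = np.arange(0, 3600, 1.0 / fs)
signal = np.sin(2 * np.pi * 0.12 * t) + 0.1 * np.random.randn(t.size)
print(gain_ratio(signal, 0.95 * signal, fs))
```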

  2. Metadata and network API aspects of a framework for storing and retrieving civil infrastructure monitoring data

    Science.gov (United States)

    Wong, John-Michael; Stojadinovic, Bozidar

    2005-05-01

    A framework has been defined for storing and retrieving civil infrastructure monitoring data over a network. The framework consists of two primary components: metadata and network communications. The metadata component provides the descriptions and data definitions necessary for cataloging and searching monitoring data. The communications component provides Java classes for remotely accessing the data. Packages of Enterprise JavaBeans and data handling utility classes are written to use the underlying metadata information to build real-time monitoring applications. The utility of the framework was evaluated using wireless accelerometers on a shaking table earthquake simulation test of a reinforced concrete bridge column. The NEESgrid data and metadata repository services were used as a backend storage implementation. A web interface was created to demonstrate the utility of the data model and provides an example health monitoring application.

  3. The relevance of music information representation metadata from the perspective of expert users

    Directory of Open Access Journals (Sweden)

    Camila Monteiro de Barros

    Full Text Available The general goal of this research was to verify which metadata elements of music information representation are relevant for its retrieval from the perspective of expert music users. Based on a bibliographical research, a comprehensive metadata set of music information representation was developed and transformed into a questionnaire for data collection, which was applied to students and professors of the Graduate Program in Music at the Federal University of Rio Grande do Sul. The results show that the most relevant information for expert music users is related to identification and authorship responsibilities. The respondents from Composition and Interpretative Practice areas agree with these results, while the respondents from Musicology/Ethnomusicology and Music Education areas also consider the metadata related to the historical context of composition relevant.

  4. Examining Influence of Fog and Stratus Clouds on Bishop Pine Water Budgets, Channel Islands, CA

    Science.gov (United States)

    Fischer, D. T.; Still, C. J.; Williams, A. P.

    2004-12-01

    We present the first results from a project whose goal is to advance our basic understanding of the role that fog and persistent stratus clouds play in ecological processes in the California Channel Islands. Our work is focused on a population of Bishop Pines (Pinus muricata) on Santa Cruz Island (SCI), the largest, most topographically complex and most biologically diverse island along the California coast. This is the southernmost population (except for an outlier stand near San Vicente, Baja California), and tree growth appears to be water-limited in such a marginal habitat. We hypothesize that persistent fog and low stratus clouds enhance the water balance of these trees via direct water inputs (fog drip and foliar absorption) and reduced solar heating. To assess these possible effects, we have established weather stations and fog and rain collectors throughout the largest Bishop pine stand on SCI. Initial analysis of weather data shows dramatic differences in solar loading over short distances. We present data on the isotopic content (oxygen-18 and hydrogen-2) of water samples collected from winter 2003 to summer 2004. The samples we collected include fogwater, rainfall, water vapor, soil water, leaf and xylem water, and stream water. We also collected and analyzed leaf biomass and soil organic matter samples at periodic intervals for carbon-13 content. These latter data are evaluated in light of extensive leaf-level ecophysiological data collected in the field and as part of a parallel greenhouse study.

  5. Practical management of heterogeneous neuroimaging metadata by global neuroimaging data repositories.

    Science.gov (United States)

    Neu, Scott C; Crawford, Karen L; Toga, Arthur W

    2012-01-01

    Rapidly evolving neuroimaging techniques are producing unprecedented quantities of digital data at the same time that many research studies are evolving into global, multi-disciplinary collaborations between geographically distributed scientists. While networked computers have made it almost trivial to transmit data across long distances, collecting and analyzing this data requires extensive metadata if the data is to be maximally shared. Though it is typically straightforward to encode text and numerical values into files and send content between different locations, it is often difficult to attach context and implicit assumptions to the content. As the number of and geographic separation between data contributors grows to national and global scales, the heterogeneity of the collected metadata increases and conformance to a single standardization becomes implausible. Neuroimaging data repositories must then not only accumulate data but must also consolidate disparate metadata into an integrated view. In this article, using specific examples from our experiences, we demonstrate how standardization alone cannot achieve full integration of neuroimaging data from multiple heterogeneous sources and why a fundamental change in the architecture of neuroimaging data repositories is needed instead.

  6. ETICS meta-data software editing - from check out to commit operations

    International Nuclear Information System (INIS)

    Begin, M-E; Sancho, G D-A; Ronco, S D; Gentilini, M; Ronchieri, E; Selmi, M

    2008-01-01

    People involved in modular projects need to improve the software build process, planning the correct execution order and detecting circular dependencies. The lack of suitable tools may cause delays in the development, deployment and maintenance of the software. Experience in such projects has shown that the use of version control and build systems alone cannot support the development of the software efficiently, due to a large number of errors, each of which breaks the build process. Common causes of errors are, for example, the adoption of new libraries, library incompatibilities, and the extension of the current project to support new software modules. In this paper, we describe a possible solution implemented in ETICS, an integrated infrastructure for the automated configuration, build and test of Grid and distributed software. ETICS has defined meta-data software abstractions, from which it is possible to download, build and test software projects, setting for instance dependencies, environment variables and properties. Furthermore, the meta-data information is managed by ETICS following the version control system philosophy, through a meta-data repository and the handling of a list of operations, such as check out and commit. All the information related to a specific piece of software is stored in the repository only when it is considered to be correct. By means of this solution, we introduce a degree of flexibility into the ETICS system, allowing users to work according to their needs. Moreover, by introducing this functionality, ETICS behaves like a version control system for the management of the meta-data

  7. Biomedical word sense disambiguation with ontologies and metadata: automation meets accuracy

    Directory of Open Access Journals (Sweden)

    Hakenberg Jörg

    2009-01-01

    Full Text Available Abstract Background Ontology term labels can be ambiguous and have multiple senses. While this is no problem for human annotators, it is a challenge to automated methods, which identify ontology terms in text. Classical approaches to word sense disambiguation use co-occurring words or terms. However, most treat ontologies as simple terminologies, without making use of the ontology structure or the semantic similarity between terms. Another useful source of information for disambiguation are metadata. Here, we systematically compare three approaches to word sense disambiguation, which use ontologies and metadata, respectively. Results The 'Closest Sense' method assumes that the ontology defines multiple senses of the term. It computes the shortest path of co-occurring terms in the document to one of these senses. The 'Term Cooc' method defines a log-odds ratio for co-occurring terms including co-occurrences inferred from the ontology structure. The 'MetaData' approach trains a classifier on metadata. It does not require any ontology, but requires training data, which the other methods do not. To evaluate these approaches we defined a manually curated training corpus of 2600 documents for seven ambiguous terms from the Gene Ontology and MeSH. All approaches over all conditions achieve 80% success rate on average. The 'MetaData' approach performed best with 96%, when trained on high-quality data. Its performance deteriorates as quality of the training data decreases. The 'Term Cooc' approach performs better on Gene Ontology (92% success) than on MeSH (73% success) as MeSH is not a strict is-a/part-of, but rather a loose is-related-to hierarchy. The 'Closest Sense' approach achieves on average 80% success rate. Conclusion Metadata is valuable for disambiguation, but requires high quality training data. Closest Sense requires no training, but a large, consistently modelled ontology, which are two opposing conditions. Term Cooc achieves greater 90
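
    A minimal sketch of a co-occurrence log-odds disambiguator in the spirit of the 'Term Cooc' method is shown below: each candidate sense is scored by summing the log-odds of the context words that co-occur with it in training text. The counts and sense labels are toy values, not the corpus statistics used in the paper.

```python
# Toy 'Term Cooc'-style disambiguation via smoothed log-odds of co-occurrence.
import math
from collections import defaultdict

# cooc[sense][word] = co-occurrence count from some training corpus (toy values)
cooc = {
    "development_GO:0032502": {"embryo": 40, "larva": 25, "cell": 30, "software": 1},
    "development_software":   {"embryo": 1,  "larva": 1,  "cell": 2,  "software": 50},
}
totals = {sense: sum(words.values()) for sense, words in cooc.items()}
grand_total = sum(totals.values())


def log_odds(sense: str, word: str) -> float:
    k = cooc[sense].get(word, 0) + 0.5                              # additive smoothing
    other = sum(cooc[s].get(word, 0) for s in cooc if s != sense) + 0.5
    return math.log((k / totals[sense]) / (other / (grand_total - totals[sense])))


def disambiguate(context_words):
    scores = defaultdict(float)
    for sense in cooc:
        for w in context_words:
            scores[sense] += log_odds(sense, w)
    return max(scores, key=scores.get)


print(disambiguate(["embryo", "cell"]))   # -> development_GO:0032502
```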

  8. Metadata Quality Improvement : DASISH deliverable 5.2A

    NARCIS (Netherlands)

    L'Hours, Hervé; Offersgaard, Lene; Wittenberg, M.; Wloka, Bartholomäus

    2014-01-01

    The aim of this task was to analyse and compare the different metadata strategies of CLARIN, DARIAH and CESSDA, and to identify possibilities for cross-fertilization so as to benefit from each other's solutions where possible. To have a better understanding in which stages of the research lifecycle

  9. Content-aware network storage system supporting metadata retrieval

    Science.gov (United States)

    Liu, Ke; Qin, Leihua; Zhou, Jingli; Nie, Xuejun

    2008-12-01

    Nowadays, content-based network storage has become a hot research topic in academia and industry [1]. In order to solve the problem of hit-rate decline caused by migration and to achieve content-based query, we develop a new content-aware storage system that supports metadata retrieval to improve query performance. Firstly, we extend the SCSI command descriptor block to enable the system to understand self-defined query requests. Secondly, the extracted metadata is encoded in extensible markup language to improve universality. Thirdly, according to the demands of information lifecycle management (ILM), we store the data at different storage levels and use corresponding query strategies to retrieve them. Fourthly, as the file content identifier plays an important role in locating data and calculating block correlation, we use it to fetch files and sort query results through a friendly user interface. Finally, the experiments indicate that the retrieval strategy and sort algorithm have enhanced the retrieval efficiency and precision.

  10. Conditions and configuration metadata for the ATLAS experiment

    International Nuclear Information System (INIS)

    Gallas, E J; Pachal, K E; Tseng, J C L; Albrand, S; Fulachier, J; Lambert, F; Zhang, Q

    2012-01-01

    In the ATLAS experiment, a system called COMA (Conditions/Configuration Metadata for ATLAS), has been developed to make globally important run-level metadata more readily accessible. It is based on a relational database storing directly extracted, refined, reduced, and derived information from system-specific database sources as well as information from non-database sources. This information facilitates a variety of unique dynamic interfaces and provides information to enhance the functionality of other systems. This presentation will give an overview of the components of the COMA system, enumerate its diverse data sources, and give examples of some of the interfaces it facilitates. We list important principles behind COMA schema and interface design, and how features of these principles create coherence and eliminate redundancy among the components of the overall system. In addition, we elucidate how interface logging data has been used to refine COMA content and improve the value and performance of end-user reports and browsers.

  11. Conditions and configuration metadata for the ATLAS experiment

    CERN Document Server

    Gallas, E J; Albrand, S; Fulachier, J; Lambert, F; Pachal, K E; Tseng, J C L; Zhang, Q

    2012-01-01

    In the ATLAS experiment, a system called COMA (Conditions/Configuration Metadata for ATLAS), has been developed to make globally important run-level metadata more readily accessible. It is based on a relational database storing directly extracted, refined, reduced, and derived information from system-specific database sources as well as information from non-database sources. This information facilitates a variety of unique dynamic interfaces and provides information to enhance the functionality of other systems. This presentation will give an overview of the components of the COMA system, enumerate its diverse data sources, and give examples of some of the interfaces it facilitates. We list important principles behind COMA schema and interface design, and how features of these principles create coherence and eliminate redundancy among the components of the overall system. In addition, we elucidate how interface logging data has been used to refine COMA content and improve the value and performance of end-user...

  12. Effects of Age, Colony, and Sex on Mercury Concentrations in California Sea Lions.

    Science.gov (United States)

    McHuron, Elizabeth A; Peterson, Sarah H; Ackerman, Joshua T; Melin, Sharon R; Harris, Jeffrey D; Costa, Daniel P

    2016-01-01

    We measured total mercury (THg) concentrations in California sea lions (Zalophus californianus) and examined how concentrations varied with age class, colony, and sex. Because Hg exposure is primarily via diet, we used nitrogen (δ (15)N) and carbon (δ (13)C) stable isotopes to determine if intraspecific differences in THg concentrations could be explained by feeding ecology. Blood and hair were collected from 21 adult females and 57 juveniles from three colonies in central and southern California (San Nicolas, San Miguel, and Año Nuevo Islands). Total Hg concentrations ranged from 0.01 to 0.31 μg g(-1) wet weight (ww) in blood and 0.74 to 21.00 μg g(-1) dry weight (dw) in hair. Adult females had greater mean THg concentrations than juveniles in blood (0.15 vs. 0.03 μg g(-1) ww) and hair (10.10 vs. 3.25 μg g(-1) dw). Age class differences in THg concentrations did not appear to be driven by trophic level or habitat type because there were no differences in δ (15)N or δ (13)C values between adults and juveniles. Total Hg concentrations in adult females were 54 % (blood) and 24 % (hair) greater in females from San Miguel than females from San Nicolas Island, which may have been because sea lions from the two islands foraged in different areas. For juveniles, we detected some differences in THg concentrations with colony and sex, although these were likely due to sampling effects and not ecological differences. Overall, THg concentrations in California sea lions were within the range documented for other marine mammals and were generally below toxicity benchmarks for fish-eating wildlife.

  13. The physical characteristics of the sediments on and surrounding Dauphin Island, Alabama

    Science.gov (United States)

    Ellis, Alisha M.; Marot, Marci E.; Smith, Christopher G.; Wheaton, Cathryn J.

    2017-06-20

    Scientists from the U.S. Geological Survey, St. Petersburg Coastal and Marine Science Center collected 303 surface sediment samples from Dauphin Island, Alabama, and the surrounding water bodies in August 2015. These sediments were processed to determine physical characteristics such as organic content, bulk density, and grain-size. The environments where the sediments were collected include high and low salt marshes, washover deposits, dunes, beaches, sheltered bays, and open water. Sampling by the USGS was part of a larger study to assess the feasibility and sustainability of proposed restoration efforts for Dauphin Island, Alabama, and assess the island’s resilience to rising sea level and storm events. The data presented in this publication can be used by modelers to attempt validation of hindcast models and create predictive forecast models for both baseline conditions and storms. This study was funded by the National Fish and Wildlife Foundation, via the Gulf Environmental Benefit Fund.This report serves as an archive for sedimentological data derived from surface sediments. Downloadable data are available as Excel spreadsheets, JPEG files, and formal Federal Geographic Data Committee metadata.

  14. Benefits of Record Management For Scientific Writing (Study of Metadata Reception of Zotero Reference Management Software in UIN Malang

    Directory of Open Access Journals (Sweden)

    Moch Fikriansyah Wicaksono

    2018-01-01

    Full Text Available Record creation and management by individuals and organizations is growing rapidly, particularly with the change from print to electronic records and their smallest part, metadata. There is therefore a need to manage record metadata, particularly for students who need to record references and citations. Reference management software (RMS) is software that supports reference management; one such tool is Zotero. The purpose of this article is to describe the benefits of record management for the writing of scientific papers by students, especially in the biology study program at UIN Malik Ibrahim Malang. The research is descriptive with a quantitative approach. To increase the depth of respondents' answers, we collected additional data by conducting interviews. The population is 322 students from the classes of 2012 to 2014, sampled randomly; this group was chosen because the reference management software Zotero was introduced there three years earlier. The number of respondents, 80, was obtained from Yamane's formula. The results showed that 70% agreed that using reference management software saved time and energy in managing digital file metadata, 71% agreed that digital metadata can be quickly stored into RMS, 65% agreed on the ease of storing metadata into the reference management software, 70% agreed that it was easy to configure metadata for quotations and bibliographies, 56.6% agreed that the metadata stored in reference management software could be edited, and 73.8% agreed that using metadata makes it easier to write quotations and bibliographies.
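
    For reference, Yamane's simplified sample-size formula mentioned above is n = N / (1 + N e^2). The margin of error e used by the study is not stated in the abstract; the values below are only example inputs.

```python
# Yamane's simplified sample-size formula; e is the margin of error (assumed values).
def yamane(population: int, margin_of_error: float) -> int:
    return round(population / (1 + population * margin_of_error ** 2))


print(yamane(322, 0.10))    # ~76 with a 10% margin of error
print(yamane(322, 0.097))   # ~80, close to the 80 respondents reported
```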

  15. Multiple Landslide-Hazard Scenarios Modeled for the Oakland-Berkeley Area, Northern California

    Science.gov (United States)

    Pike, Richard J.; Graymer, Russell W.

    2008-01-01

    Plate 3, Susceptibility to Shallow Landsliding Modeled for the Cities of Oakland and Piedmont, Northern California, by Kevin M. Schmidt and Steven Sobieszczyk; Plate 4, Seismic Landslide Hazard Modeled for the Cities of Oakland, Piedmont, and Berkeley, Northern California, by Scott B. Miles and David K. Keefer. The relative hazard for each of several landslide scenarios is presented as a geospatial database. This publication includes ARC/INFO (Environmental Systems Research Institute, http://www.esri.com) version 8.1.2 grids and associated tables and four text files of FGDC-compliant metadata for each grid.

  16. Local Extinction and Unintentional Rewilding of Bighorn Sheep (Ovis canadensis) on a Desert Island

    Science.gov (United States)

    Wilder, Benjamin T.; Betancourt, Julio L.; Epps, Clinton W.; Crowhurst, Rachel S.; Mead, Jim I.; Ezcurra, Exequiel

    2014-01-01

    Bighorn sheep (Ovis canadensis) were not known to live on Tiburón Island, the largest island in the Gulf of California and Mexico, prior to the surprisingly successful introduction of 20 individuals as a conservation measure in 1975. Today, a stable island population of ∼500 sheep supports limited big game hunting and restocking of depleted areas on the Mexican mainland. We discovered fossil dung morphologically similar to that of bighorn sheep in a dung mat deposit from Mojet Cave, in the mountains of Tiburón Island. To determine the origin of this cave deposit we compared pellet shape to fecal pellets of other large mammals, and extracted DNA to sequence mitochondrial DNA fragments at the 12S ribosomal RNA and control regions. The fossil dung was 14C-dated to 1476–1632 calendar years before present and was confirmed as bighorn sheep by morphological and ancient DNA (aDNA) analysis. 12S sequences closely or exactly matched known bighorn sheep sequences; control region sequences exactly matched a haplotype described in desert bighorn sheep populations in southwest Arizona and southern California and showed subtle differentiation from the extant Tiburón population. Native desert bighorn sheep previously colonized this land-bridge island, most likely during the Pleistocene, when lower sea levels connected Tiburón to the mainland. They were extirpated sometime in the last ∼1500 years, probably due to inherent dynamics of isolated populations, prolonged drought, and (or) human overkill. The reintroduced population is vulnerable to similar extinction risks. The discovery presented here refutes conventional wisdom that bighorn sheep are not native to Tiburón Island, and establishes its recent introduction as an example of unintentional rewilding, defined here as the introduction of a species without knowledge that it was once native and has since gone locally extinct. PMID:24646515

  17. Local extinction and unintentional rewilding of bighorn sheep (Ovis canadensis) on a desert island.

    Directory of Open Access Journals (Sweden)

    Benjamin T Wilder

    Full Text Available Bighorn sheep (Ovis canadensis) were not known to live on Tiburón Island, the largest island in the Gulf of California and Mexico, prior to the surprisingly successful introduction of 20 individuals as a conservation measure in 1975. Today, a stable island population of ∼500 sheep supports limited big game hunting and restocking of depleted areas on the Mexican mainland. We discovered fossil dung morphologically similar to that of bighorn sheep in a dung mat deposit from Mojet Cave, in the mountains of Tiburón Island. To determine the origin of this cave deposit we compared pellet shape to fecal pellets of other large mammals, and extracted DNA to sequence mitochondrial DNA fragments at the 12S ribosomal RNA and control regions. The fossil dung was 14C-dated to 1476-1632 calendar years before present and was confirmed as bighorn sheep by morphological and ancient DNA (aDNA) analysis. 12S sequences closely or exactly matched known bighorn sheep sequences; control region sequences exactly matched a haplotype described in desert bighorn sheep populations in southwest Arizona and southern California and showed subtle differentiation from the extant Tiburón population. Native desert bighorn sheep previously colonized this land-bridge island, most likely during the Pleistocene, when lower sea levels connected Tiburón to the mainland. They were extirpated sometime in the last ∼1500 years, probably due to inherent dynamics of isolated populations, prolonged drought, and (or) human overkill. The reintroduced population is vulnerable to similar extinction risks. The discovery presented here refutes conventional wisdom that bighorn sheep are not native to Tiburón Island, and establishes its recent introduction as an example of unintentional rewilding, defined here as the introduction of a species without knowledge that it was once native and has since gone locally extinct.

  18. Local extinction and unintentional rewilding of bighorn sheep (Ovis canadensis) on a desert island

    Science.gov (United States)

    Wilder, Benjamin T.; Betancourt, Julio L.; Epps, Clinton W.; Crowhurst, Rachel S.; Mead, Jim I.; Ezcurra, Exequiel

    2014-01-01

    Bighorn sheep (Ovis canadensis) were not known to live on Tiburón Island, the largest island in the Gulf of California and Mexico, prior to the surprisingly successful introduction of 20 individuals as a conservation measure in 1975. Today, a stable island population of ~500 sheep supports limited big game hunting and restocking of depleted areas on the Mexican mainland. We discovered fossil dung morphologically similar to that of bighorn sheep in a dung mat deposit from Mojet Cave, in the mountains of Tiburón Island. To determine the origin of this cave deposit we compared pellet shape to fecal pellets of other large mammals, and extracted DNA to sequence mitochondrial DNA fragments at the 12S ribosomal RNA and control regions. The fossil dung was 14C-dated to 1476–1632 calendar years before present and was confirmed as bighorn sheep by morphological and ancient DNA (aDNA) analysis. 12S sequences closely or exactly matched known bighorn sheep sequences; control region sequences exactly matched a haplotype described in desert bighorn sheep populations in southwest Arizona and southern California and showed subtle differentiation from the extant Tiburón population. Native desert bighorn sheep previously colonized this land-bridge island, most likely during the Pleistocene, when lower sea levels connected Tiburón to the mainland. They were extirpated sometime in the last ~1500 years, probably due to inherent dynamics of isolated populations, prolonged drought, and (or) human overkill. The reintroduced population is vulnerable to similar extinction risks. The discovery presented here refutes conventional wisdom that bighorn sheep are not native to Tiburón Island, and establishes its recent introduction as an example of unintentional rewilding, defined here as the introduction of a species without knowledge that it was once native and has since gone locally extinct.

  19. CCR+: Metadata Based Extended Personal Health Record Data Model Interoperable with the ASTM CCR Standard.

    Science.gov (United States)

    Park, Yu Rang; Yoon, Young Jo; Jang, Tae Hun; Seo, Hwa Jeong; Kim, Ju Han

    2014-01-01

    Extension of the standard model while retaining compliance with it is a challenging issue because there is currently no method for semantically or syntactically verifying an extended data model. A metadata-based extended model, named CCR+, was designed and implemented to achieve interoperability between standard and extended models. Furthermore, a multilayered validation method was devised to validate the standard and extended models. The American Society for Testing and Materials (ASTM) Continuity of Care Record (CCR) standard was selected to evaluate the CCR+ model; two CCR and one CCR+ XML files were evaluated. In total, 188 metadata elements were extracted from the ASTM CCR standard; these metadata are semantically interconnected and registered in the metadata registry. An extended-data-model-specific validation file was generated from these metadata. This file can be used in a smartphone application (Health Avatar CCR+) as a part of a multilayered validation. The new CCR+ model was successfully evaluated via a patient-centric exchange scenario involving multiple hospitals, with the results supporting both syntactic and semantic interoperability between the standard CCR and the extended CCR+ model. A feasible method for delivering an extended model that complies with the standard model is presented herein. There is a great need to extend static standard models such as the ASTM CCR in various domains: the methods presented here represent an important reference for achieving interoperability between standard and extended models.
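
    One layer of a registry-driven validation can be sketched as follows: an XML instance is checked against rules exported from a metadata registry (required elements, simple datatype checks). The element names and rules below are hypothetical placeholders, not the 188 registered items of the actual CCR+ metadata.

```python
# Validate an XML instance against simple rules derived from a metadata registry.
import xml.etree.ElementTree as ET

REGISTRY_RULES = {
    "Patient/DateOfBirth": {"required": True, "datatype": "date"},
    "Medication/Dose":     {"required": False, "datatype": "decimal"},
}


def validate(xml_text: str) -> list:
    root = ET.fromstring(xml_text)
    errors = []
    for path, rule in REGISTRY_RULES.items():
        node = root.find(path)
        if node is None:
            if rule["required"]:
                errors.append(f"missing required element: {path}")
            continue
        if rule["datatype"] == "date" and len(node.text or "") != 10:
            errors.append(f"{path}: expected a YYYY-MM-DD date")
    return errors


doc = "<Record><Patient><DateOfBirth>1980-02-29</DateOfBirth></Patient></Record>"
print(validate(doc))   # [] -> this layer passes
```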

  20. Comparative biology of Uncinaria spp. in the California sea lion (Zalophus californianus) and the northern fur seal (Callorhinus ursinus) in California.

    Science.gov (United States)

    Lyons, E T; DeLong, R L; Gulland, F M; Melin, S R; Tolliver, S C; Spraker, T R

    2000-12-01

    Studies on several aspects of the life cycle of hookworms (Uncinaria spp.) in the California sea lion (Zalophus californianus) and northern fur seal (Callorhinus ursinus) were conducted on material collected on San Miguel Island (SMI), California and at The Marine Mammal Center, Sausalito, California in 1997, 1998, and 1999. Examination of Z. californianus intestines for adult hookworms and feces for eggs revealed that longevity of these parasites in pups is about 6-8 mo, and infections are probably not present in older sea lions. Parasitic third-stage larvae (L3) were recovered from the ventral abdominal tissue of Z. californianus, suggesting transmammary transmission. Callorhinus ursinus pups had no hookworm eggs in their feces or adult worms (except for 1 probable contaminant) in their intestines in the fall and early winter, revealing that adult Uncinaria spp. are spontaneously lost at <3 mo of age of the pups. Sand samples from rookeries, used by both Z. californianus and C. ursinus, on SMI were negative for free-living, L3 in summer months but positive in fall and winter months, indicating seasonality occurred.

  1. The evolution of chondrichthyan research through a metadata ...

    African Journals Online (AJOL)

    We compiled metadata from Sharks Down Under (1991) and the two Sharks International conferences (2010 and 2014), spanning 23 years. Analysis of the data highlighted taxonomic biases towards charismatic species, a declining number of studies in fundamental science such as those related to taxonomy and basic life ...

  2. Investigations of peritoneal and intestinal infections of adult hookworms (Uncinaria spp.) in northern fur seal (Callorhinus ursinus) and California sea lion (Zalophus californianus) pups on San Miguel Island, California (2003).

    Science.gov (United States)

    Lyons, Eugene T; Delong, R L; Nadler, S A; Laake, J L; Orr, A J; Delong, B L; Pagan, C

    2011-09-01

    The peritoneal cavity (PNC) and intestine of northern fur seal (Callorhinus ursinus) pups and California sea lion (Zalophus californianus) pups that died in late July and early August, 2003, on San Miguel Island, California, were examined for hookworms. Prevalence and morphometric studies were done with the hookworms in addition to molecular characterization. Based on this and previous molecular studies, hookworms from fur seals are designated as Uncinaria lucasi and the species from sea lions as Uncinaria species A. Adult hookworms were found in the PNC of 35 of 57 (61.4%) fur seal pups and of 13 of 104 (12.5%) sea lion pups. The number of hookworms located in the PNC ranged from 1 to 33 (median = 3) for the infected fur seal pups and 1 to 16 (median = 2) for the infected sea lion pups. In addition to the PNC, intestines of 43 fur seal and 32 sea lion pups were examined. All of these pups were positive for adult hookworms. The worms were counted from all but one of the sea lion pups. Numbers of these parasites in the intestine varied from 3 to 2,344 (median = 931) for the fur seal pups and 39 to 2,766 (median = 643) for the sea lion pups. Sea lion pups with peritoneal infections had higher intensity infections in the intestines than did pups without peritoneal infections, lending some support for the hypothesis that peritoneal infections result from high-intensity infections of adult worms. There was no difference in intestinal infection intensities between fur seal pups with and without peritoneal infections. Female adult hookworms in the intestines of both host species were significantly larger than males, and sea lion hookworms were larger than those in fur seals. Worms in the intestine also were larger than worms found in the PNC. Gene sequencing and restriction fragment length polymorphism (RFLP) analysis of polymerase chain reaction (PCR)-amplified internal transcribed spacer (ITS) ribosomal DNA were used to diagnose the species of 172 hookworms recovered from the PNC and intestine of 18 C. ursinus and seven Z. californianus hosts

  3. Scaling the walls of discovery: using semantic metadata for integrative problem solving.

    Science.gov (United States)

    Manning, Maurice; Aggarwal, Amit; Gao, Kevin; Tucker-Kellogg, Greg

    2009-03-01

    Current data integration approaches by bioinformaticians frequently involve extracting data from a wide variety of public and private data repositories, each with a unique vocabulary and schema, via scripts. These separate data sets must then be normalized through the tedious and lengthy process of resolving naming differences and collecting information into a single view. Attempts to consolidate such diverse data using data warehouses or federated queries add significant complexity and have shown limitations in flexibility. The alternative of complete semantic integration of data requires a massive, sustained effort in mapping data types and maintaining ontologies. We focused instead on creating a data architecture that leverages semantic mapping of experimental metadata, to support the rapid prototyping of scientific discovery applications with the twin goals of reducing architectural complexity while still leveraging semantic technologies to provide flexibility, efficiency and more fully characterized data relationships. A metadata ontology was developed to describe our discovery process. A metadata repository was then created by mapping metadata from existing data sources into this ontology, generating RDF triples to describe the entities. Finally an interface to the repository was designed which provided not only search and browse capabilities but complex query templates that aggregate data from both RDF and RDBMS sources. We describe how this approach (i) allows scientists to discover and link relevant data across diverse data sources and (ii) provides a platform for development of integrative informatics applications.
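
    A sketch of generating RDF triples that describe an experiment's metadata, in the spirit of the repository described above, is given below. It requires the rdflib package; the namespace, predicate names, and values are invented placeholders, not the discovery-process ontology used by the authors.

```python
# Build a small RDF graph of experimental metadata and serialize it as Turtle.
from rdflib import Graph, Literal, Namespace, RDF, URIRef

EX = Namespace("http://example.org/discovery#")  # hypothetical namespace

g = Graph()
g.bind("ex", EX)

experiment = URIRef("http://example.org/experiment/42")
g.add((experiment, RDF.type, EX.Experiment))
g.add((experiment, EX.assayType, Literal("RNA-seq")))
g.add((experiment, EX.organism, Literal("Homo sapiens")))
g.add((experiment, EX.derivedFrom, URIRef("http://example.org/sample/7")))

print(g.serialize(format="turtle"))
```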

  4. Combined use of semantics and metadata to manage Research Data Life Cycle in Environmental Sciences

    Science.gov (United States)

    Aguilar Gómez, Fernando; de Lucas, Jesús Marco; Pertinez, Esther; Palacio, Aida

    2017-04-01

    The use of metadata to contextualize datasets is widespread in the Earth System Sciences. There are initiatives and tools available to help data managers choose the metadata standard that best fits their use case, like the DCC Metadata Directory (http://www.dcc.ac.uk/resources/metadata-standards). In our use case, we have been gathering physical, chemical and biological data from a water reservoir since 2010. A good metadata definition is crucial not only to contextualize our own data but also to integrate datasets from other sources like satellites or meteorological agencies. That is why we have chosen EML (Ecological Metadata Language), which integrates many different elements to define a dataset, including the project context, instrumentation and parameter definitions, and the software used to process the data, provide quality controls and include the publication details. Those metadata elements can help both humans and machines to understand and process the dataset. However, the use of metadata alone is not enough to fully support the data life cycle, from the Data Management Plan definition to Publication and Re-use. To do so, we need to define not only metadata and attributes but also the relationships between them, so semantics are needed. Ontologies, being a knowledge representation, can help define the elements of a research data life cycle, including the DMP, datasets, software, etc. They can also define how the different elements are related to each other and how they interact. The first advantage of developing an ontology of a knowledge domain is that it provides a common vocabulary hierarchy (i.e. a conceptual schema) that can be used and standardized by all the agents interested in the domain (either humans or machines). This way of using ontologies is one of the bases of the Semantic Web, where ontologies are set to play a key role in establishing a common terminology between agents. To develop an ontology we are using a graphical tool

  5. Climate change and the northern elephant seal (Mirounga angustirostris) population in Baja California, Mexico.

    Directory of Open Access Journals (Sweden)

    María C García-Aguilar

    Full Text Available The Earth's climate is warming, especially in the mid- and high latitudes of the Northern Hemisphere. The northern elephant seal (Mirounga angustirostris) breeds and hauls out on islands and the mainland of Baja California, Mexico, and California, U.S.A. At the beginning of the 21st century, numbers of elephant seals in California are increasing, but the status of Baja California populations is unknown, and some data suggest they may be decreasing. We hypothesize that the elephant seal population of Baja California is experiencing a decline because the animals are not migrating as far south due to warming sea and air temperatures. Here we assessed population trends of the Baja California population, and climate change in the region. The numbers of northern elephant seals in Baja California colonies have been decreasing since the 1990s, and both the surface waters off Baja California and the local air temperatures have warmed during the last three decades. We propose that declining population sizes may be attributable to decreased migration towards the southern portions of the range in response to the observed temperature increases. Further research is needed to confirm our hypothesis; however, if true, it would imply that elephant seal colonies of Baja California and California are not demographically isolated, which would pose challenges to environmental and management policies between Mexico and the United States.

  6. A DDI3.2 Style for Data and Metadata Extracted from SAS

    OpenAIRE

    Hoyle, Larry

    2014-01-01

    Earlier work by Wackerow and Hoyle has shown that DDI can be a useful medium for interchange of data and metadata among statistical packages. DDI 3.2 has new features which enhance this capability, such as the ability to use UserAttributePairs to represent custom attributes. The metadata from a statistical package can also be represented in DDI3.2 using several different styles – embedded in a StudyUnit, in a Resource Package, or in a set of Fragments. The DDI Documentation for a Fragment sta...

  7. A Flexible Online Metadata Editing and Management System

    Energy Technology Data Exchange (ETDEWEB)

    Aguilar, Raul [Arizona State University; Pan, Jerry Yun [ORNL; Gries, Corinna [Arizona State University; Inigo, Gil San [University of New Mexico, Albuquerque; Palanisamy, Giri [ORNL

    2010-01-01

    A metadata editing and management system is being developed employing state of the art XML technologies. A modular and distributed design was chosen for scalability, flexibility, options for customizations, and the possibility to add more functionality at a later stage. The system consists of a desktop design tool or schema walker used to generate code for the actual online editor, a native XML database, and an online user access management application. The design tool is a Java Swing application that reads an XML schema, provides the designer with options to combine input fields into online forms and give the fields user friendly tags. Based on design decisions, the tool generates code for the online metadata editor. The code generated is an implementation of the XForms standard using the Orbeon Framework. The design tool fulfills two requirements: First, data entry forms based on one schema may be customized at design time and second data entry applications may be generated for any valid XML schema without relying on custom information in the schema. However, the customized information generated at design time is saved in a configuration file which may be re-used and changed again in the design tool. Future developments will add functionality to the design tool to integrate help text, tool tips, project specific keyword lists, and thesaurus services. Additional styling of the finished editor is accomplished via cascading style sheets which may be further customized and different look-and-feels may be accumulated through the community process. The customized editor produces XML files in compliance with the original schema, however, data from the current page is saved into a native XML database whenever the user moves to the next screen or pushes the save button independently of validity. Currently the system uses the open source XML database eXist for storage and management, which comes with third party online and desktop management tools. However, access to

  8. Using a linked data approach to aid development of a metadata portal to support Marine Strategy Framework Directive (MSFD) implementation

    Science.gov (United States)

    Wood, Chris

    2016-04-01

    Under the Marine Strategy Framework Directive (MSFD), EU Member States are mandated to achieve or maintain 'Good Environmental Status' (GES) in their marine areas by 2020, through a series of Programmes of Measures (PoMs). The Celtic Seas Partnership (CSP), an EU LIFE+ project, aims to support policy makers, special-interest groups, users of the marine environment, and other interested stakeholders on MSFD implementation in the Celtic Seas geographical area. As part of this support, a metadata portal has been built to provide a signposting service to datasets that are relevant to MSFD within the Celtic Seas. To ensure that the metadata have the widest possible reach, a linked data approach was employed to construct the database. Although the metadata are stored in a traditional RDBMS, they are exposed as linked data via the D2RQ platform, allowing virtual RDF graphs to be generated. SPARQL queries can be executed against the endpoint, allowing any user to manipulate the metadata. D2RQ's mapping language, based on Turtle, was used to map a wide range of relevant ontologies to the metadata (e.g., the Provenance Ontology (prov-o), Ocean Data Ontology (odo), Dublin Core Elements and Terms (dc & dcterms), Friend of a Friend (foaf), and geospatial ontologies (geo)), allowing users to browse the metadata either via SPARQL queries or by using D2RQ's HTML interface. The metadata were further enhanced by mapping relevant parameters to the NERC Vocabulary Server, itself built on a SPARQL endpoint. Additionally, a custom web front-end was built to enable users to browse the metadata and express queries through an intuitive graphical user interface that requires no prior knowledge of SPARQL. As well as providing means to browse the data via MSFD-related parameters (Descriptor, Criteria, and Indicator), the metadata records include the dataset's country of origin, the list of organisations involved in the management of the data, and links to any relevant INSPIRE
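
    As a rough illustration of the linked-data access pattern described above, the sketch below queries a SPARQL endpoint for dataset titles and publishers using the Python SPARQLWrapper library. The endpoint URL and the Dublin Core properties queried are illustrative assumptions, not the actual CSP portal configuration.

```python
# Minimal sketch of querying a D2RQ-style SPARQL endpoint for dataset metadata.
# The endpoint URL and the dcterms properties are illustrative assumptions.
from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = "http://example.org/sparql"  # hypothetical endpoint

query = """
PREFIX dcterms: <http://purl.org/dc/terms/>
SELECT ?dataset ?title ?publisher
WHERE {
  ?dataset dcterms:title ?title ;
           dcterms:publisher ?publisher .
}
LIMIT 10
"""

sparql = SPARQLWrapper(ENDPOINT)
sparql.setQuery(query)
sparql.setReturnFormat(JSON)
results = sparql.query().convert()

# Print each dataset URI alongside its title.
for row in results["results"]["bindings"]:
    print(row["dataset"]["value"], "-", row["title"]["value"])
```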

  9. SWB Groundwater Recharge Analysis, Catalina Island, California: Assessing Spatial and Temporal Recharge Patterns Within a Mediterranean Climate Zone

    Science.gov (United States)

    Harlow, J.

    2017-12-01

    Groundwater recharge quantification is a key parameter for sustainable groundwater management. Many recharge quantification techniques have been devised, each with advantages and disadvantages. A free, GIS-based recharge quantification tool - the Soil Water Balance (SWB) model - was developed by the USGS to produce fine-tuned recharge constraints in watersheds and illuminate spatial and temporal dynamics of recharge. The subject of this research is to examine SWB within a Mediterranean climate zone, focusing on Catalina Island, California. This project relied on publicly available online resources, with the exception of the geospatial processing software, ArcGIS. Daily climate station precipitation and temperature data were obtained from the Desert Research Institute for the years 2008-2014. Precipitation interpolations were performed with ArcGIS using the Natural Neighbor method. The USGS National Map Viewer (NMV) website provided a 30-meter DEM - used to interpolate high and low temperature ASCII grids using the Temperature Lapse Rate (TLR) method, to construct a D-8 flow direction grid for downhill redirection of soil-moisture-saturated runoff toward non-saturated cells, and for aesthetic map creation. NMV also provided a modified Anderson land cover classification raster. The US Department of Agriculture Natural Resources Conservation Service (NRCS) Web Soil Survey website provided shapefiles of soil water capacity and hydrologic soil groups. The Hargreaves and Samani method was implemented to determine evapotranspiration rates. The resulting SWB output data, in the form of ASCII grids, are easily added to ArcGIS for quick visualization and data analysis (Figure 1). Calculated average recharge for 2008-2014 was 3537 inches/year, or 0.0174 acre feet/year. Recharge was 10.2% of the island's gross precipitation. The spatial distribution of the most significant recharge is in hotspots which dominate the residential hills above Avalon, followed by grassy/unvegetated areas
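
    For readers unfamiliar with the evapotranspiration step mentioned above, a minimal sketch of the Hargreaves and Samani (1985) formulation is given below. The input values are invented for illustration; extraterrestrial radiation (Ra) would normally be derived from latitude and day of year rather than supplied directly.

```python
import math

def hargreaves_samani_et0(tmax_c, tmin_c, ra_mm_per_day):
    """Reference evapotranspiration (mm/day) via Hargreaves-Samani (1985).

    ra_mm_per_day is extraterrestrial radiation expressed as equivalent
    evaporation; it depends on latitude and day of year and is assumed
    to be precomputed here.
    """
    tmean = (tmax_c + tmin_c) / 2.0
    return 0.0023 * ra_mm_per_day * (tmean + 17.8) * math.sqrt(max(tmax_c - tmin_c, 0.0))

# Example: a warm, dry summer day on Catalina Island (illustrative values only).
print(round(hargreaves_samani_et0(tmax_c=28.0, tmin_c=17.0, ra_mm_per_day=16.5), 2), "mm/day")
```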

  10. Where was the 1898 Mare Island Earthquake? Insights from the 2014 South Napa Earthquake

    Science.gov (United States)

    Hough, S. E.

    2014-12-01

    The 2014 South Napa earthquake provides an opportunity to reconsider the Mare Island earthquake of 31 March 1898, which caused severe damage to buildings at a Navy yard on the island. Revisiting archival accounts of the 1898 earthquake, I estimate a lower intensity magnitude, 5.8, than the value in the current Uniform California Earthquake Rupture Forecast (UCERF) catalog (6.4). However, I note that intensity magnitude can differ from Mw by upwards of half a unit depending on stress drop, which for a historical earthquake is unknowable. In the aftermath of the 2014 earthquake, there has been speculation that apparently severe effects on Mare Island in 1898 were due to the vulnerability of local structures. No surface rupture has ever been identified from the 1898 event, which is commonly associated with the Hayward-Rodgers Creek fault system, some 10 km west of Mare Island (e.g., Parsons et al., 2003). Reconsideration of detailed archival accounts of the 1898 earthquake, together with a comparison of the intensity distributions for the two earthquakes, points to genuinely severe, likely near-field ground motions on Mare Island. The 2014 earthquake did cause significant damage to older brick buildings on Mare Island, but the level of damage does not match the severity of documented damage in 1898. The high-intensity fields of the two earthquakes are, moreover, spatially shifted, with the centroid of the 2014 distribution near the town of Napa and that of the 1898 distribution near Mare Island, east of the Hayward-Rodgers Creek system. I conclude that the 1898 Mare Island earthquake was centered on or near Mare Island, possibly involving rupture of one or both strands of the Franklin fault, a low-slip-rate fault sub-parallel to the Rodgers Creek fault to the west and the West Napa fault to the east. I estimate Mw 5.8 assuming an average stress drop; data are also consistent with Mw 6.4 if stress drop was a factor of ≈3 lower than average for California earthquakes. I

  11. New Tools to Document and Manage Data/Metadata: Example NGEE Arctic and UrbIS

    Science.gov (United States)

    Crow, M. C.; Devarakonda, R.; Hook, L.; Killeffer, T.; Krassovski, M.; Boden, T.; King, A. W.; Wullschleger, S. D.

    2016-12-01

    Tools used for documenting, archiving, cataloging, and searching data are critical pieces of informatics. This discussion describes tools being used in two different projects at Oak Ridge National Laboratory (ORNL), but at different stages of the data lifecycle. The Metadata Entry and Data Search Tool is being used for the documentation, archival, and data discovery stages for the Next Generation Ecosystem Experiment - Arctic (NGEE Arctic) project while the Urban Information Systems (UrbIS) Data Catalog is being used to support indexing, cataloging, and searching. The NGEE Arctic Online Metadata Entry Tool [1] provides a method by which researchers can upload their data and provide original metadata with each upload. The tool is built upon a Java SPRING framework to parse user input into, and from, XML output. Many aspects of the tool require use of a relational database including encrypted user-login, auto-fill functionality for predefined sites and plots, and file reference storage and sorting. The UrbIS Data Catalog is a data discovery tool supported by the Mercury cataloging framework [2] which aims to compile urban environmental data from around the world into one location, and be searchable via a user-friendly interface. Each data record conveniently displays its title, source, and date range, and features: (1) a button for a quick view of the metadata, (2) a direct link to the data and, for some data sets, (3) a button for visualizing the data. The search box incorporates autocomplete capabilities for search terms and sorted keyword filters are available on the side of the page, including a map for searching by area. References: [1] Devarakonda, Ranjeet, et al. "Use of a metadata documentation and search tool for large data volumes: The NGEE arctic example." Big Data (Big Data), 2015 IEEE International Conference on. IEEE, 2015. [2] Devarakonda, R., Palanisamy, G., Wilson, B. E., & Green, J. M. (2010). Mercury: reusable metadata management, data discovery
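
    A simplified sketch of the metadata-entry idea (a few user-supplied fields serialized to an XML record) is shown below using Python's standard library; the element names are generic placeholders rather than the actual NGEE Arctic schema.

```python
import xml.etree.ElementTree as ET

def build_record(fields):
    """Serialize a dictionary of metadata fields into a flat XML record."""
    record = ET.Element("metadata")
    for name, value in fields.items():
        ET.SubElement(record, name).text = value
    return ET.tostring(record, encoding="unicode")

# Hypothetical entry; field names and values are invented for illustration.
print(build_record({
    "title": "Soil temperature, intensive study site, 2015",
    "site": "Site-A",
    "plot": "A-3",
    "contact": "researcher@example.org",
}))
```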

  12. Obsidian hydration rate for the Klamath Basin of California and Oregon.

    Science.gov (United States)

    Johnson, L

    1969-09-26

    A hydration rate for obsidian of 3.5(4) microns squared per 1000 radiocarbon years has been established at the Nightfire Island archeological site in northern California and provides a means to date other prehistoric Klamath Basin sites. The new rate follows the form of the hydration equation formulated by Friedman and helps to refute claims made for other hydration equations.
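
    The Friedman-style hydration relation referred to above takes the form x² = k·t, with rim thickness squared proportional to age. A minimal sketch is given below; the rate constant used is an assumed reading of the value reported in the abstract and should be checked against the original paper.

```python
def obsidian_age_years(rim_thickness_um, rate_um2_per_kyr=3.54):
    """Estimate age in radiocarbon years from a hydration rim thickness.

    Uses the diffusion form x**2 = k * t. The default rate (um^2 per 1000
    years) is an assumption for illustration, not a vetted Klamath Basin value.
    """
    return (rim_thickness_um ** 2) / rate_um2_per_kyr * 1000.0

# A 4.2-micron rim would date to roughly 5,000 radiocarbon years at this rate.
print(round(obsidian_age_years(4.2)))
```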

  13. MetaRNA-Seq: An Interactive Tool to Browse and Annotate Metadata from RNA-Seq Studies

    Directory of Open Access Journals (Sweden)

    Pankaj Kumar

    2015-01-01

    Full Text Available The number of RNA-Seq studies has grown in recent years. The design of RNA-Seq studies varies from very simple (e.g., two-condition case-control) to very complicated (e.g., time series involving multiple samples at each time point with separate drug treatments). Most of these publicly available RNA-Seq studies are deposited in NCBI databases, but their metadata are scattered throughout four different databases: Sequence Read Archive (SRA), BioSample, BioProject, and Gene Expression Omnibus (GEO). Although the NCBI web interface is able to provide all of the metadata information, it often requires significant effort to retrieve study- or project-level information by traversing through multiple hyperlinks and going to another page. Moreover, project- and study-level metadata lack manual or automatic curation by categories, such as disease type, time series, case-control, or replicate type, which are vital to comprehending any RNA-Seq study. Here we describe “MetaRNA-Seq,” a new tool for interactively browsing, searching, and annotating RNA-Seq metadata with the capability of semiautomatic curation at the study level.
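
    As a hedged example of pulling such scattered metadata programmatically, the sketch below uses Biopython's Entrez utilities to search SRA and print brief experiment summaries; the search term is illustrative, and the summary fields actually returned by SRA may differ from those assumed here.

```python
# Sketch of retrieving RNA-Seq study metadata from NCBI SRA via E-utilities.
# The email address and search term are placeholders.
from Bio import Entrez

Entrez.email = "you@example.org"  # NCBI requires a contact address

search = Entrez.read(
    Entrez.esearch(db="sra", term="RNA-Seq[Strategy] AND human[Organism]", retmax=5)
)
for uid in search["IdList"]:
    summary = Entrez.read(Entrez.esummary(db="sra", id=uid))
    # "ExpXml" holds an XML snippet describing the experiment (field name assumed).
    print(uid, summary[0].get("ExpXml", "")[:120])
```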

  14. Proactive conservation management of an island-endemic bird species in the face of global change

    Science.gov (United States)

    Morrison, S.A.; Sillett, T. Scott; Ghalambor, Cameron K.; Fitzpatrick, J.W.; Graber, D.M.; Bakker, V.J.; Bowman, R.; Collins, C.T.; Collins, P.W.; Delaney, K.S.; Doak, D.F.; Koenig, Walter D.; Laughrin, L.; Lieberman, A.A.; Marzluff, J.M.; Reynolds, M.D.; Scott, J.M.; Stallcup, J.A.; Vickers, W.; Boyce, W.M.

    2011-01-01

    Biodiversity conservation in an era of global change and scarce funding benefits from approaches that simultaneously solve multiple problems. Here, we discuss conservation management of the island scrub-jay (Aphelocoma insularis), the only island-endemic passerine species in the continental United States, which is currently restricted to 250-square-kilometer Santa Cruz Island, California. Although the species is not listed as threatened by state or federal agencies, its viability is nonetheless threatened on multiple fronts. We discuss management actions that could reduce extinction risk, including vaccination, captive propagation, biosecurity measures, and establishing a second free-living population on a neighboring island. Establishing a second population on Santa Rosa Island may have the added benefit of accelerating the restoration and enhancing the resilience of that island's currently highly degraded ecosystem. The proactive management framework for island scrub-jays presented here illustrates how strategies for species protection, ecosystem restoration, and adaptation to and mitigation of climate change can converge into an integrated solution. © 2011 by American Institute of Biological Sciences. All rights reserved.

  15. Semantic web technologies for video surveillance metadata

    OpenAIRE

    Poppe, Chris; Martens, Gaëtan; De Potter, Pieterjan; Van de Walle, Rik

    2012-01-01

    Video surveillance systems are growing in size and complexity. Such systems typically consist of integrated modules of different vendors to cope with the increasing demands on network and storage capacity, intelligent video analytics, picture quality, and enhanced visual interfaces. Within a surveillance system, relevant information (like technical details on the video sequences, or analysis results of the monitored environment) is described using metadata standards. However, different module...

  16. Overfishing Drivers and Opportunities for Recovery in Small-Scale Fisheries of the Midriff Islands Region, Gulf of California, Mexico: the Roles of Land and Sea Institutions in Fisheries Sustainability

    Directory of Open Access Journals (Sweden)

    Ana Cinti

    2014-03-01

    Full Text Available Institutions play an important role in shaping individual incentives in complex social-ecological systems, by encouraging or discouraging resource overuse. In the Gulf of California, Mexico, there is widespread evidence of declines in small-scale fishery stocks, largely attributed to policy failures. We investigated formal and informal rules-in-use regulating access and resource use by small-scale fishers in the two most important fishing communities of the Midriff Islands region in the Gulf of California, which share several target species and fishing grounds. The Midriff Islands region is a highly productive area where sustainable use of fisheries resources has been elusive. Our study aimed to inform policy by providing information on how management and conservation policies perform in this unique environment. In addition, we contrast attributes of the enabling conditions for sustainability on the commons in an effort to better understand why these communities, albeit showing several contrasting attributes of the above conditions, have not developed sustainable fishing practices. We take a novel, comprehensive institutional approach that includes formal and informal institutions, incorporating links between land institutions (i.e., communal land rights) and sea institutions (i.e., fisheries and conservation policies) and their effects on stewardship of fishery resources, a theme that is practically unaddressed in the literature. Insufficient government support in provision of secure rights, enforcement and sanctioning, and recognition and incorporation of local arrangements and capacities for management arose as important needs to address in both cases. We highlight the critical role of higher levels of governance, which, when disconnected from local practices, realities, and needs, can be a major impediment to achieving sustainability in small-scale fisheries, even in cases where several facilitating conditions are met.

  17. Metadata: A user's view

    Energy Technology Data Exchange (ETDEWEB)

    Bretherton, F.P. [Univ. of Wisconsin, Madison, WI (United States)]; Singley, P.T. [Oak Ridge National Lab., TN (United States)]

    1994-12-31

    An analysis is presented of the uses of metadata from four aspects of database operations: (1) search, query, retrieval; (2) ingest, quality control, processing; (3) application-to-application transfer; (4) storage, archive. Typical degrees of database functionality, ranging from simple file retrieval to interdisciplinary global query with metadatabase-user dialog and involving many distributed autonomous databases, are ranked in approximate order of increasing sophistication of the required knowledge representation. An architecture is outlined for implementing such functionality in many different disciplinary domains, utilizing a variety of off-the-shelf database management subsystems and processor software, each specialized to a different abstract data model.

  18. The ATLAS EventIndex: data flow and inclusion of other metadata

    Science.gov (United States)

    Barberis, D.; Cárdenas Zárate, S. E.; Favareto, A.; Fernandez Casani, A.; Gallas, E. J.; Garcia Montoro, C.; Gonzalez de la Hoz, S.; Hrivnac, J.; Malon, D.; Prokoshin, F.; Salt, J.; Sanchez, J.; Toebbicke, R.; Yuan, R.; ATLAS Collaboration

    2016-10-01

    The ATLAS EventIndex is the catalogue of the event-related metadata for the information collected from the ATLAS detector. The basic unit of this information is the event record, containing the event identification parameters, pointers to the files containing this event as well as trigger decision information. The main use case for the EventIndex is event picking, as well as data consistency checks for large production campaigns. The EventIndex employs the Hadoop platform for data storage and handling, as well as a messaging system for the collection of information. The information for the EventIndex is collected both at Tier-0, when the data are first produced, and from the Grid, when various types of derived data are produced. The EventIndex uses various types of auxiliary information from other ATLAS sources for data collection and processing: trigger tables from the condition metadata database (COMA), dataset information from the data catalogue AMI and the Rucio data management system and information on production jobs from the ATLAS production system. The ATLAS production system is also used for the collection of event information from the Grid jobs. EventIndex developments started in 2012 and in the middle of 2015 the system was commissioned and started collecting event metadata, as a part of ATLAS Distributed Computing operations.

  19. Playing the Metadata Game: Technologies and Strategies Used by Climate Diagnostics Center for Cataloging and Distributing Climate Data.

    Science.gov (United States)

    Schweitzer, R. H.

    2001-05-01

    The Climate Diagnostics Center maintains a collection of gridded climate data primarily for use by local researchers. Because these data are available on fast digital storage and because they have been converted to netCDF using a standard metadata convention (called COARDS), we recognize that this data collection is also useful to the community at large. At CDC we try to use technology and metadata standards to reduce our costs associated with making these data available to the public. The World Wide Web has been an excellent technology platform for meeting that goal. Specifically, we have developed Web-based user interfaces that allow users to search, plot, and download subsets from the data collection. We have also been exploring use of the Pacific Marine Environmental Laboratory's Live Access Server (LAS) as an engine for this task. This would result in further savings by allowing us to concentrate on customizing the LAS where needed, rather than developing and maintaining our own system. One such customization currently under development is the use of Java Servlets and JavaServer Pages in conjunction with a metadata database to produce a hierarchical user interface to LAS. In addition to these Web-based user interfaces, all of our data are available via the Distributed Oceanographic Data System (DODS). This allows other sites using LAS and individuals using DODS-enabled clients to use our data as if they were a local file. All of these technology systems are driven by metadata. When we began to create netCDF files, we collaborated with several other agencies to develop a netCDF convention (COARDS) for metadata. At CDC we have extended that convention to incorporate additional metadata elements to make the netCDF files as self-describing as possible. Part of the local metadata is a set of controlled names for the variable, level in the atmosphere and ocean, statistic, and data set for each netCDF file. To allow searching and easy reorganization of these metadata, we loaded
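
    To make the notion of a self-describing netCDF file concrete, the sketch below writes a tiny grid with COARDS-style coordinate and variable attributes using the netCDF4 Python library; the variable names, units, and values are illustrative, not CDC's actual controlled vocabulary.

```python
import numpy as np
from netCDF4 import Dataset

# Write a minimal, self-describing netCDF file with COARDS-style attributes.
with Dataset("air_temp_example.nc", "w") as nc:
    nc.title = "Example gridded climate field"   # global attributes
    nc.Conventions = "COARDS"

    nc.createDimension("lat", 3)
    nc.createDimension("lon", 4)

    lat = nc.createVariable("lat", "f4", ("lat",))
    lat.units = "degrees_north"
    lat[:] = [30.0, 32.5, 35.0]

    lon = nc.createVariable("lon", "f4", ("lon",))
    lon.units = "degrees_east"
    lon[:] = [-125.0, -122.5, -120.0, -117.5]

    air = nc.createVariable("air", "f4", ("lat", "lon"))
    air.units = "degC"
    air.long_name = "surface air temperature"
    air[:] = np.random.uniform(10, 20, size=(3, 4))
```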

  20. Indexing of ATLAS data management and analysis system metadata

    CERN Document Server

    Grigoryeva, Maria; The ATLAS collaboration

    2017-01-01

    This manuscript is devoted to the development of a system to manage metainformation for modern HENP experiments. The main purpose of the system is to provide scientists with transparent access to the actual and historical metadata related to data analysis, processing, and modeling. The system design addresses the following goals: providing a flexible and fast search for metadata on various combinations of keywords, and generating aggregated reports categorized according to selected parameters, such as the studied physical process, scientific topic, physics group, etc. The article presents the architecture of the developed indexing and search system, as well as the results of performance tests. A comparison of query execution speed within the developed system and against the original relational databases showed that the developed system returns results faster. The new system also allows much more complex search requests than the original storages.

  1. ATLAS Metadata Infrastructure Evolution for Run 2 and Beyond

    CERN Document Server

    van Gemmeren, Peter; The ATLAS collaboration; Malon, David; Vaniachine, Alexandre

    2015-01-01

    ATLAS developed and employed for Run 1 of the Large Hadron Collider a sophisticated infrastructure for metadata handling in event processing jobs. This infrastructure profits from a rich feature set provided by the ATLAS execution control framework, including standardized interfaces and invocation mechanisms for tools and services, segregation of transient data stores with concomitant object lifetime management, and mechanisms for handling occurrences asynchronous to the control framework’s state machine transitions. This metadata infrastructure is evolving and being extended for Run 2 to allow its use and reuse in downstream physics analyses, analyses that may or may not utilize the ATLAS control framework. At the same time, multiprocessing versions of the control framework and the requirements of future multithreaded frameworks are leading to redesign of components that use an incident-handling approach to asynchrony. The increased use of scatter-gather architectures, both local and distributed, requires ...

  2. Automated Atmospheric Composition Dataset Level Metadata Discovery. Difficulties and Surprises

    Science.gov (United States)

    Strub, R. F.; Falke, S. R.; Kempler, S.; Fialkowski, E.; Goussev, O.; Lynnes, C.

    2015-12-01

    The Atmospheric Composition Portal (ACP) is an aggregator and curator of information related to remotely sensed atmospheric composition data and analysis. It uses existing tools and technologies and, where needed, enhances those capabilities to provide interoperable access, tools, and contextual guidance for scientists and value-adding organizations using remotely sensed atmospheric composition data. The initial focus is on Essential Climate Variables identified by the Global Climate Observing System - CH4, CO, CO2, NO2, O3, SO2 and aerosols. This poster addresses our efforts in building the ACP Data Table, an interface to help discover and understand remotely sensed data that are related to atmospheric composition science and applications. We harvested GCMD, CWIC, and GEOSS metadata catalogs using machine-to-machine technologies - OpenSearch and Web Services. We also manually investigated the plethora of CEOS data provider portals and other catalogs where such data might be aggregated. This poster describes our experience of the excellence, variety, and challenges we encountered. Conclusions: 1. The significant benefits that the major catalogs provide are their machine-to-machine tools like OpenSearch and Web Services, rather than any GUI usability improvements, due to the large amount of data in their catalogs. 2. There is a trend at the large catalogs towards simulating small data provider portals through advanced services. 3. Populating metadata catalogs using ISO 19115 is too complex for users to do in a consistent way, difficult to parse visually or with XML libraries, and too complex for Java XML binders like CASTOR. 4. The ability to search for IDs first and then for data (GCMD and ECHO) is better for machine-to-machine operations than the timeouts experienced when returning the entire metadata entry at once. 5. Metadata harvest and export activities between the major catalogs have led to a significant amount of duplication. (This is currently being addressed.) 6. Most (if not

  3. Big Earth Data Initiative: Metadata Improvement: Case Studies

    Science.gov (United States)

    Kozimor, John; Habermann, Ted; Farley, John

    2016-01-01

    The Big Earth Data Initiative (BEDI) invests in standardizing and optimizing the collection, management, and delivery of the U.S. Government's civil Earth observation data to improve discovery, access and use, and understanding of Earth observations by the broader user community. Complete and consistent standard metadata help address all three goals.

  4. Building a scalable event-level metadata service for ATLAS

    International Nuclear Information System (INIS)

    Cranshaw, J; Malon, D; Goosens, L; Viegas, F T A; McGlone, H

    2008-01-01

    The ATLAS TAG Database is a multi-terabyte event-level metadata selection system, intended to allow discovery, selection of, and navigation to events of interest to an analysis. The TAG Database encompasses file- and relational-database-resident event-level metadata, distributed across all ATLAS Tiers. An Oracle-hosted global TAG relational database, containing all ATLAS events, will exist at Tier 0. Implementing a system that is both performant and manageable at this scale is a challenge. A 1 TB relational TAG Database has been deployed at Tier 0 using simulated tag data. The database contains one billion events, each described by two hundred event metadata attributes, and is currently undergoing extensive testing in terms of queries, population, and manageability. These 1 TB tests aim to demonstrate and optimise the performance and scalability of an Oracle TAG Database on a global scale. Partitioning and indexing strategies are crucial to well-performing queries and manageability of the database and have implications for database population and distribution, so these are investigated. Physics query patterns are anticipated, but a crucial feature of the system must be to support a broad range of queries across all attributes. Concurrently, event tags from ATLAS Computing System Commissioning distributed simulations are accumulated in an Oracle-hosted database at CERN, providing an event-level selection service valuable for user experience and gathering information about physics query patterns. In this paper we describe the status of the Global TAG relational database scalability work and highlight areas of future direction

  5. Effects of age, colony, and sex on mercury concentrations in California sea lions

    Science.gov (United States)

    McHuron, Elizabeth A.; Peterson, Sarah H.; Ackerman, Joshua T.; Melin, Sharon R.; Harris, Jeffrey D.; Costa, Daniel P.

    2016-01-01

    We measured total mercury (THg) concentrations in California sea lions (Zalophus californianus) and examined how concentrations varied with age class, colony, and sex. Because Hg exposure is primarily via diet, we used nitrogen (δ 15N) and carbon (δ 13C) stable isotopes to determine if intraspecific differences in THg concentrations could be explained by feeding ecology. Blood and hair were collected from 21 adult females and 57 juveniles from three colonies in central and southern California (San Nicolas, San Miguel, and Año Nuevo Islands). Total Hg concentrations ranged from 0.01 to 0.31 μg g−1 wet weight (ww) in blood and 0.74 to 21.00 μg g−1 dry weight (dw) in hair. Adult females had greater mean THg concentrations than juveniles in blood (0.15 vs. 0.03 μg g−1 ww) and hair (10.10 vs. 3.25 μg g−1 dw). Age class differences in THg concentrations did not appear to be driven by trophic level or habitat type because there were no differences in δ 15N or δ 13C values between adults and juveniles. Total Hg concentrations in adult females were 54% (blood) and 24% (hair) greater in females from San Miguel than females from San Nicolas Island, which may have been because sea lions from the two islands foraged in different areas. For juveniles, we detected some differences in THg concentrations with colony and sex, although these were likely due to sampling effects and not ecological differences. Overall, THg concentrations in California sea lions were within the range documented for other marine mammals and were generally below toxicity benchmarks for fish-eating wildlife.

  6. The Geospatial Metadata Manager’s Toolbox: Three Techniques for Maintaining Records

    Directory of Open Access Journals (Sweden)

    Bruce Godfrey

    2015-07-01

    Full Text Available Managing geospatial metadata records requires a range of techniques. At the University of Idaho Library, we have tens of thousands of records that need to be maintained, as well as new records that need to be normalized and added to the collections. We show a graphical user interface (GUI) tool that was developed to make simple modifications, a simple XSLT that operates on complex metadata, and a Python script that enables parallel processing to make maintenance tasks more efficient. Throughout, we compare these techniques and discuss when they may be useful.
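
    A minimal sketch of the parallel-processing technique mentioned above is given below: the same normalization is applied to many metadata XML files at once with Python's multiprocessing module. The element being edited and the directory layout are assumptions made for illustration, not the library's actual workflow.

```python
import glob
import multiprocessing
import xml.etree.ElementTree as ET

def normalize_record(path):
    """Strip stray whitespace from the abstract element of one metadata file."""
    tree = ET.parse(path)
    abstract = tree.find(".//abstract")   # element name assumed for illustration
    if abstract is not None and abstract.text:
        abstract.text = abstract.text.strip()
    tree.write(path, encoding="utf-8", xml_declaration=True)
    return path

if __name__ == "__main__":
    files = glob.glob("metadata/*.xml")   # hypothetical directory of records
    with multiprocessing.Pool() as pool:
        for done in pool.imap_unordered(normalize_record, files):
            print("updated", done)
```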

  7. Leveraging Python to improve ebook metadata selection, ingest, and management

    Directory of Open Access Journals (Sweden)

    Kelly Thompson

    2017-10-01

    Full Text Available Libraries face many challenges in managing descriptive metadata for ebooks, including quality control, completeness of coverage, and ongoing management. The recent emergence of library management systems that automatically provide descriptive metadata for e-resources activated in system knowledge bases means that ebook management models are moving toward both greater efficiency and more complex implementation and maintenance choices. Automated and data-driven processes for ebook management have always been desirable, but in the current environment, they become necessary. In addition to initial selection of a record source, automation can be applied to quality control processes and ongoing maintenance in order to keep manual, eyes-on work to a minimum while providing the best possible discovery and access. In this article, we describe how we are using Python scripts to address these challenges.
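
    As a hedged illustration of the kind of data-driven quality-control pass the article describes, the sketch below flags vendor-supplied ebook records that are missing required fields; the record structure and the list of required fields are assumptions for illustration, not the authors' actual scripts.

```python
# Flag ebook metadata records that lack fields the library requires.
REQUIRED_FIELDS = ("title", "isbn", "publisher", "url")

def audit(records):
    """Return a mapping of record id -> list of missing required fields."""
    problems = {}
    for rec in records:
        missing = [f for f in REQUIRED_FIELDS if not rec.get(f)]
        if missing:
            problems[rec.get("id", "<no id>")] = missing
    return problems

sample = [
    {"id": "ebk001", "title": "Example Title", "isbn": "9780000000000",
     "publisher": "Example Press", "url": "https://example.org/ebk001"},
    {"id": "ebk002", "title": "Another Title", "publisher": ""},
]
print(audit(sample))  # {'ebk002': ['isbn', 'publisher', 'url']}
```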

  8. Metafier - a Tool for Annotating and Structuring Building Metadata

    DEFF Research Database (Denmark)

    Holmegaard, Emil; Johansen, Aslak; Kjærgaard, Mikkel Baun

    2017-01-01

    in achieving this goal, but often they work as silos. Improving at scale the energy performance of buildings depends on applications breaking these silos and being portable among buildings. To enable portable building applications, the building instrumentation should be supported by a metadata layer...

  9. SnoVault and encodeD: A novel object-based storage system and applications to ENCODE metadata.

    Directory of Open Access Journals (Sweden)

    Benjamin C Hitz

    Full Text Available The Encyclopedia of DNA Elements (ENCODE) project is an ongoing collaborative effort to create a comprehensive catalog of functional elements, initiated shortly after the completion of the Human Genome Project. The current database exceeds 6500 experiments across more than 450 cell lines and tissues using a wide array of experimental techniques to study the chromatin structure, regulatory and transcriptional landscape of the H. sapiens and M. musculus genomes. All ENCODE experimental data, metadata, and associated computational analyses are submitted to the ENCODE Data Coordination Center (DCC) for validation, tracking, storage, unified processing, and distribution to community resources and the scientific community. As the volume of data increases, the identification and organization of experimental details becomes increasingly intricate and demands careful curation. The ENCODE DCC has created a general-purpose software system, known as SnoVault, that supports metadata and file submission, a database used for metadata storage, web pages for displaying the metadata, and a robust API for querying the metadata. The software is fully open-source; code and installation instructions can be found at http://github.com/ENCODE-DCC/snovault/ (for the generic database) and http://github.com/ENCODE-DCC/encoded/ (to store genomic data in the manner of ENCODE). The core database engine, SnoVault (which is completely independent of ENCODE, genomic data, or bioinformatic data), has been released as a separate Python package.
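
    A small sketch of querying the metadata API is shown below using the public ENCODE portal's JSON search interface with the requests library; the specific query parameters and returned field names are illustrative and may differ from the current API.

```python
import requests

# Query the encodeD-backed ENCODE portal for a few RNA-seq experiments.
URL = "https://www.encodeproject.org/search/"
params = {"type": "Experiment", "assay_title": "RNA-seq", "format": "json", "limit": 5}

resp = requests.get(URL, params=params, headers={"Accept": "application/json"})
resp.raise_for_status()

# Results are returned as a JSON-LD-style "@graph" list of metadata objects.
for hit in resp.json().get("@graph", []):
    print(hit.get("accession"), "-", hit.get("assay_title"))
```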

  10. Towards a best practice of modeling unit of measure and related statistical metadata

    CERN Document Server

    Grossmann, Wilfried

    2011-01-01

    Data and metadata exchange between organizations requires a common language for describing the structure and content of statistical data and metadata. The SDMX consortium develops content-oriented guidelines (COG) recommending harmonized cross-domain concepts and terminology to increase the efficiency of (meta-)data exchange. A recent challenge is a recommended code list for the unit of measure. Based on examples from SDMX sponsor organizations, this paper analyses the diversity of "unit of measure" as used in practice, including potential breakdowns and interdependencies of the respective meta-

  11. Making Information Visible, Accessible, and Understandable: Meta-Data and Registries

    National Research Council Canada - National Science Library

    Robinson, Clay

    2007-01-01

    ... the interoperability, discovery, and utility of data assets throughout the Department of Defense (DoD). Proper use and understanding of metadata can substantially enhance the utility of data by making it more visible, accessible, and understandable...

  12. Where the wild things are: Predicting hotspots of seabird aggregations in the California Current System

    Science.gov (United States)

    Nur, N.; Jahncke, J.; Herzog, M.P.; Howar, J.; Hyrenbach, K.D.; Zamon, J.E.; Ainley, D.G.; Wiens, J.A.; Morgan, K.; Balance, L.T.; Stralberg, D.

    2011-01-01

    Marine Protected Areas (MPAs) provide an important tool for conservation of marine ecosystems. To be most effective, these areas should be strategically located in a manner that supports ecosystem function. To inform marine spatial planning and support strategic establishment of MPAs within the California Current System, we identified areas predicted to support multispecies aggregations of seabirds ("hotspots"). We developed habitat-association models for 16 species using information from at-sea observations collected over an 11-year period (1997-2008), bathymetric data, and remotely sensed oceanographic data for an area from north of Vancouver Island, Canada, to the USA/Mexico border and seaward 600 km from the coast. This approach enabled us to predict distribution and abundance of seabirds even in areas of few or no surveys. We developed single-species predictive models using a machine-learning algorithm: bagged decision trees. Single-species predictions were then combined to identify potential hotspots of seabird aggregation, using three criteria: (1) overall abundance among species, (2) importance of specific areas ("core areas") to individual species, and (3) predicted persistence of hotspots across years. Model predictions were applied to the entire California Current for four seasons (represented by February, May, July, and October) in each of 11 years. Overall, bathymetric variables were often important predictive variables, whereas oceanographic variables derived from remotely sensed data were generally less important. Predicted hotspots often aligned with currently protected areas (e.g., National Marine Sanctuaries), but we also identified potential hotspots in Northern California/Southern Oregon (from Cape Mendocino to Heceta Bank), Southern California (adjacent to the Channel Islands), and adjacent to Vancouver Island, British Columbia, that are not currently included in protected areas. Prioritization and identification of multispecies hotspots
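
    To illustrate the modeling approach named above (bagged decision trees), the sketch below fits scikit-learn's BaggingRegressor, whose default base estimator is a decision tree, to synthetic stand-ins for the bathymetric and oceanographic predictors; the data and response are invented, not the study's survey data.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor

rng = np.random.default_rng(0)
n = 500
depth = rng.uniform(10, 3000, n)   # water depth (m)
slope = rng.uniform(0, 15, n)      # seafloor slope (degrees)
sst = rng.uniform(9, 20, n)        # sea-surface temperature (deg C)
X = np.column_stack([depth, slope, sst])
# Synthetic "seabird density" that peaks over shallow, steep, cool water.
y = 50 / (1 + depth / 200) + 2 * slope - 1.5 * (sst - 12) + rng.normal(0, 2, n)

# BaggingRegressor's default base estimator is a decision tree,
# i.e. an ensemble of bagged decision trees.
model = BaggingRegressor(n_estimators=100, random_state=0)
model.fit(X, y)
print("Predicted density for a shelf-break cell:", model.predict([[150.0, 8.0, 11.0]])[0])
```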

  13. Radionuclides in fishes and mussels from the Farallon Islands Nuclear Waste Dump Site, California.

    Science.gov (United States)

    Suchanek, T H; Lagunas-Solar, M C; Raabe, O G; Helm, R C; Gielow, F; Peek, N; Carvacho, O

    1996-08-01

    The Farallon Islands Nuclear Waste Dump Site (FINWDS), approximately 30 miles west of San Francisco, California, received at least 500 TBq encapsulated in more than 47,500 containers from approximately 1945 to 1970. During several seasons in 1986/87, deep-sea bottom-feeding fishes (Dover sole = Microstomus pacificus; sablefish = Anoplopoma fimbria; thornyheads = Sebastolobus spp.) and intertidal mussels (Mytilus californianus) were collected from the vicinity of the FINWDS and from comparable depths at a reference site near Point Arena, CA. Tissues were analyzed for several radionuclides (137Cs, 238Pu, 239+240Pu, and 241Am). Radionuclide concentrations for fish and mussel tissue ranged from non-detectable to 4,340 mBq kg(-1) wet weight, with the following means for Farallon fishes: 137Cs = 1,110 mBq kg(-1); 238Pu = 390 mBq kg(-1); 239+240Pu = 130 mBq kg(-1); and 241Am = 1,350 mBq kg(-1). There were no statistically significant differences in the radionuclide concentrations observed in samples from the Farallon Islands compared to reference samples from Point Arena, CA. Concentrations of both 238Pu and 241Am in fish tissues (from both sites) were notably higher than those reported in the literature from any other sites worldwide, including potentially contaminated sites. Concentrations of 239+240Pu from both sites were typical of low values found at some contaminated sites worldwide. These results show approximately 10 times higher concentrations of 239+240Pu and approximately 40-50 times higher concentrations of 238Pu than those values reported for identical fish species from 1977 collections at the FINWDS. Radionuclide concentrations were converted to a hypothetical per capita annual radionuclide intake for adults, yielding the following values of annual Committed Effective Dose Equivalent (CEDE) from ionizing radiation emitted from these radionuclides: 0.000 mSv y(-1) for 137Cs, 0.009 mSv y(-1) for 238Pu, and 0.003 mSv y(-1) for 239+240Pu. For 241Am, projected CEDE for

  14. Bathymetry and acoustic backscatter-outer mainland shelf, eastern Santa Barbara Channel, California

    Science.gov (United States)

    Dartnell, Peter; Finlayson, David P.; Ritchie, Andrew C.; Cochrane, Guy R.; Erdey, Mercedes D.

    2012-01-01

    In 2010 and 2011, scientists from the U.S. Geological Survey (USGS), Pacific Coastal and Marine Science Center (PCMSC), acquired bathymetry and acoustic-backscatter data from the outer shelf region of the eastern Santa Barbara Channel, California. These surveys were conducted in cooperation with the Bureau of Ocean Energy Management (BOEM). BOEM is interested in maps of hard-bottom substrates, particularly natural outcrops that support reef communities in areas near oil and gas extraction activity. The surveys were conducted using the USGS R/V Parke Snavely, outfitted with an interferometric sidescan sonar for swath mapping and real-time kinematic navigation equipment. This report provides the bathymetry and backscatter data acquired during these surveys in several formats, a summary of the mapping mission, maps of bathymetry and backscatter, and Federal Geographic Data Committee (FGDC) metadata.

  15. Automated Creation of Datamarts from a Clinical Data Warehouse, Driven by an Active Metadata Repository

    Science.gov (United States)

    Rogerson, Charles L.; Kohlmiller, Paul H.; Stutman, Harris

    1998-01-01

    A methodology and toolkit are described which enable the automated metadata-driven creation of datamarts from clinical data warehouses. The software uses schema-to-schema transformation driven by an active metadata repository. Tools for assessing datamart data quality are described, as well as methods for assessing the feasibility of implementing specific datamarts. A methodology for data remediation and the re-engineering of operational data capture is described.

  16. Metadata Laws, Journalism and Resistance in Australia

    Directory of Open Access Journals (Sweden)

    Benedetta Brevini

    2017-03-01

    Full Text Available The intelligence leaks from Edward Snowden in 2013 unveiled the sophistication and extent of data collection by the United States’ National Security Agency and major global digital firms prompting domestic and international debates about the balance between security and privacy, openness and enclosure, accountability and secrecy. It is difficult not to see a clear connection with the Snowden leaks in the sharp acceleration of new national security legislations in Australia, a long term member of the Five Eyes Alliance. In October 2015, the Australian federal government passed controversial laws that require telecommunications companies to retain the metadata of their customers for a period of two years. The new acts pose serious threats for the profession of journalism as they enable government agencies to easily identify and pursue journalists’ sources. Bulk data collections of this type of information deter future whistleblowers from approaching journalists, making the performance of the latter’s democratic role a challenge. After situating this debate within the scholarly literature at the intersection between surveillance studies and communication studies, this article discusses the political context in which journalists are operating and working in Australia; assesses how metadata laws have affected journalism practices and addresses the possibility for resistance.

  17. CHARMe Commentary metadata for Climate Science: collecting, linking and sharing user feedback on climate datasets

    Science.gov (United States)

    Blower, Jon; Lawrence, Bryan; Kershaw, Philip; Nagni, Maurizio

    2014-05-01

    The research process can be thought of as an iterative activity, initiated based on prior domain knowledge as well as on a number of external inputs, and producing a range of outputs including datasets, studies, and peer-reviewed publications. These outputs may describe the problem under study, the methodology used, the results obtained, etc. In any new publication, the author may cite or comment on other papers or datasets in order to support their research hypothesis. However, as their work progresses, the researcher may draw from many other latent channels of information. These could include, for example, a private conversation following a lecture or during a social dinner, or an opinion expressed concerning some significant event such as an earthquake or a satellite failure. In addition, other sources of grey literature are important, such as informal papers (for example, arXiv deposits), reports, and studies. The climate science community is not an exception to this pattern; the CHARMe project, funded under the European FP7 framework, is developing an online system for collecting and sharing user feedback on climate datasets. This is to help users judge how suitable such climate data are for an intended application. The user feedback could be comments about assessments, citations, or provenance of the dataset, or other information such as descriptions of uncertainty or data quality. We define this as a distinct category of metadata called Commentary or C-metadata. We link C-metadata with target climate datasets using a Linked Data approach via the Open Annotation data model. In the context of Linked Data, C-metadata plays the role of a resource which, depending on its nature, may be accessed as simple text or as more structured content. The project is implementing a range of software tools to create, search, or visualize C-metadata, including a JavaScript plugin enabling this functionality to be integrated in situ with data provider portals
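
    A hedged sketch of what a single piece of C-metadata could look like when expressed against the Open Annotation vocabulary as JSON-LD is shown below; the dataset URI, comment text, and author are invented, and the exact terms used by CHARMe may differ.

```python
import json

# Illustrative Open Annotation-style commentary record. The oa, cnt, and foaf
# namespaces are public vocabularies, but the structure here is only a sketch.
annotation = {
    "@context": {
        "oa": "http://www.w3.org/ns/oa#",
        "cnt": "http://www.w3.org/2011/content#",
        "foaf": "http://xmlns.com/foaf/0.1/",
    },
    "@type": "oa:Annotation",
    "oa:motivatedBy": "oa:commenting",
    "oa:hasTarget": {"@id": "http://example.org/datasets/sst-reanalysis-v2"},
    "oa:hasBody": {
        "@type": "cnt:ContentAsText",
        "cnt:chars": "Cold bias noted poleward of 60N relative to in situ buoys.",
    },
    "oa:annotatedBy": {"@type": "foaf:Person", "foaf:name": "A. Researcher"},
}

print(json.dumps(annotation, indent=2))
```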

  18. Coastal bathymetry and backscatter data collected in 2012 from the Chandeleur Islands, Louisiana

    Science.gov (United States)

    DeWitt, Nancy T.; Bernier, Julie C.; Pfeiffer, William R.; Miselis, Jennifer L.; Reynolds, B.J.; Wiese, Dana S.; Kelso, Kyle W.

    2014-01-01

    As part of the Barrier Island Evolution Research Project, scientists from the U.S. Geological Survey St. Petersburg Coastal and Marine Science Center conducted nearshore geophysical surveys off the northern Chandeleur Islands, Louisiana, in July and August of 2012. The objective of the study is to better understand barrier island geomorphic evolution, particularly storm-related depositional and erosional processes that shape the islands over annual to interannual timescales (1-5 years). Collecting geophysical data will allow us to identify relationships between the geologic history of the island and its present day morphology and sediment distribution. This mapping effort was the second in a series of three planned surveys in this area. High resolution geophysical data collected in each of 3 consecutive years along this rapidly changing barrier island system will provide a unique time-series dataset that will significantly further the analyses and geomorphological interpretations of this and other coastal systems, improving our understanding of coastal response and evolution over short time scales (1-5 years). This Data Series report includes the geophysical data that were collected during two cruises (USGS Field Activity Numbers 12BIM03 and 12BIM04) aboard the RV Survey Cat and the RV Twin Vee along the northern portion of the Chandeleur Islands, Breton National Wildlife Refuge, Louisiana. Data were acquired with the following equipment: a Systems Engineering and Assessment, Ltd., SWATHplus interferometric sonar (468 kilohertz (kHz)), an EdgeTech 424 (4-24 kHz) chirp sub-bottom profiling system, and a Knudsen 320BP (210 kHz) echosounder. This report serves as an archive of processed interferometric swath and single-beam bathymetry data. Geographic information system data products include an interpolated digital elevation model, an acoustic backscatter mosaic, trackline maps, and point data files. Additional files include error analysis maps, Field Activity

  19. GEO Label Web Services for Dynamic and Effective Communication of Geospatial Metadata Quality

    Science.gov (United States)

    Lush, Victoria; Nüst, Daniel; Bastin, Lucy; Masó, Joan; Lumsden, Jo

    2014-05-01

    We present demonstrations of the GEO label Web services and their integration into a prototype extension of the GEOSS portal (http://scgeoviqua.sapienzaconsulting.com/web/guest/geo_home), the GMU portal (http://gis.csiss.gmu.edu/GADMFS/) and a GeoNetwork catalog application (http://uncertdata.aston.ac.uk:8080/geonetwork/srv/eng/main.home). The GEO label is designed to communicate, and facilitate interrogation of, geospatial quality information with a view to supporting efficient and effective dataset selection on the basis of quality, trustworthiness and fitness for use. The GEO label that we propose was developed and evaluated according to a user-centred design (UCD) approach in order to maximise the likelihood of user acceptance once deployed. The resulting label is dynamically generated from producer metadata in ISO or FGDC format, and incorporates user feedback on dataset usage, ratings, and discovered issues, in order to supply a highly informative summary of metadata completeness and quality. The label was easily incorporated into a community portal as part of the GEO Architecture Implementation Programme (AIP-6) and has been successfully integrated into a prototype extension of the GEOSS portal, as well as the popular metadata catalog and editor, GeoNetwork. The design of the GEO label was based on four user studies conducted to: (1) elicit initial user requirements; (2) investigate initial user views on the concept of a GEO label and its potential role; (3) evaluate prototype label visualizations; and (4) evaluate and validate physical GEO label prototypes. The results of these studies indicated that users and producers support the concept of a label with a drill-down interrogation facility, combining eight geospatial data informational aspects, namely: producer profile, producer comments, lineage information, standards compliance, quality information, user feedback, expert reviews, and citations information. These are delivered as eight facets of a wheel

  20. Latest developments for the IAGOS database: Interoperability and metadata

    Science.gov (United States)

    Boulanger, Damien; Gautron, Benoit; Thouret, Valérie; Schultz, Martin; van Velthoven, Peter; Broetz, Bjoern; Rauthe-Schöch, Armin; Brissebrat, Guillaume

    2014-05-01

    In-service Aircraft for a Global Observing System (IAGOS, http://www.iagos.org) aims at the provision of long-term, frequent, regular, accurate, and spatially resolved in situ observations of the atmospheric composition. IAGOS observation systems are deployed on a fleet of commercial aircraft. The IAGOS database is an essential part of the global atmospheric monitoring network. Data access is handled by open access policy based on the submission of research requests which are reviewed by the PIs. Users can access the data through the following web sites: http://www.iagos.fr or http://www.pole-ether.fr as the IAGOS database is part of the French atmospheric chemistry data centre ETHER (CNES and CNRS). The database is in continuous development and improvement. In the framework of the IGAS project (IAGOS for GMES/COPERNICUS Atmospheric Service), major achievements will be reached, such as metadata and format standardisation in order to interoperate with international portals and other databases, QA/QC procedures and traceability, CARIBIC (Civil Aircraft for the Regular Investigation of the Atmosphere Based on an Instrument Container) data integration within the central database, and the real-time data transmission. IGAS work package 2 aims at providing the IAGOS data to users in a standardized format including the necessary metadata and information on data processing, data quality and uncertainties. We are currently redefining and standardizing the IAGOS metadata for interoperable use within GMES/Copernicus. The metadata are compliant with the ISO 19115, INSPIRE and NetCDF-CF conventions. IAGOS data will be provided to users in NetCDF or NASA Ames format. We also are implementing interoperability between all the involved IAGOS data services, including the central IAGOS database, the former MOZAIC and CARIBIC databases, Aircraft Research DLR database and the Jülich WCS web application JOIN (Jülich OWS Interface) which combines model outputs with in situ data for

  1. Training and Best Practice Guidelines: Implications for Metadata Creation

    Science.gov (United States)

    Chuttur, Mohammad Y.

    2012-01-01

    In response to the rapid development of digital libraries over the past decade, researchers have focused on the use of metadata as an effective means to support resource discovery within online repositories. With the increasing involvement of libraries in digitization projects and the growing number of institutional repositories, it is anticipated…

  2. Archive of Digital Chirp Subbottom Profile Data Collected During USGS Cruise 14BIM05 Offshore of Breton Island, Louisiana, August 2014

    Science.gov (United States)

    Forde, Arnell S.; Flocks, James G.; Wiese, Dana S.; Fredericks, Jake J.

    2016-03-29

    From August 11 to 31, 2014, the U.S. Geological Survey (USGS), in cooperation with the U.S. Fish and Wildlife Service (USFWS), conducted a geophysical survey to investigate the geologic controls on barrier island framework and long-term sediment transport offshore of Breton Island, Louisiana as part of a broader USGS study on Barrier Island Mapping (BIM). Additional details related to this activity can be found by searching the USGS's Coastal and Marine Geoscience Data System (CMGDS), for field activity 2014-317-FA (also known as 14BIM05). These surveys were funded through the USGS Coastal and Marine Geology Program (CMGP) and the Louisiana Outer Coast Early Restoration Project. This report serves as an archive of unprocessed digital chirp subbottom data, trackline maps, navigation files, Geographic Information System (GIS) files, Field Activity Collection System (FACS) logs, and formal Federal Geographic Data Committee (FGDC) metadata. Gained digital images of the seismic profiles are also provided. Refer to the Abbreviations page for explanations of acronyms and abbreviations used in this report.

  3. Archive of digital chirp subbottom profile data collected during USGS cruises 13BIM02 and 13BIM07 offshore of the Chandeleur Islands, Louisiana, 2013

    Science.gov (United States)

    Forde, Arnell S.; Miselis, Jennifer L.; Flocks, James G.; Bernier, Julie C.; Wiese, Dana S.

    2014-01-01

    On July 5–19 (cruise 13BIM02) and August 22–September 1 (cruise 13BIM07), 2013, the U.S. Geological Survey (USGS) conducted geophysical surveys to investigate the geologic controls on barrier island evolution and medium-term and interannual sediment transport along the oil spill mitigation sand berm constructed at the north end and offshore of the Chandeleur Islands, Louisiana. This investigation is part of a broader USGS study, which seeks to understand barrier island evolution better over medium time scales (months to years). This report serves as an archive of unprocessed digital chirp subbottom data, trackline maps, navigation files, Geographic Information System (GIS) files, Field Activity Collection System (FACS) logs, and formal Federal Geographic Data Committee (FGDC) metadata. Gained–showing a relative increase in signal amplitude–digital images of the seismic profiles are provided. Refer to the Abbreviations page for explanations of acronyms and abbreviations used in this report.

  4. Magnetic and gravity studies of Mono Lake, east-central, California

    Science.gov (United States)

    Athens, Noah D.; Ponce, David A.; Jayko, Angela S.; Miller, Matt; McEvoy, Bobby; Marcaida, Mae; Mangan, Margaret T.; Wilkinson, Stuart K.; McClain, James S.; Chuchel, Bruce A.; Denton, Kevin M.

    2014-01-01

    From August 26 to September 5, 2011, the U.S. Geological Survey (USGS) collected more than 600 line-kilometers of shipborne magnetic data on Mono Lake, 20 line-kilometers of ground magnetic data on Paoha Island, 50 gravity stations on Paoha and Negit Islands, and 28 rock samples on Paoha and Negit Islands, in east-central California. Magnetic and gravity investigations were undertaken in Mono Lake to study regional crustal structures and to aid in understanding the geologic framework, in particular regarding potential geothermal resources and volcanic hazards throughout Mono Basin. Furthermore, shipborne magnetic data illuminate local structures in the upper crust beneath Mono Lake where geologic exposure is absent. Magnetic and gravity methods, which sense contrasting physical properties of the subsurface, are ideal for studying Mono Lake. Exposed rock units surrounding Mono Lake consist mainly of Quaternary alluvium, lacustrine sediment, aeolian deposits, basalt, and Paleozoic granitic and metasedimentary rocks (Bailey, 1989). At Black Point, on the northwest shore of Mono Lake, there is a mafic cinder cone that was produced by a subaqueous eruption around 13.3 ka. Within Mono Lake there are several small dacite cinder cones and flows, forming Negit Island and part of Paoha Island, which also host deposits of Quaternary lacustrine sediments. The typical density and magnetic properties of young volcanic rocks contrast with those of the lacustrine sediment, enabling us to map their subsurface extent.

  5. The national assessment of shoreline change: a GIS compilation of vector cliff edges and associated cliff erosion data for the California coast

    Science.gov (United States)

    Hapke, Cheryl; Reid, David; Borrelli, Mark

    2007-01-01

    coastline at http://pubs.usgs.gov/of/2007/1133/ for additional information regarding methods and results (Hapke and others, 2007). Data in this report are organized into downloadable layers by region (Northern, Central, and Southern California) and are provided as vector datasets with accompanying metadata. Vector cliff edges may represent a compilation of data from one or more sources, and the sources used are included in the dataset metadata. This project employs the Environmental Systems Research Institute's (ESRI) ArcGIS as its Geographic Information System (GIS) mapping tool and contains several data layers (shapefiles) that are used to create a geographic view of the California coast. The vector data form a basemap comprising polygon and line themes that include a U.S. coastline (1:80,000), U.S. cities, and state boundaries.

  6. Generation of Multiple Metadata Formats from a Geospatial Data Repository

    Science.gov (United States)

    Hudspeth, W. B.; Benedict, K. K.; Scott, S.

    2012-12-01

    The Earth Data Analysis Center (EDAC) at the University of New Mexico is partnering with the CYBERShARE and Environmental Health Group from the Center for Environmental Resource Management (CERM), located at the University of Texas, El Paso (UTEP), the Biodiversity Institute at the University of Kansas (KU), and the New Mexico Geo-Epidemiology Research Network (GERN) to provide a technical infrastructure that enables investigation of a variety of climate-driven human/environmental systems. Two significant goals of this NASA-funded project are: a) to increase the use of NASA Earth observational data at EDAC by various modeling communities through enabling better discovery, access, and use of relevant information, and b) to expose these communities to the benefits of provenance for improving understanding and usability of heterogeneous data sources and derived model products. To realize these goals, EDAC has leveraged the core capabilities of its Geographic Storage, Transformation, and Retrieval Engine (Gstore) platform, developed with support of the NSF EPSCoR Program. The Gstore geospatial services platform provides general purpose web services based upon the REST service model, and is capable of data discovery, access, and publication functions, metadata delivery functions, data transformation, and auto-generated OGC services for those data products that can support those services. Central to the NASA ACCESS project is the delivery of geospatial metadata in a variety of formats, including ISO 19115-2/19139, FGDC CSDGM, and the Proof Markup Language (PML). This presentation details the extraction and persistence of relevant metadata in the Gstore data store, and their transformation into multiple metadata formats that are increasingly utilized by the geospatial community to document not only core library catalog elements (e.g. title, abstract, publication data, geographic extent, projection information, and database elements), but also the processing steps used to
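
    As a rough, hypothetical illustration of the multi-format delivery idea described in this record (not the Gstore implementation, and with element names heavily simplified relative to real ISO 19139 or FGDC CSDGM), a single internal metadata record can be rendered into more than one XML dialect from one source of truth:

    # Sketch only: field names and XML elements are simplified placeholders.
    import xml.etree.ElementTree as ET

    record = {
        "title": "Example Earth-observation raster product",
        "abstract": "Illustrative abstract text.",
        "west": -109.05, "east": -103.0, "south": 31.3, "north": 37.0,
    }

    def to_iso_like(rec):
        # Very loose stand-in for an ISO 19139 record (namespaces omitted).
        root = ET.Element("MD_Metadata")
        ident = ET.SubElement(root, "identificationInfo")
        ET.SubElement(ident, "title").text = rec["title"]
        ET.SubElement(ident, "abstract").text = rec["abstract"]
        return ET.tostring(root, encoding="unicode")

    def to_fgdc_like(rec):
        # Very loose stand-in for an FGDC CSDGM record.
        root = ET.Element("metadata")
        idinfo = ET.SubElement(root, "idinfo")
        ET.SubElement(idinfo, "title").text = rec["title"]
        bounding = ET.SubElement(idinfo, "bounding")
        for side in ("west", "east", "south", "north"):
            ET.SubElement(bounding, side + "bc").text = str(rec[side])
        return ET.tostring(root, encoding="unicode")

    print(to_iso_like(record))
    print(to_fgdc_like(record))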

  7. User interface development and metadata considerations for the Atmospheric Radiation Measurement (ARM) archive

    Science.gov (United States)

    Singley, P. T.; Bell, J. D.; Daugherty, P. F.; Hubbs, C. A.; Tuggle, J. G.

    1993-01-01

    This paper will discuss user interface development and the structure and use of metadata for the Atmospheric Radiation Measurement (ARM) Archive. The ARM Archive, located at Oak Ridge National Laboratory (ORNL) in Oak Ridge, Tennessee, is the data repository for the U.S. Department of Energy's (DOE's) ARM Project. After a short description of the ARM Project and the ARM Archive's role, we will consider the philosophy and goals, constraints, and prototype implementation of the user interface for the archive. We will also describe the metadata that are stored at the archive and support the user interface.

  8. Competence Based Educational Metadata for Supporting Lifelong Competence Development Programmes

    NARCIS (Netherlands)

    Sampson, Demetrios; Fytros, Demetrios

    2008-01-01

    Sampson, D., & Fytros, D. (2008). Competence Based Educational Metadata for Supporting Lifelong Competence Development Programmes. In P. Diaz, Kinshuk, I. Aedo & E. Mora (Eds.), Proceedings of the 8th IEEE International Conference on Advanced Learning Technologies (ICALT 2008), pp. 288-292. July,

  9. Sustained Assessment Metadata as a Pathway to Trustworthiness of Climate Science Information

    Science.gov (United States)

    Champion, S. M.; Kunkel, K.

    2017-12-01

    The Sustained Assessment process has produced a suite of climate change reports: The Third National Climate Assessment (NCA3), Regional Surface Climate Conditions in CMIP3 and CMIP5 for the United States: Differences, Similarities, and Implications for the U.S. National Climate Assessment, Impacts of Climate Change on Human Health in the United States: A Scientific Assessment, The State Climate Summaries, as well as the anticipated Climate Science Special Report and Fourth National Climate Assessment. Not only are these groundbreaking reports of climate change science, they are also the first suite of climate science reports to provide access to complex metadata directly connected to the report figures and graphics products. While the basic metadata documentation requirement is federally mandated through a series of federal guidelines as a part of the Information Quality Act, Sustained Assessment products are also deemed Highly Influential Scientific Assessments, which further requires demonstration of the transparency and reproducibility of the content. To meet these requirements, the Technical Support Unit (TSU) for the Sustained Assessment embarked on building a system for not only collecting and documenting metadata to the required standards, but one that also provides consumers unprecedented access to the underlying data and methods. As our process and documentation have evolved, the value of both continues to grow in parallel with the consumer expectation of quality, accessible climate science information. This presentation will detail how the TSU accomplishes the mandated requirements with their metadata collection and documentation process, as well as the technical solution designed to demonstrate compliance while also providing access to the content for the general public. We will also illustrate how our accessibility platforms guide consumers through the Assessment science at a level of transparency that builds trust and confidence in the report

  10. Large-Scale Data Collection Metadata Management at the National Computation Infrastructure

    Science.gov (United States)

    Wang, J.; Evans, B. J. K.; Bastrakova, I.; Ryder, G.; Martin, J.; Duursma, D.; Gohar, K.; Mackey, T.; Paget, M.; Siddeswara, G.

    2014-12-01

    Data Collection management has become an essential activity at the National Computation Infrastructure (NCI) in Australia. NCI's partners (CSIRO, Bureau of Meteorology, Australian National University, and Geoscience Australia), supported by the Australian Government and Research Data Storage Infrastructure (RDSI), have established a national data resource that is co-located with high-performance computing. This paper addresses the metadata management of these data assets over their lifetime. NCI manages 36 data collections (10+ PB) categorised as earth system sciences, climate and weather model data assets and products, earth and marine observations and products, geosciences, terrestrial ecosystem, water management and hydrology, astronomy, social science and biosciences. The data is largely sourced from NCI partners, the custodians of many of the national scientific records, and major research community organisations. The data is made available in a HPC and data-intensive environment - a ~56000 core supercomputer, virtual labs on a 3000 core cloud system, and data services. By assembling these large national assets, new opportunities have arisen to harmonise the data collections, making a powerful cross-disciplinary resource. To support the overall management, a Data Management Plan (DMP) has been developed to record the workflows, procedures, the key contacts and responsibilities. The DMP has fields that can be exported to the ISO19115 schema and to the collection level catalogue of GeoNetwork. The subset- or file-level metadata catalogues are linked with the collection level through a parent-child relationship defined using UUIDs. A number of tools have been developed that support interactive metadata management, bulk loading of data, and support for computational workflows or data pipelines. NCI creates persistent identifiers for each of the assets. The data collection is tracked over its lifetime, and the recognition of the data providers, data owners, data
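
    The parent-child linkage between collection-level and subset- or file-level catalogues described above can be illustrated with a minimal sketch; the record structure and field names below are invented for illustration and are not NCI's DMP or GeoNetwork schema:

    # Sketch only: dataset-level records reference their collection through a
    # parent UUID, so a collection view can be assembled from a flat catalogue.
    import uuid

    collection = {"uuid": str(uuid.uuid4()), "title": "Example climate model output collection", "parent": None}
    datasets = [
        {"uuid": str(uuid.uuid4()), "title": f"Model run {i}", "parent": collection["uuid"]}
        for i in range(3)
    ]
    catalogue = [collection] + datasets

    def children_of(parent_uuid, records):
        return [r for r in records if r["parent"] == parent_uuid]

    for child in children_of(collection["uuid"], catalogue):
        print(child["title"], "->", collection["title"])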

  11. Logic programming and metadata specifications

    Science.gov (United States)

    Lopez, Antonio M., Jr.; Saacks, Marguerite E.

    1992-01-01

    Artificial intelligence (AI) ideas and techniques are critical to the development of intelligent information systems that will be used to collect, manipulate, and retrieve the vast amounts of space data produced by 'Missions to Planet Earth.' Natural language processing, inference, and expert systems are at the core of this space application of AI. This paper presents logic programming as an AI tool that can support inference (the ability to draw conclusions from a set of complicated and interrelated facts). It reports on the use of logic programming in the study of metadata specifications for a small problem domain of airborne sensors, and the dataset characteristics and pointers that are needed for data access.

  12. NCI's national environmental research data collection: metadata management built on standards and preparing for the semantic web

    Science.gov (United States)

    Wang, Jingbo; Bastrakova, Irina; Evans, Ben; Gohar, Kashif; Santana, Fabiana; Wyborn, Lesley

    2015-04-01

    National Computational Infrastructure (NCI) manages national environmental research data collections (10+ PB) as part of its specialized high performance data node of the Research Data Storage Infrastructure (RDSI) program. We manage 40+ data collections using NCI's Data Management Plan (DMP), which is compatible with the ISO 19100 metadata standards. We utilize ISO standards to make sure our metadata is transferable and interoperable for sharing and harvesting. The DMP is used, along with metadata from the data itself, to create a hierarchy of data collection, dataset and time series catalogues that is then exposed through GeoNetwork for standard discoverability. These hierarchical catalogues are linked using a parent-child relationship. The hierarchical infrastructure of our GeoNetwork catalogue system aims to address both discoverability and in-house administrative use-cases. At NCI, we are currently improving the metadata interoperability in our catalogue by linking with standardized community vocabulary services. These emerging vocabulary services are being established to help harmonise data from different national and international scientific communities. One such vocabulary service is currently being established by the Australian National Data Services (ANDS). Data citation is another important aspect of the NCI data infrastructure, which allows tracking of data usage and infrastructure investment, encourages data sharing, and increases trust in research that is reliant on these data collections. We incorporate the standard vocabularies into the data citation metadata so that data citations become machine readable and semantically friendly for web-search purposes as well. By standardizing our metadata structure across our entire data corpus, we are laying the foundation to enable the application of appropriate semantic mechanisms to enhance discovery and analysis of NCI's national environmental research data information. We expect that this will further
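
    As a purely illustrative sketch of what a machine-readable, vocabulary-linked data citation can look like, the snippet below uses schema.org terms and a placeholder DOI; the record above does not specify which vocabularies or citation schema NCI adopted:

    # Sketch only: schema.org vocabulary and identifier values are placeholders.
    import json

    citation = {
        "@context": "https://schema.org",
        "@type": "Dataset",
        "name": "Example environmental research data collection",
        "identifier": "doi:10.0000/example-doi",  # placeholder identifier
        "publisher": {"@type": "Organization", "name": "National Computational Infrastructure"},
        "keywords": ["climate", "ocean temperature"],
        "dateModified": "2015-04-01",
    }

    print(json.dumps(citation, indent=2))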

  13. SPASE, Metadata, and the Heliophysics Virtual Observatories

    Science.gov (United States)

    Thieman, James; King, Todd; Roberts, Aaron

    2010-01-01

    To provide data search and access capability in the field of Heliophysics (the study of the Sun and its effects on the Solar System, especially the Earth) a number of Virtual Observatories (VO) have been established both via direct funding from the U.S. National Aeronautics and Space Administration (NASA) and through other funding agencies in the U.S. and worldwide. At least 15 systems can be labeled as Virtual Observatories in the Heliophysics community, 9 of them funded by NASA. The problem is that different metadata and data search approaches are used by these VOs and a search for data relevant to a particular research question can involve consulting with multiple VOs - needing to learn a different approach for finding and acquiring data for each. The Space Physics Archive Search and Extract (SPASE) project is intended to provide a common data model for Heliophysics data and therefore a common set of metadata for searches of the VOs. The SPASE Data Model has been developed through the common efforts of the Heliophysics Data and Model Consortium (HDMC) representatives over a number of years. We currently have released Version 2.1 of the Data Model. The advantages and disadvantages of the Data Model will be discussed along with the plans for the future. Recent changes requested by new members of the SPASE community indicate some of the directions for further development.

  14. Archive of digital chirp subbottom profile data collected during USGS Cruise 13CCT04 offshore of Petit Bois Island, Mississippi, August 2013

    Science.gov (United States)

    Forde, Arnell S.; Flocks, James G.; Kindinger, Jack G.; Bernier, Julie C.; Kelso, Kyle W.; Wiese, Dana S.

    2015-01-01

    From August 13-23, 2013, the U.S. Geological Survey (USGS), in cooperation with the U.S. Army Corps of Engineers (USACE) conducted geophysical surveys to investigate the geologic controls on barrier island framework and long-term sediment transport offshore of Petit Bois Island, Mississippi. This investigation is part of a broader USGS study on Coastal Change and Transport (CCT). These surveys were funded through the Mississippi Coastal Improvements Program (MsCIP) with partial funding provided by the Northern Gulf of Mexico Ecosystem Change and Hazard Susceptibility Project. This report serves as an archive of unprocessed digital chirp subbottom data, trackline maps, navigation files, Geographic Information System (GIS) files, Field Activity Collection System (FACS) logs, and formal Federal Geographic Data Committee (FGDC) metadata. Gained-showing a relative increase in signal amplitude-digital images of the seismic profiles are provided.

  15. Assuring the Quality of Agricultural Learning Repositories: Issues for the Learning Object Metadata Creation Process of the CGIAR

    Science.gov (United States)

    Zschocke, Thomas; Beniest, Jan

    The Consultative Group on International Agricultural Research (CGIAR) has established a digital repository to share its teaching and learning resources along with descriptive educational information based on the IEEE Learning Object Metadata (LOM) standard. As a critical component of any digital repository, quality metadata are essential not only to enable users to find the resources they require more easily, but also for the operation and interoperability of the repository itself. Studies show that repositories have difficulties in obtaining good quality metadata from their contributors, especially when this process involves many different stakeholders, as is the case with the CGIAR as an international organization. To address this issue the CGIAR began investigating the Open ECBCheck as well as the ISO/IEC 19796-1 standard to establish quality protocols for its training. The paper highlights the implications and challenges posed by strengthening the metadata creation workflow for disseminating learning objects of the CGIAR.

  16. QualityML: a dictionary for quality metadata encoding

    Science.gov (United States)

    Ninyerola, Miquel; Sevillano, Eva; Serral, Ivette; Pons, Xavier; Zabala, Alaitz; Bastin, Lucy; Masó, Joan

    2014-05-01

    The scenario of rapidly growing geodata catalogues requires tools that help users choose the right products. Having quality fields populated in metadata allows users to rank and then select the best fit-for-purpose products. In this direction, we have developed the QualityML (http://qualityml.geoviqua.org), a dictionary that contains hierarchically structured concepts to precisely define and relate quality levels: from quality classes to quality measurements. Generically, a quality element is the path that goes from the highest level (quality class) to the lowest levels (statistics or quality metrics). This path is used to encode the quality of datasets in the corresponding metadata schemas. The benefits of having encoded quality, in the case of data producers, are related to improvements in their product discovery and better transmission of their characteristics. In the case of data users, particularly decision-makers, they would find quality and uncertainty measures to make the best decisions as well as to perform dataset intercomparison. It also allows other components (such as visualization, discovery, or comparison tools) to be quality-aware and interoperable. On one hand, the QualityML is a profile of the ISO geospatial metadata standards providing a set of rules for precisely documenting quality indicator parameters that is structured in 6 levels. On the other hand, QualityML includes semantics and vocabularies for the quality concepts. Whenever possible, it uses statistical expressions from the UncertML dictionary (http://www.uncertml.org) encoding. However, it also extends UncertML to provide a list of alternative metrics that are commonly used to quantify quality. A specific example, based on a temperature dataset, is shown below. The annual mean temperature map has been validated with independent in-situ measurements to obtain a global error of 0.5 °C. Level 0: Quality class (e.g., Thematic accuracy) Level 1: Quality indicator (e.g., Quantitative
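
    The hierarchical quality path described above, from quality class down to a concrete statistic, can be sketched as a simple nested structure; the level names follow the abstract's temperature example (the indicator name below is an assumption, since the example is truncated), and the encoding itself is an illustration rather than actual QualityML:

    # Sketch only: level names and values mirror the 0.5 °C temperature example.
    quality_element = {
        "level0_class": "Thematic accuracy",
        "level1_indicator": "Quantitative attribute accuracy",  # assumed indicator name
        "level2_measurement": {
            "description": "Validation against independent in-situ measurements",
            "statistic": "Global error",  # could map to an UncertML-style term
            "value": 0.5,
            "unit": "degree Celsius",
        },
    }

    def path(q):
        return " > ".join([q["level0_class"], q["level1_indicator"],
                           q["level2_measurement"]["statistic"]])

    print(path(quality_element), "=",
          quality_element["level2_measurement"]["value"],
          quality_element["level2_measurement"]["unit"])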

  17. Scalable Metadata Management for a Large Multi-Source Seismic Data Repository

    Energy Technology Data Exchange (ETDEWEB)

    Gaylord, J. M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Dodge, D. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Magana-Zook, S. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Barno, J. G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Knapp, D. R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Thomas, J. M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Sullivan, D. S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Ruppert, S. D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Mellors, R. J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-05-26

    In this work, we implemented the key metadata management components of a scalable seismic data ingestion framework to address limitations in our existing system, and to position it for anticipated growth in volume and complexity.

  18. AFSC/NMML/CCEP: Northern fur seal demography at San Miguel Island, California, 1974 - 2014

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The National Marine Mammal Laboratories' California Current Ecosystem Program (AFSC/NOAA) initiated a long-term marking program of northern fur seals (Callorhinus...

  19. DataNet: A flexible metadata overlay over file resources

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    Managing and sharing data stored in files is a challenge, given the volumes of data produced by various scientific experiments [1]. While solutions such as Globus Online [2] focus on file transfer and synchronization, in this work we propose an additional layer of metadata over file resources which helps to categorize and structure the data, as well as to make it efficient in integration with web-based research gateways. The basic concept of the proposed solution [3] is a data model consisting of entities built from primitive types such as numbers and text, as well as from files and relationships among different entities. This allows for building complex data structure definitions and mixing metadata and file data in a single model tailored for a given scientific field. A data model becomes actionable after being deployed as a data repository, which is done automatically by the proposed framework using one of the available PaaS (platform-as-a-service) platforms, and is exposed to the world as a REST service, which...
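
    A minimal sketch of the core idea, entities that mix primitive fields with references to file resources, is shown below; the class, field names, and payload shape are hypothetical and are not the DataNet API:

    # Sketch only: an entity that combines primitive metadata with a file reference.
    from dataclasses import dataclass, asdict
    import json

    @dataclass
    class Measurement:
        sample_id: str        # primitive metadata
        temperature_k: float  # primitive metadata
        raw_file: str         # reference to a file resource, e.g. an upload id

    record = Measurement(sample_id="S-042", temperature_k=293.15, raw_file="files/run42.dat")

    # Payload as it might be POSTed to an auto-generated repository REST endpoint.
    print(json.dumps(asdict(record), indent=2))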

  20. Semantic Web: Metadata, Linked Data, Open Data

    Directory of Open Access Journals (Sweden)

    Vanessa Russo

    2015-12-01

    Full Text Available What's the Semantic Web? What's the use? The inventor of the Web, Tim Berners-Lee, describes it as a research methodology able to take advantage of the network to its maximum capacity. This metadata system represents the innovative element in the passage from web 2.0 to web 3.0. In this context, we try to understand the theoretical and informatic requirements of the Semantic Web. Finally, we explain Linked Data applications used to develop new tools for active citizenship.

  1. Dreams deferred: Contextualizing the health and psychosocial needs of undocumented Asian and Pacific Islander young adults in Northern California.

    Science.gov (United States)

    Sudhinaraset, May; Ling, Irving; To, Tu My; Melo, Jason; Quach, Thu

    2017-07-01

    There are currently 1.5 million undocumented Asians and Pacific Islanders (APIs) in the US. Undocumented API young adults, in particular, come of age in a challenging political and social climate, but little is known about their health outcomes. To our knowledge, this is the first study to assess the psychosocial needs and health status of API undocumented young adults. Guided by social capital theory, this qualitative study describes the social context of API undocumented young adults (ages 18-31), including community and government perceptions, and how social relationships influence health. This study was conducted in Northern California and included four focus group discussions (FGDs) and 24 in-depth interviews (IDIs), with 32 unique participants total. FGDs used purposeful sampling by gender (two male and two female discussions) and education status (in school and out-of-school). Findings suggest low bonding and bridging social capital. Results indicate that community distrust is high, even within the API community, due to high levels of exploitation, discrimination, and threats of deportation. Participants described how documentation status is a barrier in accessing health services, particularly mental health and sexual and reproductive health services. This study identifies trusted community groups and discusses recommendations for future research, programs, and policies. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  2. Building a High Performance Metadata Broker using Clojure, NoSQL and Message Queues

    Science.gov (United States)

    Truslove, I.; Reed, S.

    2013-12-01

    In practice, Earth and Space Science Informatics often relies on getting more done with less: fewer hardware resources, less IT staff, fewer lines of code. As a capacity-building exercise focused on rapid development of high-performance geoinformatics software, the National Snow and Ice Data Center (NSIDC) built a prototype metadata brokering system using a new JVM language, modern database engines and virtualized or cloud computing resources. The metadata brokering system was developed with the overarching goals of (i) demonstrating a technically viable product with as little development effort as possible, (ii) using very new yet very popular tools and technologies in order to get the most value from the least legacy-encumbered code bases, and (iii) being a high-performance system by using scalable subcomponents, and implementation patterns typically used in web architectures. We implemented the system using the Clojure programming language (an interactive, dynamic, Lisp-like JVM language), Redis (a fast in-memory key-value store) as both the data store for original XML metadata content and as the provider for the message queueing service, and ElasticSearch for its search and indexing capabilities to generate search results. On evaluating the results of the prototyping process, we believe that the technical choices did in fact allow us to do more for less, due to the expressive nature of the Clojure programming language and its easy interoperability with Java libraries, and the successful reuse or re-application of high performance products or designs. This presentation will describe the architecture of the metadata brokering system, cover the tools and techniques used, and describe lessons learned, conclusions, and potential next steps.
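
    The record above describes a Clojure, Redis, and ElasticSearch design; the sketch below restates the same store-then-index pattern in Python, with plain in-memory dictionaries standing in for the key-value store and the search index so that it runs without any external services:

    # Sketch only: dicts stand in for Redis and Elasticsearch.
    kv_store = {}        # original XML metadata keyed by record id
    inverted_index = {}  # search index: term -> set of record ids

    def ingest(record_id, xml_text, searchable_terms):
        kv_store[record_id] = xml_text
        for term in searchable_terms:
            inverted_index.setdefault(term.lower(), set()).add(record_id)

    def search(term):
        return [kv_store[rid] for rid in inverted_index.get(term.lower(), set())]

    ingest("r1", "<metadata><title>Sea ice extent</title></metadata>", ["sea", "ice", "extent"])
    ingest("r2", "<metadata><title>Glacier mass balance</title></metadata>", ["glacier", "mass", "balance"])
    print(search("ice"))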

  3. California sea lion and northern fur seal censuses conducted at Channel Islands, California by Alaska Fisheries Science Center from 1969-07-31 to 2015-08-08 (NCEI Accession 0145165)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The National Marine Mammal Laboratories' California Current Ecosystem Program (AFSC/NOAA) initiated and maintains census programs for California sea lions (Zalophus...

  4. On the communication of scientific data: The Full-Metadata Format

    DEFF Research Database (Denmark)

    Riede, Moritz; Schueppel, Rico; Sylvester-Hvid, Kristian O.

    2010-01-01

    In this paper, we introduce a scientific format for text-based data files, which facilitates storing and communicating tabular data sets. The so-called Full-Metadata Format builds on the widely used INI-standard and is based on four principles: readable self-documentation, flexible structure, fail...

  5. Polychlorinated biphenyls (PCBs) in recreational marina sediments of San Diego Bay, southern California.

    Science.gov (United States)

    Neira, Carlos; Vales, Melissa; Mendoza, Guillermo; Hoh, Eunha; Levin, Lisa A

    2018-01-01

    Polychlorinated biphenyl (PCB) concentrations were determined in surface sediments from three recreational marinas in San Diego Bay, California. Total PCB concentrations ranged from 23 to 153, 31 to 294, and 151 to 1387 ng g-1 for Shelter Island Yacht Basin (SIYB), Harbor Island West (HW) and Harbor Island East (HE), respectively. PCB concentrations were significantly higher in HE and PCB group composition differed relative to HW and SIYB, which were not significantly different from each other in concentration or group composition. In marina sediments there was a predominance (82-85%) of heavier molecular weight PCBs, with homologous groups (6Cl-7Cl) comprising 59% of the total. In HE, 75% of the sites exceeded the effect range median (ERM), and toxicity equivalence (TEQ dioxin-like PCBs) values were higher relative to those of HW and SIYB, suggesting a potential ecotoxicological risk. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Infection of California sea lions (Zalophus californianus) with terrestrial Brucella spp.

    Science.gov (United States)

    Avalos-Téllez, Rosalía; Ramírez-Pfeiffer, Carlos; Hernández-Castro, Rigoberto; Díaz-Aparicio, Efrén; Sánchez-Domínguez, Carlos; Zavala-Norzagaray, Alan; Arellano-Reynoso, Beatriz; Suárez-Güemes, Francisco; Aguirre, A Alonso; Aurioles-Gamboa, David

    2014-10-01

    Infections with Brucella ceti and pinnipedialis are prevalent in marine mammals worldwide. A total of 22 California sea lions (Zalophus californianus) were examined to determine their exposure to Brucella spp. at San Esteban Island in the Gulf of California, Mexico, in June and July 2011. Although samples of blood, vaginal mucus and milk cultured negative for these bacteria, the application of rose Bengal, agar gel immunodiffusion, PCR and modified fluorescence polarization assays found that five animals (22.7%) had evidence of exposure to Brucella strains. The data also suggested that in two of these five sea lions the strains involved were of terrestrial origin, a novel finding in marine mammals. Further work will be required to validate and determine the epidemiological significance of this finding. Copyright © 2014 Elsevier Ltd. All rights reserved.

  7. Food webs including parasites, biomass, body sizes, and life stages for three California/Baja California estuaries

    Science.gov (United States)

    Hechinger, Ryan F.; Lafferty, Kevin D.; McLaughlin, John P.; Fredensborg, Brian L.; Huspeni, Todd C.; Lorda, Julio; Sandhu, Parwant K.; Shaw, Jenny C.; Torchin, Mark E.; Whitney, Kathleen L.; Kuris, Armand M.

    2001-01-01

    This data set presents food webs for three North American Pacific coast estuaries and a “Metaweb” composed of the species/stages compiled from all three estuaries. The webs have four noteworthy attributes: (1) parasites (infectious agents), (2) body-size information, (3) biomass information, and (4) ontogenetic stages of many animals with complex life cycles. The estuaries are Carpinteria Salt Marsh, California (CSM); Estero de Punta Banda, Baja California (EPB); and Bahía Falsa in Bahía San Quintín, Baja California (BSQ). Most data on species assemblages and parasitism were gathered via consistent sampling that acquired body size and biomass information for plants and animals larger than ∼1 mm, and for many infectious agents (mostly metazoan parasites, but also some microbes). We augmented this with information from additional published sources and by sampling unrepresented groups (e.g., plankton). We estimated free-living consumer–resource links primarily by extending a previously published version of the CSM web (which the current CSM web supplants) and determined most parasite consumer–resource links from direct observation. We recognize 21 possible link types including four general interactions: predators consuming prey, parasites consuming hosts, predators consuming parasites, and parasites consuming parasites. While generally resolved to the species level, we report stage-specific nodes for many animals with complex life cycles. We include additional biological information for each node, such as taxonomy, lifestyle (free-living, infectious, commensal, mutualist), mobility, and residency. The Metaweb includes 500 nodes, 314 species, and 11 270 links projected to be present given appropriate species' co-occurrences. Of these, 9247 links were present in one or more of the estuarine webs. The remaining 2023 links were not present in the estuaries but are included here because they may occur in other places or times. Initial analyses have examined

  8. The importance of metadata to assess information content in digital reconstructions of neuronal morphology.

    Science.gov (United States)

    Parekh, Ruchi; Armañanzas, Rubén; Ascoli, Giorgio A

    2015-04-01

    Digital reconstructions of axonal and dendritic arbors provide a powerful representation of neuronal morphology in formats amenable to quantitative analysis, computational modeling, and data mining. Reconstructed files, however, require adequate metadata to identify the appropriate animal species, developmental stage, brain region, and neuron type. Moreover, experimental details about tissue processing, neurite visualization and microscopic imaging are essential to assess the information content of digital morphologies. Typical morphological reconstructions only partially capture the underlying biological reality. Tracings are often limited to certain domains (e.g., dendrites and not axons), may be incomplete due to tissue sectioning, imperfect staining, and limited imaging resolution, or can disregard aspects irrelevant to their specific scientific focus (such as branch thickness or depth). Gauging these factors is critical in subsequent data reuse and comparison. NeuroMorpho.Org is a central repository of reconstructions from many laboratories and experimental conditions. Here, we introduce substantial additions to the existing metadata annotation aimed to describe the completeness of the reconstructed neurons in NeuroMorpho.Org. These expanded metadata form a suitable basis for effective description of neuromorphological data.

  9. Information resource description creating and managing metadata

    CERN Document Server

    Hider, Philip

    2012-01-01

    An overview of the field of information organization that examines resource description as both a product and process of the contemporary digital environment. This timely book employs the unifying mechanism of the semantic web and the resource description framework to integrate the various traditions and practices of information and knowledge organization. Uniquely, it covers both the domain-specific traditions and practices and the practices of the 'metadata movement' through a single lens: that of resource description in the broadest, semantic web sense. This approach more readily accommodate

  10. Ready to put metadata on the post-2015 development agenda? Linking data publications to responsible innovation and science diplomacy.

    Science.gov (United States)

    Özdemir, Vural; Kolker, Eugene; Hotez, Peter J; Mohin, Sophie; Prainsack, Barbara; Wynne, Brian; Vayena, Effy; Coşkun, Yavuz; Dereli, Türkay; Huzair, Farah; Borda-Rodriguez, Alexander; Bragazzi, Nicola Luigi; Faris, Jack; Ramesar, Raj; Wonkam, Ambroise; Dandara, Collet; Nair, Bipin; Llerena, Adrián; Kılıç, Koray; Jain, Rekha; Reddy, Panga Jaipal; Gollapalli, Kishore; Srivastava, Sanjeeva; Kickbusch, Ilona

    2014-01-01

    Metadata refer to descriptions about data or as some put it, "data about data." Metadata capture what happens on the backstage of science, on the trajectory from study conception, design, funding, implementation, and analysis to reporting. Definitions of metadata vary, but they can include the context information surrounding the practice of science, or data generated as one uses a technology, including transactional information about the user. As the pursuit of knowledge broadens in the 21st century from traditional "science of whats" (data) to include "science of hows" (metadata), we analyze the ways in which metadata serve as a catalyst for responsible and open innovation, and by extension, science diplomacy. In 2015, the United Nations Millennium Development Goals (MDGs) will formally come to an end. Therefore, we propose that metadata, as an ingredient of responsible innovation, can help achieve the Sustainable Development Goals (SDGs) on the post-2015 agenda. Such responsible innovation, as a collective learning process, has become a key component, for example, of the European Union's 80 billion Euro Horizon 2020 R&D Program from 2014-2020. Looking ahead, OMICS: A Journal of Integrative Biology, is launching an initiative for a multi-omics metadata checklist that is flexible yet comprehensive, and will enable more complete utilization of single and multi-omics data sets through data harmonization and greater visibility and accessibility. The generation of metadata that shed light on how omics research is carried out, by whom and under what circumstances, will create an "intervention space" for integration of science with its socio-technical context. This will go a long way to addressing responsible innovation for a fairer and more transparent society. If we believe in science, then such reflexive qualities and commitments attained by availability of omics metadata are preconditions for a robust and socially attuned science, which can then remain broadly

  11. Standardizing metadata and taxonomic identification in metabarcoding studies.

    Science.gov (United States)

    Tedersoo, Leho; Ramirez, Kelly S; Nilsson, R Henrik; Kaljuvee, Aivi; Kõljalg, Urmas; Abarenkov, Kessy

    2015-01-01

    High-throughput sequencing-based metabarcoding studies produce vast amounts of ecological data, but a lack of consensus on standardization of metadata and how to refer to the species recovered severely hampers reanalysis and comparisons among studies. Here we propose an automated workflow covering data submission, compression, storage and public access to allow easy data retrieval and inter-study communication. Such standardized and readily accessible datasets facilitate data management, taxonomic comparisons and compilation of global metastudies.

  12. The Genomic Observatories Metadatabase (GeOMe): A new repository for field and sampling event metadata associated with genetic samples

    Science.gov (United States)

    Deck, John; Gaither, Michelle R.; Ewing, Rodney; Bird, Christopher E.; Davies, Neil; Meyer, Christopher; Riginos, Cynthia; Toonen, Robert J.; Crandall, Eric D.

    2017-01-01

    The Genomic Observatories Metadatabase (GeOMe, http://www.geome-db.org/) is an open access repository for geographic and ecological metadata associated with biosamples and genetic data. Whereas public databases have served as vital repositories for nucleotide sequences, they do not accession all the metadata required for ecological or evolutionary analyses. GeOMe fills this need, providing a user-friendly, web-based interface for both data contributors and data recipients. The interface allows data contributors to create a customized yet standard-compliant spreadsheet that captures the temporal and geospatial context of each biosample. These metadata are then validated and permanently linked to archived genetic data stored in the National Center for Biotechnology Information’s (NCBI’s) Sequence Read Archive (SRA) via unique persistent identifiers. By linking ecologically and evolutionarily relevant metadata with publicly archived sequence data in a structured manner, GeOMe sets a gold standard for data management in biodiversity science. PMID:28771471
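
    A minimal sketch of the kind of row validation and SRA linking GeOMe performs on a contributor spreadsheet might look like the following; the field names, rules, and accession value are illustrative, not GeOMe's actual template or validator:

    # Sketch only: validate one spreadsheet row and report its SRA link.
    from datetime import datetime

    row = {
        "sample_id": "FISH-0001",
        "decimal_latitude": 21.3,
        "decimal_longitude": -157.8,
        "collection_date": "2016-07-14",
        "sra_accession": "SRR0000000",  # placeholder persistent link to NCBI SRA
    }

    def validate(r):
        errors = []
        if not -90 <= r["decimal_latitude"] <= 90:
            errors.append("latitude out of range")
        if not -180 <= r["decimal_longitude"] <= 180:
            errors.append("longitude out of range")
        try:
            datetime.strptime(r["collection_date"], "%Y-%m-%d")
        except ValueError:
            errors.append("collection_date not ISO formatted")
        return errors

    print(validate(row) or "row accepted; metadata linked to " + row["sra_accession"])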

  13. The Genomic Observatories Metadatabase (GeOMe): A new repository for field and sampling event metadata associated with genetic samples.

    Directory of Open Access Journals (Sweden)

    John Deck

    2017-08-01

    Full Text Available The Genomic Observatories Metadatabase (GeOMe, http://www.geome-db.org/) is an open access repository for geographic and ecological metadata associated with biosamples and genetic data. Whereas public databases have served as vital repositories for nucleotide sequences, they do not accession all the metadata required for ecological or evolutionary analyses. GeOMe fills this need, providing a user-friendly, web-based interface for both data contributors and data recipients. The interface allows data contributors to create a customized yet standard-compliant spreadsheet that captures the temporal and geospatial context of each biosample. These metadata are then validated and permanently linked to archived genetic data stored in the National Center for Biotechnology Information's (NCBI's) Sequence Read Archive (SRA) via unique persistent identifiers. By linking ecologically and evolutionarily relevant metadata with publicly archived sequence data in a structured manner, GeOMe sets a gold standard for data management in biodiversity science.

  14. Metadata Access Tool for Climate and Health

    Science.gov (United States)

    Trtanji, J.

    2012-12-01

    The need for health information resources to support climate change adaptation and mitigation decisions is growing, both in the United States and around the world, as the manifestations of climate change become more evident and widespread. In many instances, these information resources are not specific to a changing climate, but have either been developed or are highly relevant for addressing health issues related to existing climate variability and weather extremes. To help address the need for more integrated data, the Interagency Cross-Cutting Group on Climate Change and Human Health, a working group of the U.S. Global Change Research Program, has developed the Metadata Access Tool for Climate and Health (MATCH). MATCH is a gateway to relevant information that can be used to solve problems at the nexus of climate science and public health by facilitating research, enabling scientific collaborations in a One Health approach, and promoting data stewardship that will enhance the quality and application of climate and health research. MATCH is a searchable clearinghouse of publicly available Federal metadata including monitoring and surveillance data sets, early warning systems, and tools for characterizing the health impacts of global climate change. Examples of relevant databases include the Centers for Disease Control and Prevention's Environmental Public Health Tracking System and NOAA's National Climate Data Center's national and state temperature and precipitation data. This presentation will introduce the audience to this new web-based geoportal and demonstrate its features and potential applications.

  15. Definition of an ISO 19115 metadata profile for SeaDataNet II Cruise Summary Reports and its XML encoding

    Science.gov (United States)

    Boldrini, Enrico; Schaap, Dick M. A.; Nativi, Stefano

    2013-04-01

    SeaDataNet implements a distributed pan-European infrastructure for Ocean and Marine Data Management whose nodes are maintained by 40 national oceanographic and marine data centers from 35 countries riparian to all European seas. A unique portal makes possible distributed discovery, visualization and access of the available sea data across all the member nodes. Geographic metadata play an important role in such an infrastructure, enabling an efficient documentation and discovery of the resources of interest. In particular: - Common Data Index (CDI) metadata describe the sea datasets, including identification information (e.g. product title, interested area), evaluation information (e.g. data resolution, constraints) and distribution information (e.g. download endpoint, download protocol); - Cruise Summary Reports (CSR) metadata describe cruises and field experiments at sea, including identification information (e.g. cruise title, name of the ship), acquisition information (e.g. utilized instruments, number of samples taken) In the context of the second phase of SeaDataNet (SeaDataNet 2 EU FP7 project, grant agreement 283607, started on October 1st, 2011 for a duration of 4 years) a major target is the setting, adoption and promotion of common international standards, to the benefit of outreach and interoperability with the international initiatives and communities (e.g. OGC, INSPIRE, GEOSS, …). A standardization effort conducted by CNR with the support of MARIS, IFREMER, STFC, BODC and ENEA has led to the creation of an ISO 19115 metadata profile of CDI and its XML encoding based on ISO 19139. The CDI profile is now in its stable version and it's being implemented and adopted by the SeaDataNet community tools and software. The effort has then continued to produce an ISO based metadata model and its XML encoding also for CSR. The metadata elements included in the CSR profile belong to different models: - ISO 19115: E.g. cruise identification information, including
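
    As a much-simplified sketch of encoding a few Cruise Summary Report fields as ISO 19139-style XML (element names and nesting are abbreviated and unqualified; the real SeaDataNet CSR profile is far richer and namespace-aware), one might write:

    # Sketch only: element names are placeholders, not the CSR profile schema.
    import xml.etree.ElementTree as ET

    csr = {"cruise": "Example cruise 2012-07", "ship": "RV Example", "instruments": ["CTD", "ADCP"]}

    root = ET.Element("MI_Metadata")
    ident = ET.SubElement(root, "identificationInfo")
    ET.SubElement(ident, "citation_title").text = csr["cruise"]
    ET.SubElement(ident, "platform").text = csr["ship"]
    acq = ET.SubElement(root, "acquisitionInformation")
    for name in csr["instruments"]:
        ET.SubElement(acq, "instrument").text = name

    print(ET.tostring(root, encoding="unicode"))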

  16. Utility of collecting metadata to manage a large scale conditions database in ATLAS

    International Nuclear Information System (INIS)

    Gallas, E J; Albrand, S; Borodin, M; Formica, A

    2014-01-01

    The ATLAS Conditions Database, based on the LCG Conditions Database infrastructure, contains a wide variety of information needed in online data taking and offline analysis. The total volume of ATLAS conditions data is in the multi-Terabyte range. Internally, the active data is divided into 65 separate schemas (each with hundreds of underlying tables) according to overall data taking type, detector subsystem, and whether the data is used offline or strictly online. While each schema has a common infrastructure, each schema's data is entirely independent of other schemas, except at the highest level, where sets of conditions from each subsystem are tagged globally for ATLAS event data reconstruction and reprocessing. The partitioned nature of the conditions infrastructure works well for most purposes, but metadata about each schema is problematic to collect in global tools from such a system because it is only accessible via LCG tools schema by schema. This makes it difficult to get an overview of all schemas, collect interesting and useful descriptive and structural metadata for the overall system, and connect it with other ATLAS systems. This type of global information is needed for time critical data preparation tasks for data processing and has become more critical as the system has grown in size and diversity. Therefore, a new system has been developed to collect metadata for the management of the ATLAS Conditions Database. The structure and implementation of this metadata repository will be described. In addition, we will report its usage since its inception during LHC Run 1, how it has been exploited in the process of conditions data evolution during LS1 (the current LHC long shutdown) in preparation for Run 2, and long term plans to incorporate more of its information into future ATLAS Conditions Database tools and the overall ATLAS information infrastructure.

  17. Autonomous Underwater Vehicle Data Management and Metadata Interoperability for Coastal Ocean Studies

    Science.gov (United States)

    McCann, M. P.; Ryan, J. P.; Chavez, F. P.; Rienecker, E.

    2004-12-01

    Data from over 1000 km of Autonomous Underwater Vehicle (AUV) surveys of Monterey Bay have been collected and cataloged in an ocean observatory data management system. The Monterey Bay Aquarium Research Institute's AUV is equipped with a suite of instruments that include a conductivity, temperature, depth (CTD) instrument, transmissometers, a fluorometer, a nitrate sensor, and an inertial navigation system. Data are logged on the vehicle and upon completion of a survey XML descriptions of the data are submitted to the Shore Side Data System (SSDS). Instrument data are then processed on shore to apply calibrations and produce scientifically useful data products. The SSDS employs a data model that tracks data from the instrument that created it through all the consuming processes that generate derived products. SSDS employs OPeNDAP and netCDF to provide data set interoperability at the data level. The core of SSDS is the metadata that is the catalog of these data sets and their relation to all other relevant data. The metadata is managed in a relational database and governed by an Enterprise Java Bean (EJB) server application. Cross-platform Java applications have been written to manage and visualize these data. A Java Swing application - the Hierarchical Ocean Observatory Visualization and Editing System (HOOVES) - has been developed to provide visualization of data set pedigree and data set variables. Because the SSDS data model is generalized according to "Data Producers" and "Data Containers", many different types of data can be represented in SSDS, allowing for interoperability at a metadata level. Comparisons of appropriate data sets, whether they are from an autonomous underwater vehicle or from a fixed mooring, are easily made using SSDS. The authors will present the SSDS data model and show examples of how the model helps organize data set metadata allowing for data discovery and interoperability. With improved discovery and interoperability the system is helping us

  18. Web Approach for Ontology-Based Classification, Integration, and Interdisciplinary Usage of Geoscience Metadata

    Directory of Open Access Journals (Sweden)

    B Ritschel

    2012-10-01

    Full Text Available The Semantic Web is a W3C approach that integrates the different sources of semantics within documents and services using ontology-based techniques. The main objective of this approach in the geoscience domain is the improvement of understanding, integration, and usage of Earth and space science related web content in terms of data, information, and knowledge for machines and people. The modeling and representation of semantic attributes and relations within and among documents can be realized by human readable concept maps and machine readable OWL documents. The objectives for the usage of the Semantic Web approach in the GFZ data center ISDC project are the design of an extended classification of metadata documents for product types related to instruments, platforms, and projects as well as the integration of different types of metadata related to data product providers, users, and data centers. Sources of content and semantics for the description of Earth and space science product types and related classes are standardized metadata documents (e.g., DIF documents), publications, grey literature, and Web pages. Other sources are information provided by users, such as tagging data and social navigation information. The integration of controlled vocabularies as well as folksonomies plays an important role in the design of well formed ontologies.

  19. An institutional repository initiative and issues concerning metadata

    OpenAIRE

    BAYRAM, Özlem; ATILGAN, Doğan; ARSLANTEKİN, Sacit

    2006-01-01

    Ankara University has become one of the first open access initiatives in Turkey. The Ankara University Open Access Program (AUO) was formed as part of the Open Access project (http://acikarsiv.ankara.edu.tr) and is supported by the University as an example of an open access institutional repository. As a further step, the system will require metadata tools to enable international recognition. According to the Budapest Open Access Initiative, two strategies are suggested for open access t...

  20. An Examination of the Adoption of Preservation Metadata in Cultural Heritage Institutions: An Exploratory Study Using Diffusion of Innovations Theory

    Science.gov (United States)

    Alemneh, Daniel Gelaw

    2009-01-01

    Digital preservation is a significant challenge for cultural heritage institutions and other repositories of digital information resources. Recognizing the critical role of metadata in any successful digital preservation strategy, the Preservation Metadata Implementation Strategies (PREMIS) has been extremely influential in providing a "core" set…

  1. Geo-metadata design for the GIS of the pre-selected site for China's high-level radioactive waste repository

    International Nuclear Information System (INIS)

    Zhong Xia; Wang Ju; Huang Shutao; Wang Shuhong; Gao Min

    2008-01-01

    The information system for the geological disposal of high-level radioactive waste aims at the integrated management and full application of multi-source information in research on the geological disposal of high-level radioactive waste. The establishment and operation of the system require the support of geo-metadata for this multi-source information. In this paper, on the basis of a geo-data analysis for the pre-selected site for disposal of high-level radioactive waste, we apply existing metadata standards and research and design the content, management pattern, and application of geo-metadata for the multi-source information. (authors)

  2. Overview of long-term field experiments in Germany - metadata visualization

    Science.gov (United States)

    Muqit Zoarder, Md Abdul; Heinrich, Uwe; Svoboda, Nikolai; Grosse, Meike; Hierold, Wilfried

    2017-04-01

    BonaRes ("soil as a sustainable resource for the bioeconomy") is conducting to collect data and metadata of agricultural long-term field experiments (LTFE) of Germany. It is funded by the German Federal Ministry of Education and Research (BMBF) under the umbrella of the National Research Strategy BioEconomy 2030. BonaRes consists of ten interdisciplinary research project consortia and the 'BonaRes - Centre for Soil Research'. BonaRes Data Centre is responsible for collecting all LTFE data and regarding metadata into an enterprise database upon higher level of security and visualization of the data and metadata through data portal. In the frame of the BonaRes project, we are compiling an overview of long-term field experiments in Germany that is based on a literature review, the results of the online survey and direct contacts with LTFE operators. Information about research topic, contact person, website, experiment setup and analyzed parameters are collected. Based on the collected LTFE data, an enterprise geodatabase is developed and a GIS-based web-information system about LTFE in Germany is also settled. Various aspects of the LTFE, like experiment type, land-use type, agricultural category and duration of experiment, are presented in thematic maps. This information system is dynamically linked to the database, which means changes in the data directly affect the presentation. An easy data searching option using LTFE name, -location or -operators and the dynamic layer selection ensure a user-friendly web application. Dispersion and visualization of the overlapping LTFE points on the overview map are also challenging and we make it automatized at very zoom level which is also a consistent part of this application. The application provides both, spatial location and meta-information of LTFEs, which is backed-up by an enterprise geodatabase, GIS server for hosting map services and Java script API for web application development.

  3. Metadata Harvesting in Regional Digital Libraries in the PIONIER Network

    Science.gov (United States)

    Mazurek, Cezary; Stroinski, Maciej; Werla, Marcin; Weglarz, Jan

    2006-01-01

    Purpose: The paper aims to present the concept and functionality of metadata harvesting for regional digital libraries, based on the OAI-PMH protocol. This functionality is part of the regional digital libraries platform created in Poland. The platform was required to reach one of the main objectives of the Polish PIONIER Programme--to enrich the…
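
    A basic OAI-PMH ListRecords harvest loop of the kind a regional aggregator might run can be sketched as follows; the endpoint URL is a placeholder, so the example call is left commented out:

    # Sketch only: harvest records page by page using resumption tokens.
    import urllib.parse
    import urllib.request
    import xml.etree.ElementTree as ET

    OAI = "{http://www.openarchives.org/OAI/2.0/}"
    BASE_URL = "https://example.org/oai"  # placeholder OAI-PMH endpoint

    def harvest(base_url, metadata_prefix="oai_dc"):
        params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
        while True:
            url = base_url + "?" + urllib.parse.urlencode(params)
            with urllib.request.urlopen(url) as response:
                tree = ET.fromstring(response.read())
            for record in tree.iter(OAI + "record"):
                yield record
            token = tree.find(f".//{OAI}resumptionToken")
            if token is None or not (token.text or "").strip():
                break
            params = {"verb": "ListRecords", "resumptionToken": token.text.strip()}

    # for rec in harvest(BASE_URL):
    #     print(rec.find(f".//{OAI}identifier").text)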

  4. Metadata In, Library Out. A Simple, Robust Digital Library System

    Directory of Open Access Journals (Sweden)

    Tonio Loewald

    2010-06-01

    Full Text Available Tired of being held hostage to expensive systems that did not meet our needs, the University of Alabama Libraries developed an XML schema-agnostic, light-weight digital library delivery system based on the principles of "Keep It Simple, Stupid!" Metadata and derivatives reside in openly accessible web directories, which support the development of web agents and new usability software, as well as modification and complete retrieval at any time. The file name structure is echoed in the file system structure, enabling the delivery software to make inferences about relationships, sequencing, and complex object structure without having to encapsulate files in complex metadata schemas. The web delivery system, Acumen, is built of PHP, JSON, JavaScript and HTML5, using MySQL to support fielded searching. Recognizing that spreadsheets are more user-friendly than XML, an accompanying widget, Archivists Utility, transforms spreadsheets into MODS based on rules selected by the user. Acumen, Archivists Utility, and all supporting software scripts will be made available as open source.
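
    The spreadsheet-to-MODS transformation described above can be sketched as follows; only a handful of MODS elements are shown and the row fields are invented, so this illustrates the idea rather than the Archivists Utility code:

    # Sketch only: render one spreadsheet row as a minimal MODS record.
    import xml.etree.ElementTree as ET

    row = {"title": "Letter from A. to B., 1932", "creator": "Doe, Jane", "date": "1932-05-01"}

    MODS_NS = "http://www.loc.gov/mods/v3"
    ET.register_namespace("mods", MODS_NS)

    mods = ET.Element(f"{{{MODS_NS}}}mods")
    title_info = ET.SubElement(mods, f"{{{MODS_NS}}}titleInfo")
    ET.SubElement(title_info, f"{{{MODS_NS}}}title").text = row["title"]
    name = ET.SubElement(mods, f"{{{MODS_NS}}}name")
    ET.SubElement(name, f"{{{MODS_NS}}}namePart").text = row["creator"]
    origin = ET.SubElement(mods, f"{{{MODS_NS}}}originInfo")
    ET.SubElement(origin, f"{{{MODS_NS}}}dateIssued").text = row["date"]

    print(ET.tostring(mods, encoding="unicode"))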

  5. Using seabird habitat modeling to inform marine spatial planning in central California's National Marine Sanctuaries.

    Directory of Open Access Journals (Sweden)

    Jennifer McGowan

    Full Text Available Understanding seabird habitat preferences is critical to future wildlife conservation and threat mitigation in California. The objective of this study was to investigate drivers of seabird habitat selection within the Gulf of the Farallones and Cordell Bank National Marine Sanctuaries to identify areas for targeted conservation planning. We used seabird abundance data collected by the Applied California Current Ecosystem Studies Program (ACCESS) from 2004-2011. We used zero-inflated negative binomial regression to model species abundance and distribution as a function of near surface ocean water properties, distances to geographic features and oceanographic climate indices to identify patterns in foraging habitat selection. We evaluated seasonal, inter-annual and species-specific variability of at-sea distributions for the five most abundant seabirds nesting on the Farallon Islands: western gull (Larus occidentalis), common murre (Uria aalge), Cassin's auklet (Ptychoramphus aleuticus), rhinoceros auklet (Cerorhinca monocerata) and Brandt's cormorant (Phalacrocorax penicillatus). The waters in the vicinity of Cordell Bank and the continental shelf east of the Farallon Islands emerged as persistent and highly selected foraging areas across all species. Further, we conducted a spatial prioritization exercise to optimize seabird conservation areas with and without considering impacts of current human activities. We explored three conservation scenarios where 10, 30 and 50 percent of highly selected, species-specific foraging areas would be conserved. We compared and contrasted results in relation to existing marine protected areas (MPAs) and the future alternative energy footprint identified by the California Ocean Uses Atlas. Our results show that the majority of highly selected seabird habitat lies outside of state MPAs where threats from shipping, oil spills, and offshore energy development remain. This analysis accentuates the need for innovative marine
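
    For readers unfamiliar with the model family, a zero-inflated negative binomial fit on synthetic count data can be sketched with statsmodels as below; the covariates, simulated data, and settings are invented and only mirror the general approach, not the ACCESS analysis:

    # Sketch only: synthetic covariates and counts, intercept-only inflation term.
    import numpy as np
    import statsmodels.api as sm
    from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

    rng = np.random.default_rng(0)
    n = 500
    sst = rng.normal(14, 2, n)            # hypothetical sea-surface temperature
    dist_colony = rng.uniform(1, 60, n)   # hypothetical distance to colony (km)
    X = sm.add_constant(np.column_stack([sst, dist_colony]))

    # Simulate overdispersed counts with extra zeros to mimic zero inflation.
    mu = np.exp(0.5 + 0.1 * sst - 0.03 * dist_colony)
    n_disp = 2.0
    p_nb = n_disp / (n_disp + mu)
    counts = rng.negative_binomial(n_disp, p_nb) * rng.binomial(1, 0.7, n)

    model = ZeroInflatedNegativeBinomialP(counts, X, exog_infl=np.ones((n, 1)), p=2)
    result = model.fit(method="bfgs", maxiter=500, disp=False)
    print(result.summary())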

  6. An on-line Integrated Bookkeeping: electronic run log book and Meta-Data Repository for ATLAS

    CERN Document Server

    Barczyc, M.; Caprini, M.; Da Silva Conceicao, J.; Dobson, M.; Flammer, J.; Burckhart-Chromek, D.; Jones, R.; Kazarov, A.; Kolos, S.; Liko, D.; Mapelli, L.; Soloviev, I.; Hart, R.; Amorim, A.; Klose, D.; Lima, J.; Lucio, L.; Pedro, L.; Wolters, H.; Badescu, E.; Alexandrov, I.; Kotov, V.; Mineev, M.; Ryabov, Yu.

    2003-01-01

    In the context of the ATLAS experiment there is growing evidence of the importance of different kinds of Meta-data, including all the important details of the detector and data acquisition that are vital for the analysis of the acquired data. The Online BookKeeper (OBK) is a component of ATLAS online software that stores all information collected while running the experiment, including the Meta-data associated with event acquisition, triggering and storage. The facilities for acquisition of control data within the on-line software framework, together with a fully functional Web interface, make the OBK a powerful tool containing all information needed for event analysis, including an electronic log book. In this paper we explain how the OBK plays a role as one of the main collectors and managers of Meta-data produced on-line, and we also focus on the Web facilities already available. The usage of the web interface as an electronic run logbook is also explained, together with future extensions. We describe...

  7. Detection of Vandalism in Wikipedia using Metadata Features – Implementation in Simple English and Albanian sections

    Directory of Open Access Journals (Sweden)

    Arsim Susuri

    2017-03-01

    Full Text Available In this paper, we evaluate a list of classifiers in order to use them in the detection of vandalism by focusing on metadata features. Our work is focused on two low-resource data sets (Simple English and Albanian) from Wikipedia. The aim of this research is to prove that this form of vandalism detection applied to one data set (language) can be extended to another data set (language). Article-views data sets in Wikipedia have rarely been used for the purpose of detecting vandalism. We show the benefits of combining the article-views data set with features from the article-revisions data set with the aim of improving the detection of vandalism. The key advantage of metadata features is that they are language independent and simple to extract because they require minimal processing. This paper shows that application of vandalism models across low-resource languages is possible, and vandalism can be detected through view patterns of articles.
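
    The classifier-on-metadata idea can be sketched as follows with scikit-learn on synthetic data. The four features and the toy labelling rule are hypothetical stand-ins for the language-independent metadata features discussed in the paper.

```python
"""Sketch of metadata-feature vandalism classification on synthetic revisions."""
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 1000
# Hypothetical language-independent metadata features, one row per revision
X = np.column_stack([
    rng.integers(0, 200, n),     # edit-comment length
    rng.exponential(3600, n),    # seconds since previous edit
    rng.integers(0, 2, n),       # anonymous-editor flag
    rng.poisson(50, n),          # article views before the edit
])
y = ((X[:, 2] == 1) & (X[:, 0] < 20)).astype(int)   # toy vandalism label

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
clf = RandomForestClassifier(n_estimators=200, random_state=42).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test), zero_division=0))
```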

  8. Migrating Seals on Shifting Sands: Testing Alternate Hypotheses for Holocene Ecological and Cultural Change on the California Coast

    Science.gov (United States)

    Koch, P. L.; Newsome, S. D.; Gifford-Gonzalez, D.

    2001-12-01

    The coast of California presented Holocene humans with a diverse set of ecosystems and geomorphic features, from large islands off a semi-desert mainland in the south, to a mix of sandy and rocky beaches abutting grassland and oak forest in central California, to a rocky coast hugged by dense coniferous forest in the north. Theories explaining trends in human resource use, settlement patterns, and demography are equally diverse, but can be categorized as 1) driven by diffusion of technological innovations from outside the region, 2) driven by population growth leading to more intensive extraction of resources, or 3) driven by climatic factors that affect the resource base. With respect to climatic shifts, attention has focused on a possible regime shift ca. 5500 BP, following peak Holocene warming, and on evidence for massive droughts and a drop in marine productivity ca. 1000 BP. While evidence for a coincidence between climatic, cultural, and ecological change is present, albeit complex, in southern California, similar data are largely lacking from central and northern California. We are using isotopic and archaeofaunal analysis to test ideas for ecological and cultural change in central California. Three features of the archaeological record are relevant. First, overall use of marine resources by coastal communities declined after 1000 BP. Second, northern fur seals, which are common in earlier sites, drop in abundance relative to remaining marine animals. We have previously established that Holocene humans in central California were hunting gregariously-breeding northern fur seals from mainland rookeries. These seals breed exclusively on offshore islands today, typically at high latitudes. Their restriction to these isolated sites today may be a response to human overexploitation of their mainland rookeries prehistorically. Finally, collection of oxygen and carbon isotope data from mussels at the archaeological sites, while still in a preliminary phase, has

  9. Implementation of a metadata architecture and knowledge collection to support semantic interoperability in an enterprise data warehouse.

    Science.gov (United States)

    Dhaval, Rakesh; Borlawsky, Tara; Ostrander, Michael; Santangelo, Jennifer; Kamal, Jyoti; Payne, Philip R O

    2008-11-06

    In order to enhance interoperability between enterprise systems, and improve data validity and reliability throughout The Ohio State University Medical Center (OSUMC), we have initiated the development of an ontology-anchored metadata architecture and knowledge collection for our enterprise data warehouse. The metadata and corresponding semantic relationships stored in the OSUMC knowledge collection are intended to promote consistency and interoperability across the heterogeneous clinical, research, business and education information managed within the data warehouse.

  10. Transforming and enhancing metadata for enduser discovery: a case study

    Directory of Open Access Journals (Sweden)

    Edward M. Corrado

    2014-05-01

    The Libraries’ workflow and portions of code will be shared; issues and challenges involved will be discussed. While this case study is specific to Binghamton University Libraries, examples of strategies used at other institutions will also be introduced. This paper should be useful to anyone interested in describing large quantities of photographs or other materials with preexisting embedded metadata.

  11. The ATLAS EventIndex: data flow and inclusion of other metadata

    CERN Document Server

    Prokoshin, Fedor; The ATLAS collaboration; Cardenas Zarate, Simon Ernesto; Favareto, Andrea; Fernandez Casani, Alvaro; Gallas, Elizabeth; Garcia Montoro, Carlos; Gonzalez de la Hoz, Santiago; Hrivnac, Julius; Malon, David; Salt, Jose; Sanchez, Javier; Toebbicke, Rainer; Yuan, Ruijun

    2016-01-01

    The ATLAS EventIndex is the catalogue of the event-related metadata for the information obtained from the ATLAS detector. The basic unit of this information is the event record, containing the event identification parameters, pointers to the files containing this event, as well as trigger decision information. The main use cases for the EventIndex are event picking, providing information for the Event Service, and data consistency checks for large production campaigns. The EventIndex employs the Hadoop platform for data storage and handling, as well as a messaging system for the collection of information. The information for the EventIndex is collected both at Tier-0, when the data are first produced, and from the GRID, when various types of derived data are produced. The EventIndex uses various types of auxiliary information from other ATLAS sources for data collection and processing: trigger tables from the condition metadata database (COMA), dataset information from the data catalog AMI and the Rucio data man...

  12. The ATLAS EventIndex: data flow and inclusion of other metadata

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00064378; Cardenas Zarate, Simon Ernesto; Favareto, Andrea; Fernandez Casani, Alvaro; Gallas, Elizabeth; Garcia Montoro, Carlos; Gonzalez de la Hoz, Santiago; Hrivnac, Julius; Malon, David; Prokoshin, Fedor; Salt, Jose; Sanchez, Javier; Toebbicke, Rainer; Yuan, Ruijun

    2016-01-01

    The ATLAS EventIndex is the catalogue of the event-related metadata for the information collected from the ATLAS detector. The basic unit of this information is the event record, containing the event identification parameters, pointers to the files containing this event as well as trigger decision information. The main use case for the EventIndex is event picking, as well as data consistency checks for large production campaigns. The EventIndex employs the Hadoop platform for data storage and handling, as well as a messaging system for the collection of information. The information for the EventIndex is collected both at Tier-0, when the data are first produced, and from the Grid, when various types of derived data are produced. The EventIndex uses various types of auxiliary information from other ATLAS sources for data collection and processing: trigger tables from the condition metadata database (COMA), dataset information from the data catalogue AMI and the Rucio data management system and information on p...
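
    To make the notion of an event record concrete, the sketch below shows one possible in-memory representation and a lookup that mimics event picking. The field names and values are simplified placeholders, not the actual EventIndex schema.

```python
"""Illustrative sketch of an event-record entry of the kind a catalogue like this holds."""
from dataclasses import dataclass, field
from typing import List

@dataclass
class EventRecord:
    run_number: int
    event_number: int
    file_guids: List[str] = field(default_factory=list)    # files containing this event
    trigger_bits: List[str] = field(default_factory=list)  # trigger decision summary

# Event picking then reduces to a lookup keyed on (run_number, event_number).
records = [EventRecord(358031, 1234567, ["A1B2-PLACEHOLDER-GUID"], ["HLT_example_chain"])]
index = {(r.run_number, r.event_number): r for r in records}
print(index[(358031, 1234567)].file_guids)
```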

  13. Digital Libraries that Demonstrate High Levels of Mutual Complementarity in Collection-level Metadata Give a Richer Representation of their Content and Improve Subject Access for Users

    Directory of Open Access Journals (Sweden)

    Aoife Lawton

    2014-12-01

    Full Text Available A Review of: Zavalina, O. L. (2013). Complementarity in subject metadata in large-scale digital libraries: A comparative analysis. Cataloging & Classification Quarterly, 52(1), 77-89. http://dx.doi.org/10.1080/01639374.2013.848316 Abstract Objective – To determine how well digital library content is represented through free-text and subject headings. Specifically, to examine whether a combination of free-text description data and controlled vocabulary is more comprehensive than free-text description data alone in describing digital collections. Design – Qualitative content analysis and complementarity comparison. Setting – Three large-scale cultural heritage digital libraries: one in Europe and two in the United States of America. Methods – The researcher retrieved XML files of complete metadata records for two of the digital libraries, while the third library openly exposed its full metadata. The systematic samples obtained for all three libraries enabled qualitative content analysis to uncover how metadata values relate to each other at the collection level. The researcher retrieved 99 collection-level metadata records in total for analysis. The breakdown was 39, 33, and 27 records per digital library. When comparing metadata in the free-text Description metadata element with data in four controlled vocabulary elements, Subject, Geographic Coverage, Temporal Coverage and Object Type, the researcher observed three types of complementarity: one-way, two-way and multiple complementarity. The author refers to complementarity as “describing a collection’s subject matter with mutually complementary data values in controlled vocabulary and free-text subject metadata elements” (Zavalina, 2013, p. 77). For example, within a Temporal Coverage metadata element the term “19th century” would complement a Description metadata element “1850–1899” in the same record. Main Results – The researcher found a high level of one

  14. California State Waters Map Series—Offshore of Santa Cruz, California

    Science.gov (United States)

    Cochrane, Guy R.; Dartnell, Peter; Johnson, Samuel Y.; Erdey, Mercedes D.; Golden, Nadine E.; Greene, H. Gary; Dieter, Bryan E.; Hartwell, Stephen R.; Ritchie, Andrew C.; Finlayson, David P.; Endris, Charles A.; Watt, Janet T.; Davenport, Clifton W.; Sliter, Ray W.; Maier, Katherine L.; Krigsman, Lisa M.; Cochrane, Guy R.; Cochran, Susan A.

    2016-03-24

    upper Quaternary shelf, estuarine, and fluvial sediments deposited as sea level fluctuated in the late Pleistocene. The inner shelf is characterized by bedrock outcrops that have local thin sediment cover, the result of regional uplift, high wave energy, and limited sediment supply. The midshelf occupies part of an extensive, shore-parallel mud belt. The thickest sediment deposits, inferred to consist mainly of lowstand nearshore deposits, are found in the southeastern and northwestern parts of the map area. Coastal sediment transport in the map area is characterized by northwest-to-southeast littoral transport of sediment that is derived mainly from ephemeral streams in the Santa Cruz Mountains and also from local coastal-bluff erosion. During the last approximately 300 years, as much as 18 million cubic yards (14 million cubic meters) of sand-sized sediment has been eroded from the area between Año Nuevo Island and Point Año Nuevo and transported south; this mass of eroded sand is now enriching beaches in the map area. Sediment transport is within the Santa Cruz littoral cell, which terminates in the submarine Monterey Canyon. Benthic species observed in the Offshore of Santa Cruz map area are natives of the cold-temperate biogeographic zone that is called either the “Oregonian province” or the “northern California ecoregion.” This biogeographic province is maintained by the long-term stability of the southward-flowing California Current, the eastern limb of the North Pacific subtropical gyre that flows from southern British Columbia to Baja California. At its midpoint off central California, the California Current transports subarctic surface (0–500 m deep) waters southward, about 150 to 1,300 km from shore. Seasonal northwesterly winds that are, in part, responsible for the California Current, generate coastal upwelling. The south end of the Oregonian province is at Point Conception (about 300 km south of the map area), although its associated

  15. Effects of roads on survival of San Clemente Island foxes

    Science.gov (United States)

    Snow, N.P.; Andelt, William F.; Stanley, T.R.; Resnik, J.R.; Munson, L.

    2012-01-01

    Roads generate a variety of influences on wildlife populations; however, little is known about the effects of roads on endemic wildlife on islands. Specifically, road-kills of island foxes (Urocyon littoralis) on San Clemente Island (SCI), Channel Islands, California, USA are a concern for resource managers. To determine the effects of roads on island foxes, we radiocollared foxes using a 3-tiered sampling design to represent the entire population in the study area, a sub-population near roads, and a sub-population away from roads on SCI. We examined annual survival rates using nest-survival models, causes of mortalities, and movements for each sample. We found the population had high annual survival (0.90), although survival declined with use of road habitat, particularly for intermediate-aged foxes. Foxes living near roads suffered lower annual survival (0.76), resulting from high frequencies of road-kills (7 of 11 mortalities). Foxes living away from roads had the highest annual survival (0.97). Road-kill was the most prominent cause of mortality detected on SCI, which we estimated as killing 3-8% of the population in the study area annually. Based on movements, we were unable to detect any responses by foxes that minimized their risks from roads. The probabilities of road-kills increased with use of the road habitat, volume of traffic, and decreasing road sinuosity. We recommend that managers should attempt to reduce road-kills by deterring or excluding foxes from entering roads, and attempting to modify behaviors of motorists to be vigilant for foxes. © 2011 The Wildlife Society.

  16. Agricultural damages and losses from ARkStorm scenario flooding in California

    Science.gov (United States)

    Wein, Anne; David Mitchell,; Peters, Jeff; John Rowden,; Johnny Tran,; Alessandra Corsi,; Dinitz, Laura B.

    2016-01-01

    Scientists designed the ARkStorm scenario to challenge the preparedness of California communities for widespread flooding with historical precedent and increased likelihood under climate change. California is an important provider of vegetables, fruits, nuts, and other agricultural products to the nation. This study analyzes the agricultural damages and losses pertaining to annual crops, perennial crops, and livestock in California exposed to ARkStorm flooding. Statewide, flood damage is incurred on approximately 23% of annual crop acreage, 5% of perennial crop acreage, and 5% of livestock (e.g., dairy, feedlot, and poultry) acreage. The sum of field repair costs, forgone income, and product replacement costs spans $3.7 to $7.1 billion (2009) for a range of inundation durations. Perennial crop loss estimates dominate, and the vulnerability of orchards and vineyards has likely increased with recent expansion. Crop reestablishment delays from levee repair and dewatering more than double annual crop losses in the delta islands, assuming the fragile system does not remain permanently flooded. The exposure of almost 200,000 dairy cows to ARkStorm flooding poses livestock evacuation challenges. Read More: http://ascelibrary.org/doi/abs/10.1061/%28ASCE%29NH.1527-6996.0000174

  17. Summary Record of the First Meeting of the Radioactive Waste Repository Metadata Management (RepMet) Initiative

    International Nuclear Information System (INIS)

    2014-01-01

    National radioactive waste repository programmes are collecting large amounts of data to support the long-term management of their nations' radioactive wastes. The data and related records increase in number, type and quality as programmes proceed through the successive stages of repository development: pre-siting, siting, characterisation, construction, operation and finally closure. Regulatory and societal approvals are included in this sequence. Some programmes are also documenting past repository projects and facing a challenge in allowing both current and future generations to understand actions carried out in the past. Metadata allows context to be stored with data and information so that it can be located, used, updated and maintained. Metadata helps waste management organisations better utilise their data in carrying out their statutory tasks and can also help verify and demonstrate that their programmes are appropriately driven. The NEA Radioactive Waste Repository Metadata Management (RepMet) initiative aims to bring about a better understanding of the identification and administration of metadata - a key aspect of data management - to support national programmes in managing their radioactive waste repository data, information and records in a way that is both harmonised internationally and suitable for long-term management and use. This is a summary record of the first meeting of the RepMet initiative. The actions and decisions from this meeting were sent separately to the group after the meeting, but are also included in this document (Annex A). The list of participants is attached as well (Annex B)

  18. Definition of a CDI metadata profile and its ISO 19139 based encoding

    Science.gov (United States)

    Boldrini, Enrico; de Korte, Arjen; Santoro, Mattia; Schaap, Dick M. A.; Nativi, Stefano; Manzella, Giuseppe

    2010-05-01

    The Common Data Index (CDI) is the middleware service adopted by SeaDataNet for discovery and query. The primary goal of the EU-funded project SeaDataNet is to develop a system which provides transparent access to marine data sets and data products from 36 countries in and around Europe. The European context of SeaDataNet requires that the developed system complies with the European INSPIRE Directive. In order to assure the required conformity, a GI-cat based solution is proposed. GI-cat is a broker service able to mediate among different metadata sources and publish them through a consistent and unified interface. In this case GI-cat is used as a front end to the SeaDataNet portal, publishing the original data, based on the CDI v.1 XML schema, through an ISO 19139 application profile catalog interface (OGC CSW AP ISO). The choice of ISO 19139 is supported and driven by the INSPIRE Implementing Rules, which have been used as a reference throughout the development process. A mapping from the CDI data model to ISO 19139 hence had to be implemented in GI-cat, and a first draft was quickly developed, as both CDI v.1 and ISO 19139 happen to be XML implementations based on the same abstract data model (standard ISO 19115 - metadata about geographic information). This first draft mapping pointed out the differences between the CDI metadata model and ISO 19115, as it was not possible to accommodate all the information contained in CDI v.1 in ISO 19139. Moreover, some modifications were needed in order to reach INSPIRE compliance. The consequent work consisted in the definition of the CDI metadata model as a profile of ISO 19115. This included checking all the metadata elements present in CDI and their cardinality. A comparison was made with respect to ISO 19115 and possible extensions were identified. ISO 19139 was then chosen as a natural XML implementation of this new CDI metadata profile. The mapping and the profile definition processes were iteratively refined, leading up to a
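
    A tiny sketch of what such a mapping produces is given below: one field of a CDI-like record emitted as an ISO 19139 (gmd) fragment. The input dictionary keys are invented placeholders; the real CDI-to-ISO 19139 profile covers far more elements and cardinality rules.

```python
"""Sketch of mapping a single field of a CDI-like record into an ISO 19139 fragment."""
import xml.etree.ElementTree as ET

GMD = "http://www.isotc211.org/2005/gmd"
GCO = "http://www.isotc211.org/2005/gco"
ET.register_namespace("gmd", GMD)
ET.register_namespace("gco", GCO)

def cdi_to_iso(cdi_record):
    # Emit a minimal gmd:MD_Metadata with identifier and language.
    md = ET.Element(f"{{{GMD}}}MD_Metadata")
    ident = ET.SubElement(md, f"{{{GMD}}}fileIdentifier")
    ET.SubElement(ident, f"{{{GCO}}}CharacterString").text = cdi_record["local_id"]
    lang = ET.SubElement(md, f"{{{GMD}}}language")
    ET.SubElement(lang, f"{{{GCO}}}CharacterString").text = cdi_record.get("language", "eng")
    return ET.tostring(md, encoding="unicode")

print(cdi_to_iso({"local_id": "urn:SDN:CDI:LOCAL:12345", "language": "eng"}))
```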

  19. Metadata and Tools for Integration and Preservation of Cultural Heritage 3D Information

    Directory of Open Access Journals (Sweden)

    Achille Felicetti

    2011-12-01

    Full Text Available In this paper we investigate many of the storage, portability and interoperability issues arising among archaeologists and cultural heritage people when dealing with 3D technologies. On the one side, the available digital repositories often look unable to guarantee adequate features for the management of 3D models and their metadata; on the other side, the nature of most of the available data formats for 3D encoding seems unsatisfactory for the portability required nowadays by 3D information across different systems. We propose a set of possible solutions to show how integration can be achieved through the use of well-known and widely accepted standards for data encoding and data storage. Using a set of 3D models acquired during various archaeological campaigns and a number of open source tools, we have implemented a straightforward encoding process to generate meaningful semantic data and metadata. We will also present the interoperability process carried out to integrate the encoded 3D models and the geographic features produced by the archaeologists. Finally, we will report the preliminary (rather encouraging) development of a semantic-enabled and persistent digital repository, where 3D models (but also any kind of digital data and metadata) can easily be stored, retrieved and shared with the content of other digital archives.

  20. An Intelligent Web Digital Image Metadata Service Platform for Social Curation Commerce Environment

    Directory of Open Access Journals (Sweden)

    Seong-Yong Hong

    2015-01-01

    Full Text Available Information management includes multimedia data management, knowledge management, collaboration, and agents, all of which are supporting technologies for XML. XML technologies have an impact on multimedia databases as well as on collaborative technologies and knowledge management. That is, e-commerce documents are encoded in XML and are gaining much popularity for business-to-business or business-to-consumer transactions. Recently, Internet sites such as e-commerce and shopping mall sites have come to deal with a lot of image and multimedia information. This paper proposes an intelligent web digital image information retrieval platform, which adopts XML technology for a social curation commerce environment. To support object-based content retrieval on product catalog images containing multiple objects, we describe multilevel metadata structures representing the local features, global features, and semantics of image data. To enable semantic-based and content-based retrieval on such image data, we design an XML-Schema for the proposed metadata. We also describe how to automatically transform the retrieval results into forms suitable for various user environments, such as a web browser or mobile device, using XSLT. The proposed scheme can be utilized to enable efficient e-catalog metadata sharing between systems, and it will contribute to the improvement of retrieval correctness and user satisfaction in semantic-based web digital image information retrieval.
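
    The XSLT step mentioned above can be sketched as follows with lxml: a single stylesheet turns a small metadata record into an HTML view, and swapping stylesheets would target other devices. The record structure and stylesheet are invented for illustration.

```python
"""Sketch of XSLT-based rendering of an image-metadata record for a web view."""
from lxml import etree

record = etree.XML("""<image id="cat-001">
  <title>Red running shoe</title>
  <object label="shoe" color="red"/>
</image>""")

stylesheet = etree.XML("""<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/image">
    <div class="product">
      <h2><xsl:value-of select="title"/></h2>
      <p>Contains: <xsl:value-of select="object/@label"/></p>
    </div>
  </xsl:template>
</xsl:stylesheet>""")

transform = etree.XSLT(stylesheet)
print(str(transform(record)))   # a different stylesheet would target a mobile layout
```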

  1. Open Access Metadata, Catalogers, and Vendors: The Future of Cataloging Records

    Science.gov (United States)

    Flynn, Emily Alinder

    2013-01-01

    The open access (OA) movement is working to transform scholarly communication around the world, but this philosophy can also apply to metadata and cataloging records. While some notable, large academic libraries, such as Harvard University, the University of Michigan, and the University of Cambridge, released their cataloging records under OA…

  2. Archive of digital Chirp subbottom profile data collected during USGS cruises 09CCT03 and 09CCT04, Mississippi and Alabama Gulf Islands, June and July 2009

    Science.gov (United States)

    Forde, Arnell S.; Dadisman, Shawn V.; Flocks, James G.; Wiese, Dana S.

    2011-01-01

    In June and July of 2009, the U.S. Geological Survey (USGS) conducted geophysical surveys to investigate the geologic controls on island framework from Cat Island, Mississippi, to Dauphin Island, Alabama, as part of a broader USGS study on Coastal Change and Transport (CCT). The surveys were funded through the Northern Gulf of Mexico Ecosystem Change and Hazard Susceptibility Project as part of the Holocene Evolution of the Mississippi-Alabama Region Subtask (http://ngom.er.usgs.gov/task2_2/index.php). This report serves as an archive of unprocessed digital Chirp seismic profile data, trackline maps, navigation files, Geographic Information System (GIS) files, Field Activity Collection System (FACS) logs, and formal Federal Geographic Data Committee (FGDC) metadata. Single-beam and Swath bathymetry data were also collected during these cruises and will be published as a separate archive. Gained (a relative increase in signal amplitude) digital images of the seismic profiles are also provided. Refer to the Acronyms page for expansion of acronyms and abbreviations used in this report.

  3. Impact of Metadata on Full-text Information Retrieval Performance: An Experimental Research on a Small Scale Turkish Corpus

    Directory of Open Access Journals (Sweden)

    Çağdaş Çapkın

    2016-12-01

    Full Text Available Information institutions use text-based information retrieval systems to store, index and retrieve metadata, full-text, or both metadata and full-text (hybrid) contents. The aim of this research was to evaluate the impact of these contents on information retrieval performance. For this purpose, metadata (MIR), full-text (FIR) and hybrid (HIR) content information retrieval systems were developed with the default Lucene information retrieval model for a small-scale Turkish corpus. In order to evaluate the performance of these three systems, “precision - recall” and “normalized recall” tests were conducted. Experimental findings showed that there were no significant differences between MIR and FIR in mean average precision (MAP) performance. On the other hand, the MAP performance of HIR was significantly higher in comparison to MIR and FIR. When information retrieval performance was evaluated as user-centered, the “normalized recall” performances of MIR and HIR were significantly higher than that of FIR. Additionally, there were no significant differences between the systems in mean numbers of retrieved relevant documents. Processing different types of contents, such as metadata and full-text, has advantages and disadvantages for information retrieval systems in terms of term management. The advantages were brought together in hybrid content processing (HIR), and information retrieval performance improved.
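
    For readers unfamiliar with the measures used above, the following sketch computes average precision per query and mean average precision (MAP) over toy ranked result lists; the document identifiers and relevance judgements are invented.

```python
"""Small sketch of precision-oriented evaluation of ranked retrieval runs."""
def average_precision(ranked_ids, relevant_ids):
    hits, score = 0, 0.0
    for rank, doc_id in enumerate(ranked_ids, start=1):
        if doc_id in relevant_ids:
            hits += 1
            score += hits / rank          # precision at this recall point
    return score / len(relevant_ids) if relevant_ids else 0.0

def mean_average_precision(runs):
    return sum(average_precision(r, rel) for r, rel in runs) / len(runs)

runs = [                                  # (system ranking, relevant set) per query
    (["d3", "d1", "d7"], {"d1", "d7"}),
    (["d2", "d5", "d9"], {"d9"}),
]
print(round(mean_average_precision(runs), 3))
```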

  4. Epidemic Spread of Symbiotic and Non-Symbiotic Bradyrhizobium Genotypes Across California.

    Science.gov (United States)

    Hollowell, A C; Regus, J U; Gano, K A; Bantay, R; Centeno, D; Pham, J; Lyu, J Y; Moore, D; Bernardo, A; Lopez, G; Patil, A; Patel, S; Lii, Y; Sachs, J L

    2016-04-01

    The patterns and drivers of bacterial strain dominance remain poorly understood in natural populations. Here, we cultured 1292 Bradyrhizobium isolates from symbiotic root nodules and the soil root interface of the host plant Acmispon strigosus across a >840-km transect in California. To investigate epidemiology and the potential role of accessory loci as epidemic drivers, isolates were genotyped at two chromosomal loci and were assayed for the presence or absence of accessory "symbiosis island" loci that encode the capacity to form nodules on hosts. We found that Bradyrhizobium populations were very diverse but dominated by a few haplotypes, with a single "epidemic" haplotype constituting nearly 30% of collected isolates and spreading nearly statewide. In many Bradyrhizobium lineages, we inferred presence and absence of the symbiosis island, suggesting recurrent evolutionary gain and/or loss of symbiotic capacity. We did not find statistical phylogenetic evidence that symbiosis island acquisition promotes strain dominance, and both symbiotic and non-symbiotic strains exhibited population dominance and spatial spread. Our dataset reveals that strikingly few Bradyrhizobium genotypes can rapidly spread to dominate a landscape and suggests that these epidemics are not driven by the acquisition of accessory loci, as occurs in key human pathogens.

  5. Developing an Internet- and Mobile-Based System to Measure Cigarette Use Among Pacific Islanders: An Ecological Momentary Assessment Study.

    Science.gov (United States)

    Pike, James Russell; Xie, Bin; Tan, Nasya; Sabado-Liwag, Melanie Dee; Orne, Annette; Toilolo, Tupou; Cen, Steven; May, Vanessa; Lee, Cevadne; Pang, Victor Kaiwi; Rainer, Michelle A; Vaivao, Dorothy Etimani S; Lepule, Jonathan Tana; Tanjasiri, Sora Park; Palmer, Paula Healani

    2016-01-07

    Recent prevalence data indicates that Pacific Islanders living in the United States have disproportionately high smoking rates when compared to the general populace. However, little is known about the factors contributing to tobacco use in this at-risk population. Moreover, few studies have attempted to determine these factors utilizing technology-based assessment techniques. The objective was to develop a customized Internet-based Ecological Momentary Assessment (EMA) system capable of measuring cigarette use among Pacific Islanders in Southern California. This system integrated the ubiquity of text messaging, the ease of use associated with mobile phone apps, the enhanced functionality offered by Internet-based Cell phone-optimized Assessment Techniques (ICAT), and the high survey completion rates exhibited by EMA studies that used electronic diaries. These features were tested in a feasibility study designed to assess whether Pacific Islanders would respond to this method of measurement and whether the data gathered would lead to novel insights regarding the intrapersonal, social, and ecological factors associated with cigarette use. 20 young adult smokers in Southern California who self-identified as Pacific Islanders were recruited by 5 community-based organizations to take part in a 7-day EMA study. Participants selected six consecutive two-hour time blocks per day during which they would be willing to receive a text message linking them to an online survey formatted for Web-enabled mobile phones. Both automated reminders and community coaches were used to facilitate survey completion. 720 surveys were completed from 840 survey time blocks, representing a completion rate of 86%. After adjusting for gender, age, and nicotine dependence, feeling happy (P=technology-based assessments of tobacco use among Pacific Islanders. Such systems can foster high levels of survey completion and may lead to novel insights for future research and interventions.

  6. The Semantic Mapping of Archival Metadata to the CIDOC CRM Ontology

    Science.gov (United States)

    Bountouri, Lina; Gergatsoulis, Manolis

    2011-01-01

    In this article we analyze the main semantics of archival description, expressed through Encoded Archival Description (EAD). Our main target is to map the semantics of EAD to the CIDOC Conceptual Reference Model (CIDOC CRM) ontology as part of a wider integration architecture of cultural heritage metadata. Through this analysis, it is concluded…

  7. The biological soil crusts of the San Nicolas Island: Enigmatic algae from a geographically isolated ecosystem

    Science.gov (United States)

    Flechtner, V.R.; Johansen, J.R.; Belnap, J.

    2008-01-01

    Composite soil samples from 7 sites on San Nicolas Island were evaluated quantitatively and qualitatively for the presence of cyanobacteria and eukaryotic microalgae. Combined data demonstrated a rich algal flora with 19 cyanobacterial and 19 eukaryotic microalgal genera being identified, for a total of 56 species. Nine new species were identified and described among the cyanobacteria and the eukaryotic microalgae that were isolated: Leibleinia edaphica, Aphanothece maritima, Chroococcidiopsis edaphica, Cyanosarcina atroveneta, Hassallia californica, Hassallia pseudoramosissima, Microchaete terrestre, Palmellopsis californiens, and Pseudotetracystis compactis. Distinct distributional patterns of algal taxa existed among sites on the island and among soil algal floras of western North America. Some algal taxa appeared to be widely distributed across many desert regions, including Microcoleus vaginatus, Nostoc punctiforme, Nostoc paludosum, and Tolypothrix distorta, Chlorella vulgaris, Diplosphaera cf. chodatii, Myrmecia astigmatica, Myrmecia biatorellae, Hantzschia amphioxys, and Luticola mutica. Some taxa share a distinctly southern distribution with soil algae from southern Arizona, southern California, and Baja California (e.g., Scenedesmus deserticola and Eustigmatos magnus). The data presented herein support the view that the cyanobacterial and microalgal floras of soil crusts possess significant biodiversity, much of it previously undescribed.

  8. Semantic Metadata for Heterogeneous Spatial Planning Documents

    Science.gov (United States)

    Iwaniak, A.; Kaczmarek, I.; Łukowicz, J.; Strzelecki, M.; Coetzee, S.; Paluszyński, W.

    2016-09-01

    Spatial planning documents contain information about the principles and rights of land use in different zones of a local authority. They are the basis for administrative decision making in support of sustainable development. In Poland these documents are published on the Web according to a prescribed non-extendable XML schema, designed for optimum presentation to humans in HTML web pages. There is no document standard, and limited functionality exists for adding references to external resources. The text in these documents is discoverable and searchable by general-purpose web search engines, but the semantics of the content cannot be discovered or queried. The spatial information in these documents is geographically referenced but not machine-readable. Major manual efforts are required to integrate such heterogeneous spatial planning documents from various local authorities for analysis, scenario planning and decision support. This article presents results of an implementation using machine-readable semantic metadata to identify relationships among regulations in the text, spatial objects in the drawings and links to external resources. A spatial planning ontology was used to annotate different sections of spatial planning documents with semantic metadata in the Resource Description Framework in Attributes (RDFa). The semantic interpretation of the content, links between document elements and links to external resources were embedded in XHTML pages. An example and use case from the spatial planning domain in Poland is presented to evaluate its efficiency and applicability. The solution enables the automated integration of spatial planning documents from multiple local authorities to assist decision makers with understanding and interpreting spatial planning information. The approach is equally applicable to legal documents from other countries and domains, such as cultural heritage and environmental management.

  9. That obscure object of desire: multimedia metadata on the Web, part 2

    NARCIS (Netherlands)

    F.-M. Nack (Frank); J.R. van Ossenbruggen (Jacco); L. Hardman (Lynda)

    2003-01-01

    This article discusses the state of the art in metadata for audio-visual media in large semantic networks, such as the Semantic Web. Our discussion is predominantly motivated by the two most widely known approaches towards machine-processable and semantic-based content description,

  10. That obscure object of desire: multimedia metadata on the Web, part 1

    NARCIS (Netherlands)

    F.-M. Nack (Frank); J.R. van Ossenbruggen (Jacco); L. Hardman (Lynda)

    2003-01-01

    This article discusses the state of the art in metadata for audio-visual media in large semantic networks, such as the Semantic Web. Our discussion is predominantly motivated by the two most widely known approaches towards machine-processable and semantic-based content description,

  11. Automating standards based metadata creation using free and open source GIS tools

    NARCIS (Netherlands)

    Ellull, C.D.; Tamash, N.; Xian, F.; Stuiver, H.J.; Rickles, P.

    2013-01-01

    The importance of understanding the quality of data used in any GIS operation should not be underestimated. Metadata (data about data) traditionally provides a description of this quality information, but it is frequently deemed complex to create and maintain. Additionally, it is generally stored

  12. Twenty-first century metadata operations challenges, opportunities, directions

    CERN Document Server

    Lee Eden, Bradford

    2014-01-01

    It has long been apparent to academic library administrators that the current technical services operations within libraries need to be redirected and refocused in terms of both format priorities and human resources. A number of developments and directions have made this reorganization imperative, many of which have been accelerated by the current economic crisis. All of the chapters detail some aspect of technical services reorganization due to downsizing and/or reallocation of human resources, retooling professional and support staff in higher level duties and/or non-MARC metadata, "value-a

  13. The ANSS Station Information System: A Centralized Station Metadata Repository for Populating, Managing and Distributing Seismic Station Metadata

    Science.gov (United States)

    Thomas, V. I.; Yu, E.; Acharya, P.; Jaramillo, J.; Chowdhury, F.

    2015-12-01

    Maintaining and archiving accurate site metadata is critical for seismic network operations. The Advanced National Seismic System (ANSS) Station Information System (SIS) is a repository of seismic network field equipment, equipment response, and other site information. Currently, there are 187 different sensor models and 114 data-logger models in SIS. SIS has a web-based user interface that allows network operators to enter information about seismic equipment and assign response parameters to it. It allows users to log entries for sites, equipment, and data streams. Users can also track when equipment is installed, updated, and/or removed from sites. When seismic equipment configurations change for a site, SIS computes the overall gain of a data channel by combining the response parameters of the underlying hardware components. Users can then distribute this metadata in standardized formats such as FDSN StationXML or dataless SEED. One powerful advantage of SIS is that existing data in the repository can be leveraged: e.g., new instruments can be assigned response parameters from the Incorporated Research Institutions for Seismology (IRIS) Nominal Response Library (NRL), or from a similar instrument already in the inventory, thereby reducing the amount of time needed to determine parameters when new equipment (or models) are introduced into a network. SIS is also useful for managing field equipment that does not produce seismic data (e.g., power systems, telemetry devices or GPS receivers) and gives the network operator a comprehensive view of site field work. SIS allows users to generate field logs to document activities and inventory at sites. Thus, operators can also use SIS reporting capabilities to improve planning and maintenance of the network. Queries such as how many sensors of a certain model are installed or what pieces of equipment have active problem reports are just a few examples of the type of information that is available to SIS users.
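
    The gain bookkeeping described above (combining the response parameters of the hardware stages into an overall channel sensitivity) can be sketched as a simple product of stage gains; the component list and values below are hypothetical, not SIS data.

```python
"""Sketch: overall channel sensitivity as the product of hardware-stage gains."""
from math import prod

channel_stages = [
    {"component": "sensor",     "gain": 1500.0},    # V/(m/s), hypothetical
    {"component": "digitizer",  "gain": 419430.0},  # counts/V, hypothetical
    {"component": "gain_stage", "gain": 1.0},
]

overall_gain = prod(stage["gain"] for stage in channel_stages)
print(f"channel sensitivity: {overall_gain:.1f} counts/(m/s)")
```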

  14. NOAA's efforts to map extent, health and condition of deep sea corals and sponges and their habitat on the banks and island slopes of Southern California

    Science.gov (United States)

    Etnoyer, P. J.; Salgado, E.; Stierhoff, K.; Wickes, L.; Nehasil, S.; Kracker, L.; Lauermann, A.; Rosen, D.; Caldow, C.

    2015-12-01

    Southern California's deep-sea corals are diverse and abundant, but subject to multiple stressors, including corallivory, ocean acidification, and commercial bottom fishing. NOAA has surveyed these habitats using a remotely operated vehicle (ROV) since 2003. The ROV was equipped with high-resolution cameras to document deep-water groundfish and their habitat in a series of research expeditions from 2003 - 2011. Recent surveys 2011-2015 focused on in-situ measures of aragonite saturation and habitat mapping in notable habitats identified in previous years. Surveys mapped abundance and diversity of fishes and corals, as well as commercial fisheries landings and frequency of fishing gear. A novel priority setting algorithm was developed to identify hotspots of diversity and fishing intensity, and to determine where future conservation efforts may be warranted. High density coral aggregations identified in these analyses were also used to guide recent multibeam mapping efforts. The maps suggest a large extent of unexplored and unprotected hard-bottom habitat in the mesophotic zone and deep-sea reaches of Channel Islands National Marine Sanctuary.

  15. Islands in the Midst of the World

    Science.gov (United States)

    2002-01-01

    The Greek islands of the Aegean Sea, scattered across 800 kilometers from north to south between Greece and western Turkey, are uniquely situated at the intersection of Europe, Asia and Africa. This image from the Multi-angle Imaging SpectroRadiometer includes many of the islands of the East Aegean, Sporades, Cyclades, Dodecanese and Crete, as well as part of mainland Turkey. Many sites important to ancient and modern history can be found here. The largest modern city on the Aegean coast is Izmir, which appears as a bright coastal area near the greenish waters of Izmir Bay, about one quarter of the image length from the top, southeast of the large three-pronged island of Lesvos. The coastal areas around this cosmopolitan Turkish city were a center of Ionian culture from the 11th century BC, and at the top of the image (north of Lesvos) once stood the ancient city of Troy. The image was acquired before the onset of the winter rains, on September 30, 2001, but dense vegetation is never very abundant in the arid Mediterranean climate. The sharpness and clarity of the view also indicate dry, clear air. Some vegetative changes can be detected between the western or southern islands, such as Crete (the large island along the bottom of the image), and those closer to the Turkish coast, which appear comparatively green. Volcanic activity is evident in the form of the islands of Santorini. This small group of islands, shaped like a broken ring, is situated to the right of and below image center. Santorini's Thera volcano erupted around 1640 BC, and the rim of the caldera collapsed, forming the shape of the islands as they exist today. The Multi-angle Imaging SpectroRadiometer observes the daylit Earth continuously from pole to pole, and views almost the entire globe every 9 days. This natural-color image was acquired by MISR's nadir (vertical-viewing) camera, and is a portion of the

  16. The PDS4 Data Dictionary Tool - Metadata Design for Data Preparers

    Science.gov (United States)

    Raugh, A.; Hughes, J. S.

    2017-12-01

    One of the major design goals of the PDS4 development effort was to create an extendable Information Model (IM) for the archive, and to allow mission data designers/preparers to create extensions for metadata definitions specific to their own contexts. This capability is critical for the Planetary Data System - an archive that deals with a data collection that is diverse along virtually every conceivable axis. Amid such diversity in the data itself, it is in the best interests of the PDS archive and its users that all extensions to the IM follow the same design techniques, conventions, and restrictions as the core implementation itself. But it is unrealistic to expect mission data designers to acquire expertise in information modeling, model-driven design, ontology, schema formulation, and PDS4 design conventions and philosophy in order to define their own metadata. To bridge that expertise gap and bring the power of information modeling to the data label designer, the PDS Engineering Node has developed the data dictionary creation tool known as "LDDTool". This tool incorporates the same software used to maintain and extend the core IM, packaged with an interface that enables a developer to create his extension to the IM using the same, standards-based metadata framework PDS itself uses. Through this interface, the novice dictionary developer has immediate access to the common set of data types and unit classes for defining attributes, and a straight-forward method for constructing classes. The more experienced developer, using the same tool, has access to more sophisticated modeling methods like abstraction and extension, and can define context-specific validation rules. We present the key features of the PDS Local Data Dictionary Tool, which both supports the development of extensions to the PDS4 IM, and ensures their compatibility with the IM.

  17. What Information Does Your EHR Contain? Automatic Generation of a Clinical Metadata Warehouse (CMDW) to Support Identification and Data Access Within Distributed Clinical Research Networks.

    Science.gov (United States)

    Bruland, Philipp; Doods, Justin; Storck, Michael; Dugas, Martin

    2017-01-01

    Data dictionaries provide structural meta-information about data definitions in health information technology (HIT) systems. In this regard, reusing healthcare data for secondary purposes offers several advantages (e.g., reduced documentation times or increased data quality). Prerequisites for data reuse are its quality, availability and identical meaning of data. In diverse projects, research data warehouses serve as core components between heterogeneous clinical databases and various research applications. Given the complexity (high number of data elements) and dynamics (regular updates) of electronic health record (EHR) data structures, we propose a clinical metadata warehouse (CMDW) based on a metadata registry standard. Metadata of two large hospitals were automatically inserted into two CMDWs containing 16,230 forms and 310,519 data elements. Automatic updates of metadata are possible, as are semantic annotations. A CMDW allows metadata discovery, data quality assessment and similarity analyses. Common data models for distributed research networks can be established based on similarity analyses.
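
    One simple form such a similarity analysis could take is sketched below: Jaccard overlap of normalized data-element names between two forms. The element names are invented examples, and the CMDW's actual similarity measures may differ.

```python
"""Sketch: Jaccard similarity of normalized data-element names between two forms."""
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def normalize(names):
    # Lowercase and unify separators so "Patient ID" and "patient_id" match.
    return {n.strip().lower().replace("_", " ") for n in names}

form_hospital_a = ["patient_id", "Admission Date", "Diagnosis_Code"]
form_hospital_b = ["Patient ID", "admission date", "discharge date"]

print(round(jaccard(normalize(form_hospital_a), normalize(form_hospital_b)), 2))
```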

  18. California Institute for Water Resources - California Institute for Water

    Science.gov (United States)


  19. Tags and self-organisation: a metadata ecology for learning resources in a multilingual context

    OpenAIRE

    Vuorikari, Riina Hannuli

    2010-01-01

    Vuorikari, R. (2009). Tags and self-organisation: a metadata ecology for learning resources in a multilingual context. Doctoral thesis. November, 13, 2009, Heerlen, The Netherlands: Open University of the Netherlands, CELSTEC.

  20. Tags and self-organisation: a metadata ecology for learning resources in a multilingual context

    NARCIS (Netherlands)

    Vuorikari, Riina

    2009-01-01

    Vuorikari, R. (2009). Tags and self-organisation: a metadata ecology for learning resources in a multilingual context. Doctoral thesis. November, 13, 2009, Heerlen, The Netherlands: Open University of the Netherlands, CELSTEC.

  1. Defining Linkages between the GSC and NSF's LTER Program: How the Ecological Metadata Language (EML) Relates to GCDML and Other Outcomes

    Science.gov (United States)

    Inigo San Gil; Wade Sheldon; Tom Schmidt; Mark Servilla; Raul Aguilar; Corinna Gries; Tanya Gray; Dawn Field; James Cole; Jerry Yun Pan; Giri Palanisamy; Donald Henshaw; Margaret O' Brien; Linda Kinkel; Kathrine McMahon; Renzo Kottmann; Linda Amaral-Zettler; John Hobbie; Philip Goldstein; Robert P. Guralnick; James Brunt; William K. Michener

    2008-01-01

    The Genomic Standards Consortium (GSC) invited a representative of the Long-Term Ecological Research (LTER) to its fifth workshop to present the Ecological Metadata Language (EML) metadata standard and its relationship to the Minimum Information about a Genome/Metagenome Sequence (MIGS/MIMS) and its implementation, the Genomic Contextual Data Markup Language (GCDML)....

  2. A Metadata Model for E-Learning Coordination through Semantic Web Languages

    Science.gov (United States)

    Elci, Atilla

    2005-01-01

    This paper reports on a study aiming to develop a metadata model for e-learning coordination based on semantic web languages. A survey of e-learning modes are done initially in order to identify content such as phases, activities, data schema, rules and relations, etc. relevant for a coordination model. In this respect, the study looks into the…

  3. Progress Report on the Airborne Metadata and Time Series Working Groups of the 2016 ESDSWG

    Science.gov (United States)

    Evans, K. D.; Northup, E. A.; Chen, G.; Conover, H.; Ames, D. P.; Teng, W. L.; Olding, S. W.; Krotkov, N. A.

    2016-12-01

    NASA's Earth Science Data Systems Working Groups (ESDSWG) was created over 10 years ago. The role of the ESDSWG is to make recommendations relevant to NASA's Earth science data systems from users' experiences. Each group works independently focusing on a unique topic. Participation in ESDSWG groups comes from a variety of NASA-funded science and technology projects, including MEaSUREs and ROSS. Participants include NASA information technology experts, affiliated contractor staff and other interested community members from academia and industry. Recommendations from the ESDSWG groups will enhance NASA's efforts to develop long term data products. The Airborne Metadata Working Group is evaluating the suitability of the current Common Metadata Repository (CMR) and Unified Metadata Model (UMM) for airborne data sets and to develop new recommendations as necessary. The overarching goal is to enhance the usability, interoperability, discovery and distribution of airborne observational data sets. This will be done by assessing the suitability (gaps) of the current UMM model for airborne data using lessons learned from current and past field campaigns, listening to user needs and community recommendations and assessing the suitability of ISO metadata and other standards to fill the gaps. The Time Series Working Group (TSWG) is a continuation of the 2015 Time Series/WaterML2 Working Group. The TSWG is using a case study-driven approach to test the new Open Geospatial Consortium (OGC) TimeseriesML standard to determine any deficiencies with respect to its ability to fully describe and encode NASA earth observation-derived time series data. To do this, the time series working group is engaging with the OGC TimeseriesML Standards Working Group (SWG) regarding unsatisfied needs and possible solutions. The effort will end with the drafting of an OGC Engineering Report based on the use cases and interactions with the OGC TimeseriesML SWG. Progress towards finalizing

  4. Investigations of the marine flora and fauna of the Fiji Islands.

    Science.gov (United States)

    Feussner, Klaus-Dieter; Ragini, Kavita; Kumar, Rohitesh; Soapi, Katy M; Aalbersberg, William G; Harper, Mary Kay; Carte, Brad; Ireland, Chris M

    2012-12-01

    Over the past 30 years, approximately 140 papers have been published on marine natural products chemistry and related research from the Fiji Islands. These came about from studies starting in the early 1980s by the research groups of Crews at the University of California Santa Cruz, Ireland at the University of Utah, Gerwick from the Scripps Institution of Oceanography, the University of California at San Diego and the more recent groups of Hay at the Georgia Institute of Technology (GIT) and Jaspars from the University of Aberdeen. This review covers both known and novel marine-derived natural products and their biological activities. The marine organisms reviewed include invertebrates, plants and microorganisms, highlighting the vast structural diversity of compounds isolated from these organisms. Increasingly during this period, natural products chemists at the University of the South Pacific have been partners in this research, leading in 2006 to the development of a Centre for Drug Discovery and Conservation (CDDC).

  5. Defense Virtual Library: Technical Metadata for the Long-Term Management of Digital Materials: Preliminary Guidelines

    National Research Council Canada - National Science Library

    Flynn, Marcy

    2002-01-01

    ... of the digital materials being preserved. This report, prepared by Silver Image Management (SIM), proposes technical metadata elements appropriate for digital objects in the Defense Virtual Library...

  6. Automated metadata--final project report

    Energy Technology Data Exchange (ETDEWEB)

    Schissel, David [General Atomics, San Diego, CA (United States)

    2016-04-01

    This report summarizes the work of the Automated Metadata, Provenance Cataloging, and Navigable Interfaces: Ensuring the Usefulness of Extreme-Scale Data Project (MPO Project) funded by the United States Department of Energy (DOE), Offices of Advanced Scientific Computing Research and Fusion Energy Sciences. Initially funded for three years starting in 2012, it was extended for 6 months with additional funding. The project was a collaboration between scientists at General Atomics, Lawrence Berkeley National Laboratory (LBNL), and Massachusetts Institute of Technology (MIT). The group leveraged existing computer science technology where possible, and extended or created new capabilities where required. The MPO project was able to successfully create a suite of software tools that can be used by a scientific community to automatically document their scientific workflows. These tools were integrated into workflows for fusion energy and climate research, illustrating the general applicability of the project’s toolkit. Feedback was very positive on the project’s toolkit and the value of such automatic workflow documentation to the scientific endeavor.

  7. SEMANTIC METADATA FOR HETEROGENEOUS SPATIAL PLANNING DOCUMENTS

    Directory of Open Access Journals (Sweden)

    A. Iwaniak

    2016-09-01

    Full Text Available Spatial planning documents contain information about the principles and rights of land use in different zones of a local authority. They are the basis for administrative decision making in support of sustainable development. In Poland these documents are published on the Web according to a prescribed non-extendable XML schema, designed for optimum presentation to humans in HTML web pages. There is no document standard, and limited functionality exists for adding references to external resources. The text in these documents is discoverable and searchable by general-purpose web search engines, but the semantics of the content cannot be discovered or queried. The spatial information in these documents is geographically referenced but not machine-readable. Major manual efforts are required to integrate such heterogeneous spatial planning documents from various local authorities for analysis, scenario planning and decision support. This article presents results of an implementation using machine-readable semantic metadata to identify relationships among regulations in the text, spatial objects in the drawings and links to external resources. A spatial planning ontology was used to annotate different sections of spatial planning documents with semantic metadata in the Resource Description Framework in Attributes (RDFa). The semantic interpretation of the content, links between document elements and links to external resources were embedded in XHTML pages. An example and use case from the spatial planning domain in Poland is presented to evaluate its efficiency and applicability. The solution enables the automated integration of spatial planning documents from multiple local authorities to assist decision makers with understanding and interpreting spatial planning information. The approach is equally applicable to legal documents from other countries and domains, such as cultural heritage and environmental management.
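
    The kind of machine-readable statements that such annotation yields can be sketched with rdflib, as below. The ontology namespace, properties and URIs are invented placeholders, not the actual spatial planning ontology used in the study; in the published work the triples are embedded as RDFa in XHTML rather than built programmatically.

```python
"""Sketch of planning-regulation statements as RDF triples (placeholder vocabulary)."""
from rdflib import Graph, Literal, Namespace, URIRef

PLAN = Namespace("http://example.org/plan#")
g = Graph()
g.bind("plan", PLAN)

regulation = URIRef("http://example.org/doc/lp-2016#section-3-2")  # a text section
zone = URIRef("http://example.org/zones/MW-1")                     # a drawn spatial object

g.add((regulation, PLAN.regulates, zone))
g.add((regulation, PLAN.maxBuildingHeight, Literal("12 m")))
g.add((zone, PLAN.landUse, Literal("multi-family housing")))

print(g.serialize(format="turtle"))
```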

  8. Metadata requirements for results of diagnostic imaging procedures: a BIIF profile to support user applications

    Science.gov (United States)

    Brown, Nicholas J.; Lloyd, David S.; Reynolds, Melvin I.; Plummer, David L.

    2002-05-01

    A visible digital image is rendered from a set of digital image data. Medical digital image data can be stored as either: (a) pre-rendered format, corresponding to a photographic print, or (b) un-rendered format, corresponding to a photographic negative. The appropriate image data storage format and associated header data (metadata) required by a user of the results of a diagnostic procedure recorded electronically depends on the task(s) to be performed. The DICOM standard provides a rich set of metadata that supports the needs of complex applications. Many end user applications, such as simple report text viewing and display of a selected image, are not so demanding and generic image formats such as JPEG are sometimes used. However, these are lacking some basic identification requirements. In this paper we make specific proposals for minimal extensions to generic image metadata of value in various domains, which enable safe use in the case of two simple healthcare end user scenarios: (a) viewing of text and a selected JPEG image activated by a hyperlink and (b) viewing of one or more JPEG images together with superimposed text and graphics annotation using a file specified by a profile of the ISO/IEC Basic Image Interchange Format (BIIF).

  9. A network analysis using metadata to investigate innovation in clean-tech – Implications for energy policy

    International Nuclear Information System (INIS)

    Marra, Alessandro; Antonelli, Paola; Dell’Anna, Luca; Pozzi, Cesare

    2015-01-01

    Clean-technology (clean-tech) is a large and increasing sector. Research and development (R&D) is the lifeline of the industry and innovation is fostered by a plethora of high-tech start-ups and small and medium-sized enterprises (SMEs). Any empirically based attempt to detect the pattern of technological innovation in the industry is challenging. This paper proposes an investigation of innovation in clean-tech using metadata provided by CrunchBase. Metadata reveal information on markets, products, services and technologies driving innovation in the clean-tech industry worldwide and for San Francisco, the leader in clean-tech innovation with more than two hundred specialised companies. A network analysis using metadata is the employed methodology and the main metrics of the resulting networks are discussed from an economic point of view. The purpose of the paper is to understand the specializations and technological complementarities underlying innovative companies, detect emerging industrial clusters at the global and local/metropolitan level and, finally, suggest a way to assess whether observed start-ups, SMEs and clusters follow a technological path of complementary innovation and market opportunity or, instead, present a risk of lock-in. The discussion of the results of the network analysis shows interesting implications for energy policy, particularly useful from an operational point of view. - Highlights: • Metadata provide information on companies' products and technologies. • A network analysis enables detection of specializations and complementarities. • An investigation of the network allows the identification of emerging industrial clusters. • Metrics help to appreciate complementary innovation and market opportunity. • Results of the network analysis show interesting policy implications.
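
    A rough sense of the methodology can be conveyed with a small sketch: companies are linked to the market/technology tags found in their metadata, and the company-to-company projection of that bipartite graph exposes shared specializations and potential complementarities. The example below uses the networkx Python library with invented company names and tags; it is only a toy illustration of the kind of analysis described, not the authors' pipeline.

```python
# Sketch of the kind of network analysis described above: companies are linked to
# the market/technology tags found in their metadata, and the company-company
# projection reveals shared specialisations. Company names and tags are invented.
import networkx as nx
from networkx.algorithms import bipartite

company_tags = {
    "SolarCo": ["solar", "storage"],
    "GridSoft": ["storage", "smart-grid"],
    "WindWorks": ["wind", "smart-grid"],
}

B = nx.Graph()
B.add_nodes_from(company_tags, bipartite=0)          # company nodes
tags = {t for ts in company_tags.values() for t in ts}
B.add_nodes_from(tags, bipartite=1)                  # technology/market tag nodes
for company, ts in company_tags.items():
    B.add_edges_from((company, t) for t in ts)

# Project onto companies: an edge means two firms share at least one tag,
# a rough proxy for technological complementarity or potential clustering.
companies = bipartite.weighted_projected_graph(B, list(company_tags))
for u, v, data in companies.edges(data=True):
    print(f"{u} -- {v}: shared tags = {data['weight']}")

# Simple centrality as a hint of which tags tie the network together.
print(nx.degree_centrality(B))
```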

  10. California Bioregions

    Data.gov (United States)

    California Natural Resource Agency — California regions developed by the Inter-agency Natural Areas Coordinating Committee (INACC) were digitized from a 1:1,200,000 California Department of Fish and...

  11. Native plant recovery in study plots after fennel (Foeniculum vulgare) control on Santa Cruz Island

    Science.gov (United States)

    Power, Paula; Stanley, Thomas R.; Cowan, Clark; Robertson, James R.

    2014-01-01

    Santa Cruz Island is the largest of the California Channel Islands and supports a diverse and unique flora which includes 9 federally listed species. Sheep, cattle, and pigs, introduced to the island in the mid-1800s, disturbed the soil, browsed native vegetation, and facilitated the spread of exotic invasive plants. Recent removal of introduced herbivores on the island led to the release of invasive fennel (Foeniculum vulgare), which expanded to become the dominant vegetation in some areas and has impeded the recovery of some native plant communities. In 2007, Channel Islands National Park initiated a program to control fennel using triclopyr on the eastern 10% of the island. We established replicate paired plots (seeded and nonseeded) at Scorpion Anchorage and Smugglers Cove, where notably dense fennel infestations (>10% cover) occurred, to evaluate the effectiveness of native seed augmentation following fennel removal. Five years after fennel removal, vegetative cover increased as litter and bare ground cover decreased significantly (P species increased at Scorpion Anchorage in both seeded and nonseeded plots. At Smugglers Cove, exotic cover decreased significantly (P = 0.0001) as native cover comprised of Eriogonum arborescens and Leptosyne gigantea increased significantly (P < 0.0001) in seeded plots only. Nonseeded plots at Smugglers Cove were dominated by exotic annual grasses, primarily Avena barbata. The data indicate that seeding with appropriate native seed is a critical step in restoration following fennel control in areas where the native seed bank is depauperate.

  12. Ecoregions of California

    Science.gov (United States)

    Griffith, Glenn E.; Omernik, James M.; Smith, David W.; Cook, Terry D.; Tallyn, Ed; Moseley, Kendra; Johnson, Colleen B.

    2016-02-23

    (2000), and Omernik and Griffith (2014). California has great ecological and biological diversity. The State contains offshore islands and coastal lowlands, large alluvial valleys, forested mountain ranges, deserts, and various aquatic habitats. There are 13 level III ecoregions and 177 level IV ecoregions in California and most continue into ecologically similar parts of adjacent States of the United States or Mexico (Bryce and others, 2003; Thorson and others, 2003; Griffith and others, 2014). The California ecoregion map was compiled at a scale of 1:250,000. It revises and subdivides an earlier national ecoregion map that was originally compiled at a smaller scale (Omernik, 1987; U.S. Environmental Protection Agency, 2013). This poster is the result of a collaborative project primarily between U.S. Environmental Protection Agency (USEPA) Region IX, USEPA National Health and Environmental Effects Research Laboratory (Corvallis, Oregon), California Department of Fish and Wildlife (DFW), U.S. Department of Agriculture (USDA)–Natural Resources Conservation Service (NRCS), U.S. Department of the Interior–Geological Survey (USGS), and other State of California agencies and universities. The project is associated with interagency efforts to develop a common framework of ecological regions (McMahon and others, 2001). Reaching that objective requires recognition of the differences in the conceptual approaches and mapping methodologies applied to develop the most common ecoregion-type frameworks, including those developed by the USDA–Forest Service (Bailey and others, 1994; Miles and Goudy, 1997; Cleland and others, 2007), the USEPA (Omernik 1987, 1995), and the NRCS (U.S. Department of Agriculture–Soil Conservation Service, 1981; U.S. Department of Agriculture–Natural Resources Conservation Service, 2006). As each of these frameworks is further refined, their differences are becoming less discernible. Regional collaborative projects such as this one in California

  13. Preliminary study of technical terminology for the retrieval of scientific book metadata records

    DEFF Research Database (Denmark)

    Larsen, Birger; Lioma, Christina; Frommholz, Ingo

    2012-01-01

    Books only represented by brief metadata (book records) are particularly hard to retrieve. One way of improving their retrieval is by extracting retrieval enhancing features from them. This work focusses on scientific (physics) book records. We ask if their technical terminology can be used...

  14. mzML2ISA & nmrML2ISA: generating enriched ISA-Tab metadata files from metabolomics XML data.

    Science.gov (United States)

    Larralde, Martin; Lawson, Thomas N; Weber, Ralf J M; Moreno, Pablo; Haug, Kenneth; Rocca-Serra, Philippe; Viant, Mark R; Steinbeck, Christoph; Salek, Reza M

    2017-08-15

    Submission to the MetaboLights repository for metabolomics data currently places the burden of reporting instrument and acquisition parameters in ISA-Tab format on users, who have to do it manually, a process that is time consuming and prone to user input error. Since the large majority of these parameters are embedded in instrument raw data files, an opportunity exists to capture this metadata more accurately. Here we report a set of Python packages that can automatically generate ISA-Tab metadata file stubs from raw XML metabolomics data files. The parsing packages are separated into mzML2ISA (encompassing mzML and imzML formats) and nmrML2ISA (nmrML format only). Overall, the use of mzML2ISA & nmrML2ISA reduces the time needed to capture metadata substantially (capturing 90% of metadata on assay and sample levels), is much less prone to user input errors, improves compliance with minimum information reporting guidelines and facilitates more finely grained data exploration and querying of datasets. mzML2ISA & nmrML2ISA are available under version 3 of the GNU General Public Licence at https://github.com/ISA-tools. Documentation is available from http://2isa.readthedocs.io/en/latest/. reza.salek@ebi.ac.uk or isatools@googlegroups.com. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
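
    As a conceptual illustration of what such a converter does (this is not the mzML2ISA API itself), the sketch below pulls instrument-related cvParam entries out of an mzML file with the Python standard library and writes a tab-separated ISA-Tab-style stub row. The element paths and output columns are simplified assumptions.

```python
# Conceptual sketch (not the mzML2ISA API): read instrument-related cvParam
# entries from an mzML file and emit a tab-separated ISA-Tab-style stub row.
# The element paths are simplified assumptions about typical mzML layout.
import csv
import xml.etree.ElementTree as ET

NS = "http://psi.hupo.org/ms/mzml"

def instrument_params(mzml_path):
    """Return cvParam names found under instrumentConfiguration elements."""
    tree = ET.parse(mzml_path)
    params = []
    for conf in tree.getroot().iter(f"{{{NS}}}instrumentConfiguration"):
        for cv in conf.iter(f"{{{NS}}}cvParam"):
            params.append(cv.get("name"))
    return params

def write_isatab_stub(sample_name, params, out_path):
    """Write a minimal assay-level stub with the extracted parameters."""
    with open(out_path, "w", newline="") as fh:
        writer = csv.writer(fh, delimiter="\t")
        writer.writerow(["Sample Name", "Parameter Value[Instrument]"])
        writer.writerow([sample_name, "; ".join(p for p in params if p)])

if __name__ == "__main__":
    params = instrument_params("example.mzML")        # path is a placeholder
    write_isatab_stub("sample_01", params, "a_assay_stub.txt")
```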

  15. Heat Islands

    Science.gov (United States)

    EPA's Heat Island Effect Site provides information on heat islands, their impacts, mitigation strategies, related research, a directory of heat island reduction initiatives in U.S. communities, and EPA's Heat Island Reduction Program.

  16. PCBs and DDT in the serum of juvenile California sea lions: associations with vitamins A and E and thyroid hormones

    International Nuclear Information System (INIS)

    Debier, Cathy; Ylitalo, Gina M.; Weise, Michael; Gulland, Frances; Costa, Daniel P.; Le Boeuf, Burney J.; Tillesse, Tanguy de; Larondelle, Yvan

    2005-01-01

    Top-trophic predators like California sea lions bioaccumulate high levels of persistent fat-soluble pollutants that may provoke physiological impairments such as endocrine or vitamins A and E disruption. We measured circulating levels of polychlorinated biphenyls (PCBs) and dichlorodiphenyltrichloroethane (DDT) in 12 healthy juvenile California sea lions captured on Año Nuevo Island, California, in 2002. We investigated the relationship between the contamination by PCBs and DDT and the circulating levels of vitamins A and E and thyroid hormones (thyroxine, T4 and triiodothyronine, T3). Serum concentrations of total PCBs (ΣPCBs) and total DDT were 14 ± 9 mg/kg and 28 ± 19 mg/kg lipid weight, respectively. PCB toxic equivalents (ΣPCB TEQs) were 320 ± 170 ng/kg lipid weight. Concentrations of ΣPCBs and ΣPCB TEQs in serum lipids were negatively correlated (p 0.1). As juvenile California sea lions are useful sentinels of coastal contamination, the high levels encountered in their serum are cause for concern about the ecosystem health of the area. - Results show high levels of organochlorine contaminants in juvenile California sea lions and a link between vitamin A, thyroid hormones and PCB exposure

  17. An Observation Capability Metadata Model for EO Sensor Discovery in Sensor Web Enablement Environments

    Directory of Open Access Journals (Sweden)

    Chuli Hu

    2014-10-01

    Full Text Available Accurate and fine-grained discovery by diverse Earth observation (EO) sensors ensures a comprehensive response to collaborative observation-required emergency tasks. This discovery remains a challenge in an EO sensor web environment. In this study, we propose an EO sensor observation capability metadata model that reuses and extends the existing sensor observation-related metadata standards to enable the accurate and fine-grained discovery of EO sensors. The proposed model is composed of five sub-modules, namely, ObservationBreadth, ObservationDepth, ObservationFrequency, ObservationQuality and ObservationData. The model is applied to different types of EO sensors and is formalized by the Open Geospatial Consortium Sensor Model Language 1.0. The GeosensorQuery prototype retrieves the qualified EO sensors based on the provided geo-event. An actual application to flood emergency observation in the Yangtze River Basin in China is conducted, and the results indicate that sensor inquiry can accurately achieve fine-grained discovery of qualified EO sensors and obtain enriched observation capability information. In summary, the proposed model enables an efficient encoding system that ensures minimum unification to represent the observation capabilities of EO sensors. The model functions as a foundation for the efficient discovery of EO sensors. In addition, the definition and development of this proposed EO sensor observation capability metadata model is a helpful step in extending the Sensor Model Language (SensorML) 2.0 Profile for the description of the observation capabilities of EO sensors.
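
    The five sub-modules can be pictured structurally as in the sketch below, written as Python dataclasses purely for illustration; the field names are assumptions, and the actual model is formalized in OGC SensorML rather than in code like this.

```python
# A structural sketch of the five sub-modules of the proposed observation
# capability model. Field names are illustrative assumptions; the actual model
# is formalised in OGC SensorML, not in Python classes.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ObservationBreadth:
    swath_width_km: float
    covered_region: str

@dataclass
class ObservationDepth:
    spatial_resolution_m: float
    spectral_bands: List[str] = field(default_factory=list)

@dataclass
class ObservationFrequency:
    revisit_time_hours: float

@dataclass
class ObservationQuality:
    geolocation_accuracy_m: float
    radiometric_accuracy_pct: float

@dataclass
class ObservationData:
    product_formats: List[str] = field(default_factory=list)

@dataclass
class ObservationCapability:
    sensor_id: str
    breadth: ObservationBreadth
    depth: ObservationDepth
    frequency: ObservationFrequency
    quality: ObservationQuality
    data: ObservationData

# A query for flood observation might filter on resolution and revisit time:
def suitable_for_flood(cap: ObservationCapability) -> bool:
    return cap.depth.spatial_resolution_m <= 30 and cap.frequency.revisit_time_hours <= 24
```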

  18. ARIADNE: a tracking system for relationships in LHCb metadata

    International Nuclear Information System (INIS)

    Shapoval, I; Clemencic, M; Cattaneo, M

    2014-01-01

    The data processing model of the LHCb experiment implies handling of an evolving set of heterogeneous metadata entities and relationships between them. The entities range from software and databases states to architecture specificators and software/data deployment locations. For instance, there is an important relationship between the LHCb Conditions Database (CondDB), which provides versioned, time dependent geometry and conditions data, and the LHCb software, which is the data processing applications (used for simulation, high level triggering, reconstruction and analysis of physics data). The evolution of CondDB and of the LHCb applications is a weakly-homomorphic process. It means that relationships between a CondDB state and LHCb application state may not be preserved across different database and application generations. These issues may lead to various kinds of problems in the LHCb production, varying from unexpected application crashes to incorrect data processing results. In this paper we present Ariadne – a generic metadata relationships tracking system based on the novel NoSQL Neo4j graph database. Its aim is to track and analyze many thousands of evolving relationships for cases such as the one described above, and several others, which would otherwise remain unmanaged and potentially harmful. The highlights of the paper include the system's implementation and management details, infrastructure needed for running it, security issues, first experience of usage in the LHCb production and potential of the system to be applied to a wider set of LHCb tasks.
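
    The kind of relationship tracking described can be illustrated with a short sketch against a Neo4j instance using the official Python driver. The node labels, properties, and connection details below are assumptions made for illustration; they are not Ariadne's actual schema or deployment.

```python
# Illustrative sketch of tracking a CondDB-tag/application-version relationship in
# Neo4j with the official Python driver. Labels, properties and connection details
# are assumptions for illustration; they are not Ariadne's actual schema.
from neo4j import GraphDatabase

URI = "bolt://localhost:7687"            # placeholder connection details
AUTH = ("neo4j", "password")

RECORD_LINK = """
MERGE (c:CondDBTag {name: $tag})
MERGE (a:Application {name: $app, version: $version})
MERGE (a)-[:COMPATIBLE_WITH]->(c)
"""

FIND_TAGS = """
MATCH (a:Application {name: $app, version: $version})-[:COMPATIBLE_WITH]->(c:CondDBTag)
RETURN c.name AS tag
"""

def main():
    driver = GraphDatabase.driver(URI, auth=AUTH)
    with driver.session() as session:
        # Record that a given application release was validated against a CondDB tag.
        session.run(RECORD_LINK, tag="cond-2014-01", app="Brunel", version="v45r0")
        # Later, ask which tags a release is known to be compatible with.
        for record in session.run(FIND_TAGS, app="Brunel", version="v45r0"):
            print(record["tag"])
    driver.close()

if __name__ == "__main__":
    main()
```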

  19. ARIADNE: a Tracking System for Relationships in LHCb Metadata

    Science.gov (United States)

    Shapoval, I.; Clemencic, M.; Cattaneo, M.

    2014-06-01

    The data processing model of the LHCb experiment implies handling of an evolving set of heterogeneous metadata entities and relationships between them. The entities range from software and databases states to architecture specificators and software/data deployment locations. For instance, there is an important relationship between the LHCb Conditions Database (CondDB), which provides versioned, time dependent geometry and conditions data, and the LHCb software, which is the data processing applications (used for simulation, high level triggering, reconstruction and analysis of physics data). The evolution of CondDB and of the LHCb applications is a weakly-homomorphic process. It means that relationships between a CondDB state and LHCb application state may not be preserved across different database and application generations. These issues may lead to various kinds of problems in the LHCb production, varying from unexpected application crashes to incorrect data processing results. In this paper we present Ariadne - a generic metadata relationships tracking system based on the novel NoSQL Neo4j graph database. Its aim is to track and analyze many thousands of evolving relationships for cases such as the one described above, and several others, which would otherwise remain unmanaged and potentially harmful. The highlights of the paper include the system's implementation and management details, infrastructure needed for running it, security issues, first experience of usage in the LHCb production and potential of the system to be applied to a wider set of LHCb tasks.

  20. The United States' energy play - California, the national drama, and renewable power

    International Nuclear Information System (INIS)

    Sklar, Scott

    2001-01-01

    The energy supply crisis in California is examined, and the problems resulting from the deteriorating electricity infrastructures due to under investment and the slowing down of power plant construction due to deregulation are considered. Details are given of the lead shown by California in the use of renewable energy sources and the insulation from the worst of the energy crisis of some towns such as Redding, Sacramento and Los Angeles which own their own electric utility. The building of solar homes, incentives offered for energy efficiency and the installation of photovoltaics (PV) by the Long Island Power Authority, and the investment in a PV micro-manufacturing plant in Illinois are reported. The absence of any cheap energy, new state energy portfolios, the passing of net-metering laws to promote PV and other renewable energy resources in 30 states, and the growth of the renewable energy sector in the US and in energy service companies are discussed

  1. The NOAA OneStop System: From Well-Curated Metadata to Data Discovery

    Science.gov (United States)

    McQuinn, E.; Jakositz, A.; Caldwell, A.; Delk, Z.; Neufeld, D.; Shapiro, J.; Partee, R.; Milan, A.

    2017-12-01

    The NOAA OneStop project is a pathfinder in the realm of enabling users to search for, discover, and access NOAA data. As the project continues along its path to maturity, it has become evident that three areas are of utmost importance to its success in the Earth science community: ensuring quality metadata, building a robust and scalable backend architecture, and keeping the user interface simple to use. Why is this the case? Because, simply put, we are dealing with all aspects of a Big Data problem: large volumes of disparate data needing to be quickly and easily processed and retrieved. In this presentation we discuss the three key aspects of OneStop architecture and how development in each area must be done through cross-team collaboration in order to succeed. We cover aspects of the web-based user interface and OneStop API and how metadata curators and software engineers have worked together to continually iterate on an ever-improving data discovery tool meant to be used by a variety of users searching across a broad assortment of data types.

  2. The Story of California = La Historia de California.

    Science.gov (United States)

    Bartel, Nick

    "The Story of California" is a history and geography of the state of California, intended for classroom use by limited-English-proficient, native Spanish-speaking students in California's urban middle schools. The book is designed with the left page in English and the right page in Spanish to facilitate student transition into…

  3. A Semantically Enabled Metadata Repository for Solar Irradiance Data Products

    Science.gov (United States)

    Wilson, A.; Cox, M.; Lindholm, D. M.; Nadiadi, I.; Traver, T.

    2014-12-01

    The Laboratory for Atmospheric and Space Physics, LASP, has been conducting research in Atmospheric and Space science for over 60 years, and providing the associated data products to the public. LASP has a long history, in particular, of making space-based measurements of the solar irradiance, which serves as crucial input to several areas of scientific research, including solar-terrestrial interactions, atmospheric science, and climate. LISIRD, the LASP Interactive Solar Irradiance Data Center, serves these datasets to the public, including solar spectral irradiance (SSI) and total solar irradiance (TSI) data. The LASP extended metadata repository, LEMR, is a database of information about the datasets served by LASP, such as parameters, uncertainties, temporal and spectral ranges, current version, alerts, etc. It serves as the definitive, single source of truth for that information. The database is populated with information garnered via web forms and automated processes. Dataset owners keep the information current and verified for datasets under their purview. This information can be pulled dynamically for many purposes. Web sites such as LISIRD can include this information in web page content as it is rendered, ensuring users get current, accurate information. It can also be pulled to create metadata records in various metadata formats, such as SPASE (for heliophysics) and ISO 19115. Once these records are made available to the appropriate registries, our data will be discoverable by users coming in via those organizations. The database is implemented as an RDF triplestore, a collection of instances of subject-object-predicate data entities identifiable with a URI. This capability coupled with SPARQL over HTTP read access enables semantic queries over the repository contents. To create the repository we leveraged VIVO, an open source semantic web application, to manage and create new ontologies and populate repository content. A variety of ontologies were used in
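
    The phrase "SPARQL over HTTP read access" suggests the style of query shown below. This is a hedged sketch using the SPARQLWrapper Python library; the endpoint URL and the choice of Dublin Core predicates are placeholders, not the actual LASP service or vocabulary.

```python
# Sketch of the kind of semantic query a repository like LEMR enables via
# SPARQL over HTTP. The endpoint URL and predicate IRIs are placeholders,
# not the actual LASP service or vocabulary.
from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = "https://example.org/lemr/sparql"   # hypothetical endpoint

QUERY = """
PREFIX dcterms: <http://purl.org/dc/terms/>
SELECT ?dataset ?title
WHERE {
  ?dataset dcterms:title ?title .
  FILTER(CONTAINS(LCASE(?title), "irradiance"))
}
LIMIT 10
"""

sparql = SPARQLWrapper(ENDPOINT)
sparql.setQuery(QUERY)
sparql.setReturnFormat(JSON)
results = sparql.query().convert()

for binding in results["results"]["bindings"]:
    print(binding["dataset"]["value"], "-", binding["title"]["value"])
```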

  4. GeoBoost: accelerating research involving the geospatial metadata of virus GenBank records.

    Science.gov (United States)

    Tahsin, Tasnia; Weissenbacher, Davy; O'Connor, Karen; Magge, Arjun; Scotch, Matthew; Gonzalez-Hernandez, Graciela

    2018-05-01

    GeoBoost is a command-line software package developed to address sparse or incomplete metadata in GenBank sequence records that relate to the location of the infected host (LOIH) of viruses. Given a set of GenBank accession numbers corresponding to virus GenBank records, GeoBoost extracts, integrates and normalizes geographic information reflecting the LOIH of the viruses using integrated information from GenBank metadata and related full-text publications. In addition, to facilitate probabilistic geospatial modeling, GeoBoost assigns probability scores for each possible LOIH. Binaries and resources required for running GeoBoost are packed into a single zipped file and freely available for download at https://tinyurl.com/geoboost. A video tutorial is included to help users quickly and easily install and run the software. The software is implemented in Java 1.8, and supported on MS Windows and Linux platforms. gragon@upenn.edu. Supplementary data are available at Bioinformatics online.
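
    GeoBoost itself is a packaged Java command-line tool, so no attempt is made here to reproduce its interface. The sketch below only shows, with Biopython, how the raw location-related GenBank metadata (for example the country or lat_lon qualifiers on the source feature) can be retrieved; this raw metadata is the kind of input that a normalization tool such as GeoBoost starts from. The accession number and e-mail address are placeholders.

```python
# This is not GeoBoost. It is a small Biopython sketch of retrieving the raw
# GenBank metadata (e.g., the /country qualifier on the source feature) that
# geospatial normalisation tools start from. Accession and email are placeholders.
from Bio import Entrez, SeqIO

Entrez.email = "you@example.org"   # NCBI requires a contact address

def location_metadata(accession):
    """Fetch a GenBank record and return any location-related source qualifiers."""
    handle = Entrez.efetch(db="nucleotide", id=accession,
                           rettype="gb", retmode="text")
    record = SeqIO.read(handle, "genbank")
    handle.close()
    info = {}
    for feature in record.features:
        if feature.type == "source":
            for key in ("country", "lat_lon", "isolation_source"):
                if key in feature.qualifiers:
                    info[key] = feature.qualifiers[key][0]
    return info

if __name__ == "__main__":
    print(location_metadata("NC_045512"))   # placeholder accession
```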

  5. Sea-level history during the Last Interglacial complex on San Nicolas Island, California: implications for glacial isostatic adjustment processes, paleozoogeography and tectonics

    Science.gov (United States)

    Muhs, Daniel R.; Simmons, Kathleen R.; Schumann, R. Randall; Groves, Lindsey T.; Mitrovica, Jerry X.; Laurel, Deanna

    2012-01-01

    San Nicolas Island, California has one of the best records of fossiliferous Quaternary marine terraces in North America, with at least fourteen terraces rising to an elevation of ~270 m above present-day sea level. In our studies of the lowest terraces, we identified platforms at 38-36 m (terrace 2a), 33-28 m (terrace 2b), and 13-8 m (terrace 1). Uranium-series dating of solitary corals from these terraces yields three clusters of ages: ~120 ka on terrace 2a (marine isotope stage [MIS] 5.5), ~120 and ~100 ka on terrace 2b (MIS 5.5 and 5.3), and ~80 ka (MIS 5.1) on terrace 1. We conclude that corals on terrace 2b that date to ~120 ka were reworked from a formerly broader terrace 2a during the ~100 ka sea stand. Fossil faunas differ on the three terraces. Isolated fragments of terrace 2a have a fauna similar to that of modern waters surrounding San Nicolas Island. A mix of extralimital southern and extralimital northern species is found on terrace 2b, and extralimital northern species are on terrace 1. On terrace 2b, with its mixed faunas, extralimital southern species, indicating warmer than present waters, are interpreted to be from the ~120 ka high sea stand, reworked from terrace 2a. The extralimital northern species on terrace 2b, indicating cooler than present waters, are interpreted to be from the ~100 ka sea stand. The abundant extralimital northern species on terrace 1 indicate cooler than present waters at ~80 ka. Using the highest elevations of the ~120 ka platform of terrace 2a, and assuming a paleo-sea level of +6 m based on previous studies, San Nicolas Island has experienced late Quaternary uplift rates of ~0.25-0.27 m/ka. These uplift rates, along with shoreline angle elevations and ages of terrace 2b (~100 ka) and terrace 1 (~80 ka) yield relative (local) paleo-sea level elevations of +2 to +6 m for the ~100 ka sea stand and -11 to -12 m for the ~80 ka sea stand. These estimates are significantly higher than those reported for the ~100 ka and ~80 ka

  6. Island biogeography

    DEFF Research Database (Denmark)

    Whittaker, Robert James; Fernández-Palacios, José María; Matthews, Thomas J.

    2017-01-01

    Islands provide classic model biological systems. We review how growing appreciation of geoenvironmental dynamics of marine islands has led to advances in island biogeographic theory accommodating both evolutionary and ecological phenomena. Recognition of distinct island geodynamics permits gener...

  7. Phonion: Practical Protection of Metadata in Telephony Networks

    Directory of Open Access Journals (Sweden)

    Heuser Stephan

    2017-01-01

    Full Text Available The majority of people across the globe rely on telephony networks as their primary means of communication. As such, many of the most sensitive personal, corporate and government related communications pass through these systems every day. Unsurprisingly, such connections are subject to a wide range of attacks. Of increasing concern is the use of metadata contained in Call Detail Records (CDRs), which contain source, destination, start time and duration of a call. This information is potentially dangerous as the very act of two parties communicating can reveal significant details about their relationship and put them in the focus of targeted observation or surveillance, which is highly critical especially for journalists and activists. To address this problem, we develop the Phonion architecture to frustrate such attacks by separating call setup functions from call delivery. Specifically, Phonion allows users to preemptively establish call circuits across multiple providers and technologies before dialing into the circuit and does not require constant Internet connectivity. Since no single carrier can determine the ultimate destination of the call, it provides unlinkability for its users and helps them to avoid passive surveillance. We define and discuss a range of adversary classes and analyze why current obfuscation technologies fail to protect users against such metadata attacks. In our extensive evaluation we further analyze advanced anonymity technologies (e.g., VoIP over Tor), which do not preserve our functional requirements for high voice quality in the absence of constant broadband Internet connectivity and compatibility with landline and feature phones. Phonion is the first practical system to provide guarantees of unlinkable communication against a range of practical adversaries in telephony systems.

  8. Tenerife Island, Canary Island Archipelago, Atlantic Ocean

    Science.gov (United States)

    1991-01-01

    Tenerife Island is one of the most volcanically active of the Canary Island archipelago, Atlantic Ocean, just off the NW coast of Africa (28.5N, 16.5W). The old central caldera has been nearly filled in by successive volcanic activity culminating in two stratocones. From those two peaks, a line of smaller cinder cones extends to the point of the island. Extensive gullies dissect the west side of the island and some forests still remain on the east side.

  9. Taxonomic names, metadata, and the Semantic Web

    Directory of Open Access Journals (Sweden)

    Roderic D. M. Page

    2006-01-01

    Full Text Available Life Science Identifiers (LSIDs) offer an attractive solution to the problem of globally unique identifiers for digital objects in biology. However, I suggest that in the context of taxonomic names, the most compelling benefit of adopting these identifiers comes from the metadata associated with each LSID. By using existing vocabularies wherever possible, and using a simple vocabulary for taxonomy-specific concepts we can quickly capture the essential information about a taxonomic name in the Resource Description Framework (RDF) format. This opens up the prospect of using technologies developed for the Semantic Web to add "taxonomic intelligence" to biodiversity databases. This essay explores some of these ideas in the context of providing a taxonomic framework for the phylogenetic database TreeBASE.

  10. Evolution of the ATLAS Metadata Interface (AMI)

    CERN Document Server

    Odier, Jerome; The ATLAS collaboration; Fulachier, Jerome; Lambert, Fabian

    2015-01-01

    The ATLAS Metadata Interface (AMI) can be considered to be a mature application because it has existed for at least 10 years. Over the years, the number of users and the number of functions provided for these users have increased. It has been necessary to adapt the hardware infrastructure in a seamless way so that the Quality of Service remains high. We will describe the evolution of the application from the initial one, using a single server with a MySQL backend database, to the current state, where we use a cluster of Virtual Machines on the French Tier 1 Cloud at Lyon, an ORACLE database backend also at Lyon, with replication to CERN using ORACLE streams behind a back-up server.

  11. Cool PDO phase leads to recent rebound in coastal southern California fog

    Directory of Open Access Journals (Sweden)

    Witiw, Michael R.

    2015-12-01

    Full Text Available The relationship between coastal fog in southern California and the Pacific Decadal Oscillation (PDO) is investigated during the last decade. Fog occurrence was examined at two locations in southern California: San Diego and Los Angeles international airports. Both are located near the Pacific coast with strong marine influences. The period examined was 2001 through 2012. The cool season (October-March) and warm season (April-September) were examined separately because of the different types of fog that prevail in each season. Previous studies have shown a relation between fog and the Pacific Decadal Oscillation (PDO). However, a switch in polarity in the PDO in the mid-1970s (from a cool to a warm phase) coupled with a sharp decrease in particulate concentrations calls into question the strong relationship shown. Further studies suggest that the decrease in dense fog seen from the 1960s through the 1990s was largely due to increasing urban heat island effects coupled with a decrease in atmospheric particulate matter. Since 1998, the PDO again changed polarity and fog frequencies began to rise. However, urban heat island and particulate effects were relatively constant, making it easier to isolate any effects of the PDO on fog occurrence. Previous studies examined the occurrence of dense fog (visibility less than 400 meters), but because of the decrease in fog in this category, 800 meters was chosen this time. That also corresponds to the 0.5 mile visibility which triggers special reports at the California airports when visibility moves through this threshold. Although there was no strong relationship between fog and PDO in the most recent period, Pacific Ocean oscillations were found to show significant relationships with fog frequencies historically. Upwelling indices show a significant relationship with fog frequencies when examined by the phase of the PDO. Even stronger relationships are found when selecting La Niña and El Niño events.

  12. There's Trouble in Paradise: Problems with Educational Metadata Encountered during the MALTED Project.

    Science.gov (United States)

    Monthienvichienchai, Rachada; Sasse, M. Angela; Wheeldon, Richard

    This paper investigates the usability of educational metadata schemas with respect to the case of the MALTED (Multimedia Authoring Language Teachers and Educational Developers) project at University College London (UCL). The project aims to facilitate authoring of multimedia materials for language learning by allowing teachers to share multimedia…

  13. Shallow magnetic inclinations in the Cretaceous Valle Group, Baja California: remagnetization, compaction, or terrane translation?

    Science.gov (United States)

    Smith, Douglas P.; Busby, Cathy J.

    1993-10-01

    Paleomagnetic data from Albian to Turonian sedimentary rocks on Cedros Island, Mexico (28.2° N, 115.2° W) support the interpretation that Cretaceous rocks of western Baja California have moved farther northward than the 3° of latitude assignable to Neogene oblique rifting in the Gulf of California. Averaged Cretaceous paleomagnetic results from Cedros Island support 20 ± 10° of northward displacement and 14 ± 7° of clockwise rotation with respect to cratonic North America. Positive field stability tests from the Vizcaino terrane substantiate a mid-Cretaceous age for the high-temperature characteristic remanent magnetization in mid-Cretaceous strata. Therefore coincidence of characteristic magnetization directions and the expected Quaternary axial dipole direction is not due to post mid-Cretaceous remagnetization. A slump test performed on internally coherent, intrabasinal slump blocks within a paleontologically dated olistostrome demonstrates a mid-Cretaceous age of magnetization in the Valle Group. The in situ high-temperature natural remanent magnetization directions markedly diverge from the expected Quaternary axial dipole, indicating that the characteristic, high-temperature magnetization was acquired prior to intrabasinal slumping. Early acquisition of the characteristic magnetization is also supported by a regional attitude test involving three localities in coherent mid-Cretaceous Valle Group strata. Paleomagnetic inclinations in mudstone are not different from those in sandstone, indicating that burial compaction did not bias the results toward shallow inclinations in the Vizcaino terrane.

  14. The National Assessment of Shoreline Change:A GIS Compilation of Vector Shorelines and Associated Shoreline Change Data for the Sandy Shorelines of the California Coast

    Science.gov (United States)

    Hapke, Cheryl J.; Reid, David

    2006-01-01

    California coastline at http://pubs.usgs.gov/of/2006/1219/ for additional information regarding methods and results (Hapke et al., 2006). Data in this report are organized into downloadable layers by region (Northern, Central and Southern California) and are provided as vector datasets with metadata. Vector shorelines may represent a compilation of data from one or more sources and these sources are included in the dataset metadata. This project employs the Environmental Systems Research Institute's (ESRI) ArcGIS as its GIS mapping tool and contains several data layers (shapefiles) that are used to create a geographic view of the California Coast. These vector data form a basemap comprised of polygon and line themes that include a U.S. coastline (1:80,000), U.S. cities, and state boundaries.

  15. Identity and privacy. Unique in the shopping mall: on the reidentifiability of credit card metadata.

    Science.gov (United States)

    de Montjoye, Yves-Alexandre; Radaelli, Laura; Singh, Vivek Kumar; Pentland, Alex Sandy

    2015-01-30

    Large-scale data sets of human behavior have the potential to fundamentally transform the way we fight diseases, design cities, or perform research. Metadata, however, contain sensitive information. Understanding the privacy of these data sets is key to their broad use and, ultimately, their impact. We study 3 months of credit card records for 1.1 million people and show that four spatiotemporal points are enough to uniquely reidentify 90% of individuals. We show that knowing the price of a transaction increases the risk of reidentification by 22%, on average. Finally, we show that even data sets that provide coarse information at any or all of the dimensions provide little anonymity and that women are more reidentifiable than men in credit card metadata. Copyright © 2015, American Association for the Advancement of Science.
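
    The notion of "unicity" behind the four-points result can be illustrated with a toy simulation: generate synthetic spatiotemporal traces, reveal a few random points of a target trace, and count how often those points match exactly one individual. The numbers below are invented and far smaller than the study's 1.1-million-person data set; the sketch only conveys the concept.

```python
# Toy illustration of "unicity": with synthetic shop/day traces, how often do a
# few known points single out one individual? All numbers are invented and much
# smaller than the data set analysed in the paper.
import random

random.seed(0)
N_PEOPLE, N_SHOPS, N_DAYS, TRACE_LEN = 2000, 50, 30, 40

# Each person is represented as a set of (shop, day) points.
traces = [frozenset((random.randrange(N_SHOPS), random.randrange(N_DAYS))
                    for _ in range(TRACE_LEN)) for _ in range(N_PEOPLE)]

def unicity(k, trials=500):
    """Fraction of sampled individuals uniquely identified by k known points."""
    unique = 0
    for _ in range(trials):
        target = random.choice(traces)
        known = set(random.sample(sorted(target), k))
        matches = sum(1 for t in traces if known <= t)
        unique += (matches == 1)
    return unique / trials

for k in (1, 2, 3, 4):
    print(f"{k} known point(s): {unicity(k):.0%} unique")
```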

  16. Canary Islands

    Science.gov (United States)

    1992-01-01

    This easterly looking view shows the seven major volcanic islands of the Canary Island chain (28.0N, 16.5W) and offers a unique view of the islands that have become a frequent vacation spot for Europeans. The northwest coastline of Africa, (Morocco and Western Sahara), is visible in the background. Frequently, these islands create an impact on local weather (cloud formations) and ocean currents (island wakes) as seen in this photo.

  17. Automated metadata--final project report

    International Nuclear Information System (INIS)

    Schissel, David

    2016-01-01

    This report summarizes the work of the Automated Metadata, Provenance Cataloging, and Navigable Interfaces: Ensuring the Usefulness of Extreme-Scale Data Project (MPO Project) funded by the United States Department of Energy (DOE), Offices of Advanced Scientific Computing Research and Fusion Energy Sciences. Initially funded for three years starting in 2012, it was extended for 6 months with additional funding. The project was a collaboration between scientists at General Atomics, Lawrence Berkeley National Laboratory (LBNL), and Massachusetts Institute of Technology (MIT). The group leveraged existing computer science technology where possible, and extended or created new capabilities where required. The MPO project was able to successfully create a suite of software tools that can be used by a scientific community to automatically document their scientific workflows. These tools were integrated into workflows for fusion energy and climate research, illustrating the general applicability of the project's toolkit. Feedback was very positive on the project's toolkit and the value of such automatic workflow documentation to the scientific endeavor.

  18. Study on Information Management for the Conservation of Traditional Chinese Architectural Heritage - 3d Modelling and Metadata Representation

    Science.gov (United States)

    Yen, Y. N.; Weng, K. H.; Huang, H. Y.

    2013-07-01

    After over 30 years of practice and development, Taiwan's architectural conservation field is moving rapidly into digitalization and its applications. Compared to modern buildings, traditional Chinese architecture has considerably more complex elements and forms. To document and digitize these unique heritages in their conservation lifecycle is a new and important issue. This article takes the caisson ceiling of the Taipei Confucius Temple, octagonal with 333 elements in 8 types, as a case study for digitization practice. The application of metadata representation and 3D modelling are the two key issues to discuss. Both Revit and SketchUp were applied in this research to compare their effectiveness for metadata representation. Due to limitations of the Revit database, the final 3D models were built with SketchUp. The research found that, firstly, cultural heritage databases must convey that while many elements are similar in appearance, they are unique in value; although 3D simulations help the general understanding of architectural heritage, software such as Revit and SketchUp, at this stage, could only be used to model basic visual representations, and are ineffective in documenting additional critical data of individually unique elements. Secondly, when establishing conservation lifecycle information for application in management systems, a full and detailed presentation of the metadata must also be implemented; the existing applications of BIM in managing conservation lifecycles are still insufficient. Results of the research recommend SketchUp as a tool for present modelling needs, and BIM for sharing data between users, but the implementation of metadata representation is of the utmost importance.

  19. A semantically rich and standardised approach enhancing discovery of sensor data and metadata

    Science.gov (United States)

    Kokkinaki, Alexandra; Buck, Justin; Darroch, Louise

    2016-04-01

    The marine environment plays an essential role in the earth's climate. To enhance the ability to monitor the health of this important system, innovative sensors are being produced and combined with state of the art sensor technology. As the number of sensors deployed is continually increasing, it is a challenge for data users to find the data that meet their specific needs. Furthermore, users need to integrate diverse ocean datasets originating from the same or even different systems. Standards provide a solution to the above mentioned challenges. The Open Geospatial Consortium (OGC) has created Sensor Web Enablement (SWE) standards that enable different sensor networks to establish syntactic interoperability. When combined with widely accepted controlled vocabularies, they become semantically rich and semantic interoperability is achievable. In addition, Linked Data is the recommended best practice for exposing, sharing and connecting information on the Semantic Web using Uniform Resource Identifiers (URIs), Resource Description Framework (RDF) and RDF Query Language (SPARQL). As part of the EU-funded SenseOCEAN project, the British Oceanographic Data Centre (BODC) is working on the standardisation of sensor metadata enabling 'plug and play' sensor integration. Our approach combines standards, controlled vocabularies and persistent URIs to publish sensor descriptions, their data and associated metadata as 5 star Linked Data and OGC SWE (SensorML, Observations & Measurements) standard. Thus sensors become readily discoverable, accessible and usable via the web. Content and context based searching is also enabled since sensor descriptions are understood by machines. Additionally, sensor data can be combined with other sensor or Linked Data datasets to form knowledge. This presentation will describe the work done in BODC to achieve syntactic and semantic interoperability in the sensor domain. It will illustrate the reuse and extension of the Semantic Sensor

  20. Oceanographic data collected during the Davidson Seamount 2002 expedition on the RV Western Flyer, in the North Pacific Ocean, southwest of Monterey, California from May 17, 2002 - May 24, 2002 (NODC Accession 0072306)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This spring, scientists explored the first "undersea island" to be called a seamount. Davidson seamount, located 120 km Southwest of Monterey, California, is one of...

  1. Placing Music Artists and Songs in Time Using Editorial Metadata and Web Mining Techniques

    NARCIS (Netherlands)

    Bountouridis, D.; Veltkamp, R.C.; Balen, J.M.H. van

    2013-01-01

    This paper investigates the novel task of situating music artists and songs in time, thereby adding contextual information that typically correlates with an artist’s similarities, collaborations and influences. The proposed method makes use of editorial metadata in conjunction with web mining

  2. The Racialized Experiences of Asian American and Pacific Islander Students: An Examination of Campus Racial Climate at the University of California, Los Angeles. iCount: A Data Quality Movement for Asian Americans and Pacific Islanders

    Science.gov (United States)

    Nguyen, Bach Mai Dolly; Nguyen, Mike Hoa; Chan, Jason; Teranishi, Robert T.

    2016-01-01

    In 2013, the National Commission on Asian American and Pacific Islander Research in Education (CARE) launched iCount: A Data Quality Movement for Asian Americans and Pacific Islanders in Higher Education, a collaborative effort with the White House Initiative on Asian Americans and Pacific Islanders (WHIAAPI) and with generous support from the…

  3. Extensive geographic and ontogenetic variation characterizes the trophic ecology of a temperate reef fish on southern California (USA) rocky reefs

    Science.gov (United States)

    Hamilton, Scott L.; Caselle, Jennifer E.; Lantz, Coulson A.; Egloff, Tiana L.; Kondo, Emi; Newsome, Seth D.; Loke-Smith, Kerri; Pondella, Daniel J.; Young, Kelly A.; Lowe, Christopher G.

    2015-01-01

    Interactions between predator and prey act to shape the structure of ecological communities, and these interactions can differ across space. California sheephead Semicossyphus pulcher are common predators of benthic invertebrates in kelp beds and rocky reefs in southern California, USA. Through gut content and stable isotope (δ13C and δ15N) analyses, we investigated geographic and ontogenetic variation in trophic ecology across 9 populations located at island and mainland sites throughout southern California. We found extensive geographic variation in California sheephead diet composition over small spatial scales. Populations differed in the proportion of sessile filter/suspension feeders or mobile invertebrates in the diet. Spatial variation in diet was highly correlated with other life history and demographic traits (e.g. growth, survivorship, reproductive condition, and energy storage), in addition to proxies of prey availability from community surveys. Multivariate descriptions of the diet from gut contents roughly agreed with the spatial groupings of sites based on stable isotope analysis of both California sheephead and their prey. Ontogenetic changes in diet occurred consistently across populations, despite spatial differences in size structure. As California sheephead increase in size, diets shift from small filter feeders, like bivalves, to larger mobile invertebrates, such as sea urchins. Our results indicate that locations with large California sheephead present, such as many marine reserves, may experience increased predation pressure on sea urchins, which could ultimately affect kelp persistence. PMID:26246648

  4. An Automatic Indicator of the Reusability of Learning Objects Based on Metadata That Satisfies Completeness Criteria

    Science.gov (United States)

    Sanz-Rodríguez, Javier; Margaritopoulos, Merkourios; Margaritopoulos, Thomas; Dodero, Juan Manuel; Sánchez-Alonso, Salvador; Manitsaris, Athanasios

    The search for learning objects in open repositories is currently a tedious task, owing to the vast amount of resources available and the fact that most of them do not have associated ratings to help users make a choice. In order to tackle this problem, we propose a reusability indicator, which can be calculated automatically using the metadata that describes the objects, allowing us to select those materials most likely to be reused. In order for this reusability indicator to be applied, metadata records must reach a certain level of completeness, guaranteeing that the material is adequately described. This reusability indicator is tested in two studies on the Merlot and eLera repositories, and the results obtained offer evidence to support its effectiveness.
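
    The two-step idea, first requiring a completeness threshold and then scoring reusability from the metadata itself, can be sketched as below. The fields, weights, and threshold are invented for illustration and are not the formula proposed by the authors.

```python
# Sketch of the two-step idea: first check that a learning-object metadata record
# is complete enough to trust, then compute a weighted indicator from fields that
# plausibly relate to reusability. Fields, weights and threshold are invented;
# they are not the formula proposed by the authors.
FIELDS_FOR_COMPLETENESS = ["title", "description", "format", "language",
                           "educational_context", "rights"]

def completeness(record: dict) -> float:
    filled = sum(1 for f in FIELDS_FOR_COMPLETENESS if record.get(f))
    return filled / len(FIELDS_FOR_COMPLETENESS)

def reusability_indicator(record: dict, min_completeness: float = 0.8):
    """Return a 0-1 score, or None if the record is too incomplete to assess."""
    if completeness(record) < min_completeness:
        return None
    score = 0.0
    score += 0.4 if record.get("rights") == "open" else 0.0           # open licence
    score += 0.3 if record.get("format") in {"html", "pdf"} else 0.0  # portable format
    score += 0.3 if record.get("language") == "en" else 0.1           # broad audience
    return score

example = {"title": "Intro to metadata", "description": "...", "format": "html",
           "language": "en", "educational_context": "higher education",
           "rights": "open"}
print(reusability_indicator(example))   # 1.0 for this fully open, portable record
```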

  5. A Geospatial Data Recommender System based on Metadata and User Behaviour

    Science.gov (United States)

    Li, Y.; Jiang, Y.; Yang, C. P.; Armstrong, E. M.; Huang, T.; Moroni, D. F.; Finch, C. J.; McGibbney, L. J.

    2017-12-01

    Earth observations are produced at high velocity by real-time sensors, reaching tera- to petabytes of geospatial data daily. Discovering and accessing the right data within this mass of geospatial data is like finding a needle in a haystack. To help researchers find the right data for study and decision support, a great deal of research on improving search performance has been proposed, including recommendation algorithms. However, few papers have discussed how to implement a recommendation algorithm in a geospatial data retrieval system. To address this problem, we propose a recommendation engine that improves the discovery of relevant geospatial data by mining and utilizing metadata and user behavior data: 1) metadata-based recommendation considers the correlation of each attribute (i.e., spatiotemporal, categorical, and ordinal) with the data to be found; in particular, a phrase extraction method is used to improve the accuracy of the description similarity; 2) user behavior data are utilized to predict the interest of a user through collaborative filtering; 3) an integration method is designed to combine the results of the above two methods to achieve better recommendations. Experiments show that in the hybrid recommendation list, all precisions are larger than 0.8 from position 1 to 10.
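
    The hybrid scheme outlined in points 1-3 can be sketched as follows: a content score from TF-IDF cosine similarity over dataset descriptions, a collaborative score from user co-access, and a weighted combination of the two. The data, weights, and library choice (scikit-learn) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the hybrid idea: combine a metadata-similarity score
# (TF-IDF cosine over dataset descriptions) with a collaborative score
# (co-access by the same users). Weights and data are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

datasets = ["sea surface temperature daily global grid",
            "ocean wind speed daily global grid",
            "land cover classification annual map"]
access_log = {  # user -> datasets viewed (indices into `datasets`)
    "u1": {0, 1},
    "u2": {0, 1},
    "u3": {2},
}

# Content-based part: cosine similarity between dataset descriptions.
tfidf = TfidfVectorizer().fit_transform(datasets)
content_sim = cosine_similarity(tfidf)

# Collaborative part: how often two datasets are accessed by the same user.
def co_access(i, j):
    users = [u for u, seen in access_log.items() if i in seen]
    if not users:
        return 0.0
    return sum(1 for u in users if j in access_log[u]) / len(users)

def hybrid_score(i, j, alpha=0.6):
    return alpha * content_sim[i, j] + (1 - alpha) * co_access(i, j)

# Recommend for dataset 0: rank the others by the combined score.
ranked = sorted((j for j in range(len(datasets)) if j != 0),
                key=lambda j: hybrid_score(0, j), reverse=True)
print([datasets[j] for j in ranked])
```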

  6. California State Waters Map Series: offshore of Carpinteria, California

    Science.gov (United States)

    Johnson, Samuel Y.; Dartnell, Peter; Cochrane, Guy R.; Golden, Nadine E.; Phillips, Eleyne L.; Ritchie, Andrew C.; Kvitek, Rikk G.; Greene, H. Gary; Endris, Charles A.; Seitz, Gordon G.; Sliter, Ray W.; Erdey, Mercedes D.; Wong, Florence L.; Gutierrez, Carlos I.; Krigsman, Lisa M.; Draut, Amy E.; Hart, Patrick E.; Johnson, Samuel Y.; Cochran, Susan A.

    2013-01-01

    In 2007, the California Ocean Protection Council initiated the California Seafloor Mapping Program (CSMP), designed to create a comprehensive seafloor map of high-resolution bathymetry, marine benthic habitats, and geology within the 3-nautical-mile limit of California’s State Waters. The CSMP approach is to create highly detailed seafloor maps through collection, integration, interpretation, and visualization of swath sonar data, acoustic backscatter, seafloor video, seafloor photography, high-resolution seismic-reflection profiles, and bottom-sediment sampling data. The map products display seafloor morphology and character, identify potential marine benthic habitats, and illustrate both the surficial seafloor geology and shallow (to about 100 m) subsurface geology. The Offshore of Carpinteria map area lies within the central Santa Barbara Channel region of the Southern California Bight. This geologically complex region forms a major biogeographic transition zone, separating the cold-temperate Oregonian province north of Point Conception from the warm-temperate California province to the south. The map area is in the southern part of the Western Transverse Ranges geologic province, which is north of the California Continental Borderland. Significant clockwise rotation—at least 90°—since the early Miocene has been proposed for the Western Transverse Ranges province, and the region is presently undergoing north-south shortening. The small city of Carpinteria is the most significant onshore cultural center in the map area; the smaller town of Summerland lies west of Carpinteria. These communities rest on a relatively flat coastal piedmont that is surrounded on the north, east, and west by hilly relief on the flanks of the Santa Ynez Mountains. El Estero, a salt marsh on the coast west of Carpinteria, is an ecologically important coastal estuary. Southeast of Carpinteria, the coastal zone is narrow strip containing highway and railway transportation corridors

  7. Tidal wetland fluxes of dissolved organic carbon and sediment at Browns Island, California: initial evaluation

    Science.gov (United States)

    Ganju, N.K.; Bergamaschi, B.; Schoellhamer, D.H.

    2003-01-01

    Carbon and sediment fluxes from tidal wetlands are of increasing concern in the Sacramento-San Joaquin River Delta (Delta), because of drinking water issues and habitat restoration efforts. Certain forms of dissolved organic carbon (DOC) react with disinfecting chemicals used to treat drinking water, to form disinfection byproducts (DBPs), some of which are potential carcinogens. The contribution of DBP precursors by tidal wetlands is unknown. Sediment transport to and from tidal wetlands determines the potential for marsh accretion, thereby affecting habitat formation.Water, carbon, and sediment flux were measured in the main channel of Browns Island, a tidal wetland located at the confluence of Suisun Bay and the Delta. In-situ instrumentation were deployed between May 3 and May 21, 2002. Water flux was measured using acoustic Doppler current profilers and the index-velocity method. DOC concentrations were measured using calibrated ultraviolet absorbance and fluorescence instruments. Suspended-sediment concentrations were measured using a calibrated nephelometric turbidity sensor. Tidally averaged water flux through the channel was dependent on water surface elevations in Suisun Bay. Strong westerly winds resulted in higher water surface elevations in the area east of Browns Island, causing seaward flow, while subsiding winds reversed this effect. Peak ebb flow transported 36% more water than peak flood flow, indicating an ebb-dominant system. DOC concentrations were affected strongly by porewater drainage from the banks of the channel. Peak DOC concentrations were observed during slack after ebb, when the most porewater drained into the channel. Suspended-sediment concentrations were controlled by tidal currents that mobilized sediment from the channel bed, and stronger tides mobilized more sediment than the weaker tides. Sediment was transported mainly to the island during the 2-week monitoring period, though short periods of export occurred during the spring

  8. California Political Districts

    Data.gov (United States)

    California Natural Resource Agency — This is a series of district layers pertaining to California's political districts that are derived from the California State Senate and State Assembly information....

  9. Multidecadal shoreline changes of atoll islands in the Marshall Islands

    Science.gov (United States)

    Ford, M.

    2012-12-01

    Atoll islands are considered highly vulnerable to the impacts of continued sea level rise. One of the most commonly predicted outcomes of continued sea level rise is widespread and chronic shoreline erosion. Despite the widespread implications of predicted erosion, the decadal scale changes of atoll island shorelines are poorly resolved. The Marshall Islands is one of only four countries where the majority of inhabited land is comprised of reef and atoll islands. Consisting of 29 atolls and 5 mid-ocean reef islands, the Marshall Islands are considered highly vulnerable to the impacts of sea level rise. A detailed analysis of shoreline change on over 300 islands on 10 atolls was undertaken using historic aerial photos (1945-1978) and modern high resolution satellite imagery (2004-2012). Results highlight the complex and dynamic nature of atoll islands, with significant shifts in shoreline position observed over the period of analysis. Results suggest shoreline accretion is the dominant mode of change on the islands studied, often associated with a net increase in vegetated island area. However, considerable inter- and intra-atoll variability exists with regards to shoreline stability. Findings are discussed with respect to island morphodynamics and potential hazard mitigation and planning responses within atoll settings.

  10. California Geothermal Forum: A Path to Increasing Geothermal Development in California

    Energy Technology Data Exchange (ETDEWEB)

    Young, Katherine R. [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2017-01-01

    The genesis of this report was a 2016 forum in Sacramento, California, titled 'California Geothermal Forum: A Path to Increasing Geothermal Development in California.' The forum was held at the California Energy Commission's (CEC) headquarters in Sacramento, California, with the primary goal of advancing the dialogue on technical research and development (R&D) focus areas for future consideration by the U.S. Department of Energy's Geothermal Technologies Office (GTO) and the CEC. The forum convened a diverse group of stakeholders from government, industry, and research to lay out pathways for new geothermal development in California while remaining consistent with critical Federal and State conservation planning efforts, particularly at the Salton Sea.

  11. Tracking the origins and diet of an endemic island canid (Urocyon littoralis) across 7300 years of human cultural and environmental change

    Science.gov (United States)

    Hofman, Courtney A.; Rick, Torben C.; Maldonado, Jesús E.; Collins, Paul W.; Erlandson, Jon M.; Fleischer, Robert C.; Smith, Chelsea; Sillett, T. Scott; Ralls, Katherine; Teeter, Wendy; Vellanoweth, René L.; Newsome, Seth D.

    2016-08-01

    Understanding how human activities have influenced the foraging ecology of wildlife is important as our planet faces ongoing and impending habitat and climatic change. We review the canine surrogacy approach (CSA), a tool for comparing human, dog, and other canid diets in the past, and apply CSA to investigate possible ancient human resource provisioning in an endangered canid, the California Channel Islands fox (Urocyon littoralis). We conducted stable isotope analysis of bone collagen samples from ancient and modern island foxes (n = 214) and mainland gray foxes (Urocyon cinereoargenteus, n = 24). We compare these data to isotope values of ancient humans and dogs, and synthesize 29 Accelerator Mass Spectrometry (AMS) radiocarbon dates that fine-tune the chronology of island foxes. AMS dates confirm that island foxes likely arrived during the early Holocene (>7300 cal BP) on the northern islands in the archipelago and during the middle Holocene (>5500 cal BP) on the southern islands. We found no evidence that island foxes were consistently using anthropogenic resources (e.g., food obtained by scavenging around human habitation sites or direct provisioning by Native Americans), except for a few individuals on San Nicolas Island and possibly on San Clemente and Santa Rosa islands. Decreases in U. littoralis carbon and nitrogen isotope values between prehistoric times and the 19th century on San Nicolas Island suggest that changes in human land use from Native American hunter-gatherer occupations to historical ranching had a strong influence on fox diet. Island foxes exhibit considerable dietary variation through time and between islands and have adapted to a wide variety of climatic and cultural changes over the last 7300 years. This generalist foraging strategy suggests that endemic island foxes may be resilient to future changes in resource availability.

  12. Private Schools, California, 2009, California Department of Education

    Data.gov (United States)

    U.S. Environmental Protection Agency — California law (California Education Code Section 33190) requires private schools offering or conducting a full-time elementary or secondary level day school for...

  13. Estimating pediatric entrance skin dose from digital radiography examination using DICOM metadata: A quality assurance tool

    Energy Technology Data Exchange (ETDEWEB)

    Brady, S. L., E-mail: samuel.brady@stjude.org; Kaufman, R. A., E-mail: robert.kaufman@stjude.org [Department of Diagnostic Imaging, St. Jude Children’s Research Hospital, Memphis, Tennessee 38105 (United States)

    2015-05-15

    Purpose: To develop an automated methodology to estimate patient examination dose in digital radiography (DR) imaging using DICOM metadata as a quality assurance (QA) tool. Methods: Patient examination and demographic information were gathered from metadata analysis of DICOM header data. The x-ray system radiation output (i.e., air KERMA) was characterized for all filter combinations used for patient examinations. Average patient thicknesses were measured for head, chest, abdomen, knees, and hands using volumetric images from CT. Backscatter factors (BSFs) were calculated from examination kVp. Patient entrance skin air KERMA (ESAK) was calculated by (1) looking up examination technique factors taken from DICOM header metadata (i.e., kVp and mAs) to derive an air KERMA (k_air) value based on an x-ray characteristic radiation output curve; (2) scaling k_air with a BSF value; and (3) correcting k_air for patient thickness. Finally, patient entrance skin dose (ESD) was calculated by multiplying a mass–energy attenuation coefficient ratio by ESAK. Patient ESD calculations were computed for common DR examinations at our institution: dual view chest, anteroposterior (AP) abdomen, lateral (LAT) skull, dual view knee, and bone age (left hand only) examinations. Results: ESD was calculated for a total of 3794 patients; mean age was 11 ± 8 yr (range: 2 months to 55 yr). The mean ESD range was 0.19–0.42 mGy for dual view chest, 0.28–1.2 mGy for AP abdomen, 0.18–0.65 mGy for LAT view skull, 0.15–0.63 mGy for dual view knee, and 0.10–0.12 mGy for bone age (left hand) examinations. Conclusions: A methodology combining DICOM header metadata and basic x-ray tube characterization curves was demonstrated. In a regulatory era where patient dose reporting is increasingly in demand, this methodology will allow a knowledgeable user the means to establish an automatable dose reporting program for DR and perform patient dose related QA testing for
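
    The three-step ESAK/ESD recipe above can be sketched directly in code. The output curve, backscatter factor, thickness correction (taken here as an inverse-square correction to the entrance-skin plane), and mass-energy attenuation coefficient ratio below are hypothetical stand-ins for the measured and tabulated values the authors used:

```python
def air_kerma_per_mas(kvp):
    """Hypothetical characteristic output curve for one filter combination:
    air KERMA at the reference distance (mGy/mAs) as a function of kVp."""
    return 6.0e-2 * (kvp / 80.0) ** 2

def backscatter_factor(kvp):
    """Hypothetical BSF; real values also depend on field size and filtration."""
    return 1.25 + 0.001 * (kvp - 80.0)

def entrance_skin_dose(kvp, mas, sid_cm, thickness_cm,
                       muen_ratio=1.06, ref_distance_cm=100.0):
    """ESD (mGy): (1) k_air from the output curve and mAs, (2) scale by the BSF,
    (3) correct for patient thickness (here, inverse square to the entrance-skin
    plane), then multiply by a tissue-to-air mass-energy attenuation ratio."""
    k_air = air_kerma_per_mas(kvp) * mas
    esak = k_air * backscatter_factor(kvp)
    esak_at_skin = esak * (ref_distance_cm / (sid_cm - thickness_cm)) ** 2
    return muen_ratio * esak_at_skin

# Example: a hypothetical AP abdomen technique read from DICOM header metadata.
print(f"ESD ~ {entrance_skin_dose(kvp=75, mas=4.0, sid_cm=100.0, thickness_cm=18.0):.2f} mGy")
```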

  14. Foundation Investigation for Ground Based Radar Project-Kwajalein Island, Marshall Islands

    Science.gov (United States)

    1990-04-01

    Miscellaneous Paper GL-90-5: Foundation Investigation for Ground Based Radar Project -- Kwajalein Island, Marshall Islands, by Yule, Donald E. Results of the foundation investigation for the Ground Based Radar Project, Kwajalein Island, Marshall Islands, are presented. Geophysical tests comprised of surface refraction...

  15. Event metadata records as a testbed for scalable data mining

    International Nuclear Information System (INIS)

    Gemmeren, P van; Malon, D

    2010-01-01

    At a data rate of 200 hertz, event metadata records ('TAGs,' in ATLAS parlance) provide fertile grounds for development and evaluation of tools for scalable data mining. It is easy, of course, to apply HEP-specific selection or classification rules to event records and to label such an exercise 'data mining,' but our interest is different. Advanced statistical methods and tools such as classification, association rule mining, and cluster analysis are common outside the high energy physics community. These tools can prove useful, not for discovery physics, but for learning about our data, our detector, and our software. A fixed and relatively simple schema makes TAG export to other storage technologies such as HDF5 straightforward. This simplifies the task of exploiting very-large-scale parallel platforms such as Argonne National Laboratory's BlueGene/P, currently the largest supercomputer in the world for open science, in the development of scalable tools for data mining. Using a domain-neutral scientific data format may also enable us to take advantage of existing data mining components from other communities. There is, further, a substantial literature on the topic of one-pass algorithms and stream mining techniques, and such tools may be inserted naturally at various points in the event data processing and distribution chain. This paper describes early experience with event metadata records from ATLAS simulation and commissioning as a testbed for scalable data mining tool development and evaluation.
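
    Because the TAG schema is fixed and flat, exporting records to HDF5 and running one-pass (streaming) statistics over them is straightforward. The field names, values, and h5py usage below are a generic sketch, not ATLAS code:

```python
import numpy as np
import h5py

# Hypothetical flat TAG-like schema: one fixed-width record per event.
tag_dtype = np.dtype([("run", "i4"), ("event", "i8"),
                      ("n_jets", "i2"), ("missing_et", "f4")])

def export_tags(records, path):
    """Write fixed-schema event metadata to HDF5 so domain-neutral tools
    (parallel platforms, generic data-mining libraries) can read it."""
    with h5py.File(path, "w") as f:
        f.create_dataset("tags", data=np.asarray(records, dtype=tag_dtype))

def one_pass_missing_et_stats(stream):
    """One-pass mean and standard deviation (Welford's method), the kind of
    streaming computation that can sit anywhere in the processing chain."""
    n, mean, m2 = 0, 0.0, 0.0
    for rec in stream:
        n += 1
        delta = rec["missing_et"] - mean
        mean += delta / n
        m2 += delta * (rec["missing_et"] - mean)
    std = (m2 / (n - 1)) ** 0.5 if n > 1 else 0.0
    return mean, std

events = [(1, i, i % 5, 20.0 + (i % 7)) for i in range(1000)]
export_tags(events, "tags.h5")
print(one_pass_missing_et_stats(np.asarray(events, dtype=tag_dtype)))
```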

  16. Social Web Content Enhancement in a Distance Learning Environment: Intelligent Metadata Generation for Resources

    Science.gov (United States)

    García-Floriano, Andrés; Ferreira-Santiago, Angel; Yáñez-Márquez, Cornelio; Camacho-Nieto, Oscar; Aldape-Pérez, Mario; Villuendas-Rey, Yenny

    2017-01-01

    Social networking potentially offers improved distance learning environments by enabling the exchange of resources between learners. The existence of properly classified content results in an enhanced distance learning experience in which appropriate materials can be retrieved efficiently; however, for this to happen, metadata needs to be present.…

  17. Simplifying the Reuse and Interoperability of Geoscience Data Sets and Models with Semantic Metadata that is Human-Readable and Machine-actionable

    Science.gov (United States)

    Peckham, S. D.

    2017-12-01

    Standardized, deep descriptions of digital resources (e.g. data sets, computational models, software tools and publications) make it possible to develop user-friendly software systems that assist scientists with the discovery and appropriate use of these resources. Semantic metadata makes it possible for machines to take actions on behalf of humans, such as automatically identifying the resources needed to solve a given problem, retrieving them and then automatically connecting them (despite their heterogeneity) into a functioning workflow. Standardized model metadata also helps model users to understand the important details that underpin computational models and to compare the capabilities of different models. These details include simplifying assumptions on the physics, governing equations and the numerical methods used to solve them, discretization of space (the grid) and time (the time-stepping scheme), state variables (input or output), and model configuration parameters. This kind of metadata provides a "deep description" of a computational model that goes well beyond other types of metadata (e.g. author, purpose, scientific domain, programming language, digital rights, provenance, execution) and captures the science that underpins a model. A carefully constructed, unambiguous, and rules-based schema to address this problem, called the Geoscience Standard Names ontology, will be presented that utilizes Semantic Web best practices and technologies. It has also been designed to work across science domains and to be readable by both humans and machines.
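
    A toy example of what "machine-actionable" means in this context: with standardized variable names and units in the metadata, a program can decide automatically whether a data set satisfies a model's inputs. The names below imitate the long, standardized style of Geoscience Standard Names but are invented for this sketch:

```python
# Hypothetical deep metadata for one data set and one model.
dataset_meta = {
    "title": "Hypothetical daily precipitation grid",
    "variables": {"atmosphere_water__precipitation_volume_flux": "mm d-1"},
    "grid": {"type": "uniform_rectilinear", "spacing_m": 1000},
}

model_meta = {
    "title": "Hypothetical runoff model",
    "inputs": {"atmosphere_water__precipitation_volume_flux": "mm d-1",
               "land_surface__elevation": "m"},
    "time_stepping": {"scheme": "explicit", "dt_s": 3600},
}

def unmet_inputs(model, datasets):
    """Model input variables that no available data set provides with matching
    name and units -- the gap a workflow assembler would then try to fill."""
    provided = {(name, unit)
                for ds in datasets
                for name, unit in ds["variables"].items()}
    return [name for name, unit in model["inputs"].items()
            if (name, unit) not in provided]

print("unmet inputs:", unmet_inputs(model_meta, [dataset_meta]))
# -> ['land_surface__elevation'], so a DEM still needs to be located.
```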

  18. OntoCheck: verifying ontology naming conventions and metadata completeness in Protégé 4.

    Science.gov (United States)

    Schober, Daniel; Tudose, Ilinca; Svatek, Vojtech; Boeker, Martin

    2012-09-21

    Although policy providers have outlined minimal metadata guidelines and naming conventions, ontologies of today still display inter- and intra-ontology heterogeneities in class labelling schemes and metadata completeness. This fact is at least partially due to missing or inappropriate tools. Software support can ease this situation and contribute to overall ontology consistency and quality by helping to enforce such conventions. We provide a plugin for the Protégé Ontology editor to allow for easy checks on compliance with ontology naming conventions and metadata completeness, as well as curation in case of found violations. In a requirement analysis, derived from a prior standardization approach carried out within the OBO Foundry, we investigate the needed capabilities for software tools to check, curate and maintain class naming conventions. A Protégé tab plugin was implemented accordingly using the Protégé 4.1 libraries. The plugin was tested on six different ontologies. Based on these test results, the plugin was refined, including through the integration of new functionality. The new Protégé plugin, OntoCheck, allows for ontology tests to be carried out on OWL ontologies. In particular, the OntoCheck plugin helps to clean up an ontology with regard to lexical heterogeneity, i.e. enforcing naming conventions and metadata completeness, meeting most of the requirements outlined for such a tool. Found test violations can be corrected to foster consistency in entity naming and meta-annotation within an artefact. Once specified, check constraints like name patterns can be stored and exchanged for later re-use. Here we describe a first version of the software, illustrate its capabilities and use within running ontology development efforts and briefly outline improvements resulting from its application. Further, we discuss OntoCheck's capabilities in the context of related tools and highlight potential future expansions. The OntoCheck plugin facilitates
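
    OntoCheck itself is a Protégé (Java) plugin; purely to illustrate the kind of lexical check it automates, the sketch below uses rdflib to flag OWL classes that lack an rdfs:label or whose label violates a naming convention (the file name and label pattern are hypothetical):

```python
import re
from rdflib import Graph, RDF, RDFS, OWL

# Hypothetical convention: labels are lowercase words separated by spaces or hyphens.
LABEL_PATTERN = re.compile(r"^[a-z][a-z0-9 -]*$")

def check_ontology(path):
    """Return (class, problem) pairs for missing labels or convention violations."""
    g = Graph()
    g.parse(path, format="xml")        # assuming an RDF/XML-serialized OWL file
    violations = []
    for cls in g.subjects(RDF.type, OWL.Class):
        labels = [str(label) for label in g.objects(cls, RDFS.label)]
        if not labels:
            violations.append((cls, "missing rdfs:label"))
        elif not any(LABEL_PATTERN.match(label) for label in labels):
            violations.append((cls, f"label(s) violate convention: {labels}"))
    return violations

for cls, problem in check_ontology("example_ontology.owl"):
    print(cls, "->", problem)
```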

  19. A Metadata based Knowledge Discovery Methodology for Seeding Translational Research.

    Science.gov (United States)

    Kothari, Cartik R; Payne, Philip R O

    2015-01-01

    In this paper, we present a semantic, metadata based knowledge discovery methodology for identifying teams of researchers from diverse backgrounds who can collaborate on interdisciplinary research projects: projects in areas that have been identified as high-impact areas at The Ohio State University. This methodology involves the semantic annotation of keywords and the postulation of semantic metrics to improve the efficiency of the path exploration algorithm as well as to rank the results. Results indicate that our methodology can discover groups of experts from diverse areas who can collaborate on translational research projects.
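
    A much-simplified sketch of the underlying idea, not the authors' algorithm or their semantic metrics: treat researchers and their annotated keywords as a graph and rank candidate collaborators for a topic by path length:

```python
import networkx as nx

# Hypothetical researcher -> keyword annotations; in the real system these come
# from semantically annotated publication and profile metadata.
annotations = {
    "Researcher A": ["machine learning", "electronic health records"],
    "Researcher B": ["genomics", "biostatistics"],
    "Researcher C": ["electronic health records", "clinical trials", "biostatistics"],
    "Researcher D": ["materials science"],
}

G = nx.Graph()
for person, keywords in annotations.items():
    for kw in keywords:
        G.add_edge(person, kw)

def candidate_team(topic, max_hops=4):
    """Rank researchers by shortest-path distance from a topic keyword; distance
    stands in for the semantic closeness metrics used to rank real results."""
    lengths = nx.single_source_shortest_path_length(G, topic, cutoff=max_hops)
    return sorted((dist, person) for person, dist in lengths.items()
                  if person in annotations)

print(candidate_team("electronic health records"))
# -> A and C at distance 1, B at distance 3 (via C); D is not reachable.
```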

  20. Potential climatic refugia in semi-arid, temperate mountains: plant and arthropod assemblages associated with rock glaciers, talus slopes, and their forefield wetlands, Sierra Nevada, California, USA

    Science.gov (United States)

    Constance I. Millar; Robert D. Westfall; Angela Evenden; Jeffrey G. Holmquist; Jutta Schmidt-Gengenbach; Rebecca S. Franklin; Jan Nachlinger; Diane L. Delany

    2015-01-01

    Unique thermal and hydrologic regimes of rock-glacier and periglacial talus environments support little-studied mountain ecosystems. We report the first studies of vascular plant and arthropod diversity for these habitats in the central Sierra Nevada, California, USA. Surfaces of active rock glaciers develop scattered islands of soil that provide habitat for vegetation...