WorldWideScience

Sample records for netcdf big-endian little-endian

  1. 106-17 Telemetry Standards Recorder Data Packet Format Standard Chapter 11

    Science.gov (United States)

    2017-07-01

    Excerpt from the document's list of figures: Format 0 MPEG-2/H.264 Video Frame Format, 16-Bit Little-Endian Aligned; Figure 11-49, Format 0 MPEG-2/H.264 Video Frame Format, 16-Bit Big-Endian (Native) Aligned; Figure 11-60, Image Data Intra-Packet Header, Format 0; Figure 11-61, Still Imagery Packet Channel...

  2. RSS MONTHLY 1-DEG MERGED WIND CLIMATOLOGY NETCDF V7

    Data.gov (United States)

    National Aeronautics and Space Administration — The RSS Monthly 1-deg Merged Wind Climatology netCDF dataset provides one degree gridded data for the monthly means of wind speed and wind direction, a 20 year...

  3. OGC NetCDF specifications: Towards a unified Interface for Earth Observation data in the Geospatial Information domain

    Science.gov (United States)

    Nativi, S.; Domenico, B.

    2016-12-01

    The purpose of the OGC netCDF Standardization Working Group (SWG) is to further extend the existing netCDF standard with extension modules for additional data models, encodings, and conventions. The scope is to use netCDF as a unified model and interface for encoding and accessing multidisciplinary Geosciences data. This has facilitated interoperability across the diverse Geoscience disciplines in the geospatial information area. The OGC netCDF SWG has developed a primer document to provide an overview of the current OGC netCDF standards suite and describe the possible extensions. These extensions have been recognized to fill the gap between the netCDF Community (e.g. the Climate Change, Atmospheric and Oceanographic communities) and the Geospatial Information Community (e.g. GIS, Geo-Web, etc.). This is pursued by supporting modeling and encoding of digital geospatial information representing space/time-varying phenomena. The OGC netCDF SWG has recently recognized a set of useful specifications (e.g. semantics, conventions, and encodings) to be specified for improving interoperability among the systems using netCDF technology. They address important requirements coming from the netCDF Community and consider the present geospatial information landscape, i.e. ISO standards, CF conventions, other OGC specifications, the W3C specifications for spatial data on the Web, etc. The main netCDF developments and related challenges considered by the presentation are: (Discovery) Metadata conventions; Advanced «Reference» conventions; Earth Observation Conventions; Semi-structured Encodings.

  4. RSS SSM/I OCEAN PRODUCT GRIDS DAILY FROM DMSP F11 NETCDF V7

    Data.gov (United States)

    National Aeronautics and Space Administration — The RSS SSM/I Ocean Product Grids Daily from DMSP F11 netCDF dataset is part of the collection of Special Sensor Microwave/Imager (SSM/I) and Special Sensor...

  5. RSS SSMIS OCEAN PRODUCT GRIDS WEEKLY AVERAGE FROM DMSP F17 NETCDF V7

    Data.gov (United States)

    National Aeronautics and Space Administration — The RSS SSMIS Ocean Product Grids Weekly Average from DMSP F17 netCDF dataset is part of the collection of Special Sensor Microwave/Imager (SSM/I) and Special Sensor...

  6. RSS SSMIS OCEAN PRODUCT GRIDS DAILY FROM DMSP F17 NETCDF V7

    Data.gov (United States)

    National Aeronautics and Space Administration — The RSS SSMIS Ocean Product Grids Daily from DMSP F17 netCDF dataset is part of the collection of Special Sensor Microwave/Imager (SSM/I) and Special Sensor...

  7. RSS SSMIS OCEAN PRODUCT GRIDS 3-DAY AVERAGE FROM DMSP F16 NETCDF V7

    Data.gov (United States)

    National Aeronautics and Space Administration — The RSS SSMIS Ocean Product Grids 3-Day Average from DMSP F16 netCDF dataset is part of the collection of Special Sensor Microwave/Imager (SSM/I) and Special Sensor...

  8. RSS SSMIS OCEAN PRODUCT GRIDS MONTHLY AVERAGE FROM DMSP F17 NETCDF V7

    Data.gov (United States)

    National Aeronautics and Space Administration — The RSS SSMIS Ocean Product Grids Monthly Average from DMSP F17 netCDF dataset is part of the collection of Special Sensor Microwave/Imager (SSM/I) and Special...

  9. CRED 20m Gridded bathymetry of Nihoa Island, Hawaii, USA (NetCDF format)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Gridded bathymetry (20m) of the shelf and slope environments of Nihoa Island, Hawaii, USA. The netCDF includes multibeam bathymetry from the Simrad EM120, Simrad...

  10. RSS SSM/I OCEAN PRODUCT GRIDS MONTHLY AVERAGE FROM DMSP F15 NETCDF V7

    Data.gov (United States)

    National Aeronautics and Space Administration — The RSS SSM/I Ocean Product Grids Monthly Average from DMSP F15 netCDF dataset is part of the collection of Special Sensor Microwave/Imager (SSM/I) and Special...

  11. RSS SSM/I OCEAN PRODUCT GRIDS MONTHLY AVERAGE FROM DMSP F13 NETCDF V7

    Data.gov (United States)

    National Aeronautics and Space Administration — The RSS SSM/I Ocean Product Grids Monthly Average from DMSP F13 netCDF dataset is part of the collection of Special Sensor Microwave/Imager (SSM/I) and Special...

  12. RSS SSMIS OCEAN PRODUCT GRIDS WEEKLY AVERAGE FROM DMSP F16 NETCDF V7

    Data.gov (United States)

    National Aeronautics and Space Administration — The RSS SSMIS Ocean Product Grids Weekly Average from DMSP F16 netCDF dataset is part of the collection of Special Sensor Microwave/Imager (SSM/I) and Special Sensor...

  13. RSS SSM/I OCEAN PRODUCT GRIDS DAILY FROM DMSP F14 NETCDF V7

    Data.gov (United States)

    National Aeronautics and Space Administration — The RSS SSM/I Ocean Product Grids Daily from DMSP F14 netCDF dataset is part of the collection of Special Sensor Microwave/Imager (SSM/I) and Special Sensor...

  14. RSS SSM/I OCEAN PRODUCT GRIDS MONTHLY AVERAGE FROM DMSP F14 NETCDF V7

    Data.gov (United States)

    National Aeronautics and Space Administration — The RSS SSM/I Ocean Product Grids Monthly Average from DMSP F14 netCDF dataset is part of the collection of Special Sensor Microwave/Imager (SSM/I) and Special...

  15. RSS SSM/I OCEAN PRODUCT GRIDS DAILY FROM DMSP F15 NETCDF V7

    Data.gov (United States)

    National Aeronautics and Space Administration — The RSS SSM/I Ocean Product Grids Daily from DMSP F15 netCDF dataset is part of the collection of Special Sensor Microwave/Imager (SSM/I) and Special Sensor...

  16. RSS SSM/I OCEAN PRODUCT GRIDS DAILY FROM DMSP F13 NETCDF V7

    Data.gov (United States)

    National Aeronautics and Space Administration — The RSS SSM/I Ocean Product Grids Daily from DMSP F13 netCDF dataset is part of the collection of Special Sensor Microwave/Imager (SSM/I) and Special Sensor...

  17. RSS SSM/I OCEAN PRODUCT GRIDS MONTHLY AVERAGE FROM DMSP F11 NETCDF V7

    Data.gov (United States)

    National Aeronautics and Space Administration — The RSS SSM/I Ocean Product Grids Monthly Average from DMSP F11 netCDF dataset is part of the collection of Special Sensor Microwave/Imager (SSM/I) and Special...

  18. RSS SSM/I OCEAN PRODUCT GRIDS MONTHLY AVERAGE FROM DMSP F10 NETCDF V7

    Data.gov (United States)

    National Aeronautics and Space Administration — The RSS SSM/I Ocean Product Grids Monthly Average from DMSP F10 netCDF dataset is part of the collection of Special Sensor Microwave/Imager (SSM/I) and Special...

  19. RSS SSM/I OCEAN PRODUCT GRIDS DAILY FROM DMSP F10 NETCDF V7

    Data.gov (United States)

    National Aeronautics and Space Administration — The RSS SSM/I Ocean Product Grids Daily from DMSP F10 netCDF dataset is part of the collection of Special Sensor Microwave/Imager (SSM/I) and Special Sensor...

  20. RSS SSMIS OCEAN PRODUCT GRIDS DAILY FROM DMSP F16 NETCDF V7

    Data.gov (United States)

    National Aeronautics and Space Administration — The RSS SSMIS Ocean Product Grids Daily from DMSP F16 netCDF dataset is part of the collection of Special Sensor Microwave/Imager (SSM/I) and Special Sensor...

  21. RSS SSMIS OCEAN PRODUCT GRIDS MONTHLY AVERAGE FROM DMSP F16 NETCDF V7

    Data.gov (United States)

    National Aeronautics and Space Administration — The RSS SSMIS Ocean Product Grids Monthly Average from DMSP F16 netCDF dataset is part of the collection of Special Sensor Microwave/Imager (SSM/I) and Special...

  22. RSS SSMIS OCEAN PRODUCT GRIDS 3-DAY AVERAGE FROM DMSP F17 NETCDF V7

    Data.gov (United States)

    National Aeronautics and Space Administration — The RSS SSMIS Ocean Product Grids 3-Day Average from DMSP F17 netCDF dataset is part of the collection of Special Sensor Microwave/Imager (SSM/I) and Special Sensor...

  23. RSS MONTHLY 1-DEG MICROWAVE TOTAL PRECIPITABLE WATER NETCDF V7R01

    Data.gov (United States)

    National Aeronautics and Space Administration — The Remote Sensing Systems (RSS) Monthly 1-degree Microwave Total Precipitable Water (TPW) netCDF dataset V7R01 provides global total columnar water vapor values, or...

  24. CRED 5 m Gridded bathymetry of Brooks Banks, Northwestern Hawaiian Islands, USA (NetCDF format)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Gridded bathymetry (5m) of the shelf and slope environments of Brooks Banks, Hawaii, USA. The netCDF grid includes multibeam bathymetry from the Simrad EM300, Simrad...

  25. CRED 20m Gridded bathymetry of Necker Island, Northwestern Hawaiian Islands, USA (NetCDF format)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Gridded bathymetry of the shelf and slope environments of Necker Island, Northwestern Hawaiian Islands, Hawaii, USA. This netCDF includes multibeam bathymetry from...

  26. CRED 20 m Gridded bathymetry of Gardner Pinnacles, Northwestern Hawaiian Islands, USA (NetCDF format)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Gridded bathymetry (20m) of the shelf and slope environments of Gardner Pinnacles, Northwestern Hawaiian Islands, Hawaii, USA. This netCDF includes multibeam...

  27. NCWin — A Component Object Model (COM) for processing and visualizing NetCDF data

    Science.gov (United States)

    Liu, Jinxun; Chen, J.M.; Price, D.T.; Liu, S.

    2005-01-01

    NetCDF (Network Common Data Form) is a data sharing protocol and library that is commonly used in large-scale atmospheric and environmental data archiving and modeling. The NetCDF tool described here, named NCWin and coded with Borland C++ Builder, was built as a standard executable as well as a COM (component object model) for the Microsoft Windows environment. COM is a powerful technology that enhances the reuse of applications (as components). Environmental model developers from different modeling environments, such as Python, JAVA, VISUAL FORTRAN, VISUAL BASIC, VISUAL C++, and DELPHI, can reuse NCWin in their models to read, write and visualize NetCDF data. Some Windows applications, such as ArcGIS and Microsoft PowerPoint, can also call NCWin within the application. NCWin has three major components: 1) The data conversion part is designed to convert binary raw data to and from NetCDF data. It can process six data types (unsigned char, signed char, short, int, float, double) and three spatial data formats (BIP, BIL, BSQ); 2) The visualization part is designed for displaying grid map series (playing forward or backward) with a simple map legend, and displaying temporal trend curves for data on individual map pixels; and 3) The modeling interface is designed for environmental model development, by which a set of integrated NetCDF functions is provided for processing NetCDF data. To demonstrate that NCWin can easily extend the functions of some current GIS software and Office applications, examples of calling NCWin within ArcGIS and MS PowerPoint for showing NetCDF map animations are given.
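
    For readers who want to reproduce the first component, the sketch below shows a raw-binary-to-netCDF conversion in Python (numpy + netCDF4) rather than the authors' Borland C++ tool; the file name, band count and grid dimensions are hypothetical placeholders, and only the BSQ (band-sequential) layout is handled.

```python
# Minimal sketch (not NCWin itself): convert a raw BSQ float32 raster
# to netCDF using numpy + netCDF4. File name, band count, and grid
# dimensions are hypothetical placeholders.
import numpy as np
from netCDF4 import Dataset

nbands, nrows, ncols = 3, 180, 360            # assumed layout of the raw file

# BSQ = band-sequential: all of band 1, then all of band 2, ...
raw = np.fromfile("raster.bsq", dtype="<f4")  # '<f4' = little-endian float32
data = raw.reshape(nbands, nrows, ncols)

with Dataset("raster.nc", "w") as nc:
    nc.createDimension("band", nbands)
    nc.createDimension("y", nrows)
    nc.createDimension("x", ncols)
    var = nc.createVariable("raster", "f4", ("band", "y", "x"))
    var[:] = data
    var.long_name = "raster converted from BSQ raw binary"
```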

  28. Linking netCDF Data with the Semantic Web - Enhancing Data Discovery Across Domains

    Science.gov (United States)

    Biard, J. C.; Yu, J.; Hedley, M.; Cox, S. J. D.; Leadbetter, A.; Car, N. J.; Druken, K. A.; Nativi, S.; Davis, E.

    2016-12-01

    Geophysical data communities are publishing large quantities of data across a wide variety of scientific domains which overlap more and more. Whilst netCDF is a common format for many of these communities, it is only one of a large number of data storage and transfer formats. One of the major challenges ahead is finding ways to leverage these diverse data sets to advance our understanding of complex problems. We describe a methodology for incorporating Resource Description Framework (RDF) triples into netCDF files called netCDF-LD (netCDF Linked Data). NetCDF-LD explicitly connects the contents of netCDF files - both data and metadata - with external web-based resources, including vocabularies, standards definitions, and data collections, and through them, a whole host of related information. This approach also preserves and enhances the self-describing essence of the netCDF format and its metadata, whilst addressing the challenge of integrating various conventions into files. We present a case study illustrating how reasoning over RDF graphs can empower researchers to discover datasets across domain boundaries.
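
    The core netCDF-LD move described above, storing a resolvable URI in place of a plain-text term, can be illustrated with a few lines of Python. This is a hedged sketch, not the ncskos tool itself: the file name, variable name, attribute name and URI are all made-up examples.

```python
# Illustrative sketch of the netCDF-LD idea: attach a resolvable URI to
# a variable instead of relying on free text. The attribute name and
# URI below are examples, not a normative netCDF-LD encoding.
from netCDF4 import Dataset

with Dataset("obs.nc", "a") as nc:                    # hypothetical file
    temp = nc.variables["sea_water_temperature"]      # hypothetical variable
    # Point at a governing vocabulary concept instead of plain text:
    temp.setncattr("skos__exactMatch",
                   "http://example.org/vocab/sea_water_temperature")
```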

  29. RSS SSM/I OCEAN PRODUCT GRIDS WEEKLY AVERAGE FROM DMSP F10 NETCDF V7

    Data.gov (United States)

    National Aeronautics and Space Administration — The RSS SSM/I Ocean Product Grids Weekly Average from DMSP F10 netCDF dataset is part of the collection of Special Sensor Microwave/Imager (SSM/I) and Special Sensor...

  30. RSS SSM/I OCEAN PRODUCT GRIDS WEEKLY AVERAGE FROM DMSP F15 NETCDF V7

    Data.gov (United States)

    National Aeronautics and Space Administration — The RSS SSM/I Ocean Product Grids Weekly Average from DMSP F15 netCDF dataset is part of the collection of Special Sensor Microwave/Imager (SSM/I) and Special Sensor...

  31. RSS SSM/I OCEAN PRODUCT GRIDS WEEKLY AVERAGE FROM DMSP F13 NETCDF V7

    Data.gov (United States)

    National Aeronautics and Space Administration — The RSS SSM/I Ocean Product Grids Weekly Average from DMSP F13 netCDF dataset is part of the collection of Special Sensor Microwave/Imager (SSM/I) and Special Sensor...

  32. RSS SSM/I OCEAN PRODUCT GRIDS DAILY FROM DMSP F8 NETCDF V7

    Data.gov (United States)

    National Aeronautics and Space Administration — The RSS SSM/I Ocean Product Grids Daily from DMSP F8 netCDF dataset is part of the collection of Special Sensor Microwave/Imager (SSM/I) and Special Sensor Microwave...

  33. RSS SSM/I OCEAN PRODUCT GRIDS MONTHLY AVERAGE FROM DMSP F8 NETCDF V7

    Data.gov (United States)

    National Aeronautics and Space Administration — The RSS SSM/I Ocean Product Grids Monthly Average from DMSP F8 netCDF dataset is part of the collection of Special Sensor Microwave/Imager (SSM/I) and Special Sensor...

  34. RSS SSM/I OCEAN PRODUCT GRIDS WEEKLY AVERAGE FROM DMSP F8 NETCDF V7

    Data.gov (United States)

    National Aeronautics and Space Administration — The RSS SSM/I Ocean Product Grids Weekly Average from DMSP F8 netCDF dataset is part of the collection of Special Sensor Microwave/Imager (SSM/I) and Special Sensor...

  35. RSS SSM/I OCEAN PRODUCT GRIDS WEEKLY AVERAGE FROM DMSP F14 NETCDF V7

    Data.gov (United States)

    National Aeronautics and Space Administration — The RSS SSM/I Ocean Product Grids Weekly Average from DMSP F14 netCDF dataset is part of the collection of Special Sensor Microwave/Imager (SSM/I) and Special Sensor...

  36. RSS SSM/I OCEAN PRODUCT GRIDS WEEKLY AVERAGE FROM DMSP F11 NETCDF V7

    Data.gov (United States)

    National Aeronautics and Space Administration — The RSS SSM/I Ocean Product Grids Weekly Average from DMSP F11 netCDF dataset is part of the collection of Special Sensor Microwave/Imager (SSM/I) and Special Sensor...

  37. GPM GROUND VALIDATION NOAA S-BAND PROFILER RAW DATA NETCDF FORMAT MC3E V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The S-band Profiler Raw dataset was saved in two data formats: netCDF and a proprietary Vaisala SPC format. The numeric values in both formats are exactly the same...

  38. MISR Level 3 Component Global Land product in netCDF format covering a day V004

    Data.gov (United States)

    National Aeronautics and Space Administration — The MISR Level 3 Component Global Land Product in netCDF contains a daily statistical summary of directional hemispherical reflectance (DHR), photosynthetically...

  39. MISR Level 3 Component Global Land product in netCDF format covering a year V004

    Data.gov (United States)

    National Aeronautics and Space Administration — The MISR Level 3 Yearly Component Global Land Product in netCDF contains a yearly statistical summary of directional hemispherical reflectance (DHR),...

  40. GPM GROUND VALIDATION NOAA S-BAND PROFILER RAW DATA NETCDF FORMAT MC3E V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The GPM Ground Validation NOAA S-Band Profiler Raw Data NetCDF Format MC3E dataset was gathered during the Midlatitude Continental Convective Clouds Experiment...

  41. MISR Level 3 Component Global Land product in netCDF format covering a month V004

    Data.gov (United States)

    National Aeronautics and Space Administration — The MISR Level 3 Monthly Component Global Land Product in netCDF contains a monthly statistical summary of directional hemispherical reflectance (DHR),...

  42. MISR Level 3 FIRSTLOOK Global Albedo product in netCDF format covering a day V002

    Data.gov (United States)

    National Aeronautics and Space Administration — The MISR Level 3 FIRSTLOOK Component Global Albedo Product in netCDF covering a day contains a statistical summary of column albedo 555 nanometer optical depth, and...

  43. MISR Level 3 FIRSTLOOK Global Albedo product in netCDF format covering a month V002

    Data.gov (United States)

    National Aeronautics and Space Administration — The MISR Level 3 FIRSTLOOK Component Global Albedo Product in netCDF format covering a month contains a statistical summary of column albedo 555 nanometer optical...

  44. MISR Level 3 FIRSTLOOK Global Aerosol product in netCDF format covering a day V002

    Data.gov (United States)

    National Aeronautics and Space Administration — The MISR Level 3 FIRSTLOOK Global Aerosol Product in netCDF format covering a day contains a statistical summary of column aerosol 555 nanometer optical depth, and a...

  45. Serving Real-Time Point Observation Data in netCDF using Climate and Forecasting Discrete Sampling Geometry Conventions

    Science.gov (United States)

    Ward-Garrison, C.; May, R.; Davis, E.; Arms, S. C.

    2016-12-01

    NetCDF is a set of software libraries and self-describing, machine-independent data formats that support the creation, access, and sharing of array-oriented scientific data. The Climate and Forecasting (CF) metadata conventions for netCDF foster the ability to work with netCDF files in general and useful ways. These conventions include metadata attributes for physical units, standard names, and spatial coordinate systems. While these conventions have been successful in easing work with netCDF-formatted output from climate and forecast models, their use for point-based observation data has been less so. Unidata has prototyped using the discrete sampling geometry (DSG) CF conventions to serve, using the THREDDS Data Server, the real-time point observation data flowing across the Internet Data Distribution (IDD). These data originate as text-format reports for individual stations (e.g. METAR surface data or TEMP upper-air data) and are converted and stored in netCDF files in real time. This work discusses the experiences and challenges of using the current CF DSG conventions for storing such real-time data. We also test how parts of netCDF's extended data model can address these challenges, in order to inform decisions for a future version of CF (CF 2.0) that would take advantage of features of the netCDF enhanced data model.
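
    A minimal sketch of what such a CF-DSG file can look like, using the Python netCDF4 library and the orthogonal multidimensional-array representation of the timeSeries feature type; the station IDs, times and values are made-up placeholders.

```python
# Minimal sketch of a CF Discrete Sampling Geometry "timeSeries" file
# (orthogonal multidimensional array representation). Station IDs,
# times, and values are made-up placeholders.
import numpy as np
from netCDF4 import Dataset

with Dataset("surface_obs.nc", "w") as nc:      # NETCDF4 format by default
    nc.Conventions = "CF-1.6"
    nc.featureType = "timeSeries"               # CF-DSG global attribute

    nc.createDimension("station", 2)
    nc.createDimension("time", None)            # unlimited: grows in real time

    sid = nc.createVariable("station_id", str, ("station",))
    sid.cf_role = "timeseries_id"               # marks the instance identifier
    sid[0], sid[1] = "KDEN", "KBOS"             # hypothetical METAR stations

    t = nc.createVariable("time", "f8", ("time",))
    t.units = "hours since 2016-01-01 00:00:00"
    t.standard_name = "time"
    t[0:2] = [0.0, 1.0]

    temp = nc.createVariable("air_temperature", "f4", ("station", "time"))
    temp.units = "K"
    temp.coordinates = "station_id"
    temp[:, 0:2] = np.array([[271.3, 272.1], [268.9, 269.4]], dtype="f4")
```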

  46. Implementing Network Common Data Form (netCDF) for the 3DWF Model

    Science.gov (United States)

    2016-02-01

    Excerpts: The executable code file in MATLAB is listed in Appendix A-2. From the menu page of the 3DWF GUI currently being redesigned, as shown in Fig. 3, the... "WRF result" from the top portion of the 3DWF GUI menu page. The "WRF result" option will activate and run the MATLAB executable file named... initialization will also be documented. Subject terms: 3DWF, netCDF, GUI, WRF.

  47. Developing a Hadoop-based Middleware for Handling Multi-dimensional NetCDF

    Science.gov (United States)

    Li, Z.; Yang, C. P.; Schnase, J. L.; Duffy, D.; Lee, T. J.

    2014-12-01

    Climate observations and model simulations are collecting and generating vast amounts of climate data, and these data are ever-increasing and accumulating at a rapid pace. Effectively managing and analyzing these data is essential for climate change studies. Hadoop, a distributed storage and processing framework for large data sets, has attracted increasing attention for dealing with the Big Data challenge. The maturity of the Infrastructure as a Service (IaaS) model of cloud computing further accelerates the adoption of Hadoop for solving Big Data problems. However, Hadoop is designed to process unstructured data such as texts, documents and web pages, and cannot effectively handle scientific data formats such as array-based NetCDF files and other binary formats. In this paper, we propose to build a Hadoop-based middleware for transparently handling big NetCDF data by 1) designing a distributed climate data storage mechanism based on a POSIX-enabled parallel file system to enable parallel big data processing with MapReduce, as well as to support data access by other systems; 2) modifying the Hadoop framework to transparently process NetCDF data in parallel without sequencing the data, converting it into other file formats, or loading it to HDFS; and 3) seamlessly integrating Hadoop, cloud computing and climate data in a highly scalable and fault-tolerant framework.
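
    A detail worth noting for anyone building such middleware: the netCDF flavor of a file can be sniffed from its magic bytes, and classic netCDF stores its values big-endian on disk regardless of the host CPU, while netCDF-4 rides on HDF5, which records byte order per dataset. A small Python sketch (the file name is hypothetical):

```python
# Sketch: distinguish classic netCDF from netCDF-4/HDF5 by magic bytes.
# Classic netCDF (CDF-1/2/5) stores values big-endian on disk regardless
# of the host CPU; netCDF-4 rides on HDF5, which records byte order per
# dataset.
def sniff_format(path):
    with open(path, "rb") as f:
        magic = f.read(8)
    if magic[:3] == b"CDF":
        version = {1: "classic", 2: "64-bit offset", 5: "64-bit data"}
        return "netCDF " + version.get(magic[3], "unknown")
    if magic == b"\x89HDF\r\n\x1a\n":       # HDF5 file signature
        return "HDF5 (possibly netCDF-4)"
    return "unknown"

print(sniff_format("sample.nc"))            # hypothetical input file
```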

  48. NetCDF4/HDF5 and Linked Data in the Real World - Enriching Geoscientific Metadata without Bloat

    Science.gov (United States)

    Ip, Alex; Car, Nicholas; Druken, Kelsey; Poudjom-Djomani, Yvette; Butcher, Stirling; Evans, Ben; Wyborn, Lesley

    2017-04-01

    NetCDF4 has become the dominant generic format for many forms of geoscientific data, leveraging (and constraining) the versatile HDF5 container format while providing metadata conventions for interoperability. However, the encapsulation of detailed metadata within each file can lead to metadata "bloat", and difficulty in maintaining consistency where metadata is replicated to multiple locations. Complex conceptual relationships are also difficult to represent in simple key-value netCDF metadata. Linked Data provides a practical mechanism to address these issues by associating the netCDF files and their internal variables with complex metadata stored in Semantic Web vocabularies and ontologies, while complying with and complementing existing metadata conventions. One of the stated objectives of the netCDF4/HDF5 formats is that they should be self-describing: containing metadata sufficient for cataloguing and using the data. However, this objective can be regarded as only partially met where details of conventions and definitions are maintained externally to the data files. For example, one of the most widely used netCDF community standards, the Climate and Forecasting (CF) Metadata Convention, maintains standard vocabularies for a broad range of disciplines across the geosciences, but this metadata is currently neither readily discoverable nor machine-readable. We have previously implemented useful Linked Data and netCDF tooling (ncskos) that associates netCDF files, and individual variables within those files, with concepts in vocabularies formulated using the Simple Knowledge Organization System (SKOS) ontology. NetCDF files contain Uniform Resource Identifier (URI) links to terms represented as SKOS Concepts, rather than plain-text representations of those terms, so we can use simple, standardised web queries to collect and use rich metadata for the terms from any Linked Data-presented SKOS vocabulary. Geoscience Australia (GA) manages a large volume of diverse...

  49. NetCDF based data archiving system applied to ITER Fast Plant System Control prototype

    Energy Technology Data Exchange (ETDEWEB)

    Castro, R., E-mail: rodrigo.castro@visite.es [Asociacion EURATOM/CIEMAT para Fusion, Madrid (Spain); Vega, J. [Asociacion EURATOM/CIEMAT para Fusion, Madrid (Spain); Ruiz, M.; De Arcas, G.; Barrera, E.; Lopez, J.M.; Sanz, D. [Grupo de Investigacion en Instrumentacion y Acustica Aplicada, UPM, Madrid (Spain); Goncalves, B.; Santos, B. [Associacao EURATOM/IST, IPFN - Laboratorio Associado, IST, Lisboa (Portugal); Utzel, N.; Makijarvi, P. [ITER Organization, St. Paul lez Durance Cedex (France)

    2012-12-15

    Highlights: • Implementation of a data archiving solution for a Fast Plant System Controller (FPSC) for ITER CODAC. • Data archiving solution based on the scientific NetCDF-4 file format and Lustre storage clustering. • EPICS control based solution. • Test results and detailed analysis of using NetCDF-4 and clustering technologies for fast acquisition data archiving. - Abstract: EURATOM/CIEMAT and the Technical University of Madrid (UPM) have been involved in the development of a FPSC (Fast Plant System Control) prototype for ITER, based on PXIe (PCI eXtensions for Instrumentation). One of the main focuses of this project has been data acquisition and all the related issues, including scientific data archiving. Additionally, a new data archiving solution has been developed to demonstrate the obtainable performance and possible bottlenecks of scientific data archiving in Fast Plant System Control. The presented system implements a fault tolerant architecture over a Gigabit Ethernet network where FPSC data are reliably archived remotely, while remaining accessible for redistribution, within the duration of a pulse. The storage service is supported by a clustering solution to guarantee scalability, so that FPSC management and configuration may be simplified, and a unique view of all archived data provided. All the involved components have been integrated under EPICS (Experimental Physics and Industrial Control System), implementing in each case the necessary extensions, state machines and configuration process variables. The prototyped solution is based on the NetCDF-4 (Network Common Data Format) file format in order to incorporate important features, such as scientific data model support, huge file size management, platform-independent encoding, and single-writer/multiple-readers concurrency. In this contribution, a complete description of the above mentioned solution...
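
    The single-writer/multiple-readers pattern the abstract mentions rests on a NetCDF-4 unlimited dimension that the writer extends and flushes block by block. A hedged Python sketch of that append loop (names, shapes and block sizes are hypothetical, not the ITER prototype's actual layout):

```python
# Sketch of the single-writer/append pattern such an archiver relies on:
# a NetCDF-4 unlimited dimension lets each acquisition block be flushed
# as it arrives. Names and shapes are hypothetical.
import numpy as np
from netCDF4 import Dataset

with Dataset("pulse_0001.nc", "w", format="NETCDF4") as nc:
    nc.createDimension("sample", None)            # unlimited, grows per block
    sig = nc.createVariable("adc_channel_0", "f4", ("sample",),
                            zlib=True)            # compressed chunks
    written = 0
    for _ in range(10):                           # stand-in for the DAQ loop
        block = np.random.rand(4096).astype("f4") # one acquisition block
        sig[written:written + len(block)] = block
        written += len(block)
        nc.sync()                                 # flush so readers see new data
```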

  50. RSS SSM/I OCEAN PRODUCT GRIDS 3-DAY AVERAGE FROM DMSP F13 NETCDF V7

    Data.gov (United States)

    National Aeronautics and Space Administration — The RSS SSM/I Ocean Product Grids 3-Day Average from DMSP F13 netCDF dataset is part of the collection of Special Sensor Microwave/Imager (SSM/I) and Special Sensor...

  51. RSS SSM/I OCEAN PRODUCT GRIDS 3-DAY AVERAGE FROM DMSP F8 NETCDF V7

    Data.gov (United States)

    National Aeronautics and Space Administration — The RSS SSM/I Ocean Product Grids 3-Day Average from DMSP F8 netCDF dataset is part of the collection of Special Sensor Microwave/Imager (SSM/I) and Special Sensor...

  52. RSS SSM/I OCEAN PRODUCT GRIDS 3-DAY AVERAGE FROM DMSP F10 NETCDF V7

    Data.gov (United States)

    National Aeronautics and Space Administration — The RSS SSM/I Ocean Product Grids 3-Day Average from DMSP F10 netCDF dataset is part of the collection of Special Sensor Microwave/Imager (SSM/I) and Special Sensor...

  53. RSS SSM/I OCEAN PRODUCT GRIDS 3-DAY AVERAGE FROM DMSP F11 NETCDF V7

    Data.gov (United States)

    National Aeronautics and Space Administration — The RSS SSM/I Ocean Product Grids 3-Day Average from DMSP F11 netCDF dataset is part of the collection of Special Sensor Microwave/Imager (SSM/I) and Special Sensor...

  54. RSS SSM/I OCEAN PRODUCT GRIDS 3-DAY AVERAGE FROM DMSP F15 NETCDF V7

    Data.gov (United States)

    National Aeronautics and Space Administration — The RSS SSM/I Ocean Product Grids 3-Day Average from DMSP F15 netCDF dataset is part of the collection of Special Sensor Microwave/Imager (SSM/I) and Special Sensor...

  55. RSS SSM/I OCEAN PRODUCT GRIDS 3-DAY AVERAGE FROM DMSP F14 NETCDF V7

    Data.gov (United States)

    National Aeronautics and Space Administration — The RSS SSM/I Ocean Product Grids 3-Day Average from DMSP F14 netCDF dataset is part of the collection of Special Sensor Microwave/Imager (SSM/I) and Special Sensor...

  56. BINEX as a Format for Near-Real Time GNSS and Other Data Streams

    Science.gov (United States)

    Estey, L.; Mencin, D.

    2008-12-01

    BINEX, for "BINary Exchange", is an open and operational binary format for GNSS data. It has been available as a format option on several different GPS receivers from several manufacturers, starting with Ashtech's microZ in 2000, and has evolved to support GNSS data on Trimble's receivers. The data structures are very compact and are organized in epoch-by-epoch records which do not rely on any prior records for decoding. Typically, only a few hundred bytes per epoch are needed to store the L1 and L2 phase and code pseudoranges (both to 1 mm resolution), CNo measurements (to 0.1 dBHz resolution), loss-of-lock flags, and so on. Ancillary site data, such as meteorological observations, can also be stored as BINEX records. Each BINEX record also identifies whether it is of little-endian or big-endian construction, so that BINEX creation can be optimized by processor type in a GNSS receiver or later construction by computer. Each BINEX record also has a scaled checksum or CRC of 1-16 bytes, dependent on record length. The Plate Boundary Observatory is currently using near-real-time BINEX streams from Trimble NetRS receivers as a means of outputting various ancillary site data. For example, meteorological data, pore pressure, borehole tilt, and so on can be monitored by multiple serial I/O on the NetRS, and these port queries, bundled as BINEX records, are directed to one or more BINEX output streams in addition to the primary GPS data epochs. Users can tap into whichever stream meets their need. In addition, the BINEX records are stored in the NetRS in session files for later retrieval in case of real-time data loss in the transmitted streams.
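
    The per-record endianness flag is the key trick here: the reader inspects the flag and picks the matching byte order for the payload. The Python sketch below shows the general pattern with struct; the flag values and record layout are hypothetical stand-ins, not the actual BINEX sync bytes.

```python
# Generic sketch of the per-record endianness pattern BINEX uses: a
# leading flag says how the payload was written, and the reader picks
# the matching struct byte-order prefix. Flag values and record layout
# here are hypothetical, not the actual BINEX sync bytes.
import struct

LITTLE, BIG = 0x01, 0x02                       # hypothetical flag values

def decode_record(buf):
    order = "<" if buf[0] == LITTLE else ">"   # '<' little-, '>' big-endian
    # Hypothetical payload: uint16 record id, float64 phase, float64 code
    rec_id, phase, code = struct.unpack_from(order + "Hdd", buf, offset=1)
    return rec_id, phase, code

# Round-trip check: pack big-endian, decode with the flag honored.
payload = struct.pack(">Hdd", 7, 12345.001, 23456.002)
print(decode_record(bytes([BIG]) + payload))
```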

  57. netCDF Operators for Rapid Analysis of Measured and Modeled Swath-like Data

    Science.gov (United States)

    Zender, C. S.

    2015-12-01

    Swath-like data (hereafter SLD) are defined by non-rectangular and/or time-varying spatial grids in which one or more coordinates are multi-dimensional. It is often challenging and time-consuming to work with SLD, including all Level 2 satellite-retrieved data, non-rectangular subsets of Level 3 data, and model data on curvilinear grids. Researchers and data centers want user-friendly, fast, and powerful methods to specify, extract, serve, manipulate, and thus analyze, SLD. To meet these needs, large research-oriented agencies and modeling centers such as NASA, DOE, and NOAA increasingly employ the netCDF Operators (NCO), an open-source scientific data analysis software package applicable to netCDF and HDF data. NCO includes extensive, fast, parallelized regridding features to facilitate analysis and intercomparison of SLD and model data. The remote sensing, weather and climate modeling, and analysis communities face similar problems in handling SLD, including how to easily: 1. Specify and mask irregular regions such as ocean basins and political boundaries in SLD (and rectangular) grids. 2. Bin, interpolate, average, or re-map SLD to regular grids. 3. Derive secondary data from given quality levels of SLD. These common tasks require a data extraction and analysis toolkit that is SLD-friendly and, like NCO, familiar in all these communities. With NCO users can: 1. Quickly project SLD onto the most useful regular grids for intercomparison. 2. Access sophisticated statistical and regridding functions that are robust to missing data and allow easy specification of quality control metrics. These capabilities improve interoperability and software reuse and, because they apply to SLD, minimize transmission, storage, and handling of unwanted data. While SLD analysis still poses many challenges compared to regularly gridded, rectangular data, the custom analysis scripts SLD once required are now shorter, more powerful, and user-friendly.
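
    As a concrete flavor of the toolkit, the sketch below drives NCO's ncks from Python to cut a hyperslab out of a swath-like file; it assumes the NCO binaries are installed, and the file and variable names are hypothetical.

```python
# Sketch: driving NCO from Python to subset a swath-like file. Assumes
# the NCO binaries are on PATH; '-v' selects variables and '-d' takes a
# hyperslab as dimension,min,max (decimals mean coordinate values).
# File and variable names are hypothetical.
import subprocess

subprocess.run(
    ["ncks", "-O",                        # -O: overwrite output if present
     "-v", "sea_surface_temperature",     # variable of interest
     "-d", "lat,-30.0,30.0",              # tropical latitude band
     "in_swath.nc", "out_subset.nc"],
    check=True,
)
```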

  58. eWaterCycle visualisation: combining the strength of NetCDF and Web Map Service: ncWMS

    Science.gov (United States)

    Hut, R.; van Meersbergen, M.; Drost, N.; Van De Giesen, N.

    2016-12-01

    As a result of the eWaterCycle global hydrological forecast, we have created Cesium-ncWMS, a web application based on ncWMS and Cesium. ncWMS is a server-side application capable of reading any NetCDF file written using the Climate and Forecasting (CF) conventions and making the data available as a Web Map Service (WMS). ncWMS automatically determines the available variables in a file and creates maps colored according to the map data and a user-selected color scale. Cesium is a Javascript 3D virtual globe library. It uses WebGL for rendering, which makes it very fast, and it is capable of displaying a wide variety of data types such as vectors, 3D models, and 2D maps. The forecast results are automatically uploaded to our web server running ncWMS. In turn, the web application can be used to change the settings for color maps and displayed data. The server uses the settings provided by the web application, together with the data in NetCDF, to provide WMS image tiles, time series data and legend graphics to the Cesium-ncWMS web application. The user can simultaneously zoom in to the very high resolution forecast results anywhere in the world and get time series data for any point on the globe. The Cesium-ncWMS visualisation combines a global overview with locally relevant information in any browser. See the visualisation live at forecast.ewatercycle.org
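
    Under the hood, each map tile the Cesium front end displays is fetched with a standard WMS 1.3.0 GetMap request to ncWMS. A sketch of such a request in Python (the server URL and layer name are hypothetical; the query parameters are standard WMS 1.3.0):

```python
# Sketch of the kind of WMS GetMap request a Cesium front end issues to
# ncWMS for one map image. Server URL and layer name are hypothetical.
from urllib.parse import urlencode

params = {
    "SERVICE": "WMS", "VERSION": "1.3.0", "REQUEST": "GetMap",
    "LAYERS": "discharge",            # a CF variable exposed as a layer
    "STYLES": "",
    "CRS": "CRS:84",                  # WMS 1.3.0 uses CRS, not SRS
    "BBOX": "-180,-90,180,90",
    "WIDTH": 1024, "HEIGHT": 512,
    "FORMAT": "image/png",
    "TIME": "2016-12-01T00:00:00Z",   # forecast time step
}
print("https://forecast.example.org/ncWMS/wms?" + urlencode(params))
```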

  59. Enabling data-driven provenance in NetCDF, via OGC WPS operations. Climate Analysis services use case.

    Science.gov (United States)

    Mihajlovski, A.; Spinuso, A.; Plieger, M.; Som de Cerff, W.

    2016-12-01

    Modern climate analysis platforms provide generic and standardized ways of accessing data and processing services, typically supported by a wide range of OGC formats and interfaces. However, the problem of instrumentally tracing the lineage of the transformations occurring on a dataset and its provenance remains an open challenge. It requires standard-driven and interoperable solutions to facilitate understanding and the sharing of self-describing data products, fostering collaboration among peers. The CLIPC portal provided us with a real use case, where the need for instrumented provenance management is fundamental. CLIPC provides a single point of access for scientific information on climate change. The data about the physical environment which is used to inform climate change policy and adaptation measures comes from several categories: satellite measurements, terrestrial observing systems, model projections and simulations, and re-analyses. This is made possible through the Copernicus Earth Observation Programme for Europe. With a backbone combining WPS and OPeNDAP services, CLIPC has two themes: 1. Harmonized access to climate datasets derived from models, observations and re-analyses; 2. A climate impact toolkit to evaluate, rank and aggregate indicators. The climate impact toolkit is realised with the orchestration of a number of WPS that ingest, normalize and combine NetCDF files. The WPS allowing this specific computation are hosted by the climate4impact portal, which is a more generic climate data-access and processing service. In this context, guaranteeing validation and reproducibility of results is a clearly stated requirement to improve the quality of the results obtained by the combined analysis. The two core contributions made are the enabling of a provenance wrapper around WPS services and the enabling of provenance tracing within the NetCDF format, which adopts and extends the W3C's PROV model. To disseminate indicator data and create transformed...

  60. Extending netCDF and CF conventions to support enhanced Earth Observation Ontology services: the Prod-Trees project

    Science.gov (United States)

    Mazzetti, Paolo; Valentin, Bernard; Koubarakis, Manolis; Nativi, Stefano

    2013-04-01

    ...the elicitation of user requirements in order to identify gaps in the current CF and netCDF specifications for providing extended support for the discovery of EO data. To this aim, a Validation Group has been established, including members from organizations actively using the netCDF and CF standards. A questionnaire has been prepared and submitted to the Validation Group; it was designed to be filled in online, but also to guide interviews. The presentation will focus on the project objectives and the first achievements, with particular reference to the results of the requirements analysis and future plans.

  61. Using GDAL to Convert NetCDF 4 CF 1.6 to GeoTIFF: Interoperability Problems and Solutions for Data Providers and Distributors

    Science.gov (United States)

    Haran, T. M.; Brodzik, M. J.; Nordgren, B.; Estilow, T.; Scott, D. J.

    2015-12-01

    An increasing number of new Earth science datasets are being produced by data providers in self-describing, machine-independent file formats including Hierarchical Data Format version 5 (HDF5) and Network Common Data Form version 4 (netCDF-4). Furthermore, data providers may be producing netCDF-4 files that follow the conventions for Climate and Forecast metadata version 1.6 (CF 1.6) which, for datasets mapped to a projected raster grid covering all or a portion of the earth, includes the Coordinate Reference System (CRS) used to define how latitude and longitude are mapped to grid coordinates, i.e. columns and rows, and vice versa. One problem that users may encounter is that their preferred visualization and analysis tool may not yet include support for one of these newer formats. Moreover, data distributors such as NASA's NSIDC DAAC may not yet include support for on-the-fly conversion of data files for all data sets produced in a new format to a preferred older distributed format. There do exist open source solutions to this dilemma in the form of software packages that can translate files in one of the new formats to one of the preferred formats. However, these software packages require that the file to be translated conform to the specifications of its respective format. Although an online CF-Convention compliance checker is available from cfconventions.org, a recent NSIDC user services incident described here in detail involved an NSIDC-supported data set that passed the (then current) CF Checker Version 2.0.6, but was in fact lacking two variables necessary for conformance. This problem was not detected until GDAL, a software package which relied on the missing variables, was employed by a user in an attempt to translate the data into a different file format, namely GeoTIFF. Based on this incident, testing a candidate data product with one or more software products written to accept the advertised conventions is proposed as a practice which improves interoperability...
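
    The translation step at the center of this incident looks roughly like the following Python sketch; the file and variable names are hypothetical. GDAL's netCDF driver resolves the CRS from the CF grid_mapping machinery, which is why missing CRS-related variables break the conversion.

```python
# Sketch of the netCDF-to-GeoTIFF translation that exposed the missing
# CRS variables. File and variable names are hypothetical.
from osgeo import gdal

gdal.UseExceptions()
# GDAL's subdataset syntax selects one variable from a netCDF file:
src = 'NETCDF:"sea_ice.nc":ice_concentration'
gdal.Translate("sea_ice.tif", src, format="GTiff")
```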

  62. Trade Study: Storing NASA HDF5/netCDF-4 Data in the Amazon Cloud and Retrieving Data via the Hyrax Data Server

    Science.gov (United States)

    Habermann, Ted; Gallagher, James; Jelenak, Aleksandar; Potter, Nathan; Lee, Joe; Yang, Kent

    2017-01-01

    This study explored three candidate architectures, with different types of objects and access paths, for serving NASA Earth Science HDF5 data via Hyrax running on Amazon Web Services (AWS). We studied the cost and performance of each architecture using several representative use cases. The objectives of the study were to: conduct a trade study to identify one or more high-performance integrated solutions for storing and retrieving NASA HDF5 and netCDF4 data in a cloud (web object store) environment, the target environment being Amazon Web Services (AWS) Simple Storage Service (S3); conduct the level of software development needed to properly evaluate solutions in the trade study and to obtain the benchmarking metrics required as input to a government decision on potential follow-on prototyping; and develop a cloud cost model for the preferred data storage solution (or solutions) that accounts for different granulation and aggregation schemes as well as cost and performance trades. We describe the three architectures and the use cases, along with performance results and recommendations for further work.

  63. The Model Interoperability Experiment in the Gulf of Maine: A Success Story Made Possible by NetCDF, CF-1.0, NcML, NetCDF-Java, THREDDS, OPeNDAP and MATLAB

    Science.gov (United States)

    Signell, R. P.

    2008-12-01

    The Gulf of Maine Ocean Data Partnership Modeling Committee has been developing a Model Interoperability Experiment in the Gulf of Maine built around the Climate and Forecast (CF-1.0) metadata standard. The goal is to allow scientists to issue common MATLAB commands to retrieve geospatially referenced data, regardless of model type. Our starting point was output from six different models: the ROMS, ECOM, POM and FVCOM ocean circulation models, the WRF meteorological model and the WaveWatch III ocean wave model. Although the models all had different grid conventions and were served at different institutions, each group produced NetCDF files, used MATLAB for visualization and analysis, and had a standard HTTP 1.1 web server. Only one group used CF conventions, however, and as a result each group had their own set of analysis and visualization routines to perform nearly identical tasks. The system was designed to achieve interoperability with a minimum of effort on the part of the data providers and data users. To supply data, participants need only place their existing NetCDF files on their own web sites. The data is accessed using the "byte range request" feature of HTTP, utilized in NetCDF-Java. The CF standardization is achieved using a layer of XML (NcML) which also provides virtual aggregation of data. The THREDDS Data Server allows for central cataloging of the dataset, access via the OPeNDAP web service, and, for rectilinear grids, access via the OGC Web Coverage Service (WCS) and the NetCDF Subset Services as well. The OPeNDAP + CF standard data can be accessed with our NetCDF-Java based "CF Toolkit for MATLAB". This toolkit works on any MATLAB system without compiling, delivering geospatially referenced model output from all six models using common functions. To further expand the capabilities of CF clients such as the one we have developed, we need to further expand the CF conventions to specify additional common features of model output, including staggered...
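
    The access pattern the experiment standardized on can be shown in Python as well as the authors' MATLAB toolkit: open a THREDDS/OPeNDAP endpoint and read a subset without downloading the whole file. The URL and variable name below are hypothetical, and a netCDF4 build with DAP support is assumed.

```python
# Sketch of remote subset access over OPeNDAP, analogous to what the
# "CF Toolkit for MATLAB" does. The URL and variable are hypothetical;
# requires a netCDF4 library built with DAP support.
from netCDF4 import Dataset

url = "http://thredds.example.edu/thredds/dodsC/gom/roms_forecast.nc"
with Dataset(url) as nc:
    temp = nc.variables["temp"]      # CF-standardized temperature variable
    surface = temp[-1, -1, :, :]     # last time step, top vertical layer
    print(surface.shape)             # only this slice crossed the network
```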

  64. Marine-mammals NetCDF formats and conventions

    OpenAIRE

    Marine-mammals data management team

    2014-01-01

    The instrumented sea-mammals program is a global network of open-ocean in-situ observations being implemented by an international partnership of researchers. Instrumented sea-mammals provide trajectories and vertical profiles of various physical and biogeochemical variables in different regions around the globe. The program's objective is to build and maintain a multidisciplinary global network for a broad range of research and operational applications including biology, climate and ecosystem...

  65. RSS SSM/I OCEAN PRODUCT GRIDS DAILY FROM DMSP F10 NETCDF V7

    Data.gov (United States)

    National Aeronautics and Space Administration — This dataset is part of the collection of Special Sensor Microwave/Imager (SSM/I) and Special Sensor Microwave Imager Sounder (SSMIS) data products produced as part...

  66. RSS SSMIS OCEAN PRODUCT GRIDS WEEKLY AVERAGE FROM DMSP F17 NETCDF V7

    Data.gov (United States)

    National Aeronautics and Space Administration — This dataset is part of the collection of Special Sensor Microwave/Imager (SSM/I) and Special Sensor Microwave Imager Sounder (SSMIS) data products produced as part...

  67. RSS SSM/I OCEAN PRODUCT GRIDS MONTHLY AVERAGE FROM DMSP F11 NETCDF V7

    Data.gov (United States)

    National Aeronautics and Space Administration — This dataset is part of the collection of Special Sensor Microwave/Imager (SSM/I) and Special Sensor Microwave Imager Sounder (SSMIS) data products produced as part...

  68. RSS SSMIS OCEAN PRODUCT GRIDS DAILY FROM DMSP F16 NETCDF V7

    Data.gov (United States)

    National Aeronautics and Space Administration — This dataset is part of the collection of Special Sensor Microwave/Imager (SSM/I) and Special Sensor Microwave Imager Sounder (SSMIS) data products produced as part...

  69. RSS SSM/I OCEAN PRODUCT GRIDS WEEKLY AVERAGE FROM DMSP F13 NETCDF V7

    Data.gov (United States)

    National Aeronautics and Space Administration — This dataset is part of the collection of Special Sensor Microwave/Imager (SSM/I) and Special Sensor Microwave Imager Sounder (SSMIS) data products produced as part...

  70. RSS SSMIS OCEAN PRODUCT GRIDS 3-DAY AVERAGE FROM DMSP F17 NETCDF V7

    Data.gov (United States)

    National Aeronautics and Space Administration — This dataset is part of the collection of Special Sensor Microwave/Imager (SSM/I) and Special Sensor Microwave Imager Sounder (SSMIS) data products produced as part...

  71. RSS SSMIS OCEAN PRODUCT GRIDS MONTHLY AVERAGE FROM DMSP F17 NETCDF V7

    Data.gov (United States)

    National Aeronautics and Space Administration — This dataset is part of the collection of Special Sensor Microwave/Imager (SSM/I) and Special Sensor Microwave Imager Sounder (SSMIS) data products produced as part...

  72. Special Sensor Microwave Imager/Sounder (SSMIS) Temperature Data Record (TDR) in netCDF

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Special Sensor Microwave Imager/Sounder (SSMIS) is a series of passive microwave conically scanning imagers and sounders onboard the DMSP satellites beginning...

  73. Special Sensor Microwave Imager/Sounder (SSMIS) Sensor Data Record (SDR) in netCDF

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Special Sensor Microwave Imager/Sounder (SSMIS) is a series of passive microwave conically scanning imagers and sounders onboard the DMSP satellites beginning...

  74. Extended Special Sensor Microwave Imager (SSM/I) Sensor Data Record (SDR) in netCDF

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Special Sensor Microwave Imager (SSM/I) is a seven-channel linearly polarized passive microwave radiometer that operates at frequencies of 19.36 (vertically and...

  75. RSS SSM/I OCEAN PRODUCT GRIDS MONTHLY AVERAGE FROM DMSP F14 NETCDF V7

    Data.gov (United States)

    National Aeronautics and Space Administration — This dataset is part of the collection of Special Sensor Microwave/Imager (SSM/I) and Special Sensor Microwave Imager Sounder (SSMIS) data products produced as part...

  76. RSS SSM/I OCEAN PRODUCT GRIDS WEEKLY AVERAGE FROM DMSP F14 NETCDF V7

    Data.gov (United States)

    National Aeronautics and Space Administration — This dataset is part of the collection of Special Sensor Microwave/Imager (SSM/I) and Special Sensor Microwave Imager Sounder (SSMIS) data products produced as part...

  77. CRED 20 m Gridded bathymetry of Raita Bank, Hawaii, USA (NetCDF format)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Gridded bathymetry of the shelf and slope environments of Raita Bank, Northwestern Hawaiian Islands, Hawaii, USA. Bottom coverage was achieved in depths between 166...

  78. RSS SSM/I OCEAN PRODUCT GRIDS WEEKLY AVERAGE FROM DMSP F10 NETCDF V7

    Data.gov (United States)

    National Aeronautics and Space Administration — This dataset is part of the collection of Special Sensor Microwave/Imager (SSM/I) and Special Sensor Microwave Imager Sounder (SSMIS) data products produced as part...

  79. MISR Level 3 Cloud Motion Vector monthly Product in netCDF format V001

    Data.gov (United States)

    National Aeronautics and Space Administration — The MISR Level 3 Monthly Cloud Motion Vector Product contains retrievals of cloud motion determined by geometrically triangulating the position and motion of cloud...

  80. RSS SSM/I OCEAN PRODUCT GRIDS MONTHLY AVERAGE FROM DMSP F15 NETCDF V7

    Data.gov (United States)

    National Aeronautics and Space Administration — This dataset is part of the collection of Special Sensor Microwave/Imager (SSM/I) and Special Sensor Microwave Imager Sounder (SSMIS) data products produced as part...

  81. RSS SSM/I OCEAN PRODUCT GRIDS WEEKLY AVERAGE FROM DMSP F8 NETCDF V7

    Data.gov (United States)

    National Aeronautics and Space Administration — This dataset is part of the collection of Special Sensor Microwave/Imager (SSM/I) and Special Sensor Microwave Imager Sounder (SSMIS) data products produced as part...

  82. RSS SSMIS OCEAN PRODUCT GRIDS MONTHLY AVERAGE FROM DMSP F16 NETCDF V7

    Data.gov (United States)

    National Aeronautics and Space Administration — This dataset is part of the collection of Special Sensor Microwave/Imager (SSM/I) and Special Sensor Microwave Imager Sounder (SSMIS) data products produced as part...

  83. RSS SSM/I OCEAN PRODUCT GRIDS WEEKLY AVERAGE FROM DMSP F11 NETCDF V7

    Data.gov (United States)

    National Aeronautics and Space Administration — This dataset is part of the collection of Special Sensor Microwave/Imager (SSM/I) and Special Sensor Microwave Imager Sounder (SSMIS) data products produced as part...

  84. 10m Gridded bathymetry of Swains Island, American Samoa, South Pacific (netCDF format)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Gridded (10 m cell size) bathymetry of the slope environment of Swains Island, American Samoa, South Pacific. Almost complete bottom coverage was achieved in depths...

  85. MISR Level 3 Cloud Motion Vector yearly Product in netCDF format V001

    Data.gov (United States)

    National Aeronautics and Space Administration — The MISR Level 3 Yearly Cloud Motion Vector Product contains retrievals of cloud motion determined by geometrically triangulating the position and motion of cloud...

  86. RSS SSM/I OCEAN PRODUCT GRIDS DAILY FROM DMSP F11 NETCDF V7

    Data.gov (United States)

    National Aeronautics and Space Administration — This dataset is part of the collection of Special Sensor Microwave/Imager (SSM/I) and Special Sensor Microwave Imager Sounder (SSMIS) data products produced as part...

  87. RSS SSMIS OCEAN PRODUCT GRIDS DAILY FROM DMSP F17 NETCDF V7

    Data.gov (United States)

    National Aeronautics and Space Administration — This dataset is part of the collection of Special Sensor Microwave/Imager (SSM/I) and Special Sensor Microwave Imager Sounder (SSMIS) data products produced as part...

  88. ASTER Global Emissivity Dataset Monthly 0.05 degree NetCDF4

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) Global Emissivity Dataset (GED) is a collection of monthly files (see known issues for gaps)...

  89. RSS SSM/I OCEAN PRODUCT GRIDS MONTHLY AVERAGE FROM DMSP F10 NETCDF V7

    Data.gov (United States)

    National Aeronautics and Space Administration — This dataset is part of the collection of Special Sensor Microwave/Imager (SSM/I) and Special Sensor Microwave Imager Sounder (SSMIS) data products produced as part...

  90. RSS SSM/I OCEAN PRODUCT GRIDS MONTHLY AVERAGE FROM DMSP F13 NETCDF V7

    Data.gov (United States)

    National Aeronautics and Space Administration — This dataset is part of the collection of Special Sensor Microwave/Imager (SSM/I) and Special Sensor Microwave Imager Sounder (SSMIS) data products produced as part...

  91. CRED 20 m Gridded bathymetry of Twin Banks, Northwestern Hawaiian Islands, USA (NetCDF format)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Gridded bathymetry (20m) of the shelf and slope environments of Twin Banks, Hawaii, USA. Bottom coverage was achieved in depths between 61 and 1500 meters. The...

  92. RSS SSM/I OCEAN PRODUCT GRIDS WEEKLY AVERAGE FROM DMSP F15 NETCDF V7

    Data.gov (United States)

    National Aeronautics and Space Administration — This dataset is part of the collection of Special Sensor Microwave/Imager (SSM/I) and Special Sensor Microwave Imager Sounder (SSMIS) data products produced as part...

  93. panama_city_fl_1-3_arc-second_mhw_netcdf.grd

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NGDC builds and distributes high-resolution, coastal digital elevation models (DEMs) that integrate ocean bathymetry and land topography to support NOAA's mission to...

  94. RSS SSM/I OCEAN PRODUCT GRIDS DAILY FROM DMSP F14 NETCDF V7

    Data.gov (United States)

    National Aeronautics and Space Administration — This dataset is part of the collection of Special Sensor Microwave/Imager (SSM/I) and Special Sensor Microwave Imager Sounder (SSMIS) data products produced as part...

  95. RSS SSMIS OCEAN PRODUCT GRIDS WEEKLY AVERAGE FROM DMSP F16 NETCDF V7

    Data.gov (United States)

    National Aeronautics and Space Administration — This dataset is part of the collection of Special Sensor Microwave/Imager (SSM/I) and Special Sensor Microwave Imager Sounder (SSMIS) data products produced as part...

  96. RSS SSM/I OCEAN PRODUCT GRIDS DAILY FROM DMSP F15 NETCDF V7

    Data.gov (United States)

    National Aeronautics and Space Administration — This dataset is part of the collection of Special Sensor Microwave/Imager (SSM/I) and Special Sensor Microwave Imager Sounder (SSMIS) data products produced as part...

  97. RSS SSMIS OCEAN PRODUCT GRIDS 3-DAY AVERAGE FROM DMSP F16 NETCDF V7

    Data.gov (United States)

    National Aeronautics and Space Administration — This dataset is part of the collection of Special Sensor Microwave/Imager (SSM/I) and Special Sensor Microwave Imager Sounder (SSMIS) data products produced as part...

  98. RSS SSM/I OCEAN PRODUCT GRIDS DAILY FROM DMSP F13 NETCDF V7

    Data.gov (United States)

    National Aeronautics and Space Administration — This dataset is part of the collection of Special Sensor Microwave/Imager (SSM/I) and Special Sensor Microwave Imager Sounder (SSMIS) data products produced as part...

  99. RSS SSM/I OCEAN PRODUCT GRIDS MONTHLY AVERAGE FROM DMSP F8 NETCDF V7

    Data.gov (United States)

    National Aeronautics and Space Administration — This dataset is part of the collection of Special Sensor Microwave/Imager (SSM/I) and Special Sensor Microwave Imager Sounder (SSMIS) data products produced as part...

  100. Extended Special Sensor Microwave Imager (SSM/I) Temperature Data Record (TDR) in netCDF

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Special Sensor Microwave Imager (SSM/I) is a seven-channel linearly polarized passive microwave radiometer that operates at frequencies of 19.36 (vertically and...

  101. RSS SSM/I OCEAN PRODUCT GRIDS DAILY FROM DMSP F8 NETCDF V7

    Data.gov (United States)

    National Aeronautics and Space Administration — This dataset is part of the collection of Special Sensor Microwave/Imager (SSM/I) and Special Sensor Microwave Imager Sounder (SSMIS) data products produced as part...

  102. International Comprehensive Ocean-Atmosphere Data Set (ICOADS) R3.0 netCDF version

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This collection contains observations of global ocean meteorological and oceanographic variables, such as sea surface and air temperatures, wind, pressure, humidity,...

  3. ASTER Global Emissivity Dataset, Monthly, 0.05 deg, netCDF4 V004

    Data.gov (United States)

    National Aeronautics and Space Administration — The AG5KMMOH.004 dataset was decommissioned as of December 14, 2016. Users are encouraged to use Version 4.1 of ASTER Global Emissivity Dataset, Monthly, 0.05...

  4. Ground penetrating radar data used in discovery of the early Christian church of Notre Dame de Baudes near Labastide-du-Temple, France

    Directory of Open Access Journals (Sweden)

    Ted L Gragson

    2016-06-01

Full Text Available Data on ground-penetrating radar transect files are provided that support the research presented in "Discovery and Appraisal of the Early Christian Church of Notre Dame de Baudes near Labastide-du-Temple, France" [1]. Data consist of 102 transect files obtained with a GSSI SIR-3000 controller and a 400 MHz center frequency antenna in two grid blocks covering ca. 2700 m². The data are distributed raw without post-processing in SEG-Y rev. 1 format (little endian).
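A minimal sketch of what reading such a file involves, using only the Python standard library and a hypothetical transect file name: because standard SEG-Y rev. 1 is big-endian, a reader of this little-endian dataset must request "<" byte order explicitly.

    import struct

    # Hypothetical file name; the dataset's transects are SEG-Y rev. 1,
    # written little-endian rather than the standard's big-endian order.
    with open("transect_001.sgy", "rb") as f:
        f.seek(3200)                 # skip the 3200-byte textual header
        binhdr = f.read(400)         # 400-byte binary file header
        # SEG-Y binary-header bytes 17-18: sample interval;
        # bytes 21-22: samples per trace (1-based byte offsets).
        dt, = struct.unpack_from("<h", binhdr, 16)
        ns, = struct.unpack_from("<h", binhdr, 20)
        print("sample interval:", dt, "samples/trace:", ns)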

  5. CRED 20 m Gridded bathymetry of Kingman Reef, Pacific Remote Island Areas, Central Pacific (NetCDF Format)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Gridded (20 m cell size) bathymetry of the lagoon, shelf and slope environments of Kingman Reef, Pacific Remote Island Areas, Central Pacific. Almost complete bottom...

  6. CRED 20 m Gridded bathymetry of Johnston Atoll, Pacific Remote Island Areas, Central Pacific (NetCDF Format)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Gridded bathymetry (20 m cell size) of the shelf and slope environments of Johnston Atoll, Pacific Remote Island Areas, Central Pacific. Almost complete bottom...

  7. CRED 40 m Gridded bathymetry of Palmyra Atoll, Pacific Remote Island Areas, Central Pacific (NetCDF Format)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Gridded (40 m cell size) bathymetry of the lagoon, shelf and slope environments of Palmyra Atoll, Pacific Remote Island Areas, Central Pacific. Almost complete...

  8. CRED 40m Gridded bathymetry of the banktop and slope environments of Vailulu Seamount, American Samoa (NetCDF Format)

    Data.gov (United States)

National Oceanic and Atmospheric Administration, Department of Commerce — Gridded (40 m cell size) bathymetry of Vailulu Seamount, an active volcano that lies between Ta'u Island and Rose Atoll, American Samoa, South Pacific. Almost...

  9. CRED 5 m Gridded bathymetry of Kingman Reef, Pacific Remote Island Areas, Central Pacific (NetCDF Format)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Gridded (5 m cell size) bathymetry of the lagoon, shelf and slope environments of Kingman Reef, Pacific Remote Island Areas, Central Pacific. Almost complete bottom...

  10. 5 m Gridded bathymetry of the lagoon and slope environments of Rose Atoll, American Samoa (netCDF format)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Gridded bathymetry (5 m cell size) of the inner lagoon and slope environments of Rose Atoll, American Samoa. This survey provides coverage between <10 and 300...

  11. CRED 5 m Gridded bathymetry of Howland Island, Pacific Remote Island Areas, Central Pacific (NetCDF Format)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Gridded (5 m cell size) bathymetry of the shelf and slope environments of Howland Island, Pacific Remote Island Areas, Central Pacific. Almost complete bottom...

  12. CRED 60 m Gridded bathymetry of UTM Zone 4, Northwestern Hawaiian Islands, USA (NetCDF format)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Gridded bathymetry (60m) of the shelf and slope environments of the Northwestern Hawaiian Islands, USA within UTM Zone 4. Bottom coverage was achieved in depths...

  13. CRED 5 m Gridded bathymetry of Palmyra Atoll, Pacific Remote Island Areas, Central Pacific (NetCDF Format)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Gridded (5 m cell size) bathymetry of the lagoon, shelf and slope environments of Palmyra Atoll, Pacific Remote Island Areas, Central Pacific. Almost complete bottom...

  14. CRED Reson 8101 multibeam backscatter data of Jarvis Island, Pacific Remote Island Areas, Central Pacific in netCDF format

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Multibeam backscatter imagery extracted from gridded bathymetry of the shelf and slope environments of Jarvis Atoll, Pacific Island Areas, Central Pacific. These...

  15. CRED Reson 8101 multibeam backscatter data of Howland Island, Pacific Remote Island Areas, Central Pacific in netCDF format

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Multibeam backscatter imagery extracted from gridded bathymetry of the shelf and slope environments of Howland Island, Pacific Island Areas, Central Pacific. These...

  16. RSS SSM/I OCEAN PRODUCT GRIDS 3-DAY AVERAGE FROM DMSP F11 NETCDF V7

    Data.gov (United States)

    National Aeronautics and Space Administration — This dataset is part of the collection of Special Sensor Microwave/Imager (SSM/I) and Special Sensor Microwave Imager Sounder (SSMIS) data products produced as part...

  17. CRED 5 m Gridded bathymetry of Johnston Atoll, Pacific Remote Island Areas, Central Pacific (NetCDF Format)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Gridded bathymetry (5 m cell size) of the shelf and slope environments of Johnston Atoll, Pacific Remote Island Areas, Central Pacific. Almost complete bottom...

  18. CRED 40 m Gridded bathymetry of Howland Island, Pacific Remote Island Areas, Central Pacific (NetCDF Format)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Gridded (40 m cell size) bathymetry of the shelf and slope environments of Howland Island, Pacific Remote Island Areas, Central Pacific. Almost complete bottom...

  19. CRED 5 m Gridded bathymetry of Baker Island, Pacific Remote Island Areas, Central Pacific (NetCDF Format)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Gridded (5 m cell size) bathymetry of the shelf and slope environments of Baker Island, Pacific Remote Island Areas, Central Pacific. Almost complete bottom coverage...

  20. RSS SSM/I OCEAN PRODUCT GRIDS 3-DAY AVERAGE FROM DMSP F14 NETCDF V7

    Data.gov (United States)

    National Aeronautics and Space Administration — This dataset is part of the collection of Special Sensor Microwave/Imager (SSM/I) and Special Sensor Microwave Imager Sounder (SSMIS) data products produced as part...

  1. RSS SSM/I OCEAN PRODUCT GRIDS 3-DAY AVERAGE FROM DMSP F15 NETCDF V7

    Data.gov (United States)

    National Aeronautics and Space Administration — This dataset is part of the collection of Special Sensor Microwave/Imager (SSM/I) and Special Sensor Microwave Imager Sounder (SSMIS) data products produced as part...

  2. CRED 20 m Gridded bathymetry and IKONOS estimated depths of Pearl and Hermes Atoll, Hawaii, USA (NetCDF format)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Gridded bathymetry and IKONOS estimated depths of the shelf and slope environments of Pearl and Hermes Atoll, Hawaii, USA. Bottom coverage was achieved in depths...

  3. CRED 20m Gridded bathymetry and IKONOS estimated depths of Maro Reef, Hawaii, USA (NetCDF Format)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Gridded bathymetry and IKONOS estimated depths of the shelf and slope environments of Maro Reef, Hawaii, USA. Bottom coverage was achieved in depths between 0 and...

  4. CRED 40 m Gridded bathymetry of Baker Island, Pacific Remote Island Areas, Central Pacific (NetCDF Format)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Gridded (40 m cell size) bathymetry of the shelf and slope environments of Baker Island, Pacific Remote Island Areas, Central Pacific. Almost complete bottom...

  5. CRED 20 m Gridded bathymetry of Jarvis Island, Pacific Remote Island Areas, Central Pacific (NetCDF Format)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Gridded (20 m cell size) bathymetry of the shelf and slope environments of Jarvis Island, Pacific Remote Island Areas, Central Pacific. Almost complete bottom...

  6. RSS SSM/I OCEAN PRODUCT GRIDS 3-DAY AVERAGE FROM DMSP F13 NETCDF V7

    Data.gov (United States)

    National Aeronautics and Space Administration — This dataset is part of the collection of Special Sensor Microwave/Imager (SSM/I) and Special Sensor Microwave Imager Sounder (SSMIS) data products produced as part...

  7. RSS SSM/I OCEAN PRODUCT GRIDS 3-DAY AVERAGE FROM DMSP F10 NETCDF V7

    Data.gov (United States)

    National Aeronautics and Space Administration — This dataset is part of the collection of Special Sensor Microwave/Imager (SSM/I) and Special Sensor Microwave Imager Sounder (SSMIS) data products produced as part...

  8. RSS SSM/I OCEAN PRODUCT GRIDS 3-DAY AVERAGE FROM DMSP F8 NETCDF V7

    Data.gov (United States)

    National Aeronautics and Space Administration — This dataset is part of the collection of Special Sensor Microwave/Imager (SSM/I) and Special Sensor Microwave Imager Sounder (SSMIS) data products produced as part...

  9. MISR Level 3 Global Cloud public Product in netCDF format covering a year V002

    Data.gov (United States)

    National Aeronautics and Space Administration — The Level 3 Yearly Component Global Cloud Product is a global summary of the Level 1 and Level 2 cloud parameters of interest averaged over a year and reported on a...

  10. CRED Reson 8101 multibeam backscatter data of Johnston Island, Pacific Remote Island Areas, Central Pacific in netCDF format

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Multibeam backscatter imagery extracted from gridded bathymetry of the lagoon, shelf, and slope environments of Palmyra Atoll, Pacific Island Areas, Central Pacific....

  11. CRED Simrad em3002d multibeam backscatter data of Johnston Atoll, Pacific Remote Island Areas, Central Pacific in netCDF format

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Multibeam backscatter imagery extracted from gridded bathymetry of the shelf and slope environments of Johnston Island, Pacific Island Areas, Central Pacific. These...

  12. CRED Simrad em300 multibeam backscatter data of Palmyra Atoll, Pacific Remote Island Areas, Central Pacific with 5 meter resolution in netCDF format

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Multibeam backscatter imagery extracted from gridded bathymetry of the lagoon, shelf, and slope environments of Palmyra Atoll, Pacific Island Areas, Central Pacific....

  13. CRED 20m Gridded bathymetry of the banktop and slope environments of Northeast Bank (sometimes called "Muli" Seamount), American Samoa (NetCDF Format)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Gridded (20 m cell size) bathymetry of the banktop and slope environments of Northeast Bank (sometimes called "Muli" Seamount), American Samoa, South Pacific. Almost...

  14. CRED 5m Gridded bathymetry of the banktop and slope environments of Northeast Bank (sometimes called "Muli" Seamount), American Samoa (NetCDF Format)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Gridded (5 m cell size) bathymetry of the banktop and slope environments of Northeast Bank (sometimes called "Muli" Seamount), American Samoa, South Pacific. Almost...

  15. Gridded bathymetry of the banktop and slope environments of Ta'u Island of the Manu'a Island group, American Samoa (netCDF format)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Gridded bathymetry of the banktop and slope environments of Ta'u Island of the Manu'a Island group, American Samoa. This survey provides almost complete coverage...

  16. CRED Simrad em300 multibeam backscatter data of Baker Island, Pacific Remote Island Areas, Central Pacific in netCDF format

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Multibeam backscatter imagery extracted from gridded bathymetry of the shelf and slope environments of Baker Island, Pacific Island Areas, Central Pacific. These...

  17. CRED 60 m Gridded bathymetry and IKONOS estimated depths of UTM Zone 1, Northwestern Hawaiian Islands, USA (NetCDF format)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Gridded bathymetry and IKONOS estimated depths of the shelf and slope environments of the Northwestern Hawaiian Islands, USA within UTM Zone 1. Bottom coverage was...

  18. CRED Simrad em300 multibeam backscatter data of Johnston Atoll, Pacific Remote Island Areas, Central Pacific in netCDF format

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Multibeam backscatter imagery extracted from gridded bathymetry of the shelf and slope environments of Johnston Island, Pacific Island Areas, Central Pacific. These...

  19. CRED Simrad em300 multibeam backscatter data from shelf and slope environments at Howland Island, Pacific Remote Island Areas, Central Pacific in netCDF format

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Multibeam backscatter imagery extracted from gridded bathymetry of Howland Island, Pacific Remote Island Areas, Central Pacific. These data provide coverage between...

  20. CRED 60 m Gridded bathymetry and IKONOS estimated depths of UTM Zone 3, Northwestern Hawaiian Islands, USA (netCDF format)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Gridded bathymetry and IKONOS estimated depths of the shelf and slope environments of the Northwestern Hawaiian Islands, USA within UTM Zone 3. Bottom coverage was...

  1. Trade Study: Storing NASA HDF5/netCDF-4 Data in the Amazon Cloud and Retrieving Data via Hyrax Server / THREDDS Data Server

    Science.gov (United States)

    Habermann, Ted; Jelenak, Aleksander; Lee, Joe; Yang, Kent; Gallagher, James; Potter, Nathan

    2017-01-01

    As part of the overall effort to understand implications of migrating ESDIS data and services to the cloud we are testing several common OPeNDAP and HDF use cases against three architectures for general performance and cost characteristics. The architectures include retrieving entire files, retrieving datasets using HTTP range gets, and retrieving elements of datasets (chunks) with HTTP range gets. We will describe these architectures and discuss our approach to estimating cost.
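One of the access patterns under test, retrieving elements of datasets with HTTP range GETs, can be sketched as follows; the URL and byte offsets are hypothetical, and the sketch assumes the Python requests library:

    import requests

    # Hypothetical object URL; a range GET fetches only the bytes of one
    # dataset chunk instead of transferring the entire HDF5/netCDF-4 file.
    url = "https://example-bucket.s3.amazonaws.com/granule.h5"
    first, last = 4096, 8191         # byte extent of the chunk of interest
    resp = requests.get(url, headers={"Range": f"bytes={first}-{last}"})
    resp.raise_for_status()          # a successful range GET returns 206
    chunk = resp.content             # raw bytes of just that chunk
    print(resp.status_code, len(chunk))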

  2. CRED Reson 8101 multibeam backscatter data from the banktop and bank edge environments at Tutuila, American Samoa, South Pacific 16 meter resolution in netCDF format

    Data.gov (United States)

National Oceanic and Atmospheric Administration, Department of Commerce — Multibeam backscatter imagery extracted from gridded bathymetry of Tutuila, American Samoa, South Pacific. These data provide coverage between 20 and 5000 meters. The...

  3. CRED Reson 8101 multibeam backscatter data from the banktop and bank edge environments at Tutuila, American Samoa, South Pacific with 1 meter resolution in netCDF format

    Data.gov (United States)

National Oceanic and Atmospheric Administration, Department of Commerce — Multibeam backscatter imagery extracted from gridded bathymetry of Tutuila, American Samoa, South Pacific. These data provide coverage between 20 and 5000 meters. The...

  4. CRED Reson 8101 multibeam backscatter data from the lagoon and shelf environments at Kingman Reef, Pacific Remote Island Areas, Central Pacific in netCDF format

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Multibeam backscatter imagery extracted from gridded bathymetry of the lagoon and shelf environments at Kingman Reef, Pacific Island Areas, Central Pacific. These...

  5. CRED Reson 8101 multibeam backscatter data from the lagoon environment at Rose Island, American Samoa, South Pacific with 1 meter resolution in netCDF format

    Data.gov (United States)

National Oceanic and Atmospheric Administration, Department of Commerce — Multibeam backscatter imagery extracted from gridded bathymetry of Rose Island, American Samoa, South Pacific. These data provide coverage between 20 and 5000 meters....

  6. CRED Simrad em300 multibeam backscatter data of Jarvis Island, Pacific Remote Island Areas, Central Pacific in netCDF format

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Multibeam backscatter imagery extracted from gridded bathymetry of the shelf and slope environments of Jarvis Atoll, Pacific Island Areas, Central Pacific. These...

  7. CRED Simrad em300 multibeam backscatter data from Kingman Reef, Pacific Remote Island Areas, Central Pacific in netCDF format

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Multibeam backscatter imagery extracted from gridded bathymetry of the lagoon, shelf, and slope environments of Kingman Reef, Pacific Island Areas, Central Pacific....

  8. CRED Reson 8101 multibeam backscatter data of Palmyra Atoll, Pacific Remote Island Areas, Central Pacific with 1 meter resolution in netCDF format

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Multibeam backscatter imagery extracted from gridded bathymetry of the lagoon, shelf, and slope environments of Palmyra Atoll, Pacific Island Areas, Central Pacific....

  9. CRED Simrad em300 multibeam backscatter data from the submarine slope environment at Rose Island, American Samoa, South Pacific with 5 meter resolution in netCDF format

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Multibeam backscatter imagery extracted from gridded bathymetry of Rose Island, American Samoa, South Pacific. These data provide coverage between 20 and 5000...

  10. CRED Simrad em300 multibeam backscatter data from the slope environment of Swains Island, American Samoa, South Pacific in netCDF format

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Multibeam backscatter imagery extracted from gridded bathymetry of Swains Island, American Samoa. These data provide coverage between 50 and 5000 meters. The...

  11. CRED 20 m Gridded bathymetry and IKONOS estimated depths of Northampton Seamounts to Laysan Island, Northwestern Hawaiian Islands, USA (NetCDF format)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Gridded bathymetry and IKONOS estimated depths of the shelf and slope environments of Northampton Seamounts to Laysan Island, Northwestern Hawaiian Islands, Hawaii,...

  12. Figs1,2,3a

    Data.gov (United States)

U.S. Environmental Protection Agency — All data are in netCDF format and zipped; after downloading, unzip the files first to recover the original netCDF-formatted data. This dataset is...

  13. Introduction to Reading and Visualizing ARM Data

    Energy Technology Data Exchange (ETDEWEB)

Mather, James [Pacific Northwest National Laboratory]

    2014-02-18

The Atmospheric Radiation Measurement (ARM) Program's standard data format is NetCDF 3 (Network Common Data Form). The object of this tutorial is to provide a basic introduction to NetCDF, with an emphasis on aspects of the ARM application of NetCDF. The goal is to provide basic instructions for reading and visualizing ARM NetCDF data, with the expectation that these examples can then be applied to more complex applications.
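In that spirit, a minimal sketch of reading and plotting one variable from an ARM-style NetCDF file with the netCDF4 and matplotlib Python packages; the file and variable names are hypothetical:

    import netCDF4
    import matplotlib.pyplot as plt

    # Hypothetical ARM-style file and variable names.
    ds = netCDF4.Dataset("sgpmetE13.b1.20140218.000000.cdf")
    time = ds.variables["time"][:]          # variables are read by name
    temp = ds.variables["temp_mean"][:]
    plt.plot(time, temp)
    plt.xlabel(ds.variables["time"].units)  # units come from attributes
    plt.ylabel(ds.variables["temp_mean"].units)
    plt.show()
    ds.close()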

  14. CRED Simrad em300 multibeam backscatter data from the banktop and slope environments of Northeast Bank ("Muli" Seamount), American Samoa with 5 meter resolution in netCDF format

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Multibeam backscatter imagery extracted from gridded bathymetry of Northeast Bank ("Muli" Seamount), American Samoa. These data provide coverage between 50 and 5000...

  15. CRED 10m Gridded bathymetry of the submarine volcanos between Olosega and Ta'u Islands of the Manu'a Island group, American Samoa (NetCDF Format)

    Data.gov (United States)

National Oceanic and Atmospheric Administration, Department of Commerce — Gridded bathymetry of the submarine volcanos between Olosega and Ta'u Islands of the Manu'a Island group, American Samoa. This survey provides almost complete...

  16. Development of an interface for the conversion of geodata in a NetCDF data model and publication of this data by the use of the web application DChart, related to the CEOP-AEGIS project

    OpenAIRE

    Holzer, Nicolai

    2011-01-01

The Tibetan Plateau, with an extent of about 2.5 million square kilometers at an average altitude above 4,700 meters, has a significant impact on the Asian monsoon and, with its snow and ice reserves, regulates the upstream headwaters of seven major south-east Asian rivers. Over 1.4 billion people, as well as the agriculture, the economy, and the entire ecosystem of this region, depend on the water supply of these rivers. As the increasing number of floods and droughts shows, these seasonal water re...

  17. CRED Reson 8101 multibeam backscatter data from the banktop and bank edge environments of Ofu, Olosega, and Ta'u Islands of the Manua Island group, American Samoa with 1 meter resolution in netCDF format

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Multibeam backscatter imagery extracted from gridded bathymetry of Ofu, Olosega, and Ta'u Islands of the Manua Island Group, American Samoa, South Pacific. These...

  18. CRED Simrad em120 multibeam backscatter data from portions of the banktop and bank edge environments at Maro Reef, Hawaii, USA with 30 meter resolution in netCDF format

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Multibeam backscatter imagery extracted from gridded bathymetry of Maro Reef, Northwestern Hawaiian Islands, USA. These data provide coverage between 20 and 5000...

  19. CRED Simrad em3002d multibeam backscatter data from portions of the banktop and bank edge environments at Maro Reef, Hawaii, USA in netCDF format with 1 meter resolution

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Multibeam backscatter imagery extracted from gridded bathymetry of Maro Reef, Northwestern Hawaiian Islands, USA. These data provide coverage between 20 and 5000...

  20. CRED Simrad em300 multibeam backscatter data from portions of the banktop and bank edge environments at Maro Reef, Hawaii, USA with 30 meter resolution in netCDF format

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Multibeam backscatter imagery extracted from gridded bathymetry of Maro Reef, Northwestern Hawaiian Islands, USA. These data provide coverage between 20 and 5000...

  1. CRED Simrad em3002d multibeam backscatter data from the banktop and bank edge environments at Tutuila, American Samoa, South Pacific with 1 meter resolution in netCDF format

    Data.gov (United States)

National Oceanic and Atmospheric Administration, Department of Commerce — Multibeam backscatter imagery extracted from gridded bathymetry of Tutuila, American Samoa, South Pacific. These data provide coverage between 20 and 5000 meters. The...

  2. CRED Reson 8101 multibeam backscatter data from portions of the banktop and bank edge environments at Maro Reef, Hawaii, USA with 1 meter resolution in netCDF format

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Multibeam backscatter imagery extracted from gridded bathymetry of Maro Reef, Northwestern Hawaiian Islands, USA. These data provide coverage between 20 and 5000...

  3. CRED Simrad em3002d multibeam backscatter data from portions of the banktop and bank edge environments at Maro Reef, Hawaii, USA in netCDF format with 30 meter resolution

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Multibeam backscatter imagery extracted from gridded bathymetry of Maro Reef, Northwestern Hawaiian Islands, USA. These data provide coverage between 20 and 5000...

  4. CRED Simrad em3002d multibeam backscatter data from the banktop and bank edge environments at Tutuila, American Samoa, South Pacific with 16 meter resolution in netCDF format

    Data.gov (United States)

National Oceanic and Atmospheric Administration, Department of Commerce — Multibeam backscatter imagery extracted from gridded bathymetry of Tutuila, American Samoa, South Pacific. These data provide coverage between 20 and 5000 meters. The...

  5. CRED Simrad em3002d multibeam backscatter data from the banktop and slope environments of Northeast Bank ("Muli" Seamount), American Samoa with 1 meter resolution in netCDF format

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Multibeam backscatter imagery extracted from gridded bathymetry of Northeast Bank ("Muli" Seamount), American Samoa. These data provide coverage between 20 and 5000...

  6. CRED Simrad em300 multibeam backscatter data from portions of the banktop and bank edge environments at Maro Reef, Hawaii, USA with 5 meter resolution in netCDF format

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Multibeam backscatter imagery extracted from gridded bathymetry of Maro Reef, Northwestern Hawaiian Islands, USA. These data provide coverage between 20 and 5000...

  7. Public-domain-software solution to data-access problems for numerical modelers

    Science.gov (United States)

    Jenter, Harry; Signell, Richard

    1992-01-01

Unidata's network Common Data Form, netCDF, provides users with an efficient set of software for scientific-data storage, retrieval, and manipulation. The netCDF file format is machine-independent, direct-access, self-describing, and in the public domain, thereby alleviating many problems associated with accessing output from large hydrodynamic models. NetCDF has programming interfaces in both the Fortran and C computer languages, with an interface to C++ planned for future release. NetCDF also has an abstract data type that relieves users of the need to understand details of the binary file structure; data are written and retrieved by an intuitive, user-supplied name rather than by file position. Users are aided further by Unidata's inclusion of the Common Data Language, CDL, a printable text equivalent of the contents of a netCDF file. Unidata provides numerous operators and utilities for processing netCDF files. In addition, a number of public-domain and proprietary netCDF utilities from other sources are available now or will become available later this year. The U.S. Geological Survey has produced and is producing a number of public-domain netCDF utilities.
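The name-based access the abstract describes looks like this in the modern netCDF4 Python interface; the file and variable names here are illustrative only:

    import netCDF4

    # Write: data are stored under user-supplied names, not file positions.
    ds = netCDF4.Dataset("model_output.nc", "w")
    ds.createDimension("time", None)        # unlimited record dimension
    eta = ds.createVariable("sea_surface_height", "f4", ("time",))
    eta.units = "meters"
    eta[0:3] = [0.12, 0.15, 0.11]
    ds.close()

    # Read back by name; the binary layout never enters the picture.
    ds = netCDF4.Dataset("model_output.nc")
    print(ds.variables["sea_surface_height"][:])
    ds.close()

The CDL text equivalent mentioned above can then be produced with the standard ncdump utility, e.g. ncdump model_output.nc.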

  8. Virtual Machine Language 2.1

    Science.gov (United States)

    Riedel, Joseph E.; Grasso, Christopher A.

    2012-01-01

VML (Virtual Machine Language) is an advanced computing environment that allows spacecraft to operate using mechanisms ranging from simple, time-oriented sequencing to advanced, multicomponent reactive systems. VML has developed in four evolutionary stages. VML 0 is a core execution capability providing multi-threaded command execution, integer data types, and rudimentary branching. VML 1 added named parameterized procedures, extensive polymorphism, data typing, branching, looping, issuance of commands using run-time parameters, and named global variables. VML 2 added for loops, data verification, telemetry reaction, and an open flight adaptation architecture. VML 2.1 contains major advances in control flow capabilities for executable state machines. On the resource requirements front, VML 2.1 features a reduced memory footprint in order to fit more capability into modestly sized flight processors, and endian-neutral data access for compatibility with Intel little-endian processors. Sequence packaging has been improved with object-oriented programming constructs and the use of implicit (rather than explicit) time tags on statements. Sequence event detection has been significantly enhanced with multi-variable waiting, which allows a sequence to detect and react to conditions defined by complex expressions with multiple global variables. This multi-variable waiting serves as the basis for implementing parallel rule checking, which, in turn, makes possible executable state machines. The new state machine feature in VML 2.1 allows the creation of sophisticated autonomous reactive systems without the need to develop expensive flight software. Users specify named states and transitions, along with the truth conditions required, before taking transitions. Transitions with the same signal name allow separate state machines to coordinate actions: the conditions distributed across all state machines necessary to arm a particular signal are evaluated, and once found true, that
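The endian-neutral data access cited for VML 2.1 amounts to always serializing and deserializing with an explicit byte order rather than the host's native one; a minimal illustration of the idea using the Python standard library:

    import struct

    # Serialize a 32-bit integer with an explicit (big-endian) byte order.
    payload = struct.pack(">i", 42)

    # Endian-neutral read: ">" names the byte order, so the result is the
    # same on big-endian and little-endian (e.g., Intel) processors.
    value, = struct.unpack(">i", payload)

    # Native-order read: "=" uses the host's byte order and silently
    # misreads the payload on a little-endian host (704643072, not 42).
    native, = struct.unpack("=i", payload)
    print(value, native)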

  9. Oceanographic and surface meteorological data collected from station shellpoint by Sanibel-Captiva Conservation Foundation River, Estuary and Coastal Observing Network (SCCF) and assembled by Southeast Coastal Ocean Observing Regional Association (SECOORA) in the Coastal Waters of Florida and North Atlantic Ocean from 2014-02-13 to 2016-05-31 (NODC Accession 0118784)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NCEI Accession 0118784 contains oceanographic and surface meteorological data in netCDF formatted files, which follow the Climate and Forecast metadata convention...

  10. Oceanographic and surface meteorological data collected from station cherrygrove by Long Bay Hypoxia Monitoring Consortium (LBHMC) and assembled by Southeast Coastal Ocean Observing Regional Association (SECOORA) in the North Atlantic Ocean from 2014-02-13 to 2015-07-09 (NODC Accession 0118795)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Accession 0118795 contains oceanographic and surface meteorological data in netCDF formatted files, which follow the Climate and Forecast metadata convention (CF)...

  11. Oceanographic and surface meteorological data collected from station binneydock by Florida Department of Environmental Protection (FLDEP) and assembled by Southeast Coastal Ocean Observing Regional Association (SECOORA) in the Coastal Waters of Florida and North Atlantic Ocean from 2014-02-13 to 2016-05-31 (NODC Accession 0118770)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NCEI Accession 0118770 contains oceanographic and surface meteorological data in netCDF formatted files, which follow the Climate and Forecast metadata convention...

  12. Assessing model characterization of single source secondary pollutant impacts using 2013 SENEX field study measurements

    Data.gov (United States)

    U.S. Environmental Protection Agency — The dataset consists of 4 comma-separated value (csv) text files and 3 netCDF data files. Each csv file contains the observed and CMAQ modeled gas and aerosol...

  13. Oceanographic and surface meteorological data collected from station apachepier by Long Bay Hypoxia Monitoring Consortium (LBHMC) and assembled by Southeast Coastal Ocean Observing Regional Association (SECOORA) in the North Atlantic Ocean from 2014-02-13 to 2015-07-09 (NODC Accession 0118794)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Accession 0118794 contains oceanographic and surface meteorological data in netCDF formatted files, which follow the Climate and Forecast metadata convention (CF)...

  14. Oceanographic and surface meteorological data collected from station sun2 by Carolinas Coastal Ocean Observing and Prediction System (Caro-COOPS) and assembled by Southeast Coastal Ocean Observing Regional Association (SECOORA) in the North Atlantic Ocean from 2014-02-13 to 2016-05-31 (NODC Accession 0118741)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NCEI Accession 0118741 contains oceanographic and surface meteorological data in netCDF formatted files, which follow the Climate and Forecast metadata convention...

  15. Oceanographic and surface meteorological data collected from Oneida Lake Weather Station (ESF7) by State University of New York College of Environmental Science and Forestry and assembled by Great Lakes Observing System (GLOS) in the Great Lakes region from 2014-07-07 to 2017-08-31 (NCEI Accession 0123659)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NCEI Accession 0123659 contains oceanographic and surface meteorological data in netCDF formatted files, which follow the Climate and Forecast metadata convention...

  16. Oceanographic and surface meteorological data collected from station Dauphin Island, AL by Dauphin Island Sea Laboratory (DISL) and assembled by Gulf of Mexico Coastal Ocean Observing System (GCOOS) in the Coastal waters of Alabama and Gulf of Mexico from 2008-01-01 to 2017-04-30 (NCEI Accession 0163672)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NCEI Accession 0163672 contains oceanographic and surface meteorological data in netCDF formatted files, which follow the Climate and Forecast metadata convention...

  17. Oceanographic and surface meteorological data collected from U-GLOS Station 004, Little Traverse Bay, by University of Michigan and assembled by Great Lakes Observing System (GLOS) in the Great Lakes region from 2014-07-01 to 2017-08-31 (NODC Accession 0123643)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NCEI Accession 0123643 contains oceanographic and surface meteorological data in netCDF formatted files, which follow the Climate and Forecast metadata convention...

  18. Oceanographic and surface meteorological data collected from station wiwf1 by Everglades National Park (ENP) and assembled by Southeast Coastal Ocean Observing Regional Association (SECOORA) in the Coastal Waters of Florida and North Atlantic Ocean from 2014-02-13 to 2016-05-31 (NODC Accession 0118765)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NCEI Accession 0118765 contains oceanographic and surface meteorological data in netCDF formatted files, which follow the Climate and Forecast metadata convention...

  19. Oceanographic and surface meteorological data collected from station gbif1 by Everglades National Park (ENP) and assembled by Southeast Coastal Ocean Observing Regional Association (SECOORA) in the Coastal Waters of Florida and North Atlantic Ocean from 2014-02-13 to 2016-05-31 (NODC Accession 0118751)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NCEI Accession 0118751 contains oceanographic and surface meteorological data in netCDF formatted files, which follow the Climate and Forecast metadata convention...

  20. Oceanographic and surface meteorological data collected from station GB17 by University of Wisconsin-Milwaukee and assembled by Great Lakes Observing System (GLOS) in the Great Lakes region from 2014-07-01 to 2017-08-31 (NODC Accession 0123640)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NCEI Accession 0123640 contains oceanographic and surface meteorological data in netCDF formatted files, which follow the Climate and Forecast metadata convention...

  1. Oceanographic and surface meteorological data collected from station melbourne by Florida Department of Environmental Protection (FLDEP) and assembled by Southeast Coastal Ocean Observing Regional Association (SECOORA) in the Coastal Waters of Florida and North Atlantic Ocean from 2014-02-13 to 2016-04-29 (NODC Accession 0118773)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NCEI Accession 0118773 contains oceanographic and surface meteorological data in netCDF formatted files, which follow the Climate and Forecast metadata convention...

  2. Oceanographic and surface meteorological data collected from station fortmyers by Sanibel-Captiva Conservation Foundation River, Estuary and Coastal Observing Network (SCCF) and assembled by Southeast Coastal Ocean Observing Regional Association (SECOORA) in the Coastal Waters of Florida, Gulf of Mexico and North Atlantic Ocean from 2014-02-13 to 2016-05-31 (NODC Accession 0118739)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NCEI Accession 0118739 contains oceanographic and surface meteorological data in netCDF formatted files, which follow the Climate and Forecast metadata convention...

  3. Oceanographic and surface meteorological data collected from station dkkf1 by Everglades National Park (ENP) and assembled by Southeast Coastal Ocean Observing Regional Association (SECOORA) in the Coastal Waters of Florida, Gulf of Mexico and North Atlantic Ocean from 2014-02-13 to 2016-05-31 (NODC Accession 0118750)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NCEI Accession 0118750 contains oceanographic and surface meteorological data in netCDF formatted files, which follow the Climate and Forecast metadata convention...

  4. Oceanographic and surface meteorological data collected from station lrif1 by Everglades National Park (ENP) and assembled by Southeast Coastal Ocean Observing Regional Association (SECOORA) in the Coastal Waters of Florida and North Atlantic Ocean from 2014-02-13 to 2016-05-31 (NODC Accession 0118758)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NCEI Accession 0118758 contains oceanographic and surface meteorological data in netCDF formatted files, which follow the Climate and Forecast metadata convention...

  5. Oceanographic and surface meteorological data collected from station canf1 by Everglades National Park (ENP) and assembled by Southeast Coastal Ocean Observing Regional Association (SECOORA) in the Coastal Waters of Florida and North Atlantic Ocean from 2014-02-13 to 2016-05-31 (NODC Accession 0118747)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NCEI Accession 0118747 contains oceanographic and surface meteorological data in netCDF formatted files, which follow the Climate and Forecast metadata convention...

  6. Oceanographic and surface meteorological data collected from station Bon Secour, LA by Dauphin Island Sea Laboratory (DISL) and assembled by Gulf of Mexico Coastal Ocean Observing System (GCOOS) in the Coastal waters of Alabama and Gulf of Mexico from 2011-01-01 to 2017-05-02 (NCEI Accession 0163204)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NCEI Accession 0163204 contains oceanographic and surface meteorological data in netCDF formatted files, which follow the Climate and Forecast metadata convention...

  7. Oceanographic and surface meteorological data collected from station hcef1 by Everglades National Park (ENP) and assembled by Southeast Coastal Ocean Observing Regional Association (SECOORA) in the Coastal Waters of Florida and North Atlantic Ocean from 2014-02-13 to 2016-05-31 (NODC Accession 0118753)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NCEI Accession 0118753 contains oceanographic and surface meteorological data in netCDF formatted files, which follow the Climate and Forecast metadata convention...

  8. Oceanographic and surface meteorological data collected from station cnbf1 by Everglades National Park (ENP) and assembled by Southeast Coastal Ocean Observing Regional Association (SECOORA) in the Coastal Waters of Florida and North Atlantic Ocean from 2014-02-13 to 2016-05-31 (NODC Accession 0118748)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NCEI Accession 0118748 contains oceanographic and surface meteorological data in netCDF formatted files, which follow the Climate and Forecast metadata convention...

  9. Oceanographic and surface meteorological data collected from station trrf1 by Everglades National Park (ENP) and assembled by Southeast Coastal Ocean Observing Regional Association (SECOORA) in the Coastal Waters of Florida and North Atlantic Ocean from 2014-02-13 to 2016-05-31 (NODC Accession 0118764)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NCEI Accession 0118764 contains oceanographic and surface meteorological data in netCDF formatted files, which follow the Climate and Forecast metadata convention...

  10. Oceanographic and surface meteorological data collected from station gbtf1 by Everglades National Park (ENP) and assembled by Southeast Coastal Ocean Observing Regional Association (SECOORA) in the Coastal Waters of Florida and North Atlantic Ocean from 2014-02-13 to 2016-05-31 (NODC Accession 0118752)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NCEI Accession 0118752 contains oceanographic and surface meteorological data in netCDF formatted files, which follow the Climate and Forecast metadata convention...

  11. Oceanographic and surface meteorological data collected from station mukf1 by Everglades National Park (ENP) and assembled by Southeast Coastal Ocean Observing Regional Association (SECOORA) in the Coastal Waters of Florida, Gulf of Mexico and North Atlantic Ocean from 2014-02-13 to 2016-05-31 (NODC Accession 0118760)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NCEI Accession 0118760 contains oceanographic and surface meteorological data in netCDF formatted files, which follow the Climate and Forecast metadata convention...

  12. Oceanographic and surface meteorological data collected from station tcvf1 by Everglades National Park (ENP) and assembled by Southeast Coastal Ocean Observing Regional Association (SECOORA) in the Coastal Waters of Florida, Gulf of Mexico and North Atlantic Ocean from 2014-02-13 to 2016-05-31 (NODC Accession 0118763)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NCEI Accession 0118763 contains oceanographic and surface meteorological data in netCDF formatted files, which follow the Climate and Forecast metadata convention...

  13. Oceanographic and surface meteorological data collected from station bdvf1 by Everglades National Park (ENP) and assembled by Southeast Coastal Ocean Observing Regional Association (SECOORA) in the Coastal Waters of Florida and North Atlantic Ocean from 2014-02-13 to 2016-05-31 (NODC Accession 0118737)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NCEI Accession 0118737 contains oceanographic and surface meteorological data in netCDF formatted files, which follow the Climate and Forecast metadata convention...

  14. Oceanographic and surface meteorological data collected from station wwef1 by Everglades National Park (ENP) and assembled by Southeast Coastal Ocean Observing Regional Association (SECOORA) in the Coastal Waters of Florida, Gulf of Mexico and North Atlantic Ocean from 2014-02-13 to 2016-05-31 (NODC Accession 0118767)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NCEI Accession 0118767 contains oceanographic and surface meteorological data in netCDF formatted files, which follow the Climate and Forecast metadata convention...

  15. Oceanographic and surface meteorological data collected from RECON Alpena, Thunder Bay Buoy, by Great Lakes Environmental Research Laboratory and assembled by Great Lakes Observing System (GLOS) in the Great Lakes and Thunder Bay National Marine Sanctuary region from 2016-05-19 to 2016-06-30 (NCEI Accession 0137891)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NCEI Accession 0137891 contains oceanographic and surface meteorological data in netCDF formatted files, which follow the Climate and Forecast metadata convention...

  16. Oceanographic and surface meteorological data collected from station c10 by University of South Florida (USF) Coastal Ocean Monitoring and Prediction System (USF) and assembled by Southeast Coastal Ocean Observing Regional Association (SECOORA) in the Gulf of Mexico and North Atlantic Ocean from 2015-08-01 to 2016-05-31 (NCEI Accession 0131292)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NCEI Accession 0131292 contains oceanographic and surface meteorological data in netCDF formatted files, which follow the Climate and Forecast metadata convention...

  17. Oceanographic and surface meteorological data collected from station cap2 by Carolinas Coastal Ocean Observing and Prediction System (Caro-COOPS) and assembled by Southeast Coastal Ocean Observing Regional Association (SECOORA) in the North Atlantic Ocean from 2014-02-13 to 2016-05-31 (NODC Accession 0118722)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NCEI Accession 0118722 contains oceanographic and surface meteorological data in netCDF formatted files, which follow the Climate and Forecast metadata convention...

  18. Oceanographic and surface meteorological data collected from station lmdf1 by Everglades National Park (ENP) and assembled by Southeast Coastal Ocean Observing Regional Association (SECOORA) in the Coastal Waters of Florida, Gulf of Mexico and North Atlantic Ocean from 2014-02-13 to 2016-05-31 (NODC Accession 0118757)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NCEI Accession 0118757 contains oceanographic and surface meteorological data in netCDF formatted files, which follow the Climate and Forecast metadata convention...

  19. Oceanographic and surface meteorological data collected from station ppta1 by Everglades National Park (ENP) and assembled by Southeast Coastal Ocean Observing Regional Association (SECOORA) in the Coastal waters of Alabama, Gulf of Mexico and North Atlantic Ocean from 2014-02-13 to 2016-05-31 (NODC Accession 0118762)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NCEI Accession 0118762 contains oceanographic and surface meteorological data in netCDF formatted files, which follow the Climate and Forecast metadata convention...

  20. HIRENASD Solar Grids

    Data.gov (United States)

National Aeronautics and Space Administration — These grids were generated by Markus Ritter (DLR) using Solar in TAU format (netcdf.nc). The grids were converted to CGNS format by Pawel Chwalowski (NASA).

  1. Oceanographic and surface meteorological data collected from MTU Buoy by Michigan Technological University and assembled by Great Lakes Observing System (GLOS) in the Great Lakes region from 2014-07-01 to 2016-07-01 (NODC Accession 0123644)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NCEI Accession 0123644 contains oceanographic and surface meteorological data in netCDF formatted files, which follow the Climate and Forecast metadata convention...

  2. Oceanographic and surface meteorological data collected from MTU1 Buoy by Michigan Technological University and assembled by Great Lakes Observing System (GLOS) in the Great Lakes region from 2014-07-01 to 2016-06-30 (NODC Accession 0123646)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NCEI Accession 0123646 contains oceanographic and surface meteorological data in netCDF formatted files, which follow the Climate and Forecast metadata convention...

  3. Oceanographic and surface meteorological data collected from station sispnj by Florida Institute of Technology (FIT) and assembled by Southeast Coastal Ocean Observing Regional Association (SECOORA) in the Coastal Waters of Florida and North Atlantic Ocean from 2014-02-27 to 2015-07-18 (NODC Accession 0118769)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Accession 0118769 contains oceanographic and surface meteorological data in netCDF formatted files, which follow the Climate and Forecast metadata convention (CF)...

  4. Oceanographic and surface meteorological data collected from station bnkf1 by Everglades National Park (ENP) and assembled by Southeast Coastal Ocean Observing Regional Association (SECOORA) in the Coastal Waters of Florida, Gulf of Mexico and North Atlantic Ocean from 2014-02-13 to 2016-05-31 (NODC Accession 0118744)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NCEI Accession 0118744 contains oceanographic and surface meteorological data in netCDF formatted files, which follow the Climate and Forecast metadata convention...

  5. Physical oceanographic data collected from moorings deployed at Cordell Bank by Cordell Bank National Marine Sanctuary (CBNMS) and Bodega Marine Laboratory (BML) in the North Pacific Ocean from 2007-05-08 to 2011-12-14 (NODC Accession 0069874)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — These are netCDF format data collected by CBNMS and BML to understand the physical processes at Cordell Bank and their potential effects on marine ecology. The...

  6. Physical oceanographic data collected from moorings deployed at Bodega Head by Gulf of the Farallones National Marine Sanctuary (GFNMS) and Bodega Marine Laboratory (BML) in the North Pacific Ocean from 2005-06-27 to 2011-10-27 (NODC Accession 0104152)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — These are netCDF format data collected by GFNMS and BML to understand the physical processes at Bodega Head and their potential effects on marine ecology. The...

  7. Oceanographic and surface meteorological data collected from Holland Buoy by LimnoTech and assembled by Great Lakes Observing System (GLOS) in the Great Lakes region from 2014-07-01 to 2017-08-31 (NODC Accession 0123650)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NCEI Accession 0123650 contains oceanographic and surface meteorological data in netCDF formatted files, which follow the Climate and Forecast metadata convention...

  8. Oceanographic and surface meteorological data collected from station tarponbay by Sanibel-Captiva Conservation Foundation River, Estuary and Coastal Observing Network (SCCF) and assembled by Southeast Coastal Ocean Observing Regional Association (SECOORA) in the Coastal Waters of Florida, Gulf of Mexico and North Atlantic Ocean from 2014-02-13 to 2016-05-31 (NODC Accession 0118785)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NCEI Accession 0118785 contains oceanographic and surface meteorological data in netCDF formatted files, which follow the Climate and Forecast metadata convention...

  9. Gridded multibeam bathymetry of Rota Island, Commonwealth of the Northern Mariana Islands (CNMI)

    Data.gov (United States)

National Oceanic and Atmospheric Administration, Department of Commerce — Gridded bathymetry of the shelf, bank and slope environments of Rota Island, CNMI. Bottom coverage was achieved in depths between 0 and -1905 meters. The netCDF and Arc...

  10. Oceanographic and surface meteorological data collected from station Sodus Bay Center (ESF5) by State University of New York College of Environmental Science and Forestry and assembled by Great Lakes Observing System (GLOS) in the Great Lakes region from 2014-07-01 to 2016-06-30 (NODC Accession 0123657)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NCEI Accession 0123657 contains oceanographic and surface meteorological data in netCDF formatted files, which follow the Climate and Forecast metadata convention...

  11. Oceanographic and surface meteorological data collected from station Sodus Bay South (ESF2) by State University of New York College of Environmental Science and Forestry and assembled by Great Lakes Observing System (GLOS) in the Great Lakes region from 2014-07-01 to 2016-06-30 (NODC Accession 0123654)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NCEI Accession 0123654 contains oceanographic and surface meteorological data in netCDF formatted files, which follow the Climate and Forecast metadata convention...

  12. Gridded multibeam bathymetry of Apra Harbor, Guam U.S. Territory

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Gridded bathymetry from Apra Harbor, Guam U.S. Territory. The netCDF and Arc ASCII grids include multibeam bathymetry from the Reson SeaBat 8125 multibeam sonar...

  13. Oceanographic and surface meteorological data collected from station pkyf1 by Everglades National Park (ENP) and assembled by Southeast Coastal Ocean Observing Regional Association (SECOORA) in the Coastal Waters of Florida, Gulf of Mexico and North Atlantic Ocean from 2014-02-13 to 2016-05-31 (NODC Accession 0118761)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NCEI Accession 0118761 contains oceanographic and surface meteorological data in netCDF formatted files, which follow the Climate and Forecast metadata convention...

  14. Oceanographic and surface meteorological data collected from station redfishpass by Sanibel-Captiva Conservation Foundation River, Estuary and Coastal Observing Network (SCCF) and assembled by Southeast Coastal Ocean Observing Regional Association (SECOORA) in the Coastal Waters of Florida, Gulf of Mexico and North Atlantic Ocean from 2014-02-13 to 2016-05-31 (NODC Accession 0118783)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NCEI Accession 0118783 contains oceanographic and surface meteorological data in netCDF formatted files, which follow the Climate and Forecast metadata convention...

  15. Oceanographic and surface meteorological data collected from station NOAA_RSC_A by Regional Science Consortium and assembled by Great Lakes Observing System (GLOS) in the Great Lakes region from 2014-07-11 to 2016-06-30 (NODC Accession 0123653)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NCEI Accession 0123653 contains oceanographic and surface meteorological data in netCDF formatted files, which follow the Climate and Forecast metadata convention...

  16. Oceanographic and surface meteorological data collected from station jkyf1 by Everglades National Park (ENP) and assembled by Southeast Coastal Ocean Observing Regional Association (SECOORA) in the Coastal Waters of Florida, Gulf of Mexico and North Atlantic Ocean from 2014-02-13 to 2016-05-31 (NODC Accession 0118754)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NCEI Accession 0118754 contains oceanographic and surface meteorological data in netCDF formatted files, which follow the Climate and Forecast metadata convention...

  17. Oceanographic and surface meteorological data collected from Station 45028, Western Lake Superior, by University of Minnesota - Duluth and assembled by Great Lakes Observing System (GLOS) in the Great Lakes region from 2014-07-01 to 2016-06-30 (NODC Accession 0123649)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NCEI Accession 0123649 contains oceanographic and surface meteorological data in netCDF formatted files, which follow the Climate and Forecast metadata convention...

  18. Oceanographic and surface meteorological data collected from station frp2 by Carolinas Coastal Ocean Observing and Prediction System (Caro-COOPS) and assembled by Southeast Coastal Ocean Observing Regional Association (SECOORA) in the North Atlantic Ocean from 2014-02-13 to 2016-05-31 (NODC Accession 0118736)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NCEI Accession 0118736 contains oceanographic and surface meteorological data in netCDF formatted files, which follow the Climate and Forecast metadata convention...

  19. Oceanographic and surface meteorological data collected from Dunkirk Buoy, Lake Erie, by State University of New York College of Environmental Science and Forestry and assembled by Great Lakes Observing System (GLOS) in the Great Lakes region from 2014-07-01 to 2015-10-26 (NODC Accession 0123655)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NCEI Accession 0123655 contains oceanographic and surface meteorological data in netCDF formatted files, which follow the Climate and Forecast metadata convention...

  20. Oceanographic and surface meteorological data collected from station nfb by University of South Florida (USF) Coastal Ocean Monitoring and Prediction System (USF) and assembled by Southeast Coastal Ocean Observing Regional Association (SECOORA) in the Coastal Waters of Florida, Gulf of Mexico and North Atlantic Ocean from 2014-02-13 to 2015-01-29 (NODC Accession 0118790)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NODC Accession 0118790 contains oceanographic and surface meteorological data in netCDF formatted files, which follow the Climate and Forecast metadata convention...

  1. Oceanographic and surface meteorological data collected from University of Michigan Marine Hydrodynamics Laboratories Bio Buoy by University of Michigan and assembled by Great Lakes Observing System (GLOS) in the Great Lakes region from 2014-07-01 to 2017-08-31 (NCEI Accession 0123660)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NCEI Accession 0123660 contains oceanographic and surface meteorological data in netCDF formatted files, which follow the Climate and Forecast metadata convention...

  2. Oceanographic and surface meteorological data collected from University of Michigan Marine Hydrodynamics Laboratories Bio Buoy by University of Michigan and assembled by Great Lakes Observing System (GLOS) in the Great Lakes region from 2014-07-01 to 2017-08-31 (NODC Accession 0123645)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NCEI Accession 0123645 contains oceanographic and surface meteorological data in netCDF formatted files, which follow the Climate and Forecast metadata convention...

  3. Oceanographic and surface meteorological data collected from station apk by University of South Florida (USF) Coastal Ocean Monitoring and Prediction System (USF) and assembled by Southeast Coastal Ocean Observing Regional Association (SECOORA) in the Coastal Waters of Florida and North Atlantic Ocean from 2014-02-13 to 2015-01-29 (NODC Accession 0118740)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NODC Accession 0118740 contains oceanographic and surface meteorological data in netCDF formatted files, which follow the Climate and Forecast metadata convention...

  4. Oceanographic and surface meteorological data collected from station buffalobluff by Florida Department of Environmental Protection (FLDEP) and assembled by Southeast Coastal Ocean Observing Regional Association (SECOORA) in the Coastal Waters of Florida and North Atlantic Ocean from 2014-03-07 to 2016-04-28 (NODC Accession 0118771)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NCEI Accession 0118771 contains oceanographic and surface meteorological data in netCDF formatted files, which follow the Climate and Forecast metadata convention...

  5. Figure5

    Data.gov (United States)

    U.S. Environmental Protection Agency — This is an R statistics package script that allows the reproduction of Figure 5. The script includes the links to large NetCDF files that the figures access for O3,...

  6. LBA-ECO CD-01 Simulated Atmospheric Circulation, CO2 Variation, Tapajos: August 2001

    Data.gov (United States)

    National Aeronautics and Space Administration — This data set consists of a single NetCDF file containing simulated three dimensional winds and CO2 concentrations centered on the Tapajos National Forest...

  7. LBA-ECO CD-01 Simulated Atmospheric Circulation, CO2 Variation, Tapajos: August 2001

    Data.gov (United States)

    National Aeronautics and Space Administration — This data set consists of a single NetCDF file containing simulated three dimensional winds and CO2 concentrations centered on the Tapajos National Forest in Brazil...

  8. Oceanographic and surface meteorological data collected from station RECON Erie, Cleveland (CLV), by Great Lakes Environmental Research Laboratory and assembled by Great Lakes Observing System (GLOS) in the Great Lakes region from 2014-07-24 to 2016-06-30 (NODC Accession 0123652)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NCEI Accession 0123652 contains oceanographic and surface meteorological data in netCDF formatted files, which follow the Climate and Forecast metadata convention...

  9. Biological, chemical, physical and time series data collected from station WQB04 by University of Hawai'i at Hilo and assembled by Pacific Islands Ocean Observing System (PacIOOS) in the North Pacific Ocean from 2010-10-23 to 2016-12-31 (NCEI Accession 0161523)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NCEI Accession 0161523 contains biological, chemical, physical and time series data in netCDF formatted files, which follow the Climate and Forecast metadata...

  10. Physical oceanographic data collected from moorings deployed at Double Point by Gulf of the Farallones National Marine Sanctuary (GFNMS) and Bodega Marine Laboratory (BML) in the North Pacific Ocean from 2007-05-30 to 2011-08-18 (NODC Accession 0104199)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — These are netCDF format data collected by GFNMS and BML to understand the physical processes at Double Point and their potential effects on marine ecology. The...

  11. Ensemble standard deviation of wind speed and direction of the FDDA input to WRF

    Data.gov (United States)

    U.S. Environmental Protection Agency — NetCDF file of the SREF standard deviation of wind speed and direction that was used to inject variability in the FDDA input. Variable U_NDG_OLD contains standard...

  12. Figure4

    Data.gov (United States)

    U.S. Environmental Protection Agency — NetCDF files of PBL height (m), Shortwave Radiation, 10 m wind speed from WRF and Ozone from CMAQ. The data is the standard deviation of these variables for each...

  13. AIRS-CloudSat cloud mask, radar reflectivities, and cloud classification matchups V3.2

    Data.gov (United States)

    National Aeronautics and Space Administration — This is an AIRS-CloudSat collocated subset, in NetCDF-4 format. These data contain collocated: AIRS Level 1b radiance spectra, CloudSat radar reflectivities, and MODIS...

  14. AIRS-AMSU variables-CloudSat cloud mask, radar reflectivities, and cloud classification matchups V3.2

    Data.gov (United States)

    National Aeronautics and Space Administration — This is an AIRS-CloudSat collocated subset, in NetCDF-4 format. These data contain collocated: AIRS/AMSU retrievals at AMSU footprints, CloudSat radar reflectivities,...

  15. AIRS-CloudSat cloud mask, radar reflectivities, and cloud classification matchups V3.2 (AIRS_CPR_MAT) at GES DISC

    Data.gov (United States)

    National Aeronautics and Space Administration — This is an AIRS-CloudSat collocated subset, in NetCDF-4 format. These data contain collocated: AIRS Level 1b radiance spectra, CloudSat radar reflectivities, and MODIS...

  16. AIRS-AMSU variables-CloudSat cloud mask, radar reflectivities, and cloud classification matchups V3.2 (AIRSM_CPR_MAT) at GES DISC

    Data.gov (United States)

    National Aeronautics and Space Administration — This is an AIRS-CloudSat collocated subset, in NetCDF-4 format. These data contain collocated: AIRS/AMSU retrievals at AMSU footprints, CloudSat radar reflectivities,...

  17. Oceanographic and surface meteorological data collected from station redbaypoint by Florida Department of Environmental Protection (FLDEP) and assembled by Southeast Coastal Ocean Observing Regional Association (SECOORA) in the Coastal Waters of Florida and North Atlantic Ocean from 2014-02-13 to 2016-04-28 (NODC Accession 0118778)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NCEI Accession 0118778 contains oceanographic and surface meteorological data in netCDF formatted files, which follow the Climate and Forecast metadata convention...

  18. Oceanographic and surface meteorological data collected from station gulfofmexico by Sanibel-Captiva Conservation Foundation River, Estuary and Coastal Observing Network (SCCF) and assembled by Southeast Coastal Ocean Observing Regional Association (SECOORA) in the Coastal Waters of Florida, Gulf of Mexico and North Atlantic Ocean from 2014-02-13 to 2016-05-31 (NODC Accession 0118782)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NCEI Accession 0118782 contains oceanographic and surface meteorological data in netCDF formatted files, which follow the Climate and Forecast metadata convention...

  19. Oceanographic and surface meteorological data collected from Gibraltar Island Station by Ohio State University; Stone Laboratory and assembled by Great Lakes Observing System (GLOS) in the Great Lakes region from 2015-05-26 to 2017-08-31 (NCEI Accession 0130545)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NCEI Accession 0130545 contains oceanographic and surface meteorological data in netCDF formatted files, which follow the Climate and Forecast metadata convention...

  20. Oceanographic and surface meteorological data collected from Toledo Low Service Pump Station by LimnoTech and assembled by Great Lakes Observing System (GLOS) in the Great Lakes region from 2015-05-12 to 2017-08-31 (NCEI Accession 0130072)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NCEI Accession 0130072 contains oceanographic and surface meteorological data in netCDF formatted files, which follow the Climate and Forecast metadata convention...

  1. Oceanographic and surface meteorological data collected from Ottawa County Pump Station by Ottawa County Regional Water Treatment Plant and assembled by Great Lakes Observing System (GLOS) in the Great Lakes region from 2015-06-28 to 2017-08-31 (NCEI Accession 0130587)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NCEI Accession 0130587 contains oceanographic and surface meteorological data in netCDF formatted files, which follow the Climate and Forecast metadata convention...

  2. Oceanographic and surface meteorological data collected from station bgsusd2, Sandusky Bay 2, by Bowling Green State University and assembled by Great Lakes Observing System (GLOS) in the Great Lakes region from 2017-06-10 to 2017-08-31 (NCEI Accession 0163831)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NCEI Accession 0163831 contains oceanographic and surface meteorological data in netCDF formatted files, which follow the Climate and Forecast metadata convention...

  3. Oceanographic and surface meteorological data collected from station Little Cedar Point by University of Toledo and assembled by Great Lakes Observing System (GLOS) in the Great Lakes region from 2015-07-03 to 2017-08-31 (NCEI Accession 0155545)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NCEI Accession 0155545 contains oceanographic and surface meteorological data in netCDF formatted files, which follow the Climate and Forecast metadata convention...

  4. Oceanographic and surface meteorological data collected from station City of Toledo Water Intake Crib by LimnoTech and assembled by Great Lakes Observing System (GLOS) in the Great Lakes region from 2015-05-20 to 2017-08-31 (NCEI Accession 0130548)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NCEI Accession 0130548 contains oceanographic and surface meteorological data in netCDF formatted files, which follow the Climate and Forecast metadata convention...

  5. Oceanographic and surface meteorological data collected from Oregon Pump Station by City of Oregon and assembled by Great Lakes Observing System (GLOS) in the Great Lakes region from 2015-06-20 to 2017-08-31 (NCEI Accession 0130547)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NCEI Accession 0130547 contains oceanographic and surface meteorological data in netCDF formatted files, which follow the Climate and Forecast metadata convention...

  6. Oceanographic and surface meteorological data collected from Avon Lake Pump Station by Avon Lake Regional Water and assembled by Great Lakes Observing System (GLOS) in the Great Lakes region from 2015-06-28 to 2017-08-31 (NCEI Accession 0130546)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NCEI Accession 0130546 contains oceanographic and surface meteorological data in netCDF formatted files, which follow the Climate and Forecast metadata convention...

  7. Oceanographic and surface meteorological data collected from station Sandusky Bay by Bowling Green State University and assembled by Great Lakes Observing System (GLOS) in the Great Lakes region from 2015-07-04 to 2017-08-31 (NCEI Accession 0155656)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NCEI Accession 0155656 contains oceanographic and surface meteorological data in netCDF formatted files, which follow the Climate and Forecast metadata convention...

  8. Oceanographic and surface meteorological data collected from station 45165, Monroe, MI, by LimnoTech and assembled by Great Lakes Observing System (GLOS) in the Great Lakes region from 2014-08-07 to 2017-08-31 (NODC Accession 0123661)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NCEI Accession 0123661 contains oceanographic and surface meteorological data in netCDF formatted files, which follow the Climate and Forecast metadata convention...

  9. Aqua AIRS-MODIS Matchup Indexes V1.0 (AIRS_MDS_IND) at GES_DISC

    Data.gov (United States)

    National Aeronautics and Space Administration — These are Aqua AIRS-MODIS collocation indexes, in netCDF-4 format. These data map AIRS profile indexes to those of MODIS. The basic task is to bring together...

  10. Oceanographic and surface meteorological data collected from station ilm3 by Coastal Ocean Research and Monitoring Program (CORMP) and assembled by Southeast Coastal Ocean Observing Regional Association (SECOORA) in the North Atlantic Ocean from 2014-02-13 to 2016-02-01 (NODC Accession 0118742)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Accession 0118742 contains oceanographic and surface meteorological data in netCDF formatted files, which follow the Climate and Forecast metadata convention (CF)...

  11. Oceanographic and surface meteorological data collected from station RECON Michigan, Muskegon, by Great Lakes Environmental Research Laboratory and assembled by Great Lakes Observing System (GLOS) in the Great Lakes region from 2014-07-23 to 2017-08-31 (NODC Accession 0123651)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NCEI Accession 0123651 contains oceanographic and surface meteorological data in netCDF formatted files, which follow the Climate and Forecast metadata convention...

  12. Oceanographic and surface meteorological data collected from station ATW20 by University of Wisconsin-Milwaukee and assembled by Great Lakes Observing System (GLOS) in the Great Lakes region from 2014-07-01 to 2017-08-31 (NODC Accession 0123639)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NCEI Accession 0123639 contains oceanographic and surface meteorological data in netCDF formatted files, which follow the Climate and Forecast metadata convention...

  13. Oceanographic and surface meteorological data collected from station lbrf1 by Everglades National Park (ENP) and assembled by Southeast Coastal Ocean Observing Regional Association (SECOORA) in the Coastal Waters of Florida and North Atlantic Ocean from 2014-02-13 to 2016-05-31 (NODC Accession 0118755)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NCEI Accession 0118755 contains oceanographic and surface meteorological data in netCDF formatted files, which follow the Climate and Forecast metadata convention...

  14. Gridded bathymetry of Kaneohe Bay, Windward Side Oahu, Main Hawaiian Islands, USA.

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — 4-m grid of bathymetric data of Kaneohe Bay, Windward Side Oahu, Main Hawaiian Islands, USA. These netCDF and ASCII grids include multibeam bathymetry from the Reson...

  15. Six Kilometer Coastal Ocean Current Predictions, Region 9, 2014, US EPA Region 9

    Data.gov (United States)

    U.S. Environmental Protection Agency — This data is derived from the NetCDF files that come from http://hfrnet.ucsd.edu/. EPA Region 9 has developed a series of python scripts to download the data hourly,...

  16. Oceanographic and surface meteorological data collected from station lobo by Florida Atlantic University (FAU) Land/Ocean Biogeochemical Observatory (LOBO) (FAU) and assembled by Southeast Coastal Ocean Observing Regional Association (SECOORA) in the Coastal Waters of Florida and North Atlantic Ocean from 2014-02-21 to 2014-11-04 (NODC Accession 0118768)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NODC Accession 0118768 contains oceanographic and surface meteorological data in netCDF formatted files, which follow the Climate and Forecast metadata convention...

  17. Oceanographic and surface meteorological data collected from station gordonriverinlet by Florida Department of Environmental Protection (FLDEP) and assembled by Southeast Coastal Ocean Observing Regional Association (SECOORA) in the Coastal Waters of Florida, Gulf of Mexico and North Atlantic Ocean from 2014-02-13 to 2016-04-29 (NODC Accession 0118772)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NCEI Accession 0118772 contains oceanographic and surface meteorological data in netCDF formatted files, which follow the Climate and Forecast metadata convention...

  18. Oceanographic and surface meteorological data collected from station lrkf1 by Everglades National Park (ENP) and assembled by Southeast Coastal Ocean Observing Regional Association (SECOORA) in the Coastal Waters of Florida, Gulf of Mexico and North Atlantic Ocean from 2014-02-13 to 2016-05-31 (NODC Accession 0118759)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NCEI Accession 0118759 contains oceanographic and surface meteorological data in netCDF formatted files, which follow the Climate and Forecast metadata convention...

  19. Oceanographic and surface meteorological data collected from station 2ndave by Long Bay Hypoxia Monitoring Consortium (LBHMC) and assembled by Southeast Coastal Ocean Observing Regional Association (SECOORA) in the North Atlantic Ocean from 2014-02-13 to 2015-06-01 (NODC Accession 0118793)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NODC Accession 0118793 contains oceanographic and surface meteorological data in netCDF formatted files, which follow the Climate and Forecast metadata convention...

  20. Physical oceanographic data collected from moorings deployed at Southeast Farallon Island by Gulf of the Farallones National Marine Sanctuary (GFNMS) and Bodega Marine Laboratory (BML) in the North Pacific Ocean from 2005-06-27 to 2011-08-19 (NODC Accession 0104198)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — These are netCDF format data collected by GFNMS and BML to understand the physical processes at Southeast Farallon Island and their potential effects on marine...

  1. GPM GROUND VALIDATION NOAA S-BAND PROFILER ORIGINAL DWELL DATA MC3E V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The S-band Profiler Original Dwell dataset in the netCDF format was gathered during the Midlatitude Continental Convective Clouds Experiment (MC3E) in Oklahoma...

  2. Oceanographic and surface meteorological data collected from station Perdido Pass, AL by Dauphin Island Sea Laboratory (DISL) and assembled by Gulf of Mexico Coastal Ocean Observing System (GCOOS) in the Coastal waters of Alabama and Gulf of Mexico from 2011-11-07 to 2017-04-30 (NCEI Accession 0163767)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NCEI Accession 0163767 contains oceanographic and surface meteorological data in netCDF formatted files, which follow the Climate and Forecast metadata convention...

  3. Oceanographic and surface meteorological data collected from U-GLOS Station 45026, Near Cook Nuclear Plant, by LimnoTech and assembled by Great Lakes Observing System (GLOS) in the Great Lakes region from 2014-07-01 to 2017-08-31 (NODC Accession 0123647)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NCEI Accession 0123647 contains oceanographic and surface meteorological data in netCDF formatted files, which follow the Climate and Forecast metadata convention...

  4. Figures6&7_Tables2&3

    Data.gov (United States)

    U.S. Environmental Protection Agency — This dataset comprises three netCDF formatted files containing simulation model results used to produce Figures 6 and 7 and Tables 3 and 4. These data can be accessed...

  5. Figure 4, Cropland Reallocation

    Data.gov (United States)

    U.S. Environmental Protection Agency — This is a netCDF formatted data file. All data values are reported as grid cell area percent (%), since all simulation grid cells are of uniform area. Reallocation...

  6. Underway sea surface temperature and salinity data from thermosalinographs collected from multiple platforms assembled by NOAA Atlantic Oceanographic and Meteorological Laboratory (AOML)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This collection contains sea surface oceanographic data in netCDF and ASCII formatted files assembled by the NOAA Atlantic Oceanographic and Meteorological...

  7. Two Kilometer Coastal Ocean Current Predictions, Region 9, 2014, US EPA Region 9

    Data.gov (United States)

    U.S. Environmental Protection Agency — This data is derived from the NetCDF files that come from http://hfrnet.ucsd.edu/. EPA Region 9 has developed a series of python scripts to download the data hourly,...

  8. 5 m Gridded multibeam bathymetry of Rota Island, Commonwealth of the Northern Mariana Islands (CNMI)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Gridded multibeam bathymetry of Rota Island, CNMI. Bottom coverage was achieved in depths between 0 and -1905 meters; this 5-m grid has data only to -400 m. The netCDF and Arc ASCII grids include...

  9. Gridded multibeam bathymetry and SHOALS LIDAR bathymetry of Penguin Bank, Hawaii, USA

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Gridded bathymetry (5 m cell size) of Penguin Bank, Hawaii, USA. The netCDF grid and ArcGIS ASCII file include multibeam bathymetry from the Simrad EM3002d, and...

  10. Gridded bathymetry of Penguin Bank, Hawaii, USA

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Gridded bathymetry (5 m cell size) of Penguin Bank, Hawaii, USA. The netCDF grid and ArcGIS ASCII file include multibeam bathymetry from the Simrad EM3002d, and...

  11. Multibeam Bathymetric Gridded Data for selected U.S. locations in the Pacific since 2003

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Gridded bathymetry for selected U.S. locations in the Pacific. The netCDF and Arc ASCII grids include multibeam bathymetry from the Simrad EM300, Simrad EM3002D, and...

  12. A data model of the Climate and Forecast metadata conventions (CF-1.6) with a software implementation (cf-python v2.1)

    Directory of Open Access Journals (Sweden)

    D. Hassell

    2017-12-01

    The CF (Climate and Forecast) metadata conventions are designed to promote the creation, processing, and sharing of climate and forecasting data using Network Common Data Form (netCDF) files and libraries. The CF conventions provide a description of the physical meaning of data and of their spatial and temporal properties, but they depend on the netCDF file encoding which can currently only be fully understood and interpreted by someone familiar with the rules and relationships specified in the conventions documentation. To aid in development of CF-compliant software and to capture with a minimal set of elements all of the information contained in the CF conventions, we propose a formal data model for CF which is independent of netCDF and describes all possible CF-compliant data. Because such data will often be analysed and visualised using software based on other data models, we compare our CF data model with the ISO 19123 coverage model, the Open Geospatial Consortium CF netCDF standard, and the Unidata Common Data Model. To demonstrate that this CF data model can in fact be implemented, we present cf-python, a Python software library that conforms to the model and can manipulate any CF-compliant dataset.

  13. A data model of the Climate and Forecast metadata conventions (CF-1.6) with a software implementation (cf-python v2.1)

    Science.gov (United States)

    Hassell, David; Gregory, Jonathan; Blower, Jon; Lawrence, Bryan N.; Taylor, Karl E.

    2017-12-01

    The CF (Climate and Forecast) metadata conventions are designed to promote the creation, processing, and sharing of climate and forecasting data using Network Common Data Form (netCDF) files and libraries. The CF conventions provide a description of the physical meaning of data and of their spatial and temporal properties, but they depend on the netCDF file encoding which can currently only be fully understood and interpreted by someone familiar with the rules and relationships specified in the conventions documentation. To aid in development of CF-compliant software and to capture with a minimal set of elements all of the information contained in the CF conventions, we propose a formal data model for CF which is independent of netCDF and describes all possible CF-compliant data. Because such data will often be analysed and visualised using software based on other data models, we compare our CF data model with the ISO 19123 coverage model, the Open Geospatial Consortium CF netCDF standard, and the Unidata Common Data Model. To demonstrate that this CF data model can in fact be implemented, we present cf-python, a Python software library that conforms to the model and can manipulate any CF-compliant dataset.
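
    As a concrete illustration of the library these two records describe, here is a minimal cf-python sketch, assuming a recent cf-python release is installed and that a CF-compliant netCDF file exists at the hypothetical path example.nc:

        import cf

        # cf.read returns a list-like FieldList of CF fields built from the data model.
        fields = cf.read("example.nc")      # "example.nc" is a hypothetical path
        for f in fields:
            print(f)                        # summary derived from the CF data model

        # Writing back out round-trips the CF metadata through the data model.
        cf.write(fields, "copy.nc")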

  14. GHRSST v2 Level 3U Global Skin Sea Surface Temperature from the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi NPP satellite created by the NOAA Advanced Clear-Sky Processor for Ocean (ACSPO) (GDS version 2)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The ACSPO VIIRS L3U (Level 3 Uncollated) product is a gridded version of the ACSPO VIIRS L2P product. Data files are 10-minute granules in netCDF4 format compliant with...

  15. Figure 9

    Data.gov (United States)

    U.S. Environmental Protection Agency — This is a NetCDF file in ioapi format that contains the probability that ozone is above the 8 hr max O3 standard for the four days of the simulation. This dataset is...

  16. Gridded bathymetry of Niihau Island, Hawaii, USA

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Gridded bathymetry of the shelf, bank, and slope environments of Ni'ihau Island. This 5 m grid contains data between 0 and 100 meters. The netCDF and Arc ASCII grids include...

  17. A Prototype Web-based system for GOES-R Space Weather Data

    Science.gov (United States)

    Sundaravel, A.; Wilkinson, D. C.

    2010-12-01

    The Geostationary Operational Environmental Satellite-R Series (GOES-R) makes use of advanced instruments and technologies to monitor the Earth's surface and provide accurate space weather data. The first GOES-R series satellite is scheduled to be launched in 2015. The data from the satellite will be widely used by scientists for space weather modeling and predictions. This project looks into how these datasets can be made available to scientists on the Web and how to assist them in their research. We are developing a prototype web-based system that allows users to browse, search and download these data. The GOES-R datasets will be archived in NetCDF (Network Common Data Form) and CSV (Comma Separated Values) format. NetCDF is a self-describing data format that contains both the metadata information and the data, stored in an array-oriented fashion. The web-based system will offer services in two ways: via a web application (portal) and via web services. Using the web application, users can download data in NetCDF or CSV format and can also plot a graph of the data. The web page displays the various categories of data and the time intervals for which the data are available. The web application (client) sends the user query to the server, which then connects to the data sources to retrieve the data and delivers it to the users. Data access will also be provided via SOAP (Simple Object Access Protocol) and REST (Representational State Transfer) web services. These provide functions which can be used by other applications to fetch data and use it for further processing. To build the prototype system, we are making use of proxy data from existing GOES and POES space weather datasets. Java is the programming language used in developing tools that format data to NetCDF and CSV. For the web technology we have chosen Grails to develop both the web application and the services. Grails is an open source web application

  18. Scientific Data Storage for Cloud Computing

    Science.gov (United States)

    Readey, J.

    2014-12-01

    Traditionally, data storage used for geophysical software systems has centered on file-based systems and libraries such as NetCDF and HDF5. In contrast, cloud-based infrastructure providers such as Amazon AWS, Microsoft Azure, and the Google Cloud Platform generally provide storage technologies based on an object-based storage service (for large binary objects) complemented by a database service (for small objects that can be represented as key-value pairs). These systems have been shown to be highly scalable, reliable, and cost effective. We will discuss a proposed system that leverages these cloud-based storage technologies to provide an API-compatible library for traditional NetCDF and HDF5 applications. This system will enable cloud storage suitable for geophysical applications that can scale up to petabytes of data and thousands of users. We'll also cover other advantages of this system such as enhanced metadata search.
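
    The storage split this abstract proposes (large binary objects in an object store, small key-value objects in a database) can be sketched with boto3; the bucket name, table name, keys, and attributes below are invented for illustration, not part of the proposed system:

        import boto3

        s3 = boto3.client("s3")
        db = boto3.client("dynamodb")

        # A large binary chunk (e.g., one array chunk of a netCDF/HDF5 variable)
        # goes to the object store.
        chunk_bytes = b"\x00" * 1024  # stand-in for real chunk data
        s3.put_object(Bucket="my-science-data", Key="dataset1/chunk_0_0", Body=chunk_bytes)

        # Small metadata (shape, dtype, attributes) goes to the key-value database.
        db.put_item(
            TableName="dataset-metadata",
            Item={
                "object_key": {"S": "dataset1/chunk_0_0"},
                "dtype": {"S": "float32"},
                "shape": {"S": "1024x1024"},
            },
        )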

  19. The ISMAR high frequency coastal radar network: Monitoring surface currents for management of marine resources

    DEFF Research Database (Denmark)

    Carlson, Daniel Frazier

    2015-01-01

    The Institute of Marine Sciences (ISMAR) of the National Research Council of Italy (CNR) established a High Frequency (HF) Coastal Radar Network for the measurement of the velocity of surface currents in coastal seas. The network consists of four HF radar systems located on the coast of the Gargano Promontory (Southern Adriatic, Italy). The network has been operational since May 2013 and covers an area of approximately 1700 square kilometers in the Gulf of Manfredonia. Quality Assessment (QA) procedures are applied for the systems' deployment and maintenance, and Quality Control (QC) procedures ... of geospatial data, a netCDF architecture has been defined on the basis of the Radiowave Operators Working Group (US ROWG) recommendations, compliant with the Climate and Forecast (CF) Metadata Conventions CF-1.6. The hourly netCDF files are automatically attached to a Thematic Real-time Environmental...

  20. Supporting Meteorological Field Experiment Missions and Postmission Analysis with Satellite Digital Data and Products

    Science.gov (United States)

    2011-08-01

    [Table of satellite data sources and formats, recovered from the original: AVHRR channels 1-5, 4 km, raw L1B, via the NOAA CLASS system; DMSP OLS VIS/IR, 2.8 km, raw netCDF, via FNMOC/AFWA; MODIS VIS/IR, 1 km, albedo/brightness temperature (°C), netCDF, via NOAA NRTPE; MTSAT-1R channels 3-4, 4 km, 30 min, JPEG; MTSAT-1R low-level/upper-level cloud-drift winds, 6-hourly, TIFF; MODIS bands 1-2 true color, 250 m, ~2 per day, JPEG.] ...coincident MTSAT, Moderate Resolution Imaging Spectroradiometer (MODIS) VIS/IR imagery, and AMSR-E 89-GHz brightness temperatures (Mitrescu et al...

  1. Obtaining and processing Daymet data using Python and ArcGIS

    Science.gov (United States)

    Bohms, Stefanie

    2013-01-01

    This set of scripts was developed to automate the process of downloading and mosaicking daily Daymet data to a user-defined extent using ArcGIS and the Python programming language. The three steps are downloading the needed Daymet tiles for the study area extent, converting the netCDF files to a TIFF raster format, and mosaicking those rasters to one file. The set of scripts is intended for all levels of experience with the Python programming language and requires no scripting by the user.
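
    The same three steps can be sketched with open-source Python libraries in place of arcpy; the tile IDs, URL pattern, variable name, and CRS below are illustrative assumptions, not the actual values used by the scripts:

        import requests
        import xarray as xr
        import rioxarray  # noqa: F401  (registers the .rio accessor on xarray objects)
        import rasterio
        from rasterio.merge import merge

        TILES = [11934, 11935]  # hypothetical Daymet tile IDs covering a study area
        URL = "https://example.org/daymet/prcp_{tile}_2020.nc"  # assumed URL pattern

        tifs = []
        for tile in TILES:
            nc_path = f"daymet_{tile}.nc"
            with open(nc_path, "wb") as f:
                f.write(requests.get(URL.format(tile=tile), timeout=60).content)
            # Convert one day of the 'prcp' variable to a GeoTIFF.
            da = xr.open_dataset(nc_path)["prcp"].isel(time=0)
            da = da.rio.write_crs("EPSG:4326")  # assumption; Daymet really uses an LCC projection
            tif_path = f"daymet_{tile}.tif"
            da.rio.to_raster(tif_path)
            tifs.append(tif_path)

        # Mosaic the per-tile rasters into a single file.
        sources = [rasterio.open(p) for p in tifs]
        mosaic, transform = merge(sources)
        profile = sources[0].profile
        profile.update(height=mosaic.shape[1], width=mosaic.shape[2],
                       transform=transform, count=mosaic.shape[0])
        with rasterio.open("daymet_mosaic.tif", "w", **profile) as dst:
            dst.write(mosaic)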

  2. A Debris Backwards Flow Simulation System for Malaysia Airlines Flight 370

    OpenAIRE

    Eichhorn, Mike; Haertel, Alexander

    2017-01-01

    This paper presents a system based on a Two-Way Particle-Tracking Model to analyze possible crash positions of flight MH370. The particle simulator includes a simple flow simulation of the debris based on a Lagrangian approach and a module to extract appropriate ocean current data from netCDF files. The influence of wind, waves, immersion depth and hydrodynamic behavior is not considered in the simulation.
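
    The Lagrangian drift step the abstract describes can be sketched as follows; the file name, variable names, and grid layout are assumptions, and a real two-way model would integrate backwards in time (negative dt) with proper interpolation:

        import numpy as np
        from netCDF4 import Dataset

        ds = Dataset("currents.nc")          # hypothetical surface-current file
        lon = ds.variables["lon"][:]         # assumed coordinate names
        lat = ds.variables["lat"][:]
        u = ds.variables["u"][0, :, :]       # eastward velocity (m/s), first time step
        v = ds.variables["v"][0, :, :]       # northward velocity (m/s)

        # One particle; explicit Euler steps with nearest-neighbour sampling.
        plon, plat = 95.0, -30.0
        dt = 3600.0                          # seconds; negate for backward tracking
        deg_per_m = 1.0 / 111000.0           # crude metres-to-degrees conversion
        for _ in range(24):
            i = int(np.abs(lat - plat).argmin())
            j = int(np.abs(lon - plon).argmin())
            plon += float(u[i, j]) * dt * deg_per_m / np.cos(np.radians(plat))
            plat += float(v[i, j]) * dt * deg_per_m

        print(plon, plat)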

  3. Reformatting Meteorological Data for use in the Hazard Prediction and Assessment Capability

    Science.gov (United States)

    2004-11-01

    forecast data from mesoscale model runs. In Australia, this meteorological data is produced by the Bureau of Meteorology (BoM). HPAC was developed in the ... be interpreted by HPAC. The Bureau of Meteorology collects large amounts of observational data from across the country and uses Numerical Weather ... supplied by the Bureau of Meteorology are stored in a binary form and contain the variables listed in Table 3, "Current NetCDF format".

  4. Hydratools, a MATLAB® based data processing package for Sontek Hydra data

    Science.gov (United States)

    Martini, M.; Lightsom, F.L.; Sherwood, C.R.; Xu, Jie; Lacy, J.R.; Ramsey, A.; Horwitz, R.

    2005-01-01

    The U.S. Geological Survey (USGS) has developed a set of MATLAB tools to process and convert data collected by Sontek Hydra instruments to netCDF, which is a format used by the USGS to process and archive oceanographic time-series data. The USGS makes high-resolution current measurements within 1.5 meters of the bottom. These data are used in combination with other instrument data from sediment transport studies to develop sediment transport models. Instrument manufacturers provide software which outputs unique binary data formats. Multiple data formats are cumbersome. The USGS solution is to translate data streams into a common data format: netCDF. The Hydratools toolbox is written to create netCDF format files following EPIC conventions, complete with embedded metadata. Data are accepted from both the ADV and the PCADP. The toolbox will detect and remove bad data, substitute other sources of heading and tilt measurements if necessary, apply ambiguity corrections, calculate statistics, return information about data quality, and organize metadata. Standardized processing and archiving makes these data more easily and routinely accessible locally and over the Internet. In addition, documentation of the techniques used in the toolbox provides a baseline reference for others utilizing the data.
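
    The toolbox itself is MATLAB, but the kind of self-describing netCDF time series it produces can be sketched in Python with the netCDF4 library; the variable names and attributes below are illustrative and do not reproduce the actual EPIC conventions:

        import numpy as np
        from netCDF4 import Dataset

        nc = Dataset("adv_timeseries.nc", "w")   # hypothetical output file
        nc.createDimension("time", None)
        t = nc.createVariable("time", "f8", ("time",))
        spd = nc.createVariable("current_speed", "f4", ("time",))

        # Embedded metadata travels with the data, as in the Hydratools output.
        nc.instrument = "Sontek ADV"             # illustrative global attributes
        nc.mooring_height_m = 1.5
        t.units = "seconds since 2005-01-01 00:00:00"
        spd.units = "m s-1"

        t[:] = np.arange(0, 3600, 60)
        spd[:] = np.random.default_rng(0).normal(0.2, 0.05, 60)
        nc.close()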

  5. The Weather and Climate Toolkit

    Science.gov (United States)

    Ansari, S.; Del Greco, S.; Hankins, B.

    2010-12-01

    The Weather and Climate Toolkit (WCT) is free, platform independent software distributed from NOAA's National Climatic Data Center (NCDC). The WCT allows the visualization and data export of weather and climate data, including Radar, Satellite and Model data. By leveraging the NetCDF for Java library and Common Data Model, the WCT is extremely scalable and capable of supporting many new datasets in the future. Gridded NetCDF files (regular and irregularly spaced, using Climate-Forecast (CF) conventions) are supported, along with many other formats including GRIB. The WCT provides tools for custom data overlays, Web Map Service (WMS) background maps, animations and basic filtering. The export of images and movies is provided in multiple formats. The WCT Data Export Wizard allows for data export in both vector polygon/point (Shapefile, Well-Known Text) and raster (GeoTIFF, ESRI Grid, VTK, Gridded NetCDF) formats. These data export features promote the interoperability of weather and climate information with various scientific communities and common software packages such as ArcGIS, Google Earth, MATLAB, GrADS and R. The WCT also supports an embedded, integrated Google Earth instance. The Google Earth Browser Plugin allows seamless visualization of data on a native 3-D Google Earth instance linked to the standard 2-D map. [Figure captions: Level-II NEXRAD data for Hurricane Katrina; GPCP (Global Precipitation Product), visualized in 2-D and in the internal Google Earth view.]

  6. Scientific Data Management Integrated Software Infrastructure Center

    Energy Technology Data Exchange (ETDEWEB)

    Choudhary, A.; Liao, W.K.

    2008-10-29

    This work provides software that enables scientific applications to access available storage resources more efficiently at different levels of interface. We developed scalable techniques and optimizations for PVFS parallel file systems, MPI I/O, and the parallel netCDF I/O library. These implementations were evaluated using production application I/O kernels as well as popular I/O benchmarks and demonstrated promising results. The software developed under this work has been made available to the public via MCS/ANL web sites.

  7. SchemaOnRead Manual

    Energy Technology Data Exchange (ETDEWEB)

    North, Michael J. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2015-09-30

    SchemaOnRead provides tools for implementing schema-on-read including a single function call (e.g., schemaOnRead("filename")) that reads text (TXT), comma separated value (CSV), raster image (BMP, PNG, GIF, TIFF, and JPG), R data (RDS), HDF5, NetCDF, spreadsheet (XLS, XLSX, ODS, and DIF), Weka Attribute-Relation File Format (ARFF), Epi Info (REC), Pajek network (PAJ), R network (NET), Hypertext Markup Language (HTML), SPSS (SAV), Systat (SYS), and Stata (DTA) files. It also recursively reads folders (e.g., schemaOnRead("folder")), returning a nested list of the contained elements.
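
    SchemaOnRead is an R package, but the dispatch-by-extension idea it implements generalizes; a minimal Python analog (the reader map below is illustrative, not the package's actual behavior) might look like:

        import json
        import pathlib

        import pandas as pd
        from netCDF4 import Dataset

        READERS = {
            ".csv": pd.read_csv,
            ".json": lambda p: json.load(open(p)),
            ".nc": Dataset,                  # netCDF handled like any other source
        }

        def schema_on_read(path):
            """Pick a reader from the file extension, mirroring schemaOnRead()."""
            p = pathlib.Path(path)
            if p.is_dir():
                # Recurse into folders, returning a nested dict of contents.
                return {child.name: schema_on_read(child) for child in p.iterdir()}
            reader = READERS.get(p.suffix.lower())
            return reader(str(p)) if reader is not None else None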

  8. IVS Working Group 4: VLBI Data Structures

    Science.gov (United States)

    Gipson, J.

    2012-12-01

    I present an overview of the "openDB format" for storing, archiving, and processing VLBI data. In this scheme, most VLBI data is stored in NetCDF files. NetCDF has the advantage that there are interfaces to most common computer languages including Fortran, Fortran-90, C, C++, Perl, etc, and the most common operating systems including Linux, Windows, and Mac. The data files for a particular session are organized by special ASCII "wrapper" files which contain pointers to the data files. This allows great flexibility in the processing and analysis of VLBI data. For example it allows you to easily change subsets of the data used in the analysis such as troposphere modeling, ionospheric calibration, editing, and ambiguity resolution. It also allows for extending the types of data used, e.g., source maps. I present a roadmap to transition to this new format. The new format can already be used by VieVS and by the global mode of solve. There are plans in work for other software packages to be able to use the new format.
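
    The wrapper idea can be sketched as follows; the wrapper syntax and file names here are invented for illustration (the real openDB wrapper format is richer):

        from netCDF4 import Dataset

        # wrapper.txt (invented syntax): one NetCDF component file per line, e.g.
        #   GroupDelay.nc
        #   Troposphere.nc
        with open("wrapper.txt") as w:
            paths = [ln.strip() for ln in w if ln.strip() and not ln.startswith("#")]

        # Swapping one entry in the wrapper (say, a different troposphere model)
        # changes the analysis inputs without touching the data files themselves.
        session = {p: Dataset(p) for p in paths}
        for name, ds in session.items():
            print(name, list(ds.variables))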

  9. Situational Lightning Climatologies for Central Florida: Phase III

    Science.gov (United States)

    Barrett, Joe H., III

    2008-01-01

    This report describes work done by the Applied Meteorology Unit (AMU) to add composite soundings to the Advanced Weather Interactive Processing System (AWIPS). This allows National Weather Service (NWS) forecasters to compare the current atmospheric state with climatology. In a previous phase, the AMU created composite soundings for four rawinsonde observation stations in Florida, for each of eight flow regimes. The composite soundings were delivered to the NWS Melbourne (MLB) office for display using the NSHARP software program. NWS MLB requested that the AMU make the composite soundings available for display in AWIPS. The AMU first created a procedure to customize AWIPS so composite soundings could be displayed. A unique four-character identifier was created for each of the 32 composite soundings. The AMU wrote a Tool Command Language/Tool Kit (TcVTk) software program to convert the composite soundings from NSHARP to Network Common Data Form (NetCDF) format. The NetCDF files were then displayable by AWIPS.

  10. Advances in a distributed approach for ocean model data interoperability

    Science.gov (United States)

    Signell, Richard P.; Snowden, Derrick P.

    2014-01-01

    An infrastructure for earth science data is emerging across the globe based on common data models and web services. As we evolve from custom file formats and web sites to standards-based web services and tools, data is becoming easier to distribute, find and retrieve, leaving more time for science. We describe recent advances that make it easier for ocean model providers to share their data, and for users to search, access, analyze and visualize ocean data using MATLAB® and Python®. These include a technique for modelers to create aggregated, Climate and Forecast (CF) metadata convention datasets from collections of non-standard Network Common Data Form (NetCDF) output files, the capability to remotely access data from CF-1.6-compliant NetCDF files using the Open Geospatial Consortium (OGC) Sensor Observation Service (SOS), a metadata standard for unstructured grid model output (UGRID), and tools that utilize both CF and UGRID standards to allow interoperable data search, browse and access. We use examples from the U.S. Integrated Ocean Observing System (IOOS®) Coastal and Ocean Modeling Testbed, a project in which modelers using both structured and unstructured grid model output needed to share their results, to compare their results with other models, and to compare models with observed data. The same techniques used here for ocean modeling output can be applied to atmospheric and climate model output, remote sensing data, digital terrain and bathymetric data.
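
    For example, a CF-compliant model dataset served via OPeNDAP can be opened by URL rather than file path, assuming the local netCDF4-python build has OPeNDAP support; the endpoint and variable name below are hypothetical:

        from netCDF4 import Dataset

        url = "http://example.org/thredds/dodsC/ocean_model/best.nc"  # hypothetical endpoint
        ds = Dataset(url)                       # opens the remote dataset lazily
        sst = ds.variables["temp"][0, 0, :, :]  # assumed variable; only this slice is transferred
        print(sst.shape)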

  11. Data Container Study for Handling Array-based Data Using Rasdaman, Hive, Spark, and MongoDB

    Science.gov (United States)

    Xu, M.; Hu, F.; Yu, M.; Scheele, C.; Liu, K.; Huang, Q.; Yang, C. P.; Little, M. M.

    2016-12-01

    Geoscience communities have come up with various big data storage solutions, such as Rasdaman and Hive, to address the grand challenges of massive Earth observation data management and processing. To examine the readiness of current solutions to support big Earth observation data, we propose to investigate and compare four popular data container solutions: Rasdaman, Hive, Spark, and MongoDB. Using different types of spatial and non-spatial queries, datasets stored in common scientific data formats (e.g., NetCDF and HDF), and two applications (i.e., dust storm simulation data mining and MERRA data analytics), we systematically compare and evaluate the features and performance of these four data containers in terms of data discovery and access. The computing resources (e.g., CPU, memory, hard drive, network) consumed while performing various queries and operations are monitored and recorded for the performance evaluation. The initial results show that 1) Rasdaman has the best performance for queries on statistical and operational functions, and supports the NetCDF data format better than HDF; 2) Rasdaman's clustering configuration is more complex than the others'; 3) Hive performs better on single-pixel extraction from multiple images; and 4) except for single-pixel extractions, Spark performs better than Hive, and its performance is close to Rasdaman's. A comprehensive report will detail the experimental results and compare their pros and cons regarding system performance, ease of use, accessibility, scalability, compatibility, and flexibility.

  12. Advances in a Distributed Approach for Ocean Model Data Interoperability

    Directory of Open Access Journals (Sweden)

    Richard P. Signell

    2014-03-01

    An infrastructure for earth science data is emerging across the globe based on common data models and web services. As we evolve from custom file formats and web sites to standards-based web services and tools, data is becoming easier to distribute, find and retrieve, leaving more time for science. We describe recent advances that make it easier for ocean model providers to share their data, and for users to search, access, analyze and visualize ocean data using MATLAB® and Python®. These include a technique for modelers to create aggregated, Climate and Forecast (CF) metadata convention datasets from collections of non-standard Network Common Data Form (NetCDF) output files, the capability to remotely access data from CF-1.6-compliant NetCDF files using the Open Geospatial Consortium (OGC) Sensor Observation Service (SOS), a metadata standard for unstructured grid model output (UGRID), and tools that utilize both CF and UGRID standards to allow interoperable data search, browse and access. We use examples from the U.S. Integrated Ocean Observing System (IOOS®) Coastal and Ocean Modeling Testbed, a project in which modelers using both structured and unstructured grid model output needed to share their results, to compare their results with other models, and to compare models with observed data. The same techniques used here for ocean modeling output can be applied to atmospheric and climate model output, remote sensing data, digital terrain and bathymetric data.

  13. CMEMS (Copernicus Marine Environment Monitoring Service) In Situ Thematic Assembly Centre: A service for operational Oceanography

    Science.gov (United States)

    Manzano Muñoz, Fernando; Pouliquen, Sylvie; Petit de la Villeon, Loic; Carval, Thierry; Loubrieu, Thomas; Wedhe, Henning; Sjur Ringheim, Lid; Hammarklint, Thomas; Tamm, Susanne; De Alfonso, Marta; Perivoliotis, Leonidas; Chalkiopoulos, Antonis; Marinova, Veselka; Tintore, Joaquin; Troupin, Charles

    2016-04-01

    Copernicus, previously known as GMES (Global Monitoring for Environment and Security), is the European Programme for the establishment of a European capacity for Earth Observation and Monitoring. Copernicus aims to provide a sustainable service for Ocean Monitoring and Forecasting validated and commissioned by users. Since May 2015, the Copernicus Marine Environment Monitoring Service (CMEMS) has been working in operational mode under a contract with a commitment to services (the result being regular data provision). Within CMEMS, the In Situ Thematic Assembly Centre (INSTAC) distributed service integrates in situ data from different sources for operational oceanography needs. CMEMS INSTAC collects, and carries out quality control in a homogeneous manner on, data from providers outside Copernicus (national and international networks), to fit the needs of internal and external users. CMEMS INSTAC has been organized into 7 regional Dissemination Units (DUs) to build on the EuroGOOS ROOSes. Each DU aggregates data and metadata provided by a series of Production Units (PUs) acting as an interface for providers. Homogeneity and standardization are key features to ensure a coherent and efficient service. All DUs provide data in the OceanSITES NetCDF format 1.2 (based on NetCDF 3.6), which is CF compliant, relies on SeaDataNet vocabularies, and is able to handle profile and time-series measurements. All the products, both near real-time (NRT) and multi-year (REP), are available online to every registered CMEMS user through an FTP service. On top of the FTP service, INSTAC products are available through Oceanotron, an open-source data server dedicated to the dissemination of marine observations. It provides services such as aggregation on spatio-temporal coordinates and observed parameters, and subsetting on observed parameters and metadata. The accuracy of the data is checked on various levels. Quality control procedures are applied for the validity of the data and correctness tests for the

  14. JADDS - towards a tailored global atmospheric composition data service for CAMS forecasts and reanalysis

    Science.gov (United States)

    Stein, Olaf; Schultz, Martin G.; Rambadt, Michael; Saini, Rajveer; Hoffmann, Lars; Mallmann, Daniel

    2017-04-01

    ... server and address the major issues identified when relocating large four-dimensional datasets into a RASDAMAN raster array database. So far the RASDAMAN support for data available in netCDF format is limited with respect to metadata related to variables and axes. For community-wide accepted solutions, selected data coverages shall result in downloadable netCDF files including metadata complying with the netCDF CF Metadata Conventions standard (http://cfconventions.org/). This can be achieved by adding custom metadata elements for RASDAMAN bands (model levels) on data ingestion. Furthermore, an optimization strategy for ingestion of several TB of 4D model output data will be outlined.

  15. Using OPeNDAP's Data-Services Framework to Lift Mash-Ups above Blind Dates

    Science.gov (United States)

    Gallagher, J. H. R.; Fulker, D. W.

    2015-12-01

    OPeNDAP's data-as-service framework (Hyrax) matches diverse sources with many end-user tools and contexts. Keys to its flexibility include: A data model embracing tabular data alongside n-dim arrays and other structures useful in geoinformatics. A REST-like protocol that supports—via suffix notation—a growing set of output forms (netCDF, XML, etc.) plus a query syntax for subsetting. Subsetting applies (via constraints on column values) to tabular data or (via constraints on indices or coordinates) to array-style data. A handler-style architecture that admits a growing set of input types. Community members may contribute handlers, making Hyrax effective as middleware, where N sources are mapped to M outputs with order N+M effort (not NxM). Hyrax offers virtual aggregations of source data, enabling granularity aimed at users, not data-collectors. OPeNDAP-access libraries exist in multiple languages, including Python, Java, and C++. Recent enhancements are increasing this framework's interoperability (i.e., its mash-up) potential. Extensions implemented as servlets—running adjacent to Hyrax—are enriching the forms of aggregation and enabling new protocols: User-specified aggregations, namely, applying a query to (huge) lists of source granules, and receiving one (large) table or zipped netCDF file. OGC (Open Geospatial Consortium) protocols, WMS and WCS. A Webification (W10n) protocol that returns JavaScript Object Notation (JSON). Extensions to OPeNDAP's query language are reducing transfer volumes and enabling new forms of inspection. Advances underway include: Functions that, for triangular-mesh sources, return sub-meshes spec'd via geospatial bounding boxes. Functions that, for data from multiple, satellite-borne sensors (with differing orbits), select observations based on coincidence. Calculations of means, histograms, etc. that greatly reduce output volumes. Paths for communities to contribute new server functions (in Python, e.g.) that data
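
    The suffix-plus-constraint pattern can be illustrated with DAP2-style URLs; the suffixes (.das, .ascii) and [start:stride:stop] index constraints are standard DAP conventions, while the host, dataset, and variable names below are invented:

        import requests

        base = "http://example.org/opendap/sst_monthly.nc"  # hypothetical dataset URL

        # The suffix selects the response form; the query subsets by index ranges.
        metadata_url = base + ".das"                               # dataset attributes
        subset_url = base + ".ascii?sst[0:1:5][10:1:20][30:1:40]"  # [start:stride:stop]

        print(requests.get(metadata_url, timeout=30).text[:200])
        print(requests.get(subset_url, timeout=30).text[:200])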

  16. Ramses-GPU: Second order MUSCL-Hancock finite volume fluid solver

    Science.gov (United States)

    Kestener, Pierre

    2017-10-01

    RamsesGPU is a reimplementation of RAMSES (ascl:1011.007) which drops the adaptive mesh refinement (AMR) features to optimize 3D uniform grid algorithms for modern graphics processing units (GPUs), providing an efficient software package for astrophysics applications that do not need AMR features but do require a very large number of integration time steps. RamsesGPU provides a very efficient C++/CUDA/MPI software implementation of a second order MUSCL-Hancock finite volume fluid solver for compressible hydrodynamics, as well as a magnetohydrodynamics solver based on the constrained transport technique. Other useful modules include static gravity, dissipative terms (viscosity, resistivity), and a forcing source term for turbulence studies, and special care was taken to enhance parallel input/output performance by using state-of-the-art libraries such as HDF5 and parallel-netCDF.

  17. Analyst Tools and Quality Control Software for the ARM Data System

    Energy Technology Data Exchange (ETDEWEB)

    Moore, S.T.

    2004-12-14

    ATK Mission Research develops analyst tools and automated quality control software to assist the Atmospheric Radiation Measurement (ARM) Data Quality Office with its data inspection tasks. We have developed a web-based data analysis and visualization tool, called NCVweb, that allows easy viewing of ARM NetCDF files. NCVweb, along with our library of sharable Interactive Data Language procedures and functions, allows even novice ARM researchers to be productive with ARM data with only minimal effort. We also contribute to the ARM Data Quality Office by analyzing ARM data streams, developing new quality control metrics and diagnostic plots, and integrating this information into DQ HandS - the Data Quality Health and Status web-based explorer. We have developed several ways to detect outliers in ARM data streams and have written software that runs in an automated fashion to flag these outliers.
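
    As a hedged sketch of the kind of automated outlier flagging described (not the Data Quality Office's actual algorithm; the file and variable names are hypothetical):

        # Toy outlier test for a netCDF time series: flag points more than
        # 4 robust standard deviations from the median. Illustrative only.
        import numpy as np
        from netCDF4 import Dataset

        with Dataset("sgpmetE13.b1.nc") as ds:   # hypothetical ARM datastream file
            temp = ds.variables["temp_mean"][:]  # hypothetical variable name

        median = np.median(temp)
        mad = np.median(np.abs(temp - median))   # median absolute deviation
        robust_sigma = 1.4826 * mad              # MAD-to-sigma for normal data
        flags = np.abs(temp - median) > 4 * robust_sigma
        print(f"{flags.sum()} of {temp.size} samples flagged")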

  18. Optimizing Extender Code for NCSX Analyses

    Energy Technology Data Exchange (ETDEWEB)

    M. Richman, S. Ethier, and N. Pomphrey

    2008-01-22

    Extender is a parallel C++ code for calculating the magnetic field in the vacuum region of a stellarator. The code was optimized for speed and augmented with tools to maintain a specialized NetCDF database. Two parallel algorithms were examined; an even-block work-distribution scheme was comparable in performance to a master-slave scheme. Large speedup factors were achieved by representing the plasma surface with a spline rather than a Fourier series. The accuracy of this representation, and of the resulting calculations, depended on the density of the spline mesh. The Fortran 90 module db_access was written to make it easy to store Extender output in a manageable database. New or updated data can be added to existing databases, and a generalized PBS job script handles the generation of a database from scratch.

  19. Task 28: Web Accessible APIs in the Cloud Trade Study

    Science.gov (United States)

    Gallagher, James; Habermann, Ted; Jelenak, Aleksandar; Lee, Joe; Potter, Nathan; Yang, Muqun

    2017-01-01

    This study explored three candidate architectures for serving NASA Earth Science Hierarchical Data Format Version 5 (HDF5) data via Hyrax running on Amazon Web Services (AWS). We studied the cost and performance of each architecture using several representative use cases. The objectives of the project are: conduct a trade study to identify one or more high-performance integrated solutions for storing and retrieving NASA HDF5 and Network Common Data Format Version 4 (netCDF4) data in a cloud (web object store) environment, the target environment being the AWS Simple Storage Service (S3); conduct the level of software development needed to properly evaluate solutions in the trade study and to obtain the benchmarking metrics required as input to a government decision on potential follow-on prototyping; and develop a cloud cost model for the preferred data storage solution (or solutions) that accounts for different granulation and aggregation schemes as well as cost and performance trades.
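
    As a hedged sketch of one such architecture, reading an HDF5 granule in place from S3 via byte-range requests (assuming the s3fs and h5py libraries; the bucket, object key, and dataset path are hypothetical):

        # Partial reads of an HDF5 file in S3 without downloading it whole.
        import s3fs
        import h5py

        fs = s3fs.S3FileSystem(anon=True)        # public-bucket access
        with fs.open("example-bucket/granule.h5", "rb") as f:
            with h5py.File(f, "r") as h5:
                # Hypothetical dataset path; only the needed chunks are fetched.
                data = h5["/science/temperature"][0, :10, :10]
                print(data.shape)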

  20. Development of Web GIS for complex processing and visualization of climate geospatial datasets as an integral part of dedicated Virtual Research Environment

    Science.gov (United States)

    Gordov, Evgeny; Okladnikov, Igor; Titov, Alexander

    2017-04-01

    For comprehensive usage of large geospatial meteorological and climate datasets it is necessary to create a distributed software infrastructure based on the spatial data infrastructure (SDI) approach. It is now generally accepted that client applications integrated into such an infrastructure should be built on modern web and GIS technologies. The paper describes the Web GIS for complex processing and visualization of geospatial (mainly NetCDF and PostGIS) datasets as an integral part of the dedicated Virtual Research Environment for comprehensive study of ongoing and possible future climate change and analysis of its implications, providing full information and computing support for the study of economic, political and social consequences of global climate change at the global and regional levels. The Web GIS consists of two basic software parts: 1. A server-side part comprising PHP applications of the SDI geoportal, which implements interaction with the computational core backend and with WMS/WFS/WPS cartographical services, and exposes an open API for browser-based client software; this part provides a limited set of procedures accessible via a standard HTTP interface. 2. A front-end part, a Web GIS client developed following the "single page application" approach and based on the JavaScript libraries OpenLayers (http://openlayers.org/), ExtJS (https://www.sencha.com/products/extjs) and GeoExt (http://geoext.org/); it implements the application business logic and provides an intuitive user interface similar to that of such popular desktop GIS applications as uDig and QuantumGIS. The Boundless/OpenGeo architecture was used as a basis for the Web GIS client development. In line with general INSPIRE requirements for data visualization, the Web GIS provides such standard functionality as data overview, image navigation, scrolling, scaling and graphical overlay, displaying map

  1. Using Browser Notebooks to Analyse Big Atmospheric Data-sets in the Cloud

    Science.gov (United States)

    Robinson, N.; Tomlinson, J.; Arribas, A.; Prudden, R.

    2016-12-01

    We present an account of our experience building an ecosystem for the analysis of big atmospheric datasets. Using modern technologies, we have developed a prototype platform which is scalable and capable of analysing very large atmospheric datasets. We tested different big-data ecosystems, such as Hadoop MapReduce, Spark and Dask, to find the one best suited to the analysis of multidimensional binary data such as NetCDF. We make extensive use of infrastructure-as-code and containerisation to provide a platform which is reusable and which can scale to accommodate changes in demand. We make this platform readily accessible through browser-based notebooks. As a result, analysts with minimal technology experience can, in tens of lines of Python, build interactive data-visualisation web pages which analyse very large amounts of data using cutting-edge big-data technology.
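
    As a hedged sketch of the notebook workflow described (using the xarray and Dask libraries as stand-ins for the authors' exact stack; the file pattern and variable name are hypothetical):

        # Lazily open a collection of NetCDF files and reduce them in parallel,
        # the kind of analysis a browser notebook would drive.
        import xarray as xr

        ds = xr.open_mfdataset("era_*.nc", combine="by_coords",
                               chunks={"time": 100})    # lazy, chunked load
        monthly = ds["t2m"].resample(time="1M").mean()  # still lazy
        result = monthly.compute()                      # parallel execution
        print(result)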

  2. Unleashing Geophysics Data with Modern Formats and Services

    Science.gov (United States)

    Ip, Alex; Brodie, Ross C.; Druken, Kelsey; Bastrakova, Irina; Evans, Ben; Kemp, Carina; Richardson, Murray; Trenham, Claire; Wang, Jingbo; Wyborn, Lesley

    2016-04-01

    Geoscience Australia (GA) is the national steward of large volumes of geophysical data extending over the entire Australasian region and spanning many decades. The volume and variety of data which must be managed, coupled with the increasing need to support machine-to-machine data access, mean that the old "click-and-ship" model delivering data as downloadable files for local analysis is rapidly becoming unviable - a "big data" problem not unique to geophysics. The Australian Government, through the Research Data Services (RDS) Project, recently funded the Australian National Computational Infrastructure (NCI) to organise a wide range of Earth Systems data from diverse collections including geoscience, geophysics, environment, climate, weather, and water resources onto a single High Performance Data (HPD) Node. This platform, which now contains over 10 petabytes of data, is called the National Environmental Research Data Interoperability Platform (NERDIP), and is designed to facilitate broad user access, maximise reuse, and enable integration. GA has contributed several hundred terabytes of geophysical data to the NERDIP. Historically, geophysical datasets have been stored in a range of formats, with metadata of varying quality and accessibility, and without standardised vocabularies. This has made it extremely difficult to aggregate original data from multiple surveys (particularly un-gridded geophysics point/line data) into standard formats suited to High Performance Computing (HPC) environments. To address this, it was decided to use the NERDIP-preferred Hierarchical Data Format (HDF) 5, which is a proven, standard, open, self-describing and high-performance format supported by extensive software tools, libraries and data services. The Network Common Data Form (NetCDF) 4 API facilitates the use of data in HDF5, whilst the NetCDF Climate and Forecast conventions (NetCDF-CF) further constrain NetCDF4/HDF5 data so as to provide greater inherent interoperability.

  3. EverVIEW: a visualization platform for hydrologic and Earth science gridded data

    Science.gov (United States)

    Romañach, Stephanie S.; McKelvy, James M.; Suir, Kevin J.; Conzelmann, Craig

    2015-01-01

    The EverVIEW Data Viewer is a cross-platform desktop application that combines and builds upon multiple open-source libraries to help users explore spatially explicit gridded data stored in Network Common Data Form (NetCDF). Datasets are displayed across multiple side-by-side geographic or tabular displays, showing colorized overlays on an Earth globe or grid-cell values, respectively. Time-series datasets can be animated to see how water surface elevation changes through time, or how habitat suitability for a particular species might change over time under a given scenario. Initially targeted toward Florida's Everglades restoration planning, EverVIEW has been flexible enough to address the varied needs of large-scale planning beyond Florida, and is currently being used in biological planning efforts nationally and internationally.

  4. HadISD: a quality-controlled global synoptic report database for selected variables at long-term stations from 1973--2011

    CERN Document Server

    Dunn, Robert J H; Thorne, Peter W; Woolley, Emma V; Durre, Imke; Dai, Aiguo; Parker, David E; Vose, Russ E; 10.5194/cp-8-1649-2012

    2012-01-01

    [Abridged] This paper describes the creation of HadISD: an automatically quality-controlled synoptic resolution dataset of temperature, dewpoint temperature, sea-level pressure, wind speed, wind direction and cloud cover from global weather stations for 1973--2011. The full dataset consists of over 6000 stations, with 3427 long-term stations deemed to have sufficient sampling and quality for climate applications requiring sub-daily resolution. As with other surface datasets, coverage is heavily skewed towards Northern Hemisphere mid-latitudes. The dataset is constructed from a large pre-existing ASCII flatfile data bank that represents over a decade of substantial effort at data retrieval, reformatting and provision. These raw data have had varying levels of quality control applied to them by individual data providers. The work proceeded in several steps: merging stations with multiple reporting identifiers; reformatting to netCDF; quality control; and then filtering to form a final dataset. Particular attent...

  5. Introduction to modern Fortran for the Earth system sciences

    CERN Document Server

    Chirila, Dragos B

    2014-01-01

    This work provides a short "getting started" guide to Fortran 90/95. The main target audience consists of newcomers to the field of numerical computation within Earth system sciences (students, researchers or scientific programmers). Furthermore, readers accustomed to other programming languages may also benefit from this work, by discovering how some programming techniques they are familiar with map to Fortran 95. The main goal is to enable readers to quickly start using Fortran 95 for writing useful programs. It also introduces a gradual discussion of Input/Output facilities relevant for Earth system sciences, from the simplest ones to the more advanced netCDF library (which has become a de facto standard for handling the massive datasets used within Earth system sciences). While related works already treat these disciplines separately (each often providing much more information than needed by the beginning practitioner), the reader finds in this book a shorter guide which links them. Compared to other book...

  6. Development of a Multilayer MODIS IST-Albedo Product of Greenland

    Science.gov (United States)

    Hall, D. K.; Comiso, J. C.; Cullather, R. I.; Digirolamo, N. E.; Nowicki, S. M.; Medley, B. C.

    2017-01-01

    A new multilayer ice surface temperature (IST)-albedo Moderate Resolution Imaging Spectroradiometer (MODIS) product of Greenland was developed to meet the needs of the ice sheet modeling community. The multiple layers of the product enable the relationship between IST and albedo to be evaluated easily. Surface temperature is a fundamental input for dynamical ice sheet models because it is a component of the ice sheet radiation budget and mass balance; albedo influences the absorption of incoming solar radiation. The daily product combines the existing standard MODIS Collection-6 ice surface temperature, derived melt maps, snow albedo and water vapor products. The new product is available in a polar stereographic projection in NetCDF format and will ultimately extend from March 2000 through the end of 2017.

  7. Damsel: A Data Model Storage Library for Exascale Science

    Energy Technology Data Exchange (ETDEWEB)

    Choudhary, Alok [Northwestern Univ., Evanston, IL (United States); Liao, Wei-keng [Northwestern Univ., Evanston, IL (United States)

    2014-07-11

    Computational science applications have been described as having one of seven motifs (the "seven dwarfs"), each having a particular pattern of computation and communication. From a storage and I/O perspective, these applications can also be grouped into a number of data model motifs describing the way data is organized and accessed during simulation, analysis, and visualization. Major storage data model efforts of the 1990s, such as the Network Common Data Format (netCDF) and Hierarchical Data Format (HDF) projects, created support for more complex data models. Development of both netCDF and HDF5 was influenced by multi-dimensional dataset storage requirements, but their access models and formats were designed with sequential storage in mind (e.g., a POSIX I/O model). Although these and other high-level I/O libraries have had a beneficial impact on large parallel applications, they do not always attain a high percentage of peak I/O performance due to fundamental design limitations, and they do not address the full range of current and future computational science data models. The goal of this project is to enable exascale computational science applications to interact conveniently and efficiently with storage through abstractions that match their data models. The project consists of three major activities: (1) identifying major data model motifs in computational science applications and developing representative benchmarks; (2) developing a data model storage library, called Damsel, that supports these motifs, provides efficient storage data layouts, incorporates optimizations to enable exascale operation, and is tolerant to failures; and (3) productizing Damsel and working with computational scientists to encourage adoption of this library by the scientific community. The product of this project, the Damsel library, is openly available for download from http://cucis.ece.northwestern.edu/projects/DAMSEL. Several case studies and application programming interface

  8. Global Ocean Currents Database

    Science.gov (United States)

    Boyer, T.; Sun, L.

    2016-02-01

    NOAA's National Centers for Environmental Information (NCEI) has released an ocean currents database portal that aims 1) to integrate global ocean currents observations from a variety of instruments - with differing resolution, accuracy, and response to spatial and temporal variability - into a uniform Network Common Data Form (NetCDF) format, and 2) to provide dedicated online data discovery and access to NCEI-hosted and distributed sources of ocean currents data. The portal provides a tailored web application that allows users to search for ocean currents data by platform type and by spatial/temporal range of interest. The dedicated web application is available at http://www.nodc.noaa.gov/gocd/index.html. The NetCDF format supports widely used data access protocols and catalog services such as OPeNDAP (Open-source Project for a Network Data Access Protocol) and THREDDS (Thematic Real-time Environmental Distributed Data Services), so GOCD users can work with the data in their favorite analysis and visualization client software without downloading files to their local machines. The potential users of the ocean currents database include, but are not limited to: 1) ocean modelers, for model skill assessment; 2) scientists and researchers studying the impact of ocean circulation on climate variability; 3) the ocean shipping industry, for safe navigation and for finding routes with optimal ship fuel efficiency; 4) ocean resources managers, when planning optimal sites for waste and sewage dumping and for renewable hydrokinetic energy; and 5) state and federal governments, for historical (analyzed) ocean circulation as an aid to search and rescue.

  9. Operational Data Management Within the IPY Framework, Relations to WIS and DAMOCLES

    Science.gov (United States)

    Godøy, O.

    2007-12-01

    The Norwegian Meteorological Institute (METNO) is hosting a data coordination service for operational data during IPY. Using WMO Information System (WIS) technology, not only local datasets but also remote datasets are visible and available for download by users, if allowed by the data centre hosting them. It is the intention of METNO to add OGC services to the chosen WIS implementation. With IPY services in mind, METNO has evaluated a WIS demonstrator named SIMDAT, which is being developed at the European Centre for Medium-Range Weather Forecasts. The core part of the SIMDAT system is the catalogue node, which exchanges data and metadata with other WIS implementations (e.g. CDP) and maintains a metadatabase of its own. The European Union-sponsored project DAMOCLES is an IPY project, but preparations and development started before IPY requirements were clear. The main requirement when developing the data management system was that it should be a low-cost solution, leaving as much funding as possible to science. The basic implementation uses a PostgreSQL database for metadata and a data repository also accessible via HTTP/OPeNDAP. The metadatabase is automatically fed by extracting metadata from netCDF files following the CF standard; netCDF is used for both point/trajectory and gridded datasets. The implementation is quite similar to those used by projects like MERSEA and Coriolis. The DAMOCLES data management portal provides standard search mechanisms quite similar to GCMD, direct linking of OPeNDAP objects, and output presentation configurable by the user. The latter is also available as two-way tables where, e.g., datasets can be presented by region or topic category. DAMOCLES and SEARCH are linked through a European Union Specific Support Action called SEARCH for DAMOCLES (S4D). Data management coordination between SEARCH and DAMOCLES through S4D relies on IPY data management mechanisms.

  10. Autoplot: a Browser for Science Data on the Web

    Science.gov (United States)

    Faden, J.; Weigel, R. S.; West, E. E.; Merka, J.

    2008-12-01

    Autoplot (www.autoplot.org) is software for plotting data from many different sources and in many different file formats. Data from CDF, CEF, FITS, NetCDF, and OPeNDAP can be plotted, along with many other sources such as ASCII tables and Excel spreadsheets. This is done by adapting these various data formats and APIs into a common data model that borrows from the netCDF and CDF data models. Autoplot uses a web browser metaphor to simplify use. The user specifies a parameter URL, for example a CDF file accessible via HTTP with a parameter name appended, and the file resource is downloaded and the parameter is rendered in a scientifically meaningful way. When data span multiple files, the user can use a file name template in the URL to aggregate (combine) a set of remote files, so the problem of aggregating data across file boundaries is handled on the client side, allowing simple web servers to be used. The das2 graphics library provides rich controls for exploring the data. Scripting is supported through Python, providing not just programmatic control but also a way to calculate new parameters in a language that will look familiar to IDL and Matlab users. Autoplot is Java-based software and will run on most computers without a burdensome installation process. It can also be used as an applet or as a servlet that serves static images. Autoplot was developed as part of the Virtual Radiation Belt Observatory (ViRBO) project, and is also being used for the Virtual Magnetospheric Observatory (VMO). It is expected that this flexible, general-purpose plotting tool will be useful for allowing a data provider to add instant visualization capabilities to a directory of files, and for general use in the Virtual Observatory environment.

  11. Development of an Operational TS Dataset Production System for the Data Assimilation System

    Science.gov (United States)

    Kim, Sung Dae; Park, Hyuk Min; Kim, Young Ho; Park, Kwang Soon

    2017-04-01

    An operational TS (temperature and salinity) dataset production system was developed to provide near-real-time data to the data assimilation system periodically. It collects the latest 15 days of TS data for the northwestern Pacific area (20°N - 55°N, 110°E - 150°E), applies QC tests to the archived data, and supplies them to the numerical prediction models of KIOST (Korea Institute of Ocean Science and Technology). The latest real-time TS data are collected from the Argo GDAC and the GTSPP data server every week. Argo data are downloaded from the /latest_data directory of the Argo GDAC. Because many duplicates arise when profile data are extracted from all Argo netCDF files, a database system is used to avoid duplication: all metadata (float ID, location, observation date and time, etc.) for all Argo floats are stored in the database, and a Matlab program was developed to manipulate the database records, check for duplication, and exclude duplicated data. GTSPP data are downloaded from the /realtime directory of the GTSPP data service, and the latest data except Argo data are extracted from the original files. Another Matlab program was written to inspect all collected data using 10 QC tests and produce the final dataset for use by the assimilation system. Three regional range tests, inspecting annual, seasonal and monthly variations, are included in the QC procedures. A C program was developed to provide regional ranges to data managers; it calculates upper and lower limits of temperature and salinity at depths from 0 to 1550 m. The final TS dataset contains the latest 15 days of TS data in netCDF format. It is updated every week and transmitted to numerical modelers at KIOST for operational use.
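
    As a hedged sketch of a regional range test of this kind (the limits and variable names are invented for illustration; the operational Matlab/C code is not reproduced here):

        # Toy regional range QC: flag temperatures outside depth-dependent
        # climatological limits. The limits below are invented.
        import numpy as np

        def range_test(depth_m, temp_c, limits):
            """Return a boolean array: True where a value fails the test."""
            flags = np.zeros_like(temp_c, dtype=bool)
            for (zmin, zmax), (tmin, tmax) in limits.items():
                in_layer = (depth_m >= zmin) & (depth_m < zmax)
                flags |= in_layer & ((temp_c < tmin) | (temp_c > tmax))
            return flags

        limits = {(0, 200): (-2.0, 33.0), (200, 1550): (-2.0, 20.0)}  # invented
        depth = np.array([10.0, 150.0, 800.0])
        temp = np.array([18.5, 35.2, 4.1])
        print(range_test(depth, temp, limits))  # -> [False  True False]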

  12. Workflow-Oriented Cyberinfrastructure for Sensor Data Analytics

    Science.gov (United States)

    Orcutt, J. A.; Rajasekar, A.; Moore, R. W.; Vernon, F.

    2015-12-01

    Sensor streams comprise an increasingly large part of Earth Science data. Analytics based on sensor data require an easy way to perform operations such as acquisition, conversion to physical units, metadata linking, sensor fusion, analysis and visualization on distributed sensor streams. Furthermore, embedding real-time sensor data into scientific workflows is of growing interest. We have implemented a scalable networked architecture that can be used to dynamically access packets of data in a stream from multiple sensors, and to perform synthesis and analysis across a distributed network. Our system is based on the integrated Rule Oriented Data System (irods.org), which accesses sensor data from the Antelope Real Time Data System (brtt.com) and provides virtualized access to collections of data streams. We integrate real-time data streaming from different sources, collected for different purposes, on different time and spatial scales, and sensed by different methods. iRODS, noted for its policy-oriented data management, brings to sensor processing features and facilities such as single sign-on, third-party access control lists (ACLs), location transparency, logical resource naming, and server-side modeling capabilities, while reducing the burden on sensor network operators. Rich integrated metadata support also makes it straightforward to discover data streams of interest and maintain data provenance. The workflow support in iRODS readily integrates sensor processing into any analytical pipeline. The system is developed as part of the NSF-funded DataNet Federation Consortium (datafed.org). APIs for selecting, opening, reaping and closing sensor streams are provided, along with other helper functions to associate metadata and convert sensor packets into NetCDF and JSON formats. Near-real-time sensor data, including seismic sensors, environmental sensors, LIDAR and video streams, are available through this interface. A system for archiving sensor data and metadata in NetCDF

  13. GENESIS SciFlo: Choreographing Interoperable Web Services on the Grid using a Semantically-Enabled Dataflow Execution Environment

    Science.gov (United States)

    Wilson, B. D.; Manipon, G.; Xing, Z.

    2007-12-01

    Access Protocol (OpenDAP) servers. SciFlo also publishes its own SOAP services for space/time query and subsetting of Earth Science datasets, and automated access to large datasets via lists of (FTP, HTTP, or DAP) URLs which point to on-line HDF or netCDF files. Typical distributed workflows obtain datasets by calling standard WMS/WCS servers or discovering and fetching data granules from FTP sites; invoke remote analysis operators available as SOAP services (with interfaces described by WSDL documents); and merge results into binary containers (netCDF or HDF files) for further analysis using local executable operators. Naming conventions (HDF-EOS, and CF-1.0 for netCDF) are exploited to automatically understand and read on-line datasets. More interoperable conventions, and broader adoption of existing conventions, are vital if we are to "scale up" automated choreography of Web Services beyond toy applications. Recently, the ESIP Federation sponsored a collaborative activity in which several ESIP members developed collaborative science scenarios for atmospheric and aerosol science, and then choreographed services from multiple groups into demonstration workflows using the SciFlo engine and a Business Process Execution Language (BPEL) workflow engine. We will discuss the lessons learned from this activity, the need for standardized interfaces (like WMS/WCS), the difficulty of agreeing on even simple XML formats and interfaces, the benefits of doing collaborative science analysis at the "touch of a button" once services are connected, and further collaborations that are being pursued.

  14. Web mapping system for complex processing and visualization of environmental geospatial datasets

    Science.gov (United States)

    Titov, Alexander; Gordov, Evgeny; Okladnikov, Igor

    2016-04-01

    Environmental geospatial datasets (meteorological observations, modeling and reanalysis results, etc.) are used in numerous research applications. Due to a number of objective reasons - the inherent heterogeneity of environmental datasets, large dataset volumes, the complexity of the data models used, and syntactic and semantic differences that complicate the creation and use of unified terminology - the development of environmental geodata access, processing and visualization services, as well as of client applications, turns out to be quite a sophisticated task. According to general INSPIRE requirements for data visualization, geoportal web applications have to provide such standard functionality as data overview, image navigation, scrolling, scaling and graphical overlay, and the display of map legends and corresponding metadata information. It should be noted that modern web mapping systems, as integrated geoportal applications, are developed based on SOA and might be considered complexes of interconnected software tools for working with geospatial data. In the report a complex web mapping system is presented, including a GIS web client and corresponding OGC services for working with a geospatial (NetCDF, PostGIS) dataset archive. The GIS web client has three basic tiers: 1. A tier of geospatial metadata retrieved from a central MySQL repository and represented in JSON format. 2. A tier of JavaScript objects implementing methods for handling NetCDF metadata, the Task XML object for configuring user calculations and input and output formats, and OGC WMS/WFS cartographical services. 3. A graphical user interface (GUI) tier of JavaScript objects realizing the web application business logic. The metadata tier consists of a number of JSON objects containing technical information describing the geospatial datasets (such as spatio-temporal resolution, meteorological parameters, valid processing methods, etc.). The middleware tier of JavaScript objects implementing methods for handling geospatial
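
    As a hedged illustration of the kind of OGC WMS request such a client issues (the endpoint and layer name are hypothetical; the parameters follow the standard WMS 1.1.1 key-value encoding):

        # Build a standard WMS GetMap request URL for a temperature layer.
        from urllib.parse import urlencode

        params = {
            "SERVICE": "WMS", "VERSION": "1.1.1", "REQUEST": "GetMap",
            "LAYERS": "air_temperature",   # hypothetical layer name
            "SRS": "EPSG:4326",
            "BBOX": "60,50,120,80",        # lon/lat bounding box
            "WIDTH": 800, "HEIGHT": 400,
            "FORMAT": "image/png",
        }
        url = "http://example.org/wms?" + urlencode(params)
        print(url)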

  15. Improving the Interoperability of NASA HDF and HDF-EOS data

    Science.gov (United States)

    Yang, M.

    2010-12-01

    HDF is a set of data formats and software libraries for storing scientific data with an emphasis on standards, storage, and I/O efficiency. The HDF-EOS version 2 (HDF-EOS2) profile and library, built on top of HDF version 4 (HDF4), define and implement the standard data format for the NASA Earth Science Data and Information System (ESDIS). Since the launch of Terra in 1999, the EOS Data and Information System (EOSDIS) has produced more than three terabytes of EOS earth science data daily. More than five hundred data products in NASA data centers are stored in HDF. HDF5 is a newer data format that has been embraced as an important data format for Earth science. HDF-EOS5, which is built on top of HDF5, is the primary data format for data from the Aura satellite. The new version of netCDF, netCDF-4, is built on top of HDF5. The OPeNDAP Data Access Protocol and its related software have emerged as important components of the earth science data system infrastructure. The OPeNDAP protocol is widely used to remotely access earth science data. Several third-party visualization and analysis tools that can read data from OPeNDAP servers, such as IDV, Panoply, GrADS, Ferret, NCL, MATLAB, and IDL, are widely used by many earth scientists and educators to access HDF earth science data. Ensuring easy access to HDF4, HDF5 and HDF-EOS data via OPeNDAP client tools will reduce the time it takes HDF users to visualize the data in their favorite way and accordingly improve their working efficiency. In the past three years, under the support of NASA ESDIS and ACCESS projects, The HDF Group implemented the HDF4-OPeNDAP and HDF5-OPeNDAP data handlers so that many NASA HDF and HDF-EOS Swath and Grid data products can be accessed by widely used visualization and analysis tools such as IDV, Panoply, GrADS, Ferret, NCL and IDL via OPeNDAP. We will share the challenges we encountered and the solutions we adopted in implementing the HDF OPeNDAP handlers. We also

  16. ParCAT: A Parallel Climate Analysis Toolkit

    Science.gov (United States)

    Haugen, B.; Smith, B.; Steed, C.; Ricciuto, D. M.; Thornton, P. E.; Shipman, G.

    2012-12-01

    Climate science has employed increasingly complex models and simulations to analyze the past and predict the future of our climate. The size and dimensionality of climate simulation data have been growing with the complexity of the models. This growth in data is creating a widening gap between the data being produced and the tools necessary to analyze large, high-dimensional data sets. With single-run data sets growing into the tens, hundreds, and even thousands of gigabytes, parallel computing tools are becoming a necessity for analyzing and comparing climate simulation data. The Parallel Climate Analysis Toolkit (ParCAT) provides basic tools that efficiently use parallel computing techniques to narrow the gap between data set size and analysis tools. ParCAT was created as a collaborative effort between climate scientists and computer scientists to provide efficient parallel implementations of the computing tools that are of use to climate scientists. Some of the basic functionalities included in the toolkit are the ability to compute spatio-temporal means and variances, differences between two runs, and histograms of the values in a data set. ParCAT is designed to facilitate the "heavy lifting" required for large, multidimensional data sets. The toolkit does not focus on performing the final visualizations and presentation of results but rather on reducing large data sets to smaller, more manageable summaries. The output from ParCAT is provided in commonly used file formats (NetCDF, CSV, ASCII) to allow simple integration with other tools. The toolkit is currently implemented as a command-line utility, but will likely also provide a C library for developers interested in tighter software integration. Elements of the toolkit are already being incorporated into projects such as UV-CDAT and CMDX. There is also an effort underway to implement portions of the CCSM Land Model Diagnostics package using ParCAT in conjunction with Python and gnuplot. Par
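
    A hedged, serial sketch of the reductions ParCAT parallelizes (ParCAT itself is a compiled command-line tool; this merely illustrates the operations, on a hypothetical file and variable):

        # Temporal mean, variance, and histogram of a netCDF variable.
        import numpy as np
        from netCDF4 import Dataset

        with Dataset("run_a.nc") as ds:      # hypothetical model-run file
            tas = ds.variables["tas"][:]     # assumed shape (time, lat, lon)

        temporal_mean = tas.mean(axis=0)     # per-cell mean over time
        temporal_var = tas.var(axis=0)       # per-cell variance over time
        counts, edges = np.histogram(np.ravel(tas), bins=50)
        print(temporal_mean.shape, counts.sum())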

  17. Exposing Coverage Data to the Semantic Web within the MELODIES project: Challenges and Solutions

    Science.gov (United States)

    Riechert, Maik; Blower, Jon; Griffiths, Guy

    2016-04-01

    Coverage data, typically big in volume, assign values to a given set of spatiotemporal positions, together with metadata on how to interpret those values. Existing storage formats like netCDF, HDF and GeoTIFF all have various restrictions that prevent them from being preferred formats for use over the web, especially the semantic web. Relevant factors here are processing complexity, the semantic richness of the metadata, and the ability to request partial information, such as a subset or just the appropriate metadata. Making coverage data available within web browsers opens the door to new ways of working with such data, including new types of visualization and on-the-fly processing. As part of the European project MELODIES (http://melodiesproject.eu) we examine the challenges of exposing such coverage data in an interoperable and web-friendly way, and propose solutions using a host of emerging technologies, including JSON-LD, the DCAT and GeoDCAT-AP ontologies, the CoverageJSON format, and new approaches to REST APIs for coverage data. We developed the CoverageJSON format within the MELODIES project as an additional way to expose coverage data to the web, next to having simple rendered images available using standards like OGC's WMS. CoverageJSON partially incorporates JSON-LD but does not encode individual data values as semantic resources, making use of the technology in a practical manner. The development also positioned it as a potential output format for OGC WCS. We will demonstrate how existing netCDF data can be exposed as CoverageJSON resources on the web, together with a REST API that allows users to explore the data and run operations such as spatiotemporal subsetting. We will show various use cases from the MELODIES project, including client-side reclassification of a land cover dataset within the browser, with the ability for the user to influence the reclassification result, by making use of the above technologies.
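
    A schematic of the CoverageJSON shape, constructed in Python for illustration (the values are invented and the structure is simplified; consult the CoverageJSON specification for the authoritative, complete form):

        # A tiny 2x2 grid coverage expressed in a CoverageJSON-like structure.
        import json

        coverage = {
            "type": "Coverage",
            "domain": {
                "type": "Domain",
                "domainType": "Grid",
                "axes": {
                    "x": {"values": [0.0, 1.0]},
                    "y": {"values": [50.0, 51.0]},
                },
            },
            "ranges": {
                "temperature": {
                    "type": "NdArray",
                    "dataType": "float",
                    "axisNames": ["y", "x"],
                    "shape": [2, 2],
                    "values": [280.1, 280.4, 279.8, 280.0],
                },
            },
        }
        print(json.dumps(coverage, indent=2))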

  18. A web portal for accessing, viewing and comparing in situ observations, EO products and model output data

    Science.gov (United States)

    Vines, Aleksander; Hamre, Torill; Lygre, Kjetil

    2014-05-01

    The GreenSeas project (Development of global plankton data base and model system for eco-climate early warning) aims to advance the knowledge and predictive capacities of how marine ecosystems will respond to global change. A main task has been to set up a data delivery and monitoring core service following the open and free data access policy implemented in the Global Monitoring for Environment and Security (GMES) programme. A key feature of the system is its ability to compare data from different datasets, including an option to upload one's own netCDF files. The user can, for example, search an in situ database for different variables (such as temperature, salinity, different elements, light, specific plankton types or rate measurements) with different criteria (bounding box, date/time, depth, Longhurst region, cruise/transect) and compare the data with model data. The user can choose model data or Earth observation data from a list, or upload his/her own netCDF files to use in the comparison. The data can be visualized on a map, as graphs and plots (e.g. time series and property-property plots), or downloaded in various formats. The aim is to ensure open and free access to historical plankton data, new data (EO products and in situ measurements), model data (including estimates of simulation error) and biological, environmental and climatic indicators to a range of stakeholders, such as scientists, policy makers and environmental managers. We have implemented a web-based GIS (Geographic Information System) and want to demonstrate its use. The tool is designed for a wide range of users: novice users, who want a simple way to get basic information about the current state of the marine planktonic ecosystem by utilizing predefined queries and comparisons with models; intermediate-level users, who want to explore the database on their own and customize the predefined setups; and advanced users, who want to perform complex queries and

  19. An Innovative Open Data-driven Approach for Improved Interpretation of Coverage Data at NASA JPL's PO.DAAC

    Science.gov (United States)

    McGibbney, L. J.; Armstrong, E. M.

    2016-12-01

    Figuratively speaking, scientific datasets (SD) are shared by data producers in a multitude of shapes, sizes and flavors. Primarily, however, they exist as machine-independent manifestations supporting the creation, access, and sharing of array-oriented SD that can, on occasion, be spread across multiple files. Within the Earth sciences, the most notable general examples include the HDF family, NetCDF, etc., with other formats such as GRIB used pervasively within specific domains such as the oceanographic, atmospheric and meteorological sciences. Such file formats contain coverage data, i.e. a digital representation of some spatio-temporal phenomenon. A challenge for large data producers such as NASA and NOAA, as well as for consumers of coverage datasets (particularly surrounding visualization and interactive use within web clients), is that this is still not a straightforward matter, owing to size, serialization and inherent complexity. Additionally, existing data formats are either unsuitable for the web (like netCDF files) or hard to interpret independently due to missing standard structures and metadata (e.g. the OPeNDAP protocol). Therefore alternative, web-friendly manifestations of such datasets are required. CoverageJSON is an emerging data format for publishing coverage data to the web in a web-friendly way which fits the linked-data publication paradigm, hence lowering the barrier to interpretation by consumers via mobile devices and client applications, etc., as well as by data producers, who can build next-generation, web-friendly services around datasets. This work will detail how CoverageJSON is being evaluated at NASA JPL's PO.DAAC as an enabling data representation format for publishing SD as Linked Open Data embedded within SD landing pages as well as via semantic data repositories. We are currently evaluating how utilization of CoverageJSON within SD landing pages addresses the long-standing acknowledgement that SD producers are not currently

  20. A polarimetric scattering database for non-spherical ice particles at microwave wavelengths

    Science.gov (United States)

    Lu, Yinghui; Jiang, Zhiyuan; Aydin, Kultegin; Verlinde, Johannes; Clothiaux, Eugene E.; Botta, Giovanni

    2016-10-01

    graupel - and direction of incident radiation but is limited to four frequencies (X-, Ku-, Ka-, and W-bands), does not include temperature dependencies of the single-scattering properties, and does not include scattering properties averaged over randomly oriented ice particles. Rules for constructing the morphologies of ice particles from one database to the next often differ; consequently, analyses that incorporate all of the different databases will contain the most variability, while illuminating important differences between them. Publication of this database is in support of future analyses of this nature and comes with the hope that doing so helps contribute to the development of a database standard for ice-particle scattering properties, like the NetCDF (Network Common Data Form) CF (Climate and Forecast) or NetCDF CF/Radial metadata conventions.

  1. ESA Atmospheric Toolbox

    Science.gov (United States)

    Niemeijer, Sander

    2017-04-01

    The ESA Atmospheric Toolbox (BEAT) is one of the ESA Sentinel Toolboxes. It consists of a set of software components to read, analyze, and visualize a wide range of atmospheric data products. In addition to the upcoming Sentinel-5P mission, it supports a wide range of other atmospheric data products, including those of previous ESA missions, ESA Third Party missions, the Copernicus Atmosphere Monitoring Service (CAMS), ground-based data, etc. The toolbox consists of three main components: CODA, HARP, and VISAN. CODA provides interfaces for direct reading of data from earth observation data files. These interfaces consist of command-line applications, libraries, direct interfaces to scientific applications (IDL and MATLAB), and direct interfaces to programming languages (C, Fortran, Python, and Java). CODA provides a single interface to access data in a wide variety of data formats, including ASCII, binary, XML, netCDF, HDF4, HDF5, CDF, GRIB, RINEX, and SP3. HARP is a toolkit for reading, processing and inter-comparing satellite remote sensing data, model data, in-situ data, and ground-based remote sensing data. The main goal of HARP is to assist in the inter-comparison of datasets: by appropriately chaining calls to the HARP command-line tools, one can pre-process datasets such that two datasets that need to be compared end up having the same temporal/spatial grid, the same data format/structure, and the same physical units. The toolkit comes with its own data format conventions, the HARP format, which is based on netCDF/HDF. Ingestion routines (based on CODA) allow conversion from a wide variety of atmospheric data products to this common format. In addition, the toolbox provides a wide range of operations to perform conversions on the data, such as unit conversions, quantity conversions (e.g. number density to volume mixing ratios), regridding, vertical smoothing using averaging kernels, collocation of two datasets, etc. VISAN is a cross-platform visualization and
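
    A hedged sketch of the HARP chaining idea via its Python bindings (the input file and the operations string are illustrative; the exact operation syntax should be checked against the HARP documentation):

        # Ingest a product with HARP, applying filter and derivation operations
        # at import time, then export to HARP-format netCDF. Names invented.
        import harp

        product = harp.import_product(
            "S5P_L2_NO2.nc",                  # hypothetical input product
            operations="latitude > 30; latitude < 60; "
                       "derive(tropospheric_NO2_column_number_density [molec/cm2])",
        )
        harp.export_product(product, "no2_subset.nc")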

  2. SciSpark's SRDD : A Scientific Resilient Distributed Dataset for Multidimensional Data

    Science.gov (United States)

    Palamuttam, R. S.; Wilson, B. D.; Mogrovejo, R. M.; Whitehall, K. D.; Mattmann, C. A.; McGibbney, L. J.; Ramirez, P.

    2015-12-01

    Remote sensing data and climate model output are multi-dimensional arrays of massive size locked away in heterogeneous file formats (HDF5/4, NetCDF 3/4) and metadata models (HDF-EOS, CF), making it difficult to perform multi-stage, iterative science processing, since each stage requires writing and reading data to and from disk. We have developed SciSpark, a robust Big Data framework that extends Apache Spark for scaling scientific computations. Apache Spark improves on the map-reduce implementation in Apache Hadoop for parallel computing on a cluster by emphasizing in-memory computation, "spilling" to disk only as needed, and relying on lazy evaluation. Central to Spark is the Resilient Distributed Dataset (RDD), an in-memory distributed data structure that extends the functional paradigm provided by the Scala programming language. However, RDDs are ideal for tabular or unstructured data, not for highly dimensional data. The SciSpark project introduces the Scientific Resilient Distributed Dataset (sRDD), a distributed-computing array structure which supports iterative scientific algorithms for multidimensional data. SciSpark processes data stored in NetCDF and HDF files by partitioning them across time or space and distributing the partitions among a cluster of compute nodes. We show the usability and extensibility of SciSpark by implementing distributed algorithms for geospatial operations on large collections of multi-dimensional grids. In particular we address the problem of scaling an automated method for finding Mesoscale Convective Complexes. SciSpark provides a tensor interface to support the pluggability of different matrix libraries, and we evaluate the performance of various matrix libraries, such as Nd4j and Breeze, in distributed pipelines. We detail the architecture and design of SciSpark, our efforts to integrate climate science algorithms, and parallel ingest and partitioning (sharding) of A-Train satellite observations from model grids. These
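
    As a hedged sketch of the partition-by-granule pattern that SciSpark generalizes, using plain PySpark and netCDF4 rather than the actual sRDD implementation (the file paths and variable name are invented):

        # Distribute per-file netCDF reads across a Spark cluster and reduce
        # to a global mean. Not SciSpark's sRDD, just the core idea.
        import numpy as np
        from netCDF4 import Dataset
        from pyspark import SparkContext

        def load_variable(path, var="precip"):        # hypothetical variable
            with Dataset(path) as ds:
                return np.asarray(ds.variables[var][:])

        sc = SparkContext(appName="grid-mean-sketch")
        paths = ["merg_2006091100.nc", "merg_2006091101.nc"]  # invented names
        total, count = (sc.parallelize(paths)
                          .map(load_variable)
                          .map(lambda a: (a.sum(), a.size))
                          .reduce(lambda x, y: (x[0] + y[0], x[1] + y[1])))
        print("global mean:", total / count)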

  3. SciSpark: Highly Interactive and Scalable Model Evaluation and Climate Metrics

    Science.gov (United States)

    Wilson, B. D.; Palamuttam, R. S.; Mogrovejo, R. M.; Whitehall, K. D.; Mattmann, C. A.; Verma, R.; Waliser, D. E.; Lee, H.

    2015-12-01

    Remote sensing data and climate model output are multi-dimensional arrays of massive size locked away in heterogeneous file formats (HDF5/4, NetCDF 3/4) and metadata models (HDF-EOS, CF), making it difficult to perform multi-stage, iterative science processing, since each stage requires writing and reading data to and from disk. We are developing a lightning-fast Big Data technology called SciSpark based on Apache Spark under a NASA AIST grant (PI Mattmann). Spark implements the map-reduce paradigm for parallel computing on a cluster, but emphasizes in-memory computation, "spilling" to disk only as needed, and so outperforms the disk-based Apache Hadoop by 100x in memory and by 10x on disk. SciSpark will enable scalable model evaluation by executing large-scale comparisons of A-Train satellite observations to model grids on a cluster of 10 to 1000 compute nodes. This second-generation capability for NASA's Regional Climate Model Evaluation System (RCMES) will compute simple climate metrics at interactive speeds, and extend to quite sophisticated iterative algorithms such as machine-learning-based clustering of temperature PDFs, and even graph-based algorithms for searching for Mesoscale Convective Complexes. We have implemented a parallel data ingest capability in which the user specifies desired variables (arrays) as several time-sorted lists of URLs (i.e. using OPeNDAP model.nc?varname, or local files). The specified variables are partitioned by time/space and then each Spark node pulls its bundle of arrays into memory to begin a computation pipeline. We also investigated the performance of several n-dimensional array libraries (Scala Breeze, Java jblas and netlib-java, and ND4J). We are currently developing science codes using ND4J and studying memory behavior on the JVM. On the PySpark side, many of our science codes already use the NumPy and SciPy ecosystems. The talk will cover the architecture of SciSpark, the design of the scientific RDD (sRDD) data structure, our

  4. GIS Services, Visualization Products, and Interoperability at the National Oceanic and Atmospheric Administration (NOAA) National Climatic Data Center (NCDC)

    Science.gov (United States)

    Baldwin, R.; Ansari, S.; Reid, G.; Lott, N.; Del Greco, S.

    2007-12-01

    The main goal in developing and deploying Geographic Information System (GIS) services at NOAA's National Climatic Data Center (NCDC) is to provide users with simple access to data archives while integrating new and informative climate products. Several systems at NCDC provide a variety of climatic data in GIS formats and/or map viewers. The Online GIS Map Services provide users with data discovery options which flow into detailed product selection maps, which may be queried using standard "region finder" tools or gazetteer (geographical dictionary search) functions. Each tabbed selection offers steps to help users progress through the systems. A series of additional base map layers and data types has been added to provide companion information. New map services include: Severe Weather Data Inventory, Local Climatological Data, Divisional Data, Global Summary of the Day, and Normals/Extremes products. THREDDS Data Server technology is utilized to provide access to gridded multidimensional datasets such as model, satellite and radar data. This access allows users to download data as a gridded NetCDF file, which is readable by ArcGIS. In addition, users may subset the data for a specific geographic region, time period, height range or variable prior to download. The NCDC Weather Radar Toolkit (WRT) is a client tool which accesses Weather Surveillance Radar 1988 Doppler (WSR-88D) data locally or remotely from the NCDC archive, the NOAA FTP server, or any URL or THREDDS Data Server. The WRT Viewer provides tools for custom data overlays, Web Map Service backgrounds, animations and basic filtering. The export of images and movies is provided in multiple formats. The WRT Data Exporter allows for data export in both vector polygon (Shapefile, Well-Known Text) and raster (GeoTIFF, ESRI Grid, VTK, NetCDF, GrADS) formats. As more users become accustomed to GIS, questions of better, cheaper, faster access soon follow. Expanding use and availability can best be accomplished through

  5. 'tomo_display' and 'vol_tools': IDL VM Packages for Tomography Data Reconstruction, Processing, and Visualization

    Science.gov (United States)

    Rivers, M. L.; Gualda, G. A.

    2009-05-01

    One of the challenges in tomography is the availability of suitable software for image processing and analysis in 3D. We present here 'tomo_display' and 'vol_tools', two packages created in IDL that enable reconstruction, processing, and visualization of tomographic data. They complement in many ways the capabilities offered by Blob3D (Ketcham 2005 - Geosphere, 1: 32-41, DOI: 10.1130/GES00001.1) and, in combination, allow users without programming knowledge to perform all steps necessary to obtain qualitative and quantitative information from tomographic data. The package 'tomo_display' was created and is maintained by Mark Rivers. It allows the user to: (1) preprocess and reconstruct parallel-beam tomographic data, including removal of anomalous pixels, ring artifact reduction, and automated determination of the rotation center, and (2) visualize both raw and reconstructed data, either as individual frames or as a series of sequential frames. The package 'vol_tools' consists of a series of small programs created and maintained by Guilherme Gualda to perform specific tasks not included in other packages. Existing modules include simple tools for cropping volumes, generating histograms of intensity, measuring sample volume (useful for porous samples like pumice), and computing volume differences (for differential absorption tomography). The module 'vol_animate' can be used to generate 3D animations using rendered isosurfaces around objects. Both packages use the same NetCDF-format '.volume' files created using code written by Mark Rivers. Currently, only 16-bit integer volumes are created and read by the packages, but floating-point and 8-bit data can easily be stored in the NetCDF format as well. A simple GUI to convert sequences of TIFFs into '.volume' files is available within 'vol_tools'. Both 'tomo_display' and 'vol_tools' include options to (1) generate onscreen output that allows for dynamic visualization in 3D, (2) save sequences of TIFFs to disk

  6. eWaterCycle: Building an operational global Hydrological forecasting system based on standards and open source software

    Science.gov (United States)

    Drost, Niels; Bierkens, Marc; Donchyts, Gennadii; van de Giesen, Nick; Hummel, Stef; Hut, Rolf; Kockx, Arno; van Meersbergen, Maarten; Sutanudjaja, Edwin; Verlaan, Martin; Weerts, Albrecht; Winsemius, Hessel

    2015-04-01

    At EGU 2015, the eWaterCycle project (www.ewatercycle.org) will launch an operational high-resolution global hydrological model, including 14-day ensemble forecasts. Within the eWaterCycle project we aim to use standards and open-source software as much as possible. This ensures the sustainability of the software created and the ability to swap out components as newer technologies and solutions become available. It also allows us to build the system much faster than would otherwise be the case. At the heart of the eWaterCycle system is the PCR-GLOBWB global hydrological model (www.globalhydrology.nl) developed at Utrecht University. Version 2.0 of this model is implemented in Python and models a wide range of hydrological processes at 10 x 10 km (and potentially higher) resolution. To assimilate near-real-time satellite data into the model and run an ensemble forecast, we use the OpenDA system (www.openda.org). This allows us to make use of different data assimilation techniques without the need to implement them from scratch. As a data assimilation technique we currently use a variant of the Ensemble Kalman Filter, specifically optimized for high-performance computing environments. Coupling of the model with the data assimilation is done with the Basic Model Interface (BMI), developed in the framework of the Community Surface Dynamics Modeling System (CSDMS) (csdms.colorado.edu). We have added support for BMI to PCR-GLOBWB and developed a BMI adapter for OpenDA, allowing OpenDA to use any BMI-compatible model. We currently use multiple different BMI models with OpenDA, already showing the benefits of using this standard. Throughout the system, all file-based input and output is done via NetCDF files, and we use several standard tools for pre- and post-processing the data. Finally, we use ncWMS, a NetCDF-based implementation of the Web Map Service (WMS) protocol, to serve the forecasting results, and have built a 3D web application based on Cesium.js to visualize the output. In
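
    A hedged sketch of the Basic Model Interface shape referred to above, with an invented toy model (this shows only a minimal subset of the calls; the real BMI specification defines many more):

        # Minimal BMI-style wrapper around a toy bucket model, illustrating the
        # initialize/update/get_value contract a coupler like OpenDA relies on.
        import numpy as np

        class ToyBucketBmi:
            def initialize(self, config_file):
                self.storage = np.zeros(4)    # four imaginary grid cells
                self.t = 0.0

            def update(self):                 # advance one (daily) time step
                rainfall = np.full(4, 2.0)    # invented forcing, mm/day
                self.storage += rainfall - 0.1 * self.storage  # linear drainage
                self.t += 1.0

            def get_current_time(self):
                return self.t

            def get_value(self, name):
                assert name == "soil_water_storage"
                return self.storage.copy()

        model = ToyBucketBmi()
        model.initialize("toy.cfg")
        for _ in range(3):
            model.update()
        print(model.get_value("soil_water_storage"))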

  7. ACCESSING HDF DATA VIA OPENDAP

    Science.gov (United States)

    Yang, M.; Lee, H.; Folk, M. J.

    2009-12-01

    HDF is a set of data formats and software libraries for storing scientific data with an emphasis on standards, storage, and I/O efficiency. The HDF-EOS version 2 (HDF-EOS2) profile and library, built on top of HDF version 4 (HDF4), define and implement the standard data format for the NASA Earth Science Data and Information System (ESDIS). Since the launch of Terra in 1999, the EOS Data and Information System (EOSDIS) has produced more than three terabytes of EOS earth science data daily. More than five hundred data products in NASA data centers are stored in HDF4. HDF5 is a newer data format that has been embraced as an important data format for Earth science. HDF-EOS5, which is built on top of HDF5, is the primary data format for data from the Aura satellite, and HDF5 is being used as the data format for data products produced by the National Polar-orbiting Operational Environmental Satellite System (NPOESS). The newer version of netCDF, netCDF-4, is built on top of HDF5. The OPeNDAP Data Access Protocol (DAP) and its related software (servers and clients) have emerged as important components of the earth science data system infrastructure. The OPeNDAP protocol is widely used to remotely access earth science data. Several third-party visualization and analysis tools that can read data from OPeNDAP servers, such as IDV, GrADS, Ferret, NCL, MATLAB, and IDL, are widely used by many earth scientists, researchers, and educators to access HDF earth science data. Ensuring easy access to HDF4, HDF5 and HDF-EOS data via the above tools through OPeNDAP will reduce the time it takes HDF users to visualize the data in their favorite way and accordingly improve their working efficiency. In the past two years, under the support of NASA ESDIS and ACCESS projects, The HDF Group implemented the HDF5-OPeNDAP data handler so that some NASA HDF-EOS5 Aura Swath and Grid data can be accessed by widely used visualization and analysis tools such as IDV, GrADS, Ferret, NCL and IDL via OPeNDAP. The HDF

  8. Data Interoperability and Standards Within A Large International Remote Sensing Project

    Science.gov (United States)

    Armstrong, E. M.; Vazquez, J.; Casey, K.

    2008-12-01

    The Group for High Resolution Sea Surface Temperature (GHRSST) project is an international collaboration which, in 2002, initiated a pilot project under the auspices of GODAE (Global Ocean Data Assimilation Experiment) to address an emerging need for accurate high resolution satellite-based sea surface temperature (SST) products for ocean modeling. The GHRSST project brings together international space agencies, research institutes, universities, and government agencies to collectively address the scientific, logistical and managerial challenges posed by creating the SST data products and services. Currently, the project produces over 30 unique SST products from over 11 different satellite sensors at varying spatial scales and processing levels on a daily basis for a variety of applications including ocean modeling, weather forecasting, climate research and fisheries management. Commensurate with the large data volumes and diversity of satellite, in situ and ancillary oceanographic data for the GHRSST, a significant investment was made in its data management infrastructure. This has included task sharing between NASA and NOAA on distribution and archiving, adherence to community standards with regard to data and metadata protocols and interoperability, and use of contemporary distribution and data discovery mechanisms. We will describe some of these components in detail, review some of the lessons learned and give an overview of some of the emerging protocols under consideration, including the ISO 19115-2 metadata format and the netCDF version 4 file format.

  9. The Climate Data Analytic Services (CDAS) Framework.

    Science.gov (United States)

    Maxwell, T. P.; Duffy, D.

    2016-12-01

    Faced with unprecedented growth in climate data volume and demand, NASA has developed the Climate Data Analytic Services (CDAS) framework. This framework enables scientists to execute data processing workflows combining common analysis operations in a high performance environment close to the massive data stores at NASA. The data is accessed in standard (NetCDF, HDF, etc.) formats in a POSIX file system and processed using vetted climate data analysis tools (ESMF, CDAT, NCO, etc.). A dynamic caching architecture enables interactive response times. CDAS utilizes Apache Spark for parallelization and a custom array framework for processing huge datasets within limited memory spaces. CDAS services are accessed via a WPS API being developed in collaboration with the ESGF Compute Working Team to support server-side analytics for ESGF. The API can be accessed using direct web service calls, a Python script, a Unix-like shell client, or a JavaScript-based web application. Client packages in Python, Scala, or JavaScript contain everything needed to make CDAS requests. The CDAS architecture brings together the tools, data storage, and high-performance computing required for timely analysis of large-scale data sets, where the data resides, to ultimately produce societal benefits. It is currently deployed at NASA in support of the Collaborative REAnalysis Technical Environment (CREATE) project, which centralizes numerous global reanalysis datasets onto a single advanced data analytics platform. This service permits decision makers to investigate climate changes around the globe, inspect model trends and variability, and compare multiple reanalysis datasets.
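
    Since the CDAS API follows the OGC WPS pattern, a request can in principle be issued with nothing more than an HTTP client. The fragment below is a hedged sketch of such a call, not the documented CDAS interface: the host name, operation identifier, and datainputs syntax are invented placeholders.

        # Hypothetical WPS-style request to a server-side analytics endpoint.
        import requests

        params = {
            "service": "WPS",
            "request": "Execute",
            "version": "1.0.0",
            "identifier": "average",   # one of the common operations (min, max, ...)
            "datainputs": "variable=tas;domain=global;time=1980-01/2015-12",
        }
        response = requests.get("https://cds.example.nasa.gov/wps",
                                params=params, timeout=120)
        response.raise_for_status()
        print(response.text)           # WPS returns an XML status/result document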

  10. SchemaOnRead: A Package for Schema-on-Read in R

    Energy Technology Data Exchange (ETDEWEB)

    North, Michael J.

    2016-08-01

    Schema-on-read is an agile approach to data storage and retrieval that defers investments in data organization until production queries need to be run by working with data directly in native form. Schema-on-read functions have been implemented in a wide range of analytical systems, most notably Hadoop. SchemaOnRead is a CRAN package that uses R’s flexible data representations to provide transparent and convenient support for the schema-on-read paradigm in R. The schema-on-read tools within the package include a single function call that recursively reads folders with text, comma separated value, raster image, R data, HDF5, NetCDF, spreadsheet, Weka, Epi Info, Pajek network, R network, HTML, SPSS, Systat, and Stata files. The provided tools can be used as-is or easily adapted to implement customized schema-on-read tool chains in R. This paper’s contribution is that it introduces and describes SchemaOnRead, the first R package specifically focused on providing explicit schema-on-read support in R.

  11. Development of a gridded meteorological dataset over Java island, Indonesia 1985-2014

    Science.gov (United States)

    Yanto; Livneh, Ben; Rajagopalan, Balaji

    2017-05-01

    We describe a gridded daily meteorology dataset consisting of precipitation, minimum and maximum temperature over Java Island, Indonesia at 0.125°×0.125° (~14 km) resolution spanning the 30 years from 1985 to 2014. Importantly, this data set represents a marked improvement over existing gridded data sets for Java, with higher spatial resolution and derived exclusively from ground-based observations, unlike existing satellite- or reanalysis-based products. Gap-infilling and gridding were performed via Inverse Distance Weighting (IDW) interpolation (radius, r, of 25 km and power of influence, α, of 3 as optimal parameters), restricted to stations with at least 3,650 days (~10 years) of valid data. We employed the MSWEP and CHIRPS rainfall products in cross-validation, which shows that the gridded rainfall presented here gives the most reasonable performance of the compared products. Visual inspection reveals increasing performance of the gridded precipitation from grid to watershed to island scale. The data set, stored in network common data form (NetCDF), is intended to support watershed-scale and island-scale studies of short-term and long-term climate, hydrology and ecology.
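
    For readers unfamiliar with the interpolation scheme, the following is a minimal self-contained sketch of inverse distance weighting with the optimal parameters quoted above (r = 25 km, α = 3); the station coordinates and rainfall values are synthetic.

        # Minimal IDW gap-infilling sketch with r = 25 km and power alpha = 3.
        import numpy as np

        def idw(x0, y0, xs, ys, values, radius_km=25.0, power=3.0):
            """Estimate the value at (x0, y0) from stations within radius_km."""
            d = np.hypot(xs - x0, ys - y0)     # station distances in km
            near = d <= radius_km
            if not near.any():
                return np.nan                  # no station close enough
            d = np.maximum(d[near], 1e-6)      # guard against division by zero
            w = 1.0 / d**power                 # weights fall off as 1/d^3
            return float(np.sum(w * values[near]) / np.sum(w))

        # Synthetic example: three stations (coordinates in km), rainfall in mm/day.
        xs = np.array([3.0, 10.0, 20.0])
        ys = np.array([4.0, 0.0, 15.0])
        print(idw(0.0, 0.0, xs, ys, np.array([5.0, 7.0, 12.0])))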

  12. Geoscience data standards, software implementations, and the Internet. Where we came from and where we might be going.

    Science.gov (United States)

    Blodgett, D. L.

    2014-12-01

    Geographic information science and the coupled database and software systems that have grown from it have been evolving since the early 1990s. The multi-file shapefile package, invented early in this evolution, is an example of a highly generalized file format that can serve as an archival format, an interchange format, and a format for program execution. Other formats, such as GeoTIFF and NetCDF, have similar characteristics. These de-facto standard formats (in contrast to formally defined and published standards), while not initially designed for machine-readable web services, are used in them extensively. Relying on these formats allows legacy software to be adapted to web services, but may require complicated software development to handle dynamic introspection of these legacy file formats' metadata. A generalized system of web-service types that offer archive, interchange, and run-time capabilities based on commonly implemented file formats and established web-service specifications has emerged from exemplar implementations. For example, an Open Geospatial Consortium (OGC) Web Feature Service is used to serve sites or model polygons, and an OGC Sensor Observation Service provides time series data for the sites. The broad system of data formats, web-service types, and freely available software that implements the system will be described. The presentation will include a perspective on the future of this basic system and how it relates to scientific domain-specific information models such as the Open Geospatial Consortium standards for geographic, hydrologic, and hydrogeologic data.

  13. Optimization and Control of Burning Plasmas Through High Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Pankin, Alexei

    2017-12-18

    This project has revived the FACETS code, which was developed under SciDAC funding in 2008-2012. The code had been dormant for a number of years after the SciDAC funding stopped. FACETS depends on external packages, and the external packages and libraries such as PETSc, FFTW, HDF5 and NETCDF that are included in FACETS have evolved during these years. Some packages in FACETS are also parts of other codes such as PlasmaState, NUBEAM, GACODES, and UEDGE. These packages have also evolved together with their host codes, which include TRANSP, TGYRO and XPTOR. Finally, there is also a set of packages in FACETS that are being developed and maintained by Tech-X; these include BILDER, SciMake, and FcioWrappers. Many of these packages evolved significantly during the last several years, and FACETS had to be updated to synchronize with the recent progress in the external packages. The PI has introduced new changes to the BILDER package to support the updated interfaces to the external modules. During the last year of the project, the FACETS version of the UEDGE code was extracted from FACETS as a standalone package. The PI collaborates with scientists from LLNL on the updated UEDGE model in FACETS; Drs. T. Rognlien, M. Umansky and A. Dimits from LLNL are contributing to this task.

  14. Building the Column: Ground-Up Integration of Multi-Sensor Precipitation Observations

    Science.gov (United States)

    Wingo, S. M.; Marks, D. A.; Wolff, D. B.; Petersen, W. A.

    2016-12-01

    As part of NASA Wallops Flight Facility's GPM Ground-Validation work, the Precipitation Research Facility maintains and operates a suite of ground-based instrumentation comprised of a wide spectrum (S- to W-band) of scanning and vertically pointing radars and multiple types of disdrometers and rain gauges. Presently, routine data collection occurs at locations across the coastal Virginia/Maryland region, and the majority of these platforms were also deployed in each of the GPM field campaigns (HyMEx, GCPEx, MC3E, IFloodS, IPHEx, and OLYMPEX). To enable more efficient statistical, cross-platform, and ground-validation studies, a system for integrating these multi-sensor precipitation measurements throughout the atmospheric column, including space-based GPM GMI and DPR observations, has been developed. Within any set column grid area (the user defines the grid center location, horizontal and vertical spacing, and total extent), coincident observations are extracted from native data formats and placed into the column framework. After data from all available platforms are set into the column grid, a new data product is written in NetCDF format, affording versatility and portability. These column data files also include original details on each platform's operating parameters obtained from the native data (e.g., exact location, scanning sequence, timestamps) as attributes. This presentation will provide an overview of the development of the column-building framework and touch on research avenues utilizing the new data product.
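
    As an illustration of the product structure described above, the sketch below writes a toy column file with the netCDF4 Python library, carrying platform operating parameters as variable attributes; the dimensions, variable name, and attribute values are invented for the example and do not reflect the actual product specification.

        # Hypothetical column product file; names and values are illustrative.
        import numpy as np
        from netCDF4 import Dataset

        with Dataset("column_product.nc", "w") as nc:
            nc.createDimension("x", 5)    # user-defined horizontal grid
            nc.createDimension("y", 5)
            nc.createDimension("z", 20)   # vertical levels in the column

            refl = nc.createVariable("radar_reflectivity", "f4", ("z", "y", "x"))
            refl.units = "dBZ"
            refl[:] = np.full((20, 5, 5), -999.0, dtype="f4")  # fill values only;
                                                               # real data come from
                                                               # the native files

            # Original platform details travel with the product as attributes.
            refl.platform = "NPOL S-band radar"
            refl.scan_sequence = "RHI"
            refl.platform_latitude = 37.934
            refl.platform_longitude = -75.471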

  15. The Aegean sea marine security decision support system

    Science.gov (United States)

    Perivoliotis, L.; Krokos, G.; Nittis, K.; Korres, G.

    2011-10-01

    As part of the integrated ECOOP (European Coastal Sea Operational observing and Forecasting System) project, HCMR upgraded the already existing standalone Oil Spill Forecasting System for the Aegean Sea, initially developed for the Greek Operational Oceanography System (POSEIDON), into an active element of the European Decision Support System (EuroDeSS). The system is accessible through a user-friendly web interface where case scenarios can be fed into the oil spill drift model component, while the synthetic output contains detailed information about the distribution of oil spill particles and the oil spill budget, provided both in the text-based ECOOP common output format and as a series of sequential graphics. The main development steps necessary for this transition were the modification of the forcing input data module to allow the import of other system products, which are usually provided in standard formats such as NetCDF, and the transformation of the model's calculation routines to allow the use of current, density and diffusivity data in z instead of sigma coordinates. During the implementation of the Aegean DeSS, the system was used in operational mode to support the Greek marine authorities in handling a real accident that took place in the North Aegean area. Furthermore, the introduction of common input and output files by all the partners of EuroDeSS extended the system's interoperability, thus facilitating data exchanges and comparison experiments.

  16. A Columnar Storage Strategy with Spatiotemporal Index for Big Climate Data

    Science.gov (United States)

    Hu, F.; Bowen, M. K.; Li, Z.; Schnase, J. L.; Duffy, D.; Lee, T. J.; Yang, C. P.

    2015-12-01

    Large collections of observational, reanalysis, and climate model output data may grow to as large as 100 PB in the coming years, placing climate datasets squarely in the Big Data domain, and various distributed computing frameworks have been utilized to address the challenges of big climate data analysis. However, due to the binary data formats (NetCDF, HDF) with high spatial and temporal dimensions, the computing frameworks in the Apache Hadoop ecosystem are not natively suited to big climate data. In order to make the computing frameworks in the Hadoop ecosystem directly support big climate data, we propose a columnar storage format with a spatiotemporal index for climate data, which will support any project in the Apache Hadoop ecosystem (e.g. MapReduce, Spark, Hive, Impala). With this approach, the climate data are transferred into the binary Parquet data format, a columnar storage format, and a spatial and temporal index is built and appended to the end of the Parquet files to enable real-time data query. Such climate data in Parquet format then become available to any computing framework in the Hadoop ecosystem. The proposed approach is evaluated using the NASA Modern-Era Retrospective Analysis for Research and Applications (MERRA) climate reanalysis dataset. Experimental results show that this approach can efficiently bridge the gap between big climate data and the distributed computing frameworks, and that the spatiotemporal index significantly accelerates data querying and processing.
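
    The conversion step can be pictured as flattening each gridded variable into a table whose columns are the space/time coordinates plus the value, then writing that table as Parquet; Parquet's per-row-group min/max statistics then act as a coarse index for query pruning. The sketch below is one plausible rendering of this idea in Python, not the authors' implementation; the file and variable names are placeholders.

        # Hypothetical NetCDF-to-Parquet flattening sketch.
        import xarray as xr
        import pyarrow as pa
        import pyarrow.parquet as pq

        ds = xr.open_dataset("merra_subset.nc")          # placeholder input file
        df = ds["T2M"].to_dataframe().reset_index()      # columns: time, lat, lon, T2M
        df = df.sort_values(["time", "lat", "lon"])      # cluster rows in space/time

        table = pa.Table.from_pandas(df, preserve_index=False)
        pq.write_table(table, "merra_subset.parquet",
                       row_group_size=100_000)           # min/max stats per row group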

  17. A flexible tool for diagnosing water, energy, and entropy budgets in climate models

    Science.gov (United States)

    Lembo, Valerio; Lucarini, Valerio

    2017-04-01

    We have developed new flexible software for studying the global energy budget, the hydrological cycle, and the material entropy production of global climate models. The program receives as input radiative, latent and sensible energy fluxes, with the requirement that the variable names are in agreement with the Climate and Forecast (CF) conventions for the production of NetCDF datasets. Annual mean maps, meridional sections and time series are computed by means of the Climate Data Operators (CDO), a collection of command-line operators developed at the Max Planck Institute for Meteorology (MPI-M). If a land-sea mask is provided, the program also computes the required quantities separately over the continents and oceans. Depending on the user's choice, the program also calls MATLAB to compute meridional heat transports and the location and intensities of their peaks in the two hemispheres. We are currently planning to adapt the program for inclusion in the Earth System Model eValuation Tool (ESMValTool) community diagnostics.
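
    As a rough indication of the kind of diagnostic involved (not the authors' code), the sketch below computes an area-weighted global-mean time series and annual mean maps of the net surface energy flux from CF-named flux variables; the input file name is a placeholder and the variable names follow common CF/CMIP usage.

        # Hedged sketch of an energy-budget diagnostic from CF-named fluxes.
        import numpy as np
        import xarray as xr

        ds = xr.open_dataset("fluxes.nc")  # expects CF names: rsds, rsus, rlds, ...
        net = (ds["rsds"] - ds["rsus"] + ds["rlds"] - ds["rlus"]
               - ds["hfls"] - ds["hfss"])            # W m-2, positive downward

        weights = np.cos(np.deg2rad(ds["lat"]))      # area weights on a lat/lon grid
        series = net.weighted(weights).mean(dim=("lat", "lon"))  # global-mean series
        annual_maps = net.groupby("time.year").mean("time")      # one map per year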

  18. NASA World Wind Near Real Time Data for Earth

    Science.gov (United States)

    Hogan, P.

    2013-12-01

    Innovation requires open standards for data exchange, not to mention access to data, so that value-added information intelligence can be continually created and advanced by the larger community. Likewise, innovation by academia and entrepreneurial enterprise alike is greatly benefited by an open platform that provides the basic technology for access and visualization of that data. NASA World Wind Java, and now NASA World Wind iOS for the iPhone and iPad, provides that technology. Whether the interest is weather science or climate science, emergency response or supply chain, seeing spatial data in its native context of Earth accelerates understanding and improves decision-making. NASA World Wind open source technology provides the basic elements for 4D visualization, using Open Geospatial Consortium (OGC) protocols, while allowing for customized access to any data, big or small, including support for NetCDF. NASA World Wind includes access to a suite of US Government WMS servers with near real time data. The larger community can readily capitalize on this technology, building their own value-added applications, either open or proprietary.

  19. Tools and strategies for instrument monitoring, data mining and data access

    Science.gov (United States)

    van Hees, R. M., ,, Dr

    2009-04-01

    In particular, the packet table API allows very compact storage of compound data sets and very fast read/write access; details about this implementation and its pitfalls will be given in the presentation. [Data Mining] The ability to select relevant data is a requirement that all data centers have to offer. The NL-SCIA-DC allows users to select data using several criteria, including time, geo-location, type of observation and data quality. The result of a query is [i] the location and name of relevant data products (files), [ii] a listing of metadata of the relevant measurements, or [iii] a listing of the measurements themselves (level 2 or higher). For this application we need the power of a relational database, the SQL language, and the availability of spatial functions; PostgreSQL, extended with PostGIS support, turned out to be a good choice. Common queries on tables with millions of rows can be executed within seconds. [Data Deployment] The dissemination of scientific data is often hampered by the use of many different formats to store the products, so time-consuming and inefficient conversions are needed to use data products of different origins. Within the Atmospheric Data Access for the Geospatial User Community (ADAGUC) project we provide selected space-borne atmospheric and land data sets in the same data format and consistent internal structure, so that users can easily use and combine data. The common format for storage is HDF5, but the netCDF-4 API is used to create the data sets. The metadata and dataset attributes follow the netCDF Climate and Forecast (CF) conventions; in addition, metadata complying with the ISO 19115:2003 INSPIRE profile are added. The advantage of netCDF-4 is that the API is essentially equal to netCDF-3 (with a few extensions), while the data format is HDF5 (recognized by many scientific tools). The added metadata ensure product traceability. Details will be given in the presentation and several posters.
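
    The "netCDF-4 API on top of HDF5 storage" arrangement mentioned above is easy to demonstrate: writing through the netCDF-4 library produces a file that HDF5 tools can open directly. The sketch below is illustrative only; the attribute values stand in for the ADAGUC product definitions.

        # Hedged sketch: a CF-annotated netCDF-4 file (HDF5 on disk).
        from netCDF4 import Dataset

        with Dataset("adaguc_example.nc", "w", format="NETCDF4") as nc:
            nc.Conventions = "CF-1.6"
            nc.title = "Example atmospheric data set"
            nc.metadata_standard = "ISO 19115:2003 INSPIRE profile"  # traceability

            nc.createDimension("time", None)          # unlimited record dimension
            t = nc.createVariable("time", "f8", ("time",))
            t.standard_name = "time"
            t.units = "days since 2000-01-01 00:00:00"

        # The resulting file is plain HDF5 underneath, so h5py or HDFView can read it.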

  20. A Long-Term and Reproducible Passive Microwave Sea Ice Concentration Data Record for Climate Studies and Monitoring

    Science.gov (United States)

    Peng, G.; Meier, W. N.; Scott, D. J.; Savoie, M. H.

    2013-01-01

    A long-term, consistent, and reproducible satellite-based passive microwave sea ice concentration climate data record (CDR) is available for climate studies, monitoring, and model validation with an initial operating capability (IOC). The daily and monthly sea ice concentration data are on the National Snow and Ice Data Center (NSIDC) polar stereographic grid with nominal 25 km × 25 km grid cells in both the Southern and Northern Hemisphere polar regions from 9 July 1987 to 31 December 2007. The data files are available in the NetCDF data format at http://nsidc.org/data/g02202.html and archived by the National Climatic Data Center (NCDC) of the National Oceanic and Atmospheric Administration (NOAA) under the satellite climate data record program (http://www.ncdc.noaa.gov/cdr/operationalcdrs.html). The description and basic characteristics of the NOAA/NSIDC passive microwave sea ice concentration CDR are presented here. The CDR provides similar spatial and temporal variability as the heritage products to the user communities, with the additional documentation, traceability, and reproducibility that meet current standards and guidelines for climate data records. The data set, along with detailed data processing steps and error source information, can be found at http://dx.doi.org/10.7265/N5B56GN3.

  1. US Geoscience Information Network, Web Services for Geoscience Information Discovery and Access

    Science.gov (United States)

    Richard, S.; Allison, L.; Clark, R.; Coleman, C.; Chen, G.

    2012-04-01

    The US Geoscience Information Network has developed metadata profiles for interoperable catalog services based on ISO 19139 and OGC CSW 2.0.2. Currently, data services are being deployed for the US Dept. of Energy-funded National Geothermal Data System. These services utilize OGC Web Map Services, Web Feature Services, and THREDDS-served NetCDF for gridded datasets. Services and underlying datasets (along with a wide variety of other information and non-information resources) are registered in the catalog system. Metadata for registration is produced by various workflows, including harvest from OGC capabilities documents, Drupal-based web applications, and transformation from tabular compilations. Catalog search is implemented using the ESRI Geoportal open-source server. We are pursuing various client applications to demonstrate discovery and utilization of the data services. Currently operational applications include an ESRI ArcMap extension for catalog search and data acquisition from map services, and a catalog browse and search application built on OpenLayers and Django. We are developing use cases and requirements for other applications to utilize geothermal data services for resource exploration and evaluation.

  2. FluxnetLSM R package (v1.0): a community tool for processing FLUXNET data for use in land surface modelling

    Science.gov (United States)

    Ukkola, Anna M.; Haughton, Ned; De Kauwe, Martin G.; Abramowitz, Gab; Pitman, Andy J.

    2017-09-01

    Flux towers measure ecosystem-scale surface-atmosphere exchanges of energy, carbon dioxide and water vapour. The network of flux towers now encompasses ˜ 900 sites, spread across every continent. Consequently, these data have become an essential benchmarking tool for land surface models (LSMs). However, these data as released are not immediately usable for driving, evaluating and benchmarking LSMs. Flux tower data must first be transformed into an LSM-readable file format, a process which involves changing units, screening missing data and applying varying degrees of additional gap-filling. All of this often leads to an under-utilisation of these data in model benchmarking. To resolve some of these issues, and to help make flux tower measurements more widely used, we present a reproducible, open-source R package that transforms the FLUXNET2015 and La Thuile data releases into community-standard NetCDF files that are directly usable by LSMs. We note that these data would also be useful for any other user or community seeking to independently quality control, gap-fill or use the FLUXNET data.

  3. Himawari Support In The CSPP-GEO Direct Broadcast Package

    Science.gov (United States)

    Cureton, G. P.; Martin, G.

    2016-12-01

    The Cooperative Institute for Meteorological Satellite Studies (CIMSS) has a long history of supporting the Direct Broadcast (DB) community for various sensors, recently with the International MODIS/AIRS Processing Package (IMAPP) for the NASA EOS polar orbiters Terra and Aqua, and the Community Satellite Processing Package (CSPP) for the NOAA polar orbiter Suomi-NPP. CSPP has been significant in encouraging the early usage of Suomi-NPP data by US and international weather agencies, and it is hoped that a new package, CSPP-GEO, will similarly encourage usage of DB data from GOES-R, Himawari, and other geostationary satellites. The support of Himawari-8 presents several challenges for the CSPP-GEO-Geocat package, which generally revolve around the greatly increased data rate associated with a subsatellite-point footprint approaching 1 km. CSPP-GEO-Geocat takes advantage of Python shared-memory multiprocessing support to divide Himawari data into manageable pieces, which are then farmed out to individual cores for processing by the underlying geocat code. The resulting product segments are then stitched together to make the final product NetCDF4 files. CSPP-GEO-Geocat will support high-data-rate HRIT input, as well as the reduced-resolution HimawariCast direct broadcast data stream. Products supported by CSPP-GEO-Geocat include the level-1 reflective and emissive bands, as well as level-2 products such as cloud mask, cloud type, optical depth and particle size, and cloud-top temperature and pressure.
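
    The segment-and-stitch pattern described above is straightforward to sketch with Python's standard multiprocessing module; the array size, segment count, and the trivial per-segment function below are placeholders for the actual geocat science code.

        # Minimal segment/farm-out/stitch sketch (placeholders throughout).
        import numpy as np
        from multiprocessing import Pool

        def process_segment(segment):
            return segment * 0.5 + 10.0                # stand-in for the retrieval

        if __name__ == "__main__":
            scene = np.random.rand(2000, 2000)         # reduced size; a real AHI
                                                       # full-disk scene is larger
            segments = np.array_split(scene, 8, axis=0)  # manageable pieces
            with Pool(processes=8) as pool:
                results = pool.map(process_segment, segments)  # one piece per core
            product = np.vstack(results)               # stitched final product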

  4. Model Sharing and Collaboration using HydroShare

    Science.gov (United States)

    Goodall, J. L.; Morsy, M. M.; Castronova, A. M.; Miles, B.; Merwade, V.; Tarboton, D. G.

    2015-12-01

    HydroShare is a web-based system funded by the National Science Foundation (NSF) for sharing hydrologic data and models as resources. Resources in HydroShare can either be assigned a generic type, meaning the resource only has Dublin Core metadata properties, or one of a growing number of specific resource types with enhanced metadata profiles defined by the HydroShare development team. Examples of specific resource types in the current release of HydroShare (http://www.hydroshare.org) include time series, geographic raster, multidimensional (NetCDF), model program, and model instance. Here we describe research and development efforts in the HydroShare project for model-related resource types. This work has included efforts to define metadata profiles for common modeling resources, execute models directly through the HydroShare user interface using Docker containers, and interoperate with the 3rd-party application SWATShare for model execution and visualization. These examples demonstrate the benefit of HydroShare for supporting model sharing and addressing collaborative problems involving modeling. The presentation will conclude with plans for future modeling-related development in HydroShare, including supporting the publication of workflow resources, enhanced metadata for additional hydrologic models, and linking model resources with other resources in HydroShare to capture model provenance.

  5. MHD stability module for the National Transport Code Collaboration Library

    Science.gov (United States)

    Pletzer, A.; Manickam, J.; Jardin, S. C.; McCune, D.; Ludescher, Ch.; Klasky, S.; Randerson, L.

    1999-11-01

    There is a need to provide numerical tools to the fusion community that are robust, portable, easy to use, documented, and reviewed by independent peers. A web site (http://w3.pppl.gov/NTCC) where modules can be freely downloaded has been set up for that purpose [Status of the NTCC Modules Library (D McCune)]. The existence of such a library is additionally motivated by the increasing demand for programs that can be plugged into large packages with minimal effort. In particular, there have been requests to make MHD stability codes such as PEST, which are capable of simulating large-scale plasma phenomena, available in the NTCC module library. Progress on the work to convert PEST to satisfy the NTCC module standards is presented. The resulting new PEST interface is a collection of subroutines which initialize, modify and extract data. Dynamic memory allocation is introduced to minimize memory requirements and allow for multiple runs. Embedded graphics routines are disabled, and dependence on native binary files is replaced by portable NetCDF files. To illustrate the flexibility of the module approach, numerical results obtained by integrating PEST-3, the mapping code DMAP and the equilibrium JSOLVER modules into a C++ and Java environment with remote database connectivity are presented.

  6. Cloud-Enabled Climate Analytics-as-a-Service using Reanalysis data: A case study.

    Science.gov (United States)

    Nadeau, D.; Duffy, D.; Schnase, J. L.; McInerney, M.; Tamkin, G.; Potter, G. L.; Thompson, J. H.

    2014-12-01

    The NASA Center for Climate Simulation (NCCS) maintains advanced data capabilities and facilities that allow researchers to access the enormous volume of data generated by weather and climate models. The NASA Climate Model Data Service (CDS) and the NCCS are merging their efforts to provide Climate Analytics-as-a-Service for the comparative study of the major reanalysis projects: ECMWF ERA-Interim, NASA/GMAO MERRA, NOAA/NCEP CFSR, NOAA/ESRL 20CR, JMA JRA-25, and JRA-55. These reanalyses have been repackaged into the netCDF4 file format following the CMIP5 Climate and Forecast (CF) metadata convention prior to being sequenced into the Hadoop Distributed File System (HDFS). A small set of operations that represent a common starting point in many analysis workflows was then created: min, max, sum, count, variance and average. In this example, reanalysis data exploration was performed using Hadoop MapReduce, and accessibility was achieved using the Climate Data Service (CDS) application programming interface (API) created at NCCS. This API provides a uniform treatment of large amounts of data. In this case study, we limited our exploration to two variables, temperature and precipitation, using three operations (min, max and avg) and 30 years of reanalysis data for three regions of the world: global, polar, and subtropical.
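
    The listed operations compose well in a MapReduce setting because each mapper can emit a small partial summary that a reducer merges. The toy sketch below shows that combine logic in plain Python; real CDS jobs run over the sequenced reanalysis records in HDFS rather than in-memory lists.

        # Toy map/reduce combine for (min, max, sum, count) partial summaries.
        from functools import reduce

        def mapper(chunk):
            return (min(chunk), max(chunk), sum(chunk), len(chunk))

        def combiner(a, b):
            return (min(a[0], b[0]), max(a[1], b[1]), a[2] + b[2], a[3] + b[3])

        chunks = [[280.1, 281.4], [279.0, 283.2], [282.5]]  # e.g. temperature (K)
        lo, hi, total, n = reduce(combiner, map(mapper, chunks))
        print(lo, hi, total / n)                            # min, max, average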

  7. Expanding HadISD: quality-controlled, sub-daily station data from 1931

    Science.gov (United States)

    Dunn, Robert J. H.; Willett, Kate M.; Parker, David E.; Mitchell, Lorna

    2016-09-01

    HadISD is a sub-daily, station-based, quality-controlled dataset designed to study past extremes of temperature, pressure and humidity and to allow comparisons to future projections. Herein we describe the first major update to the HadISD dataset. The temporal coverage of the dataset has been extended to span 1931 to the present, doubling the time range over which data are provided. Improvements made to the station selection and merging procedures result in 7677 stations being provided in version 2.0.0.2015p of this dataset. The selection of stations to merge together into composites has also been improved and made more robust. The underlying structure of the quality-control procedure is the same as for HadISD.1.0.x, but a number of improvements have been implemented in individual tests, and more detailed quality-control tests for wind speed and direction have been added. The data will be made available as NetCDF files at http://www.metoffice.gov.uk/hadobs/hadisd and updated annually.

  8. A Geospatial Database that Supports Derivation of Climatological Features of Severe Weather

    Science.gov (United States)

    Phillips, M.; Ansari, S.; Del Greco, S.

    2007-12-01

    The Severe Weather Data Inventory (SWDI) at NOAA's National Climatic Data Center (NCDC) provides user access to archives of several datasets critical to the detection and evaluation of severe weather. These datasets include archives of:
    - NEXRAD Level-III point features describing general storm structure, hail, mesocyclone and tornado signatures
    - the National Weather Service Storm Events Database
    - National Weather Service Local Storm Reports collected from storm spotters
    - National Weather Service Warnings
    - lightning strikes from Vaisala's National Lightning Detection Network (NLDN)
    SWDI archives all of these datasets in a spatial database that allows for convenient searching and subsetting. These data are accessible via the NCDC web site, Web Feature Services (WFS) or automated web services. The results of interactive web page queries may be saved in a variety of formats, including plain text, XML, Google Earth's KMZ, standards-based NetCDF and Shapefile. NCDC's Storm Risk Assessment Project (SRAP) uses data from the SWDI database to derive gridded climatology products that show the spatial distributions of the frequency of various events. SRAP can also relate SWDI events to other spatial data such as roads, population, watersheds, and other geographic, sociological, or economic data to derive products that are useful in municipal planning, emergency management, the insurance industry, and other areas where there is a need to quantify and qualify how severe weather patterns affect people and property.

  9. MAST's Integrated Data Access Management system: IDAM

    Energy Technology Data Exchange (ETDEWEB)

    Muir, D.G. [EURATOM/UKAEA Fusion Association, Culham Science Centre, Abingdon, Oxfordshire OX14 3DB (United Kingdom)], E-mail: david.g.muir@ukaea.org.uk; Appel, L.; Conway, N.J.; Kirk, A.; Martin, R.; Meyer, H.; Storrs, J.; Taylor, D.; Thomas-Davies, N.; Waterhouse, J. [EURATOM/UKAEA Fusion Association, Culham Science Centre, Abingdon, Oxfordshire OX14 3DB (United Kingdom)

    2008-04-15

    A new Integrated Data Access Management system, IDAM, has been created to address specific data management issues of the MAST spherical tokamak. For example, this system enables access to numerous file formats, both legacy and modern (IDA, Ufile, netCDF, HDF5, MDSplus, PPF, JPF). It adds data quality values at the signal level and automatically corrects for problems in the data: in timings, calibrations, and labelling. It also builds new signals from signal components. The IDAM data server uses a hybrid XML-relational database to record how data are accessed, whether locally or remotely, and how alias and generic signal names are mapped to true names. XML documents are also used to encode the details of data corrections, as well as definitions of composite signals and error models. The simple, user-friendly API and accessor function library, written in C on Linux, is available for applications in C, C++, IDL and Fortran-90/95/2003, with good performance: a MAST plasma current trace (28 kbytes of data), requested using a generic name and with data corrections applied, is delivered over a 100 Mbit/s network in ~13 ms.

  10. Ten Years of Cloud Properties from MODIS: Global Statistics and Use in Climate Model Evaluation

    Science.gov (United States)

    Platnick, Steven E.

    2011-01-01

    The NASA Moderate Resolution Imaging Spectroradiometer (MODIS), launched onboard the Terra and Aqua spacecraft, began Earth observations on February 24, 2000 and June 24, 2002, respectively. Among the algorithms developed and applied to this sensor, a suite of cloud products includes cloud masking/detection, cloud-top properties (temperature, pressure), and optical properties (optical thickness, effective particle radius, water path, and thermodynamic phase). All cloud algorithms underwent numerous changes and enhancements for the latest Collection 5 production version; this process continues with the current Collection 6 development. We will show example MODIS Collection 5 cloud climatologies derived from global spatial and temporal aggregations provided in the archived gridded Level-3 MODIS atmosphere team product (product names MOD08 and MYD08 for MODIS Terra and Aqua, respectively). Data sets in this Level-3 product include scalar statistics as well as 1- and 2-D histograms of many cloud properties, allowing for higher-order information and correlation studies. In addition to these statistics, we will show trends and statistical significance in annual and seasonal means for a variety of the MODIS cloud properties, as well as the time required for detection given assumed trends. To assist in climate model evaluation, we have developed a MODIS cloud simulator with an accompanying netCDF file containing subsetted monthly Level-3 statistical data sets that correspond to the simulator output. Correlations of cloud properties with ENSO offer the potential to evaluate model cloud sensitivity; initial results will be discussed.

  11. Extending Climate Analytics-as-a-Service to the Earth System Grid Federation

    Science.gov (United States)

    Tamkin, G.; Schnase, J. L.; Duffy, D.; McInerney, M.; Nadeau, D.; Li, J.; Strong, S.; Thompson, J. H.

    2015-12-01

    We are building three extensions to prior-funded work on climate analytics-as-a-service that will benefit the Earth System Grid Federation (ESGF) as it addresses the Big Data challenges of future climate research: (1) We are creating a cloud-based, high-performance Virtual Real-Time Analytics Testbed supporting a select set of climate variables from six major reanalysis data sets. This near-real-time capability will enable advanced technologies like Cloudera Impala-based Structured Query Language (SQL) queries and Hadoop-based MapReduce analytics over native NetCDF files, while providing a platform for community experimentation with emerging analytic technologies. (2) We are building a full-featured Reanalysis Ensemble Service comprising monthly means data from six reanalysis data sets. The service will provide a basic set of commonly used operations over the reanalysis collections. The operations will be made accessible through NASA's climate data analytics Web services and our client-side Climate Data Services (CDS) API. (3) We are establishing an Open Geospatial Consortium (OGC) WPS-compliant Web service interface to our climate data analytics service that will enable greater interoperability with next-generation ESGF capabilities. The CDS API will be extended to accommodate the new WPS Web service endpoints as well as ESGF's Web service endpoints. These activities address some of the most important technical challenges for server-side analytics and support the research community's requirements for improved interoperability and improved access to reanalysis data.

  12. Data visualization in interactive maps and time series

    Science.gov (United States)

    Maigne, Vanessa; Evano, Pascal; Brockmann, Patrick; Peylin, Philippe; Ciais, Philippe

    2014-05-01

    State-of-the-art data visualization has little in common with the plots and maps we used a few years ago. Many open-source tools are now available to provide access to scientific data and to implement accessible, interactive, and flexible web applications. Here we will present a web site, opened in November 2013, for creating custom global and regional maps and time series from research models and datasets. For maps, we explore and access data sources from a THREDDS Data Server (TDS) with the OGC WMS protocol (using the ncWMS implementation), then create interactive maps with the OpenLayers javascript library and extra information layers from a GeoServer. Maps become dynamic, zoomable, synchronously connected to each other, and exportable to Google Earth. For time series, we extract data from a TDS with the NetCDF Subset Service (NCSS), then display interactive graphs with a custom library based on the Data Driven Documents javascript library (D3.js). This time series application provides dynamic functionalities such as interpolation, interactive zoom on different axes, display of point values, and export to different formats. These tools were implemented for the Global Carbon Atlas (http://www.globalcarbonatlas.org): a web portal to explore, visualize, and interpret global and regional carbon fluxes from various model simulations arising from both human activities and natural processes, a work led by the Global Carbon Project.
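
    An NCSS point request of the kind behind the time series application can be issued with a plain HTTP query; the sketch below uses standard NCSS parameters, but the server URL and variable name are placeholders.

        # Hedged sketch of a NetCDF Subset Service (NCSS) point request.
        import requests

        params = {
            "var": "Temperature",
            "latitude": 45.0,
            "longitude": 5.0,
            "time_start": "2010-01-01T00:00:00Z",
            "time_end": "2010-12-31T23:59:59Z",
            "accept": "csv",               # NCSS can also return netCDF or XML
        }
        url = "https://tds.example.org/thredds/ncss/grid/dataset.nc"  # placeholder
        r = requests.get(url, params=params, timeout=60)
        r.raise_for_status()
        print(r.text.splitlines()[:3])     # header plus the first data rows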

  13. McrEngine: A Scalable Checkpointing System Using Data-Aware Aggregation and Compression

    Directory of Open Access Journals (Sweden)

    Tanzima Zerin Islam

    2013-01-01

    Full Text Available High performance computing (HPC) systems use checkpoint-restart to tolerate failures. Typically, applications store their states in checkpoints on a parallel file system (PFS). As applications scale up, checkpoint-restart incurs high overheads due to contention for PFS resources. The high overheads force large-scale applications to reduce checkpoint frequency, which means more compute time is lost in the event of failure. We alleviate this problem through a scalable checkpoint-restart system, mcrEngine. McrEngine aggregates checkpoints from multiple application processes with knowledge of the data semantics available through widely-used I/O libraries, e.g., HDF5 and netCDF, and compresses them. Our novel scheme improves the compressibility of checkpoints by up to 115% over simple concatenation and compression. Our evaluation with large-scale application checkpoints shows that mcrEngine reduces checkpointing overhead by up to 87% and restart overhead by up to 62% over a baseline with no aggregation or compression.

  14. The Aegean sea marine security decision support system

    Directory of Open Access Journals (Sweden)

    L. Perivoliotis

    2011-10-01

    Full Text Available As part of the integrated ECOOP (European Coastal Sea Operational observing and Forecasting System) project, HCMR upgraded the already existing standalone Oil Spill Forecasting System for the Aegean Sea, initially developed for the Greek Operational Oceanography System (POSEIDON), into an active element of the European Decision Support System (EuroDeSS). The system is accessible through a user-friendly web interface where case scenarios can be fed into the oil spill drift model component, while the synthetic output contains detailed information about the distribution of oil spill particles and the oil spill budget, provided both in the text-based ECOOP common output format and as a series of sequential graphics. The main development steps necessary for this transition were the modification of the forcing input data module to allow the import of other system products, which are usually provided in standard formats such as NetCDF, and the transformation of the model's calculation routines to allow the use of current, density and diffusivity data in z instead of sigma coordinates. During the implementation of the Aegean DeSS, the system was used in operational mode to support the Greek marine authorities in handling a real accident that took place in the North Aegean area. Furthermore, the introduction of common input and output files by all the partners of EuroDeSS extended the system's interoperability, thus facilitating data exchanges and comparison experiments.

  15. Climate tools in mainstream Linux distributions

    Science.gov (United States)

    McKinstry, Alastair

    2015-04-01

    Debian/meteorology is a project to integrate climate tools and analysis software into the mainstream Debian/Ubuntu Linux distributions. This work describes lessons learnt and recommends practices for scientific software to be adopted and maintained in OS distributions. In addition to standard analysis tools (cdo, grads, ferret, metview, ncl, etc.), software used by the Earth System Grid Federation was chosen for integration, to enable ESGF portals to be built on this base; however, exposing scientific codes via web APIs exposes security weaknesses that could normally be ignored. How tools are hardened, and what changes are required to handle security upgrades, are described. Secondly, integrating libraries and components (e.g. Python modules) requires planning by their authors: it is not sufficient to assume users can upgrade their code when you make incompatible changes. Here, practices are recommended to enable upgrades and co-installability of C, C++, Fortran and Python codes. Finally, software packages such as NetCDF and HDF5 can be built in multiple configurations, and tools may then expect incompatible versions of these libraries (e.g. serial and parallel) to be simultaneously available; how this was solved in Debian using "pkg-config" and shared library interfaces is described, and best practices for software writers to enable this are summarised.

  16. Access to Emissions Distributions and Related Ancillary Data through the ECCAD database

    Science.gov (United States)

    Darras, Sabine; Granier, Claire; Liousse, Catherine; De Graaf, Erica; Enriquez, Edgar; Boulanger, Damien; Brissebrat, Guillaume

    2017-04-01

    The ECCAD database (Emissions of atmospheric Compounds and Compilation of Ancillary Data) provides user-friendly access to global and regional surface emissions for a large set of chemical compounds and to ancillary data (land use, active fires, burned areas, population, etc.). The emissions inventories are time-series gridded data at spatial resolutions from 1x1 to 0.1x0.1 degrees. ECCAD is the emissions database of the GEIA (Global Emissions InitiAtive) project and a sub-project of the French Atmospheric Data Center AERIS (http://www.aeris-data.fr). ECCAD currently has more than 2200 users originating from more than 80 countries. The project benefits from this large international community of users to expand the number of emission datasets made available. ECCAD provides detailed metadata for each of the datasets and various tools for data visualization, for computing global and regional totals, and for interactive spatial and temporal analysis. The data can be downloaded as interoperable NetCDF CF-compliant files, i.e. the data are compatible with many other client interfaces. The presentation will provide information on the datasets available within ECCAD, as well as examples of the analysis work that can be done online through the website: http://eccad.aeris-data.fr.

  17. NTFD—a stand-alone application for the non-targeted detection of stable isotope-labeled compounds in GC/MS data

    Science.gov (United States)

    Hiller, Karsten; Wegner, André; Weindl, Daniel; Cordes, Thekla; Metallo, Christian M.; Kelleher, Joanne K.; Stephanopoulos, Gregory

    2013-01-01

    Summary: Most current stable isotope-based methodologies are targeted and focus only on the well-described aspects of metabolic networks. Here, we present NTFD (non-targeted tracer fate detection), software for the non-targeted analysis of all detectable compounds derived from a stable isotope-labeled tracer present in a GC/MS dataset. In contrast to traditional metabolic flux analysis approaches, NTFD does not depend on any a priori knowledge or library information. To obtain dynamic information on metabolic pathway activity, NTFD determines mass isotopomer distributions for all detected and labeled compounds. These data provide information on relative fluxes in a metabolic network. The graphical user interface allows users to import GC/MS data in netCDF format and export all information into a tab-separated format. Availability: NTFD is C++- and Qt4-based, and it is freely available under an open-source license. Pre-compiled packages for installation on Debian- and Redhat-based Linux distributions, as well as Windows operating systems, along with example data, are provided for download at http://ntfd.mit.edu/. Contact: gregstep@mit.edu PMID:23479350

  18. NTFD--a stand-alone application for the non-targeted detection of stable isotope-labeled compounds in GC/MS data.

    Science.gov (United States)

    Hiller, Karsten; Wegner, André; Weindl, Daniel; Cordes, Thekla; Metallo, Christian M; Kelleher, Joanne K; Stephanopoulos, Gregory

    2013-05-01

    Most current stable isotope-based methodologies are targeted and focus only on the well-described aspects of metabolic networks. Here, we present NTFD (non-targeted tracer fate detection), software for the non-targeted analysis of all detectable compounds derived from a stable isotope-labeled tracer present in a GC/MS dataset. In contrast to traditional metabolic flux analysis approaches, NTFD does not depend on any a priori knowledge or library information. To obtain dynamic information on metabolic pathway activity, NTFD determines mass isotopomer distributions for all detected and labeled compounds. These data provide information on relative fluxes in a metabolic network. The graphical user interface allows users to import GC/MS data in netCDF format and export all information into a tab-separated format. NTFD is C++- and Qt4-based, and it is freely available under an open-source license. Pre-compiled packages for installation on Debian- and Redhat-based Linux distributions, as well as Windows operating systems, along with example data, are provided for download at http://ntfd.mit.edu/.

  19. The Value of Data and Metadata Standardization for Interoperability in Giovanni Or: Why Your Product's Metadata Causes Us Headaches!

    Science.gov (United States)

    Smit, Christine; Hegde, Mahabaleshwara; Strub, Richard; Bryant, Keith; Li, Angela; Petrenko, Maksym

    2017-01-01

    Giovanni is a data exploration and visualization tool at the NASA Goddard Earth Sciences Data Information Services Center (GES DISC). It has been around in one form or another for more than 15 years. Giovanni calculates simple statistics and produces 22 different visualizations for more than 1600 geophysical parameters from more than 90 satellite and model products. Giovanni relies on external data format standards to ensure interoperability, including the NetCDF CF Metadata Conventions. Unfortunately, these standards were insufficient to make Giovanni's internal data representation truly simple to use. Finding and working with dimensions can be convoluted with the CF Conventions. Furthermore, the CF Conventions are silent on machine-friendly descriptive metadata such as the parameter's source product and product version. In order to simplify analyzing disparate earth science data parameters in a unified way, we developed Giovanni's internal standard. First, the format standardizes parameter dimensions and variables so they can be easily found. Second, the format adds all the machine-friendly metadata Giovanni needs to present our parameters to users in a consistent and clear manner. At a glance, users can grasp all the pertinent information about parameters both during parameter selection and after visualization.

  20. Technical Note: Harmonizing met-ocean model data via standard web services within small research groups

    Science.gov (United States)

    Signell, Richard; Camossi, E.

    2016-01-01

    Work over the last decade has resulted in standardised web services and tools that can significantly improve the efficiency and effectiveness of working with meteorological and ocean model data. While many operational modelling centres have enabled query and access to data via common web services, most small research groups have not. The penetration of this approach into the research community, where IT resources are limited, can be dramatically improved by (1) making it simple for providers to enable web service access to existing output files; (2) using free technologies that are easy to deploy and configure; and (3) providing standardised, service-based tools that work in existing research environments. We present a simple, local brokering approach that lets modellers continue to use their existing files and tools, while serving virtual data sets that can be used with standardised tools. The goal of this paper is to convince modellers that a standardised framework is not only useful but can be implemented with modest effort using free software components. We use the NetCDF Markup Language (NcML) for data aggregation and standardisation, the THREDDS Data Server for data delivery, pycsw for data search, NCTOOLBOX (MATLAB®) and Iris (Python) for data access, and the Open Geospatial Consortium Web Map Service for data preview. We illustrate the effectiveness of this approach with two use cases involving small research modelling groups at NATO and USGS.
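
    From the client side, the pay-off of this brokering approach is that a virtual, NcML-aggregated dataset served by the THREDDS Data Server looks like a single file. The sketch below shows such access with xarray over OPeNDAP (rather than the Iris or NCTOOLBOX clients used in the paper); the URL and variable name are placeholders.

        # Hedged sketch: opening a TDS-served virtual dataset over OPeNDAP.
        import xarray as xr

        url = "https://tds.example.org/thredds/dodsC/models/ocean_agg"  # placeholder
        ds = xr.open_dataset(url)                # lazy access; subsets on demand
        sst = ds["temp"].isel(time=-1)           # latest time step only
        print(float(sst.mean()))                 # pulls just the needed subset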

  1. Advancements in Open Geospatial Standards for Photogrammetry and Remote Sensing from Ogc

    Science.gov (United States)

    Percivall, George; Simonis, Ingo

    2016-06-01

    The necessity of open standards for the effective sharing and use of remote sensing data continues to receive increasing emphasis in the policies of agencies and projects around the world. Coordination on the development of open standards for geospatial information is a vital step to ensure that the technical standards are ready to support the policy objectives. The mission of the Open Geospatial Consortium (OGC) is to advance the development and use of international standards and supporting services that promote geospatial interoperability. To accomplish this mission, OGC serves as the global forum for the collaboration of geospatial data/solution providers and users. Photogrammetry and remote sensing are sources of some of the largest and most complex geospatial information. Some of the most mature OGC standards for remote sensing include the Sensor Web Enablement (SWE) standards, the Web Coverage Service (WCS) suite of standards, encodings such as NetCDF, GMLJP2 and GeoPackage, and the soon-to-be-approved Discrete Global Grid Systems (DGGS) standard. In collaboration with ISPRS, OGC, working with government, research and industrial organizations, continues to advance the state of geospatial standards for the full use of photogrammetry and remote sensing.

  2. Climate Data Guide - Modern Era Retrospective Analysis for Research and Applications, version 2 (MERRA-2)

    Science.gov (United States)

    Cullather, Richard; Bosilovich, Michael

    2017-01-01

    The Modern-Era Retrospective analysis for Research and Applications, version 2 (MERRA-2) is a global atmospheric reanalysis produced by the NASA Global Modeling and Assimilation Office (GMAO). It spans the satellite observing era from 1980 to the present. The goals of MERRA-2 are to provide a regularly-gridded, homogeneous record of the global atmosphere, and to incorporate additional aspects of the climate system, including trace gas constituents (stratospheric ozone), improved land surface representation, and cryospheric processes. MERRA-2 is also the first satellite-era global reanalysis to assimilate space-based observations of aerosols and represent their interactions with other physical processes in the climate system. The inclusion of these additional components is consistent with the overall objectives of an Integrated Earth System Analysis (IESA). MERRA-2 is intended to replace the original MERRA product, and reflects recent advances in atmospheric modeling and data assimilation. Modern hyperspectral radiance and microwave observations, along with GPS radio occultation and NASA ozone datasets, are now assimilated in MERRA-2. Much of the structure of the data files remains the same in MERRA-2. While the original MERRA data format was HDF-EOS, MERRA-2 data are now supplied in NetCDF-4 format (with lossy compression to save space).

  3. Nine martian years of dust optical depth observations: A reference dataset

    Science.gov (United States)

    Montabone, Luca; Forget, Francois; Kleinboehl, Armin; Kass, David; Wilson, R. John; Millour, Ehouarn; Smith, Michael; Lewis, Stephen; Cantor, Bruce; Lemmon, Mark; Wolff, Michael

    2016-07-01

    We present a multi-annual reference dataset of the horizontal distribution of airborne dust from martian year 24 to 32 using observations of the martian atmosphere from April 1999 to June 2015 made by the Thermal Emission Spectrometer (TES) aboard Mars Global Surveyor, the Thermal Emission Imaging System (THEMIS) aboard Mars Odyssey, and the Mars Climate Sounder (MCS) aboard Mars Reconnaissance Orbiter (MRO). Our methodology to build the dataset works by gridding the available retrievals of column dust optical depth (CDOD) from TES and THEMIS nadir observations, as well as the estimates of this quantity from MCS limb observations. The resulting (irregularly) gridded maps (one per sol) were validated with independent observations of CDOD by PanCam cameras and Mini-TES spectrometers aboard the Mars Exploration Rovers "Spirit" and "Opportunity", by the Surface Stereo Imager aboard the Phoenix lander, and by the Compact Reconnaissance Imaging Spectrometer for Mars aboard MRO. Finally, regular maps of CDOD are produced by spatially interpolating the irregularly gridded maps using a kriging method. These latter maps are used as dust scenarios in the Mars Climate Database (MCD) version 5, and are useful in many modelling applications. The two datasets (daily irregularly gridded maps and regularly kriged maps) for the nine available martian years are publicly available as NetCDF files and can be downloaded from the MCD website at the URL: http://www-mars.lmd.jussieu.fr/mars/dust_climatology/index.html

  4. Visualization of ocean forecast in BYTHOS

    Science.gov (United States)

    Zhuk, E.; Zodiatis, G.; Nikolaidis, A.; Stylianou, S.; Karaolia, A.

    2016-08-01

    The Cyprus Oceanography Center continually develops and implements innovative methods for the use of information systems in oceanography, to suit both the Center's monitoring and forecasting products. Within this scope, two major online systems for managing and visualizing data have been developed and utilized: CYCOFOS and BYTHOS. The Cyprus Coastal Ocean Forecasting and Observing System - CYCOFOS - provides a variety of operational predictions, such as ultra-high-, high- and medium-resolution ocean forecasts in the Levantine Basin, offshore and coastal sea state forecasts in the Mediterranean and Black Sea, tide forecasting in the Mediterranean, ocean remote sensing in the Eastern Mediterranean, and coastal and offshore monitoring. As a rich internet application, BYTHOS enables scientists to search, visualize and download oceanographic data online and in real time. The most recent improvement of the BYTHOS system is its extension to access and visualize CYCOFOS data and to overlay forecast fields with observing data. The CYCOFOS data are stored on an OPeNDAP server in netCDF format; PHP and Python scripts were developed to search, process and visualize them. Data visualization is achieved through MapServer. The BYTHOS forecast access interface allows users to search for the required forecast field by type, parameter, region, level and time. It also provides the ability to overlay different forecast and observing data, which can be used for integrated analysis of sea basin conditions.
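
    A minimal sketch of how a script might pull one forecast field from an OPeNDAP server of this kind (the URL and variable name are placeholders, not actual CYCOFOS endpoints):

        # Open a remote netCDF dataset over OPeNDAP; only the requested slice is transferred.
        from netCDF4 import Dataset

        url = "http://example.org/opendap/cycofos/forecast.nc"   # placeholder URL
        with Dataset(url) as nc:
            sst = nc.variables["temperature"][0, 0, :, :]        # first time step, surface level
        print(sst.shape)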

  5. A long-term Northern Hemisphere snow cover extent data record for climate studies and monitoring

    Science.gov (United States)

    Estilow, T. W.; Young, A. H.; Robinson, D. A.

    2015-06-01

    This paper describes the long-term, satellite-based visible snow cover extent National Oceanic and Atmospheric Administration (NOAA) climate data record (CDR) currently available for climate studies, monitoring, and model validation. This environmental data product is developed from weekly Northern Hemisphere snow cover extent data that have been digitized from snow cover maps onto a Cartesian grid draped over a polar stereographic projection. The data have a spatial resolution of 190.6 km at 60° latitude, are updated monthly, and span the period from 4 October 1966 to the present. The data comprise the longest satellite-based CDR of any environmental variable. Access to the data is provided in Network Common Data Form (netCDF) and archived by NOAA's National Climatic Data Center (NCDC) under the satellite Climate Data Record Program (doi:10.7289/V5N014G9). The basic characteristics, history, and evolution of the data set are presented herein. In general, the CDR provides similar spatial and temporal variability to its widely used predecessor product. Key refinements included in the CDR improve the product's grid accuracy and documentation and bring metadata into compliance with current standards for climate data records.

  6. A visualization tool to support decision making in environmental and biological planning

    Science.gov (United States)

    Romañach, Stephanie S.; McKelvy, James M.; Conzelmann, Craig; Suir, Kevin J.

    2014-01-01

    Large-scale ecosystem management involves consideration of many factors for informed decision making. The EverVIEW Data Viewer is a cross-platform desktop decision support tool to help decision makers compare simulation model outputs from competing plans for restoring Florida's Greater Everglades. The integration of NetCDF metadata conventions into EverVIEW allows end-users from multiple institutions within and beyond the Everglades restoration community to share information and tools. Our development process incorporates continuous interaction with targeted end-users for increased likelihood of adoption. One of EverVIEW's signature features is side-by-side map panels, which can be used to simultaneously compare species or habitat impacts from alternative restoration plans. Other features include examination of potential restoration plan impacts across multiple geographic or tabular displays, and animation through time. As a result of an iterative, standards-driven approach, EverVIEW is relevant to large-scale planning beyond Florida, and is used in multiple biological planning efforts in the United States.

  7. Owgis 2.0: Open Source Java Application that Builds Web GIS Interfaces for Desktop and Mobile Devices

    Science.gov (United States)

    Zavala Romero, O.; Chassignet, E.; Zavala-Hidalgo, J.; Pandav, H.; Velissariou, P.; Meyer-Baese, A.

    2016-12-01

    OWGIS is an open source Java and JavaScript application that builds easily configurable Web GIS sites for desktop and mobile devices. The current version of OWGIS generates mobile interfaces based on HTML5 technology and can be used to create mobile applications. The style of the generated websites can be modified using COMPASS, a well-known CSS authoring framework. In addition, OWGIS uses several Open Geospatial Consortium standards to request data from the most common map servers, such as GeoServer. It is also able to request data from ncWMS servers, allowing the websites to display 4D data from NetCDF files. This application is configured by XML files that define which layers (geographic datasets) are displayed on the Web GIS sites. Among other features, OWGIS allows for animations; streamlines from vector data; virtual globe display; vertical profiles and vertical transects; different color palettes; the ability to download data; and display of text in multiple languages. OWGIS users are mainly scientists in the oceanography, meteorology and climate fields.

  8. Technical note: Harmonizing met-ocean model data via standard web services within small research groups

    Science.gov (United States)

    Signell, R. P.; Camossi, E.

    2015-11-01

    Work over the last decade has resulted in standardized web services and tools that can significantly improve the efficiency and effectiveness of working with meteorological and ocean model data. While many operational modelling centres have enabled query and access to data via common web services, most small research groups have not. The penetration of this approach into the research community, where IT resources are limited, can be dramatically improved by: (1) making it simple for providers to enable web service access to existing output files; (2) using technology that is free, and that is easy to deploy and configure; and (3) providing tools to communicate with web services that work in existing research environments. We present a simple, local brokering approach that lets modelers continue producing custom data, but virtually aggregates and standardizes the data using the NetCDF Markup Language (NcML). The THREDDS Data Server is used for data delivery, pycsw for data search, NCTOOLBOX (Matlab®) and Iris (Python) for data access, and the Open Geospatial Consortium Web Map Service for data preview. We illustrate the effectiveness of this approach with two use cases involving small research modelling groups at NATO and USGS. (Mention of trade names or commercial products does not constitute endorsement or recommendation for use by the US Government.)
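
    As a rough sketch of the client side of such a setup - loading a CF-compliant variable from a THREDDS OPeNDAP endpoint with Iris, one of the Python tools named above (the URL and variable name are hypothetical):

        # Load a remote CF-compliant dataset into an Iris cube (requires scitools-iris).
        import iris

        url = "http://example.org/thredds/dodsC/roms/aggregated"    # hypothetical endpoint
        cube = iris.load_cube(url, "sea_water_potential_temperature")
        print(cube)    # summary of dimensions, coordinates and metadata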

  9. Development of a gridded meteorological dataset over Java island, Indonesia 1985-2014.

    Science.gov (United States)

    Yanto; Livneh, Ben; Rajagopalan, Balaji

    2017-05-23

    We describe a gridded daily meteorology dataset consisting of precipitation and minimum and maximum temperature over Java Island, Indonesia, at 0.125°×0.125° (~14 km) resolution, spanning the 30 years from 1985 to 2014. Importantly, this data set represents a marked improvement over existing gridded data sets for Java, offering higher spatial resolution and, unlike existing satellite- or reanalysis-based products, being derived exclusively from ground-based observations. Gap-infilling and gridding were performed via the Inverse Distance Weighting (IDW) interpolation method (with a radius, r, of 25 km and a power of influence, α, of 3 as optimal parameters), restricted to stations with at least 3,650 days (~10 years) of valid data. Cross-validation against the MSWEP and CHIRPS rainfall products shows that the gridded rainfall presented here yields the most reasonable performance. Visual inspection reveals increasing skill of the gridded precipitation from grid to watershed to island scale. The data set, stored in network Common Data Form (NetCDF), is intended to support watershed-scale and island-scale studies of short-term and long-term climate, hydrology and ecology.
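
    A minimal sketch of the inverse-distance-weighting step described above, using the stated parameters (search radius r = 25 km, power α = 3); the station coordinates and values are illustrative only:

        # Inverse Distance Weighting: estimate a value at a target point from nearby stations.
        import numpy as np

        def idw(target, stations, values, radius_km=25.0, alpha=3.0):
            d = np.linalg.norm(stations - target, axis=1)    # station-target distances [km]
            mask = d <= radius_km                            # only stations within the search radius
            if not mask.any():
                return np.nan                                # no station close enough
            if np.any(d[mask] == 0):
                return values[mask][np.argmin(d[mask])]      # target coincides with a station
            w = 1.0 / d[mask] ** alpha                       # inverse-distance weights
            return np.sum(w * values[mask]) / np.sum(w)

        stations = np.array([[5.0, 3.0], [12.0, 8.0], [20.0, 1.0]])   # km coordinates (illustrative)
        rain = np.array([4.2, 7.5, 0.0])                              # mm/day at each station
        print(idw(np.array([10.0, 5.0]), stations, rain))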

  10. Design of FastQuery: How to Generalize Indexing and Querying System for Scientific Data

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Jerry; Wu, Kesheng

    2011-04-18

    Modern scientific datasets present numerous data management and analysis challenges. State-of-the-art index and query technologies such as FastBit are critical for facilitating interactive exploration of large datasets. These technologies rely on adding auxiliary information to existing datasets to accelerate query processing. To use these indices, we need to match the relational data model used by the indexing systems with the array data model used by most scientific data, and to provide an efficient input and output layer for reading and writing the indices. In this work, we present a flexible design that can be easily applied to most scientific data formats. We demonstrate this flexibility by applying it to two of the most commonly used scientific data formats, HDF5 and NetCDF. We present two case studies using simulation data from the particle accelerator and climate simulation communities. To demonstrate the effectiveness of the new design, we also present a detailed performance study using both synthetic and real scientific workloads.

  11. Construction of Hierarchical Models for Fluid Dynamics in Earth and Planetary Sciences: DCMODEL project

    Science.gov (United States)

    Takahashi, Y. O.; Takehiro, S.; Sugiyama, K.; Odaka, M.; Ishiwatari, M.; Sasaki, Y.; Nishizawa, S.; Ishioka, K.; Nakajima, K.; Hayashi, Y.

    2012-12-01

    Toward understanding the fluid motions of planetary atmospheres and planetary interiors by performing multiple numerical experiments with multiple models, we are now proceeding with the ``dcmodel project'', in which a series of hierarchical numerical models of various complexity is developed and maintained. In the ``dcmodel project'', the numerical models are developed with attention to the following points: 1) a common ``style'' of program code assuring readability of the software, 2) release of the models' source code to the public, 3) scalability of the models assuring execution on various scales of computational resources, 4) stress on the importance of documentation and presentation of a method for writing reference manuals. The lineup of the models and utility programs of the project is as follows: Gtool5, ISPACK/SPML, SPMODEL, Deepconv, Dcpam, and Rdoc-f95. In the following, the features of each component are briefly described. Gtool5 (Ishiwatari et al., 2012) is a Fortran90 library which provides data input/output interfaces and various utilities commonly used in the models of the dcmodel project. The self-descriptive netCDF data format is adopted as the IO format of Gtool5. The interfaces of the gtool5 library can reduce the number of operation steps for data IO in the program code of the models compared with the interfaces of the raw netCDF library. Further, by use of the gtool5 library, procedures for data IO and the addition of metadata for post-processing can be easily implemented in the program codes in a consolidated form independent of the size and complexity of the models. ``ISPACK'' is a spectral transformation library and ``SPML (SPMODEL library)'' (Takehiro et al., 2006) is its wrapper library. The most prominent feature of SPML is a series of array-handling functions with systematic function naming rules, which enables us to write code in a form easily deduced from the mathematical expressions of the governing equations. ``SPMODEL'' (Takehiro et al., 2006

  12. The Basic Radar Altimetry Toolbox for Sentinel 3 Users

    Science.gov (United States)

    Lucas, Bruno; Rosmorduc, Vinca; Niemeijer, Sander; Bronner, Emilie; Dinardo, Salvatore; Benveniste, Jérôme

    2013-04-01

    The Basic Radar Altimetry Toolbox (BRAT) is a collection of tools and tutorial documents designed to facilitate the processing of radar altimetry data. The project started in 2006 from the joint efforts of ESA (European Space Agency) and CNES (Centre National d'Etudes Spatiales). The latest version of the software, 3.1, was released in March 2012. The tools enable users to interact with the most common altimetry data formats; the most commonly used entry point is the graphical user interface (BratGui), a front-end for the powerful command-line tools that are part of the BRAT suite. BRAT can also be used in conjunction with MATLAB/IDL (via reading routines) or in C/C++/Fortran via a programming API, allowing users to obtain the desired data while bypassing the data-formatting hassle. The BratDisplay (graphic visualizer) can be launched from BratGui or used as a stand-alone tool to visualize netCDF files; it is also distributed with another ESA toolbox (GUT) as its visualizer. The most frequent uses of BRAT are teaching remote sensing, altimetry data reading (all missions from ERS-1 to SARAL, and soon Sentinel-3), quick data visualization/export, and simple computations on the data fields. BRAT can be used for importing data and having a quick look at its contents, with several different types of plotting available. One can also use it to translate the data into other formats such as netCDF, ASCII text files, KML (Google Earth) and raster images (JPEG, PNG, etc.). Several kinds of computations can be done within BratGui, involving combinations of data fields that the user can save for later reuse, or using the already embedded formulas, which include the standard oceanographic altimetry formulas (MSS, SSH, MSLA, editing of spurious data, etc.). The documentation collection includes the standard user manual explaining all the ways to interact with the set of software tools, but the most important item is the Radar Altimeter Tutorial, which contains a strong introduction to

  13. Changing knowledge perspective in a changing world: The Adriatic multidisciplinary TDS approach

    Science.gov (United States)

    Bergamasco, Andrea; Carniel, Sandro; Nativi, Stefano; Signell, Richard P.; Benetazzo, Alvise; Falcieri, Francesco M.; Bonaldo, Davide; Minuzzo, Tiziano; Sclavo, Mauro

    2013-04-01

    The use and exploitation of the marine environment has been increasing in recent years, calling for better description, monitoring and understanding of its behavior. However, marine scientists and managers often spend too much time accessing and reformatting data instead of focusing on discovering new knowledge from the processes observed and the data acquired. There is therefore a need to make our approach to data mining more efficient, especially in a world where rapid climate change demands rapid choices. In this context, it is mandatory to explore ways of making large amounts of distributed data usable in an efficient and easy way, an effort that requires standardized data protocols, web services and standards-based tools. Following the US-IOOS approach, which has been adopted in many oceanographic and meteorological sectors, we present CNR's experience in setting up a national Italian IOOS framework (at the moment confined to the Adriatic Sea environment) using the THREDDS (THematic Real-time Environmental Distributed Data Services) Data Server (TDS). A TDS is a middleware designed to fill the gap between data providers and data users; it provides services allowing data users to find the data sets pertaining to their scientific needs and to access, visualize and use them in an easy way, without the need to download files to a local workspace. To achieve these results, data providers must make their data available in a standard form that the TDS understands, and with sufficient metadata so that the data can be read and searched for in a standard way. The TDS core is a NetCDF-Java library implementing a Common Data Model (CDM), as developed by Unidata (http://www.unidata.ucar.edu), allowing access to "array-based" scientific data. Climate and Forecast (CF) compliant NetCDF files can be read directly with no modification, while non-compliant files can

  14. The Time Series Data Server (TSDS) for Standards-Compliant, Convenient, and Efficient Access to Time Series Data

    Science.gov (United States)

    Lindholm, D. M.; Weigel, R. S.; Wilson, A.; Ware Dewolfe, A.

    2009-12-01

    Data analysis in the physical sciences is often plagued by the difficulty of acquiring the desired data. A great deal of work has been done in the area of metadata and data discovery; however, many such discoveries simply provide links that lead directly to a data file. Often these files are impractically large, containing more time samples or variables than desired, and are slow to access. Once these files are downloaded, format issues further complicate using the data. Some data servers have begun to address these problems by improving data virtualization and ease of use. However, these services often don't scale to large datasets. Also, the generic nature of the data models used by these servers, while providing greater flexibility, may complicate setting up such a service for data providers, and may lack the semantics that would otherwise simplify use for clients, machine or human. The Time Series Data Server (TSDS) aims to address these problems within the limited, yet common, domain of time series data. With the simplifying assumption that all data products served are a function of time, the server can optimize data access based on time subsets, a common use case. The server also supports requests for specific variables, which can be of type scalar, structure, or sequence, as well as data types with higher-level semantics, such as "spectrum." The TSDS is implemented using Java Servlet technology and can be dropped into any servlet container and customized for a data provider's needs. The interface is based on OPeNDAP (http://opendap.org) and conforms to the Data Access Protocol (DAP) 2.0, a NASA standard (ESDS-RFC-004), which defines a simple HTTP request and response paradigm. Thus a TSDS server instance is a compliant OPeNDAP server that can be accessed by any OPeNDAP client or directly via RESTful web service requests. The TSDS reads the data that it serves into a common data model via the NetCDF Markup Language (NcML, http://www.unidata.ucar.edu/software/netcdf
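
    A rough illustration of the DAP 2.0 request style such a server accepts - a time subset of one variable expressed in the URL's constraint expression (the server, dataset and variable names are hypothetical):

        # Request only the first 100 samples of one variable via an OPeNDAP/DAP2 constraint.
        from netCDF4 import Dataset

        url = "http://example.org/tsds/dataset.nc?Bx[0:1:99]"    # hypothetical TSDS endpoint
        with Dataset(url) as nc:                                 # the server does the subsetting;
            bx = nc.variables["Bx"][:]                           # the client never sees the full file
        print(bx.shape)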

  15. Development of web-GIS system for analysis of georeferenced geophysical data

    Science.gov (United States)

    Okladnikov, I.; Gordov, E. P.; Titov, A. G.; Bogomolov, V. Y.; Genina, E.; Martynova, Y.; Shulgina, T. M.

    2012-12-01

    Georeferenced datasets (meteorological databases, modeling and reanalysis results, remote sensing products, etc.) are currently actively used in numerous applications, including modeling, interpretation and forecasting of climatic and ecosystem changes on various spatial and temporal scales. Due to the inherent heterogeneity of environmental datasets, as well as their huge size, which might amount to tens of terabytes for a single dataset, present-day studies of climate and environmental change require special software support. A dedicated web-GIS information-computational system for analysis of georeferenced climatological and meteorological data has been created. The system consists of four basic parts: a computational kernel developed using GNU Data Language (GDL), a set of PHP controllers run within a specialized web portal, JavaScript class libraries for the development of typical components of a web mapping application's graphical user interface (GUI) based on AJAX technology, and an archive of geophysical datasets. The computational kernel comprises a number of dedicated modules for querying and extraction of data, mathematical and statistical data analysis, visualization, and preparation of output files in geoTIFF and netCDF formats containing processing results. The specialized web portal consists of the Apache web server, the OGC-standards-compliant GeoServer software used as the basis for presenting cartographic information over the Web, and a set of PHP controllers implementing the web-mapping application logic and governing the computational kernel. The JavaScript libraries for graphical user interface development are based on the GeoExt library, combining the ExtJS framework and OpenLayers software. The archive of geophysical data consists of a number of structured environmental datasets represented by data files in netCDF, HDF, GRIB, and ESRI Shapefile formats. Available for processing by the system are two editions of the NCEP/NCAR Reanalysis, the JMA/CRIEPI JRA-25

  16. Making geodetic glacier mass balances available to the community - Progress and challenges in modifying the WGMS database

    Science.gov (United States)

    Machguth, Horst; Landmann, Johannes; Zemp, Michael; Paul, Frank

    2017-04-01

    Recent years have seen a sharp increase in the publication of geodetically derived glacier mass balances. Internationally coordinated glacier monitoring, however, has so far focused mainly on direct glaciological mass balance measurements. There is thus a need to collect geodetic glacier mass balance data in a standardized format and make the data available to the scientific community. This would allow easy access and use of the data for, e.g., assessment of regional- to global-scale glacier changes, re-analysis of glaciological mass balance series, and evaluation of, and comparison to, other data or model results. It appears logical to build such a data archive where glaciological data are already routinely collected. In the framework of the ESA project Glaciers_cci, the World Glacier Monitoring Service (WGMS) has started an initiative to establish the expertise, the strategy and the infrastructure to make the increasing amount of geodetic glacier mass balance data available to the scientific community. The focus is (i) on geodetic (glacier-wide) changes as obtained from differencing digital elevation models from two epochs, and (ii) on point elevation changes from altimetry. Here we outline the chosen strategy to include gridded data of surface elevation change in the WGMS database. We describe the basic strategy using the netCDF4 data format, summarize the data handling as well as the standardization, and discuss major challenges in the efficient inclusion of geodetic glacier changes into the WGMS database. Finally, we discuss the potential use of the data and thereby highlight how the added geodetic data influence the calculation of regional to global averages of glacier mass balance.
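
    As a rough sketch of what storing a gridded surface-elevation-change field in NetCDF-4 might look like (the dimension and variable names here are illustrative, not the actual WGMS schema):

        # Write a small gridded surface-elevation-change field to a compressed NetCDF-4 file.
        import numpy as np
        from netCDF4 import Dataset

        with Dataset("dhdt_example.nc", "w", format="NETCDF4") as nc:
            nc.createDimension("y", 4)
            nc.createDimension("x", 5)
            dh = nc.createVariable("elevation_change", "f4", ("y", "x"), zlib=True)
            dh.units = "m"       # total elevation change between the two DEM epochs
            dh[:] = np.random.default_rng(1).normal(-2.0, 0.5, size=(4, 5))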

  17. A consistent data set of Antarctic ice sheet topography, cavity geometry, and global bathymetry

    Directory of Open Access Journals (Sweden)

    R. Timmermann

    2010-12-01

    Full Text Available Sub-ice shelf circulation and freezing/melting rates in ocean general circulation models depend critically on an accurate and consistent representation of cavity geometry. Existing global or pan-Antarctic topography data sets have turned out to contain various inconsistencies and inaccuracies. The goal of this work is to compile independent regional surveys and maps into a global data set. We use the S-2004 global 1-min bathymetry as the backbone and add an improved version of the BEDMAP topography (ALBMAP bedrock topography) for an area that roughly coincides with the Antarctic continental shelf. The position of the merging line is individually chosen in different sectors in order to capture the best of both data sets. High-resolution gridded data for ice shelf topography and cavity geometry of the Amery, Fimbul, Filchner-Ronne, Larsen C and George VI Ice Shelves, and for Pine Island Glacier are carefully merged into the ambient ice and ocean topographies. Multibeam survey data for bathymetry in the former Larsen B cavity and the southeastern Bellingshausen Sea have been obtained from the data centers of the Alfred Wegener Institute (AWI), the British Antarctic Survey (BAS) and the Lamont-Doherty Earth Observatory (LDEO), gridded, and blended into the existing bathymetry map. The resulting global 1-min Refined Topography data set (RTopo-1) contains self-consistent maps for upper and lower ice surface heights, bedrock topography, and surface type (open ocean, grounded ice, floating ice, bare land surface). The data set is available in NetCDF format from the PANGAEA database at doi:10.1594/pangaea.741917.

  18. Remote web-based 3D visualization of hydrological forecasting datasets.

    Science.gov (United States)

    van Meersbergen, Maarten; Drost, Niels; Blower, Jon; Griffiths, Guy; Hut, Rolf; van de Giesen, Nick

    2015-04-01

    As the possibilities for larger and more detailed simulations of geoscientific data expand, the need for smart solutions in data visualization grows as well. Large volumes of data should be quickly accessible from anywhere in the world without the need for transferring the simulation results. We aim to provide tools for both the processing and handling of these large datasets. As an example, the eWaterCycle project (www.ewatercycle.org) aims to provide a running 14-day ensemble forecast to predict water-related stress around the globe. The large volumes of simulation results with uncertainty data that are generated through ensemble hydrological predictions pose a challenge for existing visualization solutions. One possible solution to this challenge lies in the use of web-enabled technology for visualization and analysis of these datasets. Web-based visualization provides an additional benefit in that it eliminates the need for any software installation and configuration, and allows for easy communication of research results between collaborating research parties. Providing interactive tools for the exploration of these datasets will not only help researchers analyze the data, it can also aid in the dissemination of the research results to the general public. In Vienna, we will present a working open source solution for remote visualization of large volumes of global geospatial data based on the proven open-source 3D web visualization software package Cesium (cesiumjs.org), the ncWMS software package provided by the Reading e-Science Centre, and the WebGL and NetCDF standards.

  19. Climatic response variability and machine learning: development of a modular technology framework for predicting bio-climatic change in pacific northwest ecosystems

    Science.gov (United States)

    Seamon, E.; Gessler, P. E.; Flathers, E.

    2015-12-01

    The creation and use of large amounts of data in scientific investigations has become common practice. Data collection and analysis for large scientific computing efforts are increasing not only in volume but also in number, and the methods and analysis procedures are evolving toward greater complexity (Bell, 2009; Clarke, 2009; Maimon, 2010). In addition, the growth of diverse data-intensive scientific computing efforts (Soni, 2011; Turner, 2014; Wu, 2008) has demonstrated the value of supporting scientific data integration. Efforts to bridge this gap between the above perspectives have been attempted, in varying degrees, with modular scientific computing analysis regimes implemented with a modest amount of success (Perez, 2009). This constellation of effects - 1) increasing growth in the volume and amount of data, 2) a growing data-intensive science base with challenging needs, and 3) disparate data organization and integration efforts - has created a critical gap. Namely, systems of scientific data organization and management typically do not effectively enable integrated data collaboration or data-intensive science-based communications. Our research efforts attempt to address this gap by developing a modular technology framework for data science integration efforts, with climate variation as the focus. The intention is that this model, if successful, could be generalized to other application areas. Our research aim focused on the design and implementation of a modular, deployable technology architecture for data integration. Developed using aspects of R, interactive Python, SciDB, THREDDS, JavaScript, and varied data mining and machine learning techniques, the Modular Data Response Framework (MDRF) was implemented to explore case scenarios for bio-climatic variation as they relate to Pacific Northwest ecosystem regions. Our preliminary results, using historical NetCDF climate data for calibration purposes across the inland Pacific Northwest region

  20. OpenSWPC: an open-source integrated parallel simulation code for modeling seismic wave propagation in 3D heterogeneous viscoelastic media

    Science.gov (United States)

    Maeda, Takuto; Takemura, Shunsuke; Furumura, Takashi

    2017-07-01

    We have developed an open-source software package, Open-source Seismic Wave Propagation Code (OpenSWPC), for parallel numerical simulations of seismic wave propagation in 3D and 2D (P-SV and SH) viscoelastic media based on the finite difference method at local-to-regional scales. This code is equipped with a frequency-independent attenuation model based on the generalized Zener body and an efficient perfectly matched layer for the absorbing boundary condition. A hybrid-style programming model using OpenMP and the Message Passing Interface (MPI) is adopted for efficient parallel computation. OpenSWPC has wide applicability for seismological studies and great portability, allowing excellent performance on systems ranging from PC clusters to supercomputers. Without modifying the code, users can conduct seismic wave propagation simulations using their own velocity structure models and the necessary source representations by specifying them in an input parameter file. The code has various modes for different types of velocity structure model input and different source representations such as single force, moment tensor and plane-wave incidence, which can easily be selected via the input parameters. Widely used binary data formats, the Network Common Data Form (NetCDF) and the Seismic Analysis Code (SAC), are adopted for the input of the heterogeneous structure model and the outputs of the simulation results, so users can easily handle the input/output datasets. All codes are written in Fortran 2003 and are available with detailed documents in a public repository.

  1. Gas chromatography - mass spectrometry data processing made easy.

    Science.gov (United States)

    Johnsen, Lea G; Skou, Peter B; Khakimov, Bekzod; Bro, Rasmus

    2017-06-23

    Evaluation of GC-MS data may be challenging due to the high complexity of the data, including overlapped, embedded, retention-time-shifted and low S/N ratio peaks. In this work, we demonstrate a new approach, the PARAFAC2 based Deconvolution and Identification System (PARADISe), for processing raw GC-MS data. PARADISe is a computer-platform-independent, freely available software package incorporating a number of newly developed algorithms in a coherent framework. It offers a solution for analysts dealing with complex chromatographic data and allows extraction of chemical/metabolite information directly from the raw data. Using PARADISe requires only a few inputs from the analyst to process GC-MS data, and it subsequently converts raw netCDF data files into a compiled peak table. Furthermore, the method is generally robust towards minor variations in the input parameters. The method automatically performs peak identification based on deconvoluted mass spectra using the integrated NIST search engine and generates an identification report. In this paper, we compare PARADISe with AMDIS and ChromaTOF in terms of peak quantification and show that PARADISe is more robust to user-defined settings, which are also easier (and much fewer) to set. PARADISe is based on non-proprietary, scientifically evaluated approaches, and we show that PARADISe can handle more overlapping signals and lower signal-to-noise peaks, and does so in a manner that requires only about an hour's worth of work regardless of the number of samples. We also show that there are no non-detects in PARADISe, meaning that all compounds are detected in all samples. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  2. mizuRoute version 1: A river network routing tool for a continental domain water resources applications

    Science.gov (United States)

    Mizukami, Naoki; Clark, Martyn P.; Sampson, Kevin; Nijssen, Bart; Mao, Yixin; McMillan, Hilary; Viger, Roland; Markstrom, Steven; Hay, Lauren E.; Woods, Ross; Arnold, Jeffrey R.; Brekke, Levi D.

    2016-01-01

    This paper describes the first version of a stand-alone runoff routing tool, mizuRoute. The mizuRoute tool post-processes runoff outputs from any distributed hydrologic model or land surface model to produce spatially distributed streamflow at various spatial scales, from headwater basins to continent-wide river systems. The tool can utilize both traditional grid-based river network and vector-based river network data. Both types of river network include river segment lines and the associated drainage basin polygons, but the vector-based river network can represent finer-scale river lines than the grid-based network. Streamflow estimates at any desired location in the river network can be easily extracted from the output of mizuRoute. The routing process is simulated in two separate steps. First, hillslope routing is performed with a gamma-distribution-based unit hydrograph to transport runoff from a hillslope to a catchment outlet. The second step is river channel routing, which is performed with one of two routing scheme options: (1) a kinematic wave tracking (KWT) routing procedure and (2) an impulse response function - unit hydrograph (IRF-UH) routing procedure. The mizuRoute tool also includes scripts (Python, NetCDF operators) to pre-process spatial river network data. This paper demonstrates mizuRoute's capabilities to produce spatially distributed streamflow simulations based on river networks from the United States Geological Survey (USGS) Geospatial Fabric (GF) data set, in which over 54 000 river segments and their contributing areas are mapped across the contiguous United States (CONUS). A brief analysis of model parameter sensitivity is also provided. The mizuRoute tool can assist model-based water resources assessments, including studies of the impacts of climate change on streamflow.
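
    A minimal sketch of the gamma-distribution-based unit-hydrograph step described above - convolving runoff with a gamma-shaped transfer function to delay and attenuate it (the shape and timescale parameters are illustrative, not mizuRoute defaults):

        # Hillslope routing: convolve runoff with a gamma-distribution unit hydrograph.
        import numpy as np
        from scipy.stats import gamma

        dt = 3600.0                                   # time step [s]
        t = np.arange(1, 49) * dt                     # 48 hourly ordinates
        uh = gamma.pdf(t, a=2.5, scale=8 * 3600.0)    # illustrative shape and timescale
        uh /= uh.sum()                                # normalize so that mass is conserved

        runoff = np.zeros(96)
        runoff[10] = 5.0                              # an impulse of runoff [mm/h]
        outflow = np.convolve(runoff, uh)[:runoff.size]
        print(outflow.argmax(), outflow.max())        # peak arrives later and is attenuated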

  3. LaRC Near-Real-Time Satellite-Derived products - A Web-based Resource for Geophysical Applications

    Science.gov (United States)

    Palikonda, R.; Minnis, P.; Spangenberg, D. A.; Baojuan Shan, B.; Chee, T.; Nordeen, M.; Boeke, R.; Nguyen, L.; Kato, S.; Rose, F. G.

    2011-12-01

    Real-time satellite-derived cloud products are gaining acceptance in model assimilation and nowcasting applications. At NASA Langley, we are producing and archiving real-time products from geostationary and polar-orbiting satellites. The products from the full-disk geostationary satellites GOES-11, GOES-13, Meteosat-9, MTSAT-2R and FY-2E are merged to derive 3-hourly global products between 60°N and 60°S. MODIS products augment the polar regions. Research is currently underway on how best to assimilate the cloud and surface information into the Goddard Earth Observing System Model (GEOS-5) analysis to improve the forecasts. NASA Langley's (LaRC) North American domain cloud products from MODIS, GOES-11 and GOES-13 are served to the National Centers for Environmental Prediction Operations Center. Currently, they are assimilated in NOAA's operational Rapid Refresh (RR) model and the developmental CIP model at NCAR to better forecast weather and aircraft icing conditions, in particular for the aviation community. Additionally, real-time satellite cloud products are available over multiple regional domains from GOES (ARM's Southern Great Plains and North Slope of Alaska, CONUS), Meteosat (Western Europe) and MTSAT (tropical western Pacific) at high spatial and temporal resolution, centered over ground sites, and are used to validate our algorithms through comparison with measurements from ground-based instruments. Since these datasets are archived over time, regional and global daily, monthly and annual datasets of the cloud products are being generated that can support climatology studies and investigations of historical events. Our products are also used to support field experiments like SPARTICUS, CALNEX, and STORMVEX in daily mission planning operations and post-experiment science analyses. Tools to view our data on the web and download them in user-friendly formats such as netCDF are also available. This paper describes the currently available products and their

  4. Creating big data from small: using semantic web technology to facilitate the aggregation of diverse European contaminant data for regulatory assessments

    Science.gov (United States)

    Thomas, R.; Kokkinaki, A.; Lowry, R. K.

    2016-12-01

    The European Marine Strategy Framework Directive (MSFD) requires evidence-based reporting by member states to assess the quality of European seas and determine whether they are achieving Good Ecological Status by 2020. One descriptor addresses contaminants: fertilizers, pesticides, antifoulants, heavy metals, etc. There are large amounts of contaminant data available to support this process: >600000 data granules have been identified, ingested and made available from 303 organizations in 38 countries through the EU-funded EMODNet Chemistry program, built on the SeaDataNet (SDN) infrastructure. However, when marked up consistently with SDN vocabularies, the number of unique parameters available is huge (>3000). While many parameters might superficially appear similar, the concentrations reported cannot always be considered equivalent, particularly in sediment and biota. The planned regional-scale data products therefore risked being limited to localized patterns. The strategy adopted to make meaningful aggregations for data product development was to capture the knowledge of domain experts about what could be considered equivalent and publish this knowledge as a thesaurus (a SKOS scheme) through the NERC Vocabulary Server (NVS). Of the >3000 parameters identified, so far 1095 have been mapped to 222 aggregated terms. This "captured domain knowledge" has been used to harmonize the data granules into aggregated data collections. The publication of this knowledge through NVS allows transparency and reproducibility of the aggregation process. Gridded data products are derived from the aggregated collections, with 140 products currently available either as WFS visualizations or netCDF file downloads. This approach shows how small data sets integrated into larger-scale products, some of which can be targeted at non-scientists, have much greater value than envisaged when the data were originally collected.

  5. Pathfinder Sea Surface Temperature Climate Data Record

    Science.gov (United States)

    Baker-Yeboah, S.; Saha, K.; Zhang, D.; Casey, K. S.

    2016-02-01

    Global sea surface temperature (SST) fields are important for understanding ocean and climate variability. The NOAA National Centers for Environmental Information (NCEI) develops and maintains a high-resolution, long-term climate data record (CDR) of global satellite SST. These SST values are generated at approximately 4 km resolution using Advanced Very High Resolution Radiometer (AVHRR) instruments aboard NOAA polar-orbiting satellites, going back to 1981. The Pathfinder SST algorithm is based on the Non-Linear SST algorithm using the modernized NASA SeaWiFS Data Analysis System (SeaDAS). Coefficients for this SST product were generated using regression analyses with co-located in situ and satellite measurements. Previous versions of Pathfinder included level 3 collated (L3C) products. Pathfinder Version 5.3 includes level 2 pre-processed (L2P), level 3 uncollated (L3U), and L3C products. Notably, the data were processed in the cloud using Amazon Web Services and are made available through all of the modern web visualization and subset services provided by the THREDDS Data Server, the Live Access Server, and the OPeNDAP Hyrax server. In this version of Pathfinder SST, anomalous hot spots at land-water boundaries are better identified, and the dataset includes updated land masks and sea ice data over the Antarctic ice shelves. All quality levels of SST values are provided, giving users greater flexibility and the option to apply their own cloud-masking procedures. Additional improvements include consistent cloud tree tests for NOAA-07 and NOAA-19 with respect to the other sensors, improved SSTs in sun glint areas, and netCDF file format improvements to ensure consistency with the latest Group for High Resolution SST (GHRSST) requirements. This quality-controlled satellite SST field is a reference environmental data record utilized as a primary resource of SST for numerous regional and global marine efforts.
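
    A brief sketch of the kind of user-side cloud masking this flexibility enables - keeping only pixels at or above a chosen quality level (the file and variable names follow common GHRSST conventions but are assumptions here):

        # Mask a Pathfinder-style SST field by its per-pixel quality level.
        import numpy as np
        from netCDF4 import Dataset

        with Dataset("pathfinder_example.nc") as nc:    # hypothetical granule
            sst = np.ma.filled(nc.variables["sea_surface_temperature"][0], np.nan)
            quality = np.ma.filled(nc.variables["quality_level"][0], 0)
        best = np.where(quality >= 4, sst, np.nan)      # user-chosen quality threshold
        print(np.nanmean(best))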

  6. C-GLORSv5: an improved multipurpose global ocean eddy-permitting physical reanalysis

    Science.gov (United States)

    Storto, Andrea; Masina, Simona

    2016-11-01

    Global ocean reanalyses combine in situ and satellite ocean observations with a general circulation ocean model to estimate the time-evolving state of the ocean, and they represent a valuable tool for a variety of applications, ranging from climate monitoring and process studies to downstream applications, initialization of long-range forecasts and regional studies. The purpose of this paper is to document the recent upgrade of C-GLORS (version 5), the latest ocean reanalysis produced at the Centro Euro-Mediterraneo per i Cambiamenti Climatici (CMCC), which covers the meteorological satellite era (1980-present) and is being updated in delayed-time mode. The reanalysis is run at eddy-permitting resolution (1/4° horizontal resolution and 50 vertical levels) and consists of a three-dimensional variational data assimilation system, a surface nudging scheme and a bias correction scheme. With respect to the previous version (v4), C-GLORSv5 contains a number of improvements. In particular, background- and observation-error covariances have been retuned, allowing a flow-dependent inflation of the globally averaged background-error variance. An additional constraint on Arctic sea-ice thickness was introduced, leading to a realistic ice volume evolution. Finally, the bias correction scheme and the initialization strategy were retuned. Results document that the new reanalysis outperforms the previous version in many aspects, especially in representing the variability of global heat content and associated steric sea level in the last decade, the top 80 m ocean temperature biases and root mean square errors, and the Atlantic Ocean meridional overturning circulation; slight worsening in high-latitude salinity and deep-ocean temperature emerges, though, providing the motivation for further tuning of the reanalysis system. The dataset is available in NetCDF format at doi:10.1594/PANGAEA.857995.

  7. A Numerical Implementation of a Nonlinear Mild Slope Model for Shoaling Directional Waves

    Directory of Open Access Journals (Sweden)

    Justin R. Davis

    2014-02-01

    Full Text Available We describe the numerical implementation of a phase-resolving, nonlinear spectral model for shoaling directional waves over a mildly sloping beach with straight parallel isobaths. The model accounts for nonlinear, quadratic (triad) wave interactions as well as shoaling and refraction. The model integrates the coupled, nonlinear hyperbolic evolution equations that describe the transformation of the complex Fourier amplitudes of the deep-water directional wave field. Because typical directional wave spectra (observed or produced by deep-water forecasting models such as WAVEWATCH III™) do not contain phase information, individual realizations are generated by associating a random phase with each Fourier mode. The approach provides a natural extension to the deep-water spectral wave models, and has the advantage of fully describing the shoaling wave stochastic process, i.e., the evolution of both the variance and the higher-order statistics (phase correlations), the latter related to the evolution of the wave shape. The numerical implementation (a Fortran 95/2003 code) includes unidirectional (shore-perpendicular) propagation as a special case. Interoperability, both with post-processing programs (e.g., MATLAB/Tecplot 360) and future model coupling (e.g., offshore wave conditions from WAVEWATCH III™), is promoted by using NetCDF-4/HDF5 formatted output files. The capabilities of the model are demonstrated using a JONSWAP spectrum with a cos^(2s) directional distribution, for shore-perpendicular and oblique propagation. The simulated wave transformation under combined shoaling, refraction and nonlinear interactions shows the expected generation of directional harmonics of the spectral peak and of infragravity (frequency < 0.05 Hz) waves. Current development efforts focus on analytic testing, development of additional physics modules essential for applications, and validation with laboratory and field observations.
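
    A compact sketch of the random-phase step described above - turning a directional variance spectrum into one realization of complex Fourier amplitudes (the spectral values and bin width are illustrative):

        # Generate one random-phase realization of complex amplitudes from a variance spectrum.
        import numpy as np

        rng = np.random.default_rng(0)
        S = np.array([0.1, 0.8, 2.0, 0.9, 0.2])          # variance density per mode (illustrative)
        df = 0.01                                        # spectral bin width [Hz]
        amp = np.sqrt(2.0 * S * df)                      # amplitude of each Fourier mode
        phase = rng.uniform(0.0, 2.0 * np.pi, S.size)    # independent random phase per mode
        A = amp * np.exp(1j * phase)                     # complex amplitudes for this realization
        print(np.abs(A), np.angle(A))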

  8. Interpretation of medical imaging data with a mobile application: a mobile digital imaging processing environment.

    Science.gov (United States)

    Lin, Meng Kuan; Nicolini, Oliver; Waxenegger, Harald; Galloway, Graham J; Ullmann, Jeremy F P; Janke, Andrew L

    2013-01-01

    Digital Imaging Processing (DIP) requires data extraction and output from a visualization tool to be consistent. Data handling and transmission between the server and a user is a systematic process in service interpretation. The use of integrated medical services for the management and viewing of imaging data, in combination with a mobile visualization tool, can be greatly facilitated by data analysis and interpretation. This paper presents an integrated mobile application and DIP service, called M-DIP. The objectives of the system are to (1) automate the direct tiling, conversion and pre-tiling of brain images from Medical Imaging NetCDF (MINC) and Neuroimaging Informatics Technology Initiative (NIfTI) to RAW formats; (2) speed up querying of imaging measurements; and (3) display high-resolution images in three dimensions in real-world coordinates. In addition, M-DIP works on a mobile or tablet device without any software installation, using web-based protocols. M-DIP implements a three-level architecture with a relational middle-layer database, a stand-alone DIP server, and a mobile application logic middle level realizing user interpretation for direct querying and communication. This imaging software can display biological imaging data at multiple zoom levels and increase its quality to meet users' expectations. Interpretation of bioimaging data is facilitated by an interface analogous to online mapping services, using real-world coordinate browsing. This allows mobile devices to display multiple datasets simultaneously from a remote site. M-DIP can be used as a measurement repository that can be accessed from any network environment, such as a portable mobile or tablet device. In addition, this system, in combination with mobile applications, establishes a visualization tool in the neuroinformatics field to speed interpretation services.

  9. A Real-time, Two-way, Coupled, Refined, Forecasting System to Predict Coastal Storm Impacts

    Science.gov (United States)

    Armstrong, B. N.; Warner, J. C.; Signell, R. P.

    2012-12-01

    Storms are one of the primary environmental forces causing coastal change. These discrete events often produce large waves, storm surges, and flooding, resulting in coastal erosion. In addition, strong storm-generated currents may pose threats to life, property, and navigation. The ability to predict these events - their location, duration, and magnitude - allows resource managers to better prepare for storm impacts as well as guide post-storm survey assessments and recovery efforts. As a step towards increasing our event prediction capability, we have developed an automated system to run a daily forecast of the Coupled Ocean - Atmosphere - Wave - Sediment Transport (COAWST) Modeling System, which includes the ocean model ROMS and the wave model SWAN. Management of the system is controlled on a high-performance computing cluster. Data required to drive the modeling system include wave, wind and atmospheric surface inputs, and climatology fields obtained from other modeling products. The Unidata Internet Data Distribution/Local Data Manager, nctoolbox and other NetCDF tools are used to access large data sets from the National Centers for Environmental Prediction (NCEP) and the NOMADS site (http://nomads.ncep.noaa.gov). The data are used to create forcings and boundary conditions for the ROMS and SWAN models, which run on both a 5 km US east coast grid and a 1 km nested grid in the Gulf of Maine. Improvements in the modeling system, data acquisition, and visualization methods required for the forecasting system are described. Results of the newly coupled and refined system show improvement in the prediction of the free surface due to the increased resolution from the grid refinement in the Bay of Fundy. The surface currents of the refined system are more consistent with climatology. The surface waves are permitted to interact with the surface currents and show tidal oscillations at certain locations. Additionally, wave heights during storm events are modified by wave

  10. Development of Gridded Innovations and Observations Supplement to MERRA-2

    Science.gov (United States)

    Bosilovich, Michael G.; Da Silva, Arlindo M.; Robertson, Franklin R.

    2017-01-01

    Atmospheric reanalyses have become an important source of data for weather and climate research, owing to the continuity of the data, but especially because of the multitude of observational data included (radiosondes, commercial aircraft, retrieved data products and radiances). However, the presence of assimilated observations can vary based on numerous factors, so it is difficult or impossible for a researcher to say with any degree of certainty how many and what type of observations contributed to the reanalysis data they are using at any given point in time or space. For example, quality control, transmission interruptions, and station outages can occasionally affect data availability. While orbital paths can be known, drift in certain instruments and the large number of available instruments make it challenging to know which satellite is observing any region at any point in the diurnal cycle. Furthermore, there is information in the statistics generated by the data assimilation that can help in understanding the model and the quality of the reanalysis. Typically, the assimilated observations and their innovations are in observation-space data formats and have not been made easily available to reanalysis users. A test data set has been developed to make the MERRA-2 assimilated observations available for rapid and general use by simplifying the data format. The observations are binned to a grid similar to that of MERRA-2 and saved as netCDF. This data collection includes the mean, variance, and number of observations in each bin. The data also include the innovations from the data assimilation - the forecast departure and the analysis increment - as well as bias corrections (for satellite radiances). We refer to this proof-of-concept data set as the MERRA-2 Gridded Innovations and Observations (GIO). In this paper, we present the data format and its strengths and limitations, with some initial testing and validation of the methodology.
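
    A small sketch of the kind of binning described - accumulating scattered observations onto a regular latitude-longitude grid while keeping the count, mean and variance per cell (the grid spacing and observation arrays are illustrative):

        # Bin scattered observations to a 1-degree grid, keeping count, mean and variance.
        import numpy as np

        lats = np.array([10.2, 10.7, 10.4, -33.1])       # observation latitudes (illustrative)
        lons = np.array([140.1, 140.3, 140.2, 18.9])
        vals = np.array([287.1, 288.0, 287.4, 290.2])    # e.g., brightness temperatures [K]

        iy = ((lats + 90.0) // 1.0).astype(int)          # row index of each observation
        ix = ((lons + 180.0) // 1.0).astype(int)         # column index of each observation

        count = np.zeros((180, 360))
        total = np.zeros_like(count)
        sq = np.zeros_like(count)
        np.add.at(count, (iy, ix), 1)
        np.add.at(total, (iy, ix), vals)
        np.add.at(sq, (iy, ix), vals ** 2)
        with np.errstate(invalid="ignore", divide="ignore"):
            mean = total / count
            var = sq / count - mean ** 2                 # per-bin population variance
        print(int(count.sum()), np.nanmax(mean))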

  11. NetCDF-CF-OPeNDAP: Standards for ocean data interoperability and object lessons for community data standards processes

    Science.gov (United States)

    Hankin, Steven C.; Blower, Jon D.; Carval, Thierry; Casey, Kenneth S.; Donlon, Craig; Lauret, Olivier; Loubrieu, Thomas; Srinivasan, Ashwanth; Trinanes, Joaquin; Godøy, Øystein; Mendelssohn, Roy; Signell, Richard P.; de La Beaujardiere, Jeff; Cornillon, Peter; Blanc, Frederique; Rew, Russ; Harlan, Jack; Hall, Julie; Harrison, D.E.; Stammer, Detlef

    2010-01-01

    It is generally recognized that meeting society's emerging environmental science and management needs will require the marine data community to provide simpler, more effective and more interoperable access to its data. There is broad agreement, as well, that data standards are the bedrock upon which interoperability will be built. The path that would bring the marine data community to agree upon and utilize such standards, however, is often elusive. In this paper we examine the trio of standards: 1) netCDF files; 2) the Climate and Forecast (CF) metadata convention; and 3) the OPeNDAP data access protocol. These standards taken together have brought our community a high level of interoperability for "gridded" data such as model outputs, satellite products and climatological analyses, and they are gaining rapid acceptance for ocean observations. We provide an overview of the scope of the contribution that has been made. We then step back from the information technology considerations to examine the community or "social" process by which the successes were achieved. We contrast this path with the one by which the World Meteorological Organization (WMO) has advanced the Global Telecommunications System (GTS): netCDF/CF/OPeNDAP exemplifies a "bottom-up" standards process, whereas GTS is "top-down". Both are tales of success at achieving specific purposes, yet each is hampered by technical limitations. These limitations sometimes lead to controversy over whether alternative technological directions should be pursued. Finally, we draw general conclusions regarding the factors that affect the success of a standards development effort - the likelihood that an IT standard will meet its design goals and achieve community-wide acceptance. We believe that a higher level of thoughtful awareness by the scientists, program managers and technology experts of the vital role of standards and the merits of alternative standards processes can help us as a community to

  12. Autonomous Underwater Vehicle Data Management and Metadata Interoperability for Coastal Ocean Studies

    Science.gov (United States)

    McCann, M. P.; Ryan, J. P.; Chavez, F. P.; Rienecker, E.

    2004-12-01

    Data from over 1000 km of Autonomous Underwater Vehicle (AUV) surveys of Monterey Bay have been collected and cataloged in an ocean observatory data management system. The Monterey Bay Aquarium Research Institute's AUV is equipped with a suite of instruments that includes a conductivity, temperature, depth (CTD) instrument, transmissometers, a fluorometer, a nitrate sensor, and an inertial navigation system. Data are logged on the vehicle and, upon completion of a survey, XML descriptions of the data are submitted to the Shore Side Data System (SSDS). Instrument data are then processed on shore to apply calibrations and produce scientifically useful data products. The SSDS employs a data model that tracks data from the instrument that created it through all the consuming processes that generate derived products. SSDS employs OPeNDAP and netCDF to provide data set interoperability at the data level. The core of SSDS is the metadata that catalogs these data sets and their relations to all other relevant data. The metadata are managed in a relational database and governed by an Enterprise JavaBeans (EJB) server application. Cross-platform Java applications have been written to manage and visualize these data. A Java Swing application - the Hierarchical Ocean Observatory Visualization and Editing System (HOOVES) - has been developed to provide visualization of data set pedigree and data set variables. Because the SSDS data model is generalized according to "Data Producers" and "Data Containers", many different types of data can be represented in SSDS, allowing for interoperability at the metadata level. Comparisons of appropriate data sets, whether from an autonomous underwater vehicle or from a fixed mooring, are easily made using SSDS. The authors will present the SSDS data model and show examples of how the model helps organize data set metadata, allowing for data discovery and interoperability. With improved discovery and interoperability the system is helping us

  13. Promoting discovery and access to real time observations produced by regional coastal ocean observing systems

    Science.gov (United States)

    Anderson, D. M.; Snowden, D. P.; Bochenek, R.; Bickel, A.

    2015-12-01

    In the U.S. coastal waters, a network of eleven regional coastal ocean observing systems supports real-time coastal and ocean observing. The platforms supported and variables acquired are diverse, ranging from current-sensing high frequency (HF) radar to autonomous gliders. The system incorporates data produced by other networks and experimental systems, further increasing the breadth of the collection. Strategies promoted by the U.S. Integrated Ocean Observing System (IOOS) ensure these data are not lost at sea. Every data set deserves a description: ISO- and FGDC-compliant metadata enables catalog interoperability and record sharing. Extensive use of netCDF with the Climate and Forecast (CF) convention (specifying both metadata and a structured format) is shown to be a powerful strategy to promote discovery, interoperability, and re-use of the data. To integrate specialized data, which are often obscure, quality control protocols are being developed to homogenize the QC and make these data easier to integrate. Data Assembly Centers have been established to integrate some specialized streams, including gliders, animal telemetry, and HF radar. Subsets of data that are ingested into the National Data Buoy Center are also routed to the Global Telecommunications System (GTS) of the World Meteorological Organization to assure wide international distribution. From the GTS, data are assimilated into now-cast and forecast models, fed to other observing systems, and used to support observation-based decision making such as forecasts, warnings, and alerts. For a few years, apps were a popular way to deliver these real-time data streams to phones and tablets. Responsive and adaptive web sites are an emerging, flexible strategy to provide access to the regional coastal ocean observations.

  14. Distributed data discovery, access and visualization services to Improve Data Interoperability across different data holdings

    Science.gov (United States)

    Palanisamy, G.; Krassovski, M.; Devarakonda, R.; Santhana Vannan, S.

    2012-12-01

    The current climate debate is highlighting the importance of free, open, and authoritative sources of high quality climate data that are available for peer review and for collaborative purposes. It is increasingly important to allow various organizations around the world to share climate data in an open manner, and to enable them to perform dynamic processing of climate data. This advanced access to data can be enabled via Web-based services, using common "community agreed" standards, without having to change the internal structure used to describe the data. The modern scientific community has become diverse and increasingly complex in nature. To meet the demands of such a diverse user community, the modern data supplier has to provide data and other related information through searchable, data- and process-oriented tools. This can be accomplished by setting up an on-line, Web-based system with a relational database as a back end. The following common features of web data access/search systems will be outlined in the proposed presentation:
    - Flexible data discovery
    - Data in commonly used formats (e.g., CSV, NetCDF)
    - Metadata prepared in standard formats (FGDC, ISO 19115, EML, DIF, etc.)
    - Data subsetting capabilities and the ability to narrow down to individual data elements
    - Standards-based data access protocols and mechanisms (SOAP, REST, OPeNDAP, OGC, etc.)
    - Integration of services across different data systems (discovery to access, visualization, and subsetting)
    This presentation will also include specific examples of the integration of various data systems developed by Oak Ridge National Laboratory's Climate Change Science Institute, and their ability to communicate with each other to enable better data interoperability and data integration. References: [1] Devarakonda, Ranjeet, and Harold Shanafield. "Drupal: Collaborative framework for science research." Collaboration Technologies and Systems (CTS), 2011 International Conference on. IEEE, 2011. [2

  15. Operable Data Management for Ocean Observing Systems

    Science.gov (United States)

    Chavez, F. P.; Graybeal, J. B.; Godin, M. A.

    2004-12-01

    As oceanographic observing systems become more numerous and complex, data management solutions must follow. Most existing oceanographic data management systems fall into one of three categories: they have been developed as dedicated solutions, with limited application to other observing systems; they expect that data will be pre-processed into well-defined formats, such as netCDF; or they are conceived as robust, generic data management solutions, with complexity (high) and maturity and adoption rates (low) to match. Each approach has strengths and weaknesses; no approach yet fully addresses, nor takes advantage of, the sophistication of ocean observing systems as they are now conceived. In this presentation we describe critical data management requirements for advanced ocean observing systems, of the type envisioned by ORION and IOOS. By defining common requirements -- functional, qualitative, and programmatic -- for all such ocean observing systems, the performance and nature of the general data management solution can be characterized. Issues such as scalability, maintaining metadata relationships, data access security, visualization, and operational flexibility suggest baseline architectural characteristics, which may in turn lead to reusable components and approaches. Interoperability with other data management systems, with standards-based solutions in metadata specification and data transport protocols, and with the data management infrastructure envisioned by IOOS and ORION, can also be used to define necessary capabilities. Finally, some requirements for the software infrastructure of ocean observing systems can be inferred. Early operational results and lessons learned from the development and operation of MBARI ocean observing systems are used to illustrate key requirements, choices, and challenges. Reference systems include the Monterey Ocean Observing System (MOOS), its component software systems (Software Infrastructure and Applications for MOOS, and

  16. Ophidia: high performance data analytics for climate change

    Science.gov (United States)

    Fiore, S.; Williams, D. N.; Foster, I.; Aloisio, G.

    2013-12-01

    This work presents the most relevant results of the Ophidia project, a big data analytics research effort applied to climate change. It combines high-performance computing and database management systems to provide users with an efficient and climate-oriented data analytics platform. Ophidia extends current relational database systems, in terms of both Structured Query Language (SQL) primitives and data types, to enable efficient data analysis tasks on scientific array-based data. It exploits a proprietary storage model jointly with a parallel software framework based on the Message Passing Interface (MPI) to run anything from single tasks to more complex dataflows. The current version of the Ophidia framework includes more than 60 array-based primitives and about 25 operators (16 parallel and 9 sequential). Among others, the available array-based primitives allow users to perform data sub-setting, data aggregation, array concatenation, algebraic expressions, and predicate evaluation; nesting is also supported. Relevant examples of the parallel operators include (i) data sub-setting (slicing and dicing), (ii) data aggregation, (iii) array-based primitives, (iv) dataset duplication, and (v) NetCDF import and export. The Ophidia framework is being tested on NetCDF data produced in the context of the international Coupled Model Intercomparison Project Phase 5 (CMIP5) and available through the Earth System Grid Federation infrastructure. The current set of use cases includes: 1) data subsetting (e.g., slicing and dicing); 2) time series analysis (e.g., data summary and statistics); 3) data reduction (e.g., from daily to monthly or annual data); 4) data transformation (e.g., re-gridding); 5) data intercomparison (e.g., model and scenario intercomparison); and 6) a composition of the aforementioned tasks. This work will highlight the most relevant architectural and infrastructural aspects of the Ophidia project, the parallel framework, the current set of
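
    The Ophidia primitives themselves are not reproduced here; as a rough illustration of the operation classes listed above (sub-setting, aggregation, time reduction), the following Python sketch performs equivalent steps with netCDF4 and numpy. File and variable names are assumptions.

        import numpy as np
        from netCDF4 import Dataset

        ds = Dataset("tas_day_model_historical.nc")   # hypothetical CMIP5-style file
        tas = ds.variables["tas"][:]                  # dimensions assumed (time, lat, lon)

        region = tas[:, 40:60, 100:140]               # sub-setting (slicing and dicing)
        series = region.mean(axis=(1, 2))             # spatial aggregation per time step

        days = 30                                     # toy reduction; real calendars vary
        n = (len(series) // days) * days
        monthly = series[:n].reshape(-1, days).mean(axis=1)   # daily -> "monthly" reduction
        ds.close()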

  17. SeaView: bringing EarthCube to the Oceanographer

    Science.gov (United States)

    Stocks, K. I.; Diggs, S. C.; Arko, R. A.; Kinkade, D.; Shepherd, A.

    2016-12-01

    As new instrument types are developed and new observational programs start in support of a growing community of "dry" oceanographers, the ability to find, access, and visualize existing data of interest becomes increasingly critical. Yet ocean data, when available, are held in multiple data facilities, in different formats, and accessible through different pathways. This creates practical problems with integrating and working across different data sets. The SeaView project is building connections between the rich data resources in five major oceanographic data facilities - BCO-DMO, CCHDO, OBIS, OOI, and R2R* - creating a federated set of thematic data collections that are organized around common characteristics (geographic location, time, expedition, program, data type, etc.) and published online in Web Accessible Folders using standard file formats such as ODV and NetCDF. The work includes not simply reformatting data, but identifying and, where possible, addressing interoperability challenges: which common identifiers for core concepts can connect data across repositories; which terms a scientist may want to search for that, if added to the data repositories, will increase discoverability; the presence of duplicate data across repositories; etc. We will present the data collections available to date, including data from the OOI Pioneer Array region, and seek scientists' input on the data types and formats they prefer, the tools they use to analyze and visualize data, and their specific recommendations for future data collections to support oceanographic science. * Biological and Chemical Oceanography Data Management Office (BCO-DMO), CLIVAR and Carbon Hydrographic Data Office (CCHDO), International Ocean Biogeographic Information System (iOBIS), Ocean Observatories Initiative (OOI), and Rolling Deck to Repository (R2R) Program.

  18. ClimateSpark: An In-memory Distributed Computing Framework for Big Climate Data Analytics

    Science.gov (United States)

    Hu, F.; Yang, C. P.; Duffy, D.; Schnase, J. L.; Li, Z.

    2016-12-01

    Massive array-based climate data are being generated from global surveillance systems and model simulations. They are widely used to analyze environmental problems such as climate change, natural hazards, and public health. However, extracting the underlying information from these big climate datasets is challenging due to both data- and computing-intensive issues in data processing and analysis. To tackle these challenges, this paper proposes ClimateSpark, an in-memory distributed computing framework to support big climate data processing. In ClimateSpark, a spatiotemporal index is developed to enable Apache Spark to treat array-based climate data (e.g., netCDF4, HDF4) as native formats, stored in the Hadoop Distributed File System (HDFS) without any preprocessing. Based on the index, spatiotemporal query services are provided to retrieve datasets according to a defined geospatial and temporal bounding box. The data subsets are read out, and a data partition strategy is applied to split the queried data equally across the computing nodes and store them in memory as climateRDDs for processing. By leveraging Spark SQL and User Defined Functions (UDFs), climate data analysis operations can be conducted in intuitive SQL. ClimateSpark is evaluated in two use cases using the NASA Modern-Era Retrospective Analysis for Research and Applications (MERRA) climate reanalysis dataset. One use case is to conduct a spatiotemporal query and visualize the subset results in an animation; the other is to compare different climate model outputs using a Taylor-diagram service. Experimental results show that ClimateSpark can significantly accelerate data query and processing, and enables complex analysis services offered in an SQL-style fashion.
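
    As a minimal sketch of the SQL-style analysis described (not ClimateSpark's actual API, which the abstract does not show), the following registers a Spark SQL UDF and queries a toy in-memory DataFrame; in ClimateSpark the table would instead be backed by the spatiotemporal index over netCDF/HDF files in HDFS.

        from pyspark.sql import SparkSession
        from pyspark.sql.types import DoubleType

        spark = SparkSession.builder.appName("climate-sql-sketch").getOrCreate()
        rows = [("2016-01-01", 40.0, -105.0, 271.3),
                ("2016-01-02", 40.0, -105.0, 268.9)]
        df = spark.createDataFrame(rows, ["date", "lat", "lon", "temp_k"])
        df.createOrReplaceTempView("temperature")

        # User Defined Function: Kelvin to Celsius, callable from SQL.
        spark.udf.register("k_to_c", lambda k: k - 273.15, DoubleType())

        spark.sql("SELECT date, k_to_c(temp_k) AS temp_c "
                  "FROM temperature WHERE lat BETWEEN 35 AND 45").show()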

  19. National Climate Assessment - Land Data Assimilation System (NCA-LDAS) Data and services at NASA GES DISC

    Science.gov (United States)

    Rui, H.; Vollmer, B.; Teng, W. L.; Jasinski, M. F.; Mocko, D. M.; Kempler, S. J.

    2016-12-01

    The National Climate Assessment-Land Data Assimilation System (NCA-LDAS) is an Integrated Terrestrial Water Analysis and one of NASA's contributions to the NCA of the United States. The NCA-LDAS has undergone extensive development, including multi-variate assimilation of remotely-sensed water states and anomalies as well as evaluation and verification studies, led by the Goddard Space Flight Center's Hydrological Sciences Laboratory (HSL). The resulting NCA-LDAS data have recently been released to the general public and include those from the Noah land-surface model (LSM) version 3.3 (Noah-3.3) and the Catchment LSM version Fortuna-2.5 (CLSM-F2.5). Standard LSM output variables, including soil moisture/temperature, surface fluxes, snow cover/depth, groundwater, and runoff, are provided, as well as streamflow from a river routing system. The NCA-LDAS data are archived at and distributed by the NASA Goddard Earth Sciences Data and Information Services Center (GES DISC). The data can be accessed via HTTP, OPeNDAP, Mirador search and download, and NASA Earthdata Search. To further facilitate access and use, the NCA-LDAS data are integrated into NASA Giovanni, for quick visualization and analysis, and into the Data Rods system, for retrieval of time series over long time periods. The temporal and spatial resolutions of the NCA-LDAS data are, respectively, daily averages and 0.125 x 0.125 degree, covering North America (25N-53N; 125W-67W) and the period January 1979 to December 2015. The data files are in self-describing, machine-independent, CF-compliant netCDF-4 format. This presentation summarizes the major characteristics of the NCA-LDAS data and describes the data services and access methods.

  20. Extending Climate Analytics as a Service to the Earth System Grid Federation Progress Report on the Reanalysis Ensemble Service

    Science.gov (United States)

    Tamkin, G.; Schnase, J. L.; Duffy, D.; Li, J.; Strong, S.; Thompson, J. H.

    2016-12-01

    We are extending climate analytics-as-a-service, including: (1) a high-performance Virtual Real-Time Analytics Testbed supporting six major reanalysis data sets, using advanced technologies like Cloudera Impala-based SQL and Hadoop-based MapReduce analytics over native NetCDF files; (2) a Reanalysis Ensemble Service (RES) that offers a basic set of commonly used operations over the reanalysis collections, accessible through NASA's climate data analytics Web services and our client-side Climate Data Services Python library, CDSlib; and (3) an Open Geospatial Consortium (OGC) WPS-compliant Web service interface to CDSlib to accommodate ESGF's Web service endpoints. This presentation will report on the overall progress of this effort, with special attention to recent enhancements that have been made to the Reanalysis Ensemble Service, including the following:
    - A CDSlib Python library that supports full temporal, spatial, and grid-based resolution services
    - A new reanalysis collections reference model to enable operator design and implementation
    - An enhanced library of sample queries to demonstrate and develop use case scenarios
    - Extended operators that enable single- and multiple-reanalysis area averages, vertical averages, re-gridding, and trend, climatology, and anomaly computations
    - Full support for the MERRA-2 reanalysis and the initial integration of two additional reanalyses
    - A prototype Jupyter notebook-based distribution mechanism that combines CDSlib documentation with interactive use case scenarios and personalized project management
    - Prototyped uncertainty quantification services that combine ensemble products with comparative observational products
    - Convenient, one-stop shopping for commonly used data products from multiple reanalyses, including basic subsetting and arithmetic operations over the data and extraction of trends, climatologies, and anomalies
    - The ability to compute and visualize multiple reanalysis intercomparisons

  1. An information model for managing multi-dimensional gridded data in a GIS

    Science.gov (United States)

    Xu, H.; Abdul-Kadar, F.; Gao, P.

    2016-04-01

    Earth observation agencies like NASA and NOAA produce huge volumes of historical, near real-time, and forecasting data representing terrestrial, atmospheric, and oceanic phenomena. The data drives climatological and meteorological studies, and underpins operations ranging from weather pattern prediction and forest fire monitoring to global vegetation analysis. These gridded data sets are distributed mostly as files in HDF, GRIB, or netCDF format and quantify variables like precipitation, soil moisture, or sea surface temperature, along one or more dimensions like time and depth. Although the data cube is a well-studied model for storing and analyzing multi-dimensional data, the GIS community remains in need of a solution that simplifies interactions with the data, and elegantly fits with existing database schemas and dissemination protocols. This paper presents an information model that enables Geographic Information Systems (GIS) to efficiently catalog very large heterogeneous collections of geospatially-referenced multi-dimensional rasters—towards providing unified access to the resulting multivariate hypercubes. We show how the implementation of the model encapsulates format-specific variations and provides unified access to data along any dimension. We discuss how this framework lends itself to familiar GIS concepts like image mosaics, vector field visualization, layer animation, distributed data access via web services, and scientific computing. Global data sources like MODIS from USGS and HYCOM from NOAA illustrate how one would employ this framework for cataloging, querying, and intuitively visualizing such hypercubes. ArcGIS—an established platform for processing, analyzing, and visualizing geospatial data—serves to demonstrate how this integration brings the full power of GIS to the scientific community.
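
    The interaction model described (unified access to a hypercube along any dimension) can be sketched outside a GIS with xarray; this illustrates the concept only, not the information model's implementation, and the file and variable names are assumptions.

        import xarray as xr

        ds = xr.open_dataset("sst_monthly.nc")        # hypothetical netCDF source
        point = ds["sst"].sel(lat=36.5, lon=-122.0, method="nearest")  # series at a point
        slab = ds["sst"].sel(time="2015-06")          # one time slice, full grid
        mean_map = ds["sst"].mean(dim="time")         # aggregate along the time dimension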

  2. Development of climate data storage and processing model

    Science.gov (United States)

    Okladnikov, I. G.; Gordov, E. P.; Titov, A. G.

    2016-11-01

    We present a storage and processing model for climate datasets elaborated in the framework of a virtual research environment (VRE) for climate and environmental monitoring and analysis of the impact of climate change on socio-economic processes at local and regional scales. The model is based on a «shared nothing» distributed computing architecture and assumes a computing network where each computing node is independent and self-sufficient. Each node hosts dedicated software for the processing and visualization of geospatial data and provides programming interfaces to communicate with the other nodes. The nodes are interconnected by a local network or the Internet and exchange data and control instructions via SSH connections and web services. Geospatial data are represented by collections of netCDF files stored in a hierarchy of directories within a file system. To speed up data reading and processing, three approaches are proposed: precalculation of intermediate products, distribution of data across multiple storage systems (with or without redundancy), and caching and reuse of previously obtained products. For fast search and retrieval of the required data, a metadata database is developed according to the data storage and processing model. It contains descriptions of the space-time features of the datasets available for processing and their locations, as well as descriptions and run options of the software components for data analysis and visualization. The model and the metadata database together will provide a reliable technological basis for the development of a high-performance virtual research environment for climatic and environmental monitoring.
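
    A minimal sketch of the metadata database idea, assuming a directory tree of netCDF files with a CF 'time' coordinate; SQLite stands in for the DBMS, which the abstract does not name.

        import glob, sqlite3
        from netCDF4 import Dataset

        db = sqlite3.connect("catalog.db")
        db.execute("""CREATE TABLE IF NOT EXISTS datasets
                      (path TEXT, t0 REAL, t1 REAL, variables TEXT)""")

        for path in glob.glob("archive/**/*.nc", recursive=True):
            with Dataset(path) as ds:
                t = ds.variables["time"][:]           # assumes a CF 'time' coordinate
                db.execute("INSERT INTO datasets VALUES (?,?,?,?)",
                           (path, float(t.min()), float(t.max()),
                            ",".join(ds.variables.keys())))
        db.commit()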

  3. OOSTethys - Open Source Software for the Global Earth Observing Systems of Systems

    Science.gov (United States)

    Bridger, E.; Bermudez, L. E.; Maskey, M.; Rueda, C.; Babin, B. L.; Blair, R.

    2009-12-01

    An open source software project is much more than just picking the right license, hosting modular code and providing effective documentation. Success in advancing in an open, collaborative way requires that the process match the expected code functionality to the developers' personal expertise and organizational needs, as well as having an enthusiastic and responsive core lead group. We will present the lessons learned from OOSTethys, which is a community of software developers and marine scientists who develop open source tools, in multiple languages, to integrate ocean observing systems into an Integrated Ocean Observing System (IOOS). OOSTethys' goal is to dramatically reduce the time it takes to install, adopt and update standards-compliant web services. OOSTethys has developed servers, clients and a registry. Open source Perl, Python, Java and ASP toolkits and reference implementations are helping the marine community publish near real-time observation data in interoperable standard formats. In some cases, publishing an Open Geospatial Consortium (OGC) Sensor Observation Service (SOS) from netCDF files, a database, or even CSV text files can take only minutes, depending on the skills of the developer. OOSTethys is also developing an OGC standard registry, a Catalog Service for the Web (CSW). This open source CSW registry was implemented to easily register and discover SOSs using ISO 19139 service metadata. A web interface layer over the CSW registry simplifies the registration process by harvesting metadata describing the observations and sensors from the “GetCapabilities” response of an SOS. OpenIOOS is the web client, developed in Perl, to visualize the sensors in the SOS services. While the number of OOSTethys software developers is small, currently about 10 around the world, the number of OOSTethys toolkit implementers is larger and growing, and ease of use has played a large role in spreading interoperable, standards-compliant web services widely.
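
    On the client side, the SOS interface published by these toolkits is plain HTTP with standard OGC key-value parameters; a hedged sketch against a placeholder endpoint:

        import requests

        # Hypothetical OOSTethys-published SOS endpoint.
        resp = requests.get("http://example.org/oostethys/sos",
                            params={"service": "SOS", "request": "GetCapabilities"})
        print(resp.status_code, resp.headers.get("Content-Type"))
        # The XML response advertises observation offerings, which a CSW registry
        # can harvest into ISO 19139 service metadata as described above.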

  4. Interpretation of Medical Imaging Data with a Mobile Application: A Mobile Digital Imaging Processing Environment

    Science.gov (United States)

    Lin, Meng Kuan; Nicolini, Oliver; Waxenegger, Harald; Galloway, Graham J.; Ullmann, Jeremy F. P.; Janke, Andrew L.

    2013-01-01

    Digital Imaging Processing (DIP) requires data extraction and output from a visualization tool to be consistent. Data handling and transmission between the server and a user is a systematic process in service interpretation. The use of integrated medical services for management and viewing of imaging data, in combination with a mobile visualization tool, can be greatly facilitated by data analysis and interpretation. This paper presents an integrated mobile application and DIP service, called M-DIP. The objectives of the system are to (1) automate the direct tiling, conversion, and pre-tiling of brain images from Medical Imaging NetCDF (MINC) and Neuroimaging Informatics Technology Initiative (NIFTI) to RAW formats; (2) speed up querying of imaging measurements; and (3) display images at high levels of detail in three dimensions using real-world coordinates. In addition, M-DIP can be used on a mobile or tablet device without any software installation, using web-based protocols. M-DIP implements a three-level architecture with a relational middle-layer database, a stand-alone DIP server, and a mobile application logic layer that realizes user interpretation for direct querying and communication. This imaging software can display biological imaging data at multiple zoom levels and increase image quality to meet users' expectations. Interpretation of bioimaging data is facilitated by an interface analogous to online mapping services, using real-world coordinate browsing. This allows mobile devices to display multiple datasets simultaneously from a remote site. M-DIP can be used as a measurement repository that can be accessed from any network environment, such as a portable mobile or tablet device. In addition, this system, in combination with mobile applications, establishes a visualization tool in the neuroinformatics field to speed interpretation services. PMID:23847587

  5. SIMAT: GC-SIM-MS data analysis tool.

    Science.gov (United States)

    Ranjbar, Mohammad R Nezami; Di Poto, Cristina; Wang, Yue; Ressom, Habtom W

    2015-08-19

    Gas chromatography coupled with mass spectrometry (GC-MS) is one of the technologies widely used for qualitative and quantitative analysis of small molecules. In particular, GC coupled to single quadrupole MS can be utilized for targeted analysis by selected ion monitoring (SIM). However, to our knowledge, there are no software tools specifically designed for analysis of GC-SIM-MS data. In this paper, we introduce a new R/Bioconductor package called SIMAT for quantitative analysis of the levels of targeted analytes. SIMAT provides guidance in choosing fragments for a list of targets. This is accomplished through an optimization algorithm that has the capability to select the most appropriate fragments from overlapping chromatographic peaks based on a pre-specified library of background analytes. The tool also allows visualization of the total ion chromatograms (TIC) of runs and extracted ion chromatograms (EIC) of analytes of interest. Moreover, retention index (RI) calibration can be performed and raw GC-SIM-MS data can be imported in netCDF or NIST mass spectral library (MSL) formats. We evaluated the performance of SIMAT using two GC-SIM-MS datasets obtained by targeted analysis of: (1) plasma samples from 86 patients in a targeted metabolomic experiment; and (2) mixtures of internal standards spiked in plasma samples at varying concentrations in a method development study. Our results demonstrate that SIMAT offers alternative solutions to AMDIS and MetaboliteDetector to achieve accurate detection of targets and estimation of their relative intensities by analysis of GC-SIM-MS data. We introduce a new R package called SIMAT that allows the selection of the optimal set of fragments and retention time windows for target analytes in GC-SIM-MS based analysis. Also, various functions and algorithms are implemented in the tool to: (1) read and import raw data and spectral libraries; (2) perform GC-SIM-MS data preprocessing; and (3) plot and visualize EICs and TICs.

  6. Acoustic Doppler Current Profiler Data Processing System manual [ADCP

    Science.gov (United States)

    Cote, Jessica M.; Hotchkiss, Frances S.; Martini, Marinna A.; Denham, Charles R.; revisions by Ramsey, Andree L.; Ruane, Stephen

    2000-01-01

    This open-file report describes the data processing software currently in use by the U.S. Geological Survey (USGS), Woods Hole Coastal and Marine Science Center (WHCMSC), to process time series of acoustic Doppler current data obtained by Teledyne RD Instruments Workhorse model ADCPs. The Sediment Transport Instrumentation Group (STG) at the WHCMSC has a long-standing commitment to providing scientists high quality oceanographic data published in a timely manner. To meet this commitment, STG has created this software to aid personnel in processing and reviewing data as well as evaluating hardware for signs of instrument malfunction. The output data format for the data is network Common Data Form (netCDF), which meets USGS publication standards. Typically, ADCP data are recorded in beam coordinates. This conforms to the USGS philosophy to post-process rather than internally process data. By preserving the original data quality indicators as well as the initial data set, data can be evaluated and reprocessed for different types of analyses. Beam coordinate data are desirable for internal and surface wave experiments, for example. All the code in this software package is intended to run using the MATLAB program available from The Mathworks, Inc. As such, it is platform independent and can be adapted by the USGS and others for specialized experiments with non-standard requirements. The software is continuously being updated and revised as improvements are required. The most recent revision may be downloaded from: http://woodshole.er.usgs.gov/operations/stg/Pubs/ADCPtools/adcp_index.htm The USGS makes this software available at the user's discretion and responsibility.

  7. Controlled Vocabulary Service Application for Environmental Data Store

    Science.gov (United States)

    Ji, P.; Piasecki, M.; Lovell, R.

    2013-12-01

    In this paper we present a controlled vocabulary service application for the Environmental Data Store (EDS). The purpose of the application is to help researchers and investigators archive, manage, share, search, and retrieve data efficiently in EDS. The Simple Knowledge Organization System (SKOS) is used in the application to represent the controlled vocabularies coming from EDS. The controlled vocabularies of EDS are created by collecting, comparing, choosing and merging controlled vocabularies, taxonomies and ontologies widely used and recognized in the geoscience/environmental informatics community, such as the Environment Ontology (EnvO), the Semantic Web for Earth and Environmental Terminology (SWEET) ontology, the CUAHSI Hydrologic Ontology and ODM Controlled Vocabulary, the National Environmental Methods Index (NEMI), National Water Information System (NWIS) codes, the EPSG Geodetic Parameter Data Set, WQX domain values, etc. TemaTres, an open-source, web-based thesaurus management package, is employed and extended to create and manage the controlled vocabularies of EDS in the application. TemaTresView and VisualVocabulary, which work well with TemaTres, are also integrated in the application to provide tree and graphical views of the structure of the vocabularies. The Open Source Edition of Virtuoso Universal Server is set up to provide a Web interface for SPARQL queries against the controlled vocabularies hosted on the Environmental Data Store. Replicas of some of the key vocabularies commonly used in the community are also maintained as part of the application, such as the General Multilingual Environmental Thesaurus (GEMET) and the NetCDF Climate and Forecast (CF) Standard Names. The application has now been deployed as an elementary, experimental prototype that provides management, search and download of the EDS controlled vocabularies under the SKOS framework.
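
    As an illustration of the kind of SPARQL query such a Virtuoso endpoint serves over SKOS vocabularies (the endpoint URL here is a placeholder):

        import requests

        query = """
        PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
        SELECT ?concept ?label ?broader WHERE {
          ?concept skos:prefLabel ?label .
          OPTIONAL { ?concept skos:broader ?broader }
        } LIMIT 20
        """
        resp = requests.get("http://example.org/sparql",   # hypothetical endpoint
                            params={"query": query,
                                    "format": "application/sparql-results+json"})
        for row in resp.json()["results"]["bindings"]:
            print(row["label"]["value"])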

  8. Comparing apples and oranges: the Community Intercomparison Suite

    Science.gov (United States)

    Schutgens, Nick; Stier, Philip; Kershaw, Philip; Pascoe, Stephen

    2015-04-01

    Visual representation and comparison of geoscientific datasets present a huge challenge due to the large variety of file formats and spatio-temporal sampling of data (be they observations or simulations). The Community Intercomparison Suite (CIS) attempts to greatly simplify these tasks for users by offering an intelligent but simple command line tool for visualisation and colocation of diverse datasets. In addition, CIS can subset and aggregate large datasets into smaller, more manageable ones. Our philosophy is to remove, as much as possible, the need for specialist knowledge by the user of the structure of a dataset. The colocation of observations with model data is as simple as "cis col <variable>:<model files> <observation file>:<options>", which will resample the simulation data to the spatio-temporal sampling of the observations, contingent on a few user-defined options that specify a resampling kernel. As an example, we apply CIS to a case study of biomass burning aerosol from the Congo. Remote sensing observations, in-situ observations and model data are shown in various plots, with the purpose of either comparing different datasets or integrating them into a single comprehensive picture. CIS can deal with both gridded and ungridded datasets of 2, 3 or 4 spatio-temporal dimensions. It can handle different spatial coordinates (e.g. longitude or distance, altitude or pressure level). CIS supports HDF, netCDF and ASCII file formats. The suite is written in Python with entirely publicly available open source dependencies. Plug-ins allow a high degree of user modifiability. A web-based developer hub includes a manual and simple examples. CIS is developed as open source code by a specialist IT company under the supervision of scientists from the University of Oxford and the Centre for Environmental Data Archival, as part of investment in the JASMIN super-data-cluster facility.
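
    A hedged sketch of typical CIS invocations for the workflow described (the datagroup and option grammar below is simplified and assumed; consult the CIS manual for the exact syntax):

        # Subset an aerosol variable over the Congo region, then aggregate over time
        # and plot the result. Variable and file names are assumptions.
        cis subset od550aer:model_output.nc x=[10,35],y=[-10,10] -o congo_subset
        cis aggregate od550aer:congo_subset.nc t -o time_mean
        cis plot od550aer:time_mean.nc -o aot_map.png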

  9. Emerging Cyber Infrastructure for NASA's Large-Scale Climate Data Analytics

    Science.gov (United States)

    Duffy, D.; Spear, C.; Bowen, M. K.; Thompson, J. H.; Hu, F.; Yang, C. P.; Pierce, D.

    2016-12-01

    The resolution of NASA climate and weather simulations has grown dramatically over the past few years, with the highest-fidelity models reaching down to 1.5 km global resolution. With each doubling of the resolution, the resulting data sets grow by a factor of eight in size. As the climate and weather models push the envelope even further, a new infrastructure to store data and provide large-scale data analytics is necessary. The NASA Center for Climate Simulation (NCCS) has deployed the Data Analytics Storage Service (DASS), which combines scalable storage with the ability to perform in-situ analytics. Within this system, large, commonly used data sets are stored in a POSIX file system (write once/read many); examples of data stored include Landsat, MERRA2, observing system simulation experiments, and high-resolution downscaled reanalyses. The total size of this repository is on the order of 15 petabytes. In addition to the POSIX file system, the NCCS has deployed file system connectors to enable emerging analytics built on top of the Hadoop Distributed File System (HDFS) to run on the same storage servers within the DASS. Coupled with a custom spatiotemporal indexing approach, users can now run emerging analytical operations built on MapReduce and Spark on the same data files stored within the POSIX file system without having to make additional copies. This presentation will discuss the architecture of this system and present benchmark performance measurements, from traditional TeraSort and Wordcount to large-scale climate analytical operations on NetCDF data.

  10. Global Precipitation Measurement (GPM) Mission Products and Services at the NASA Goddard Earth Sciences Data and Information Services Center (GES DISC)

    Science.gov (United States)

    Liu, Z.; Ostrenga, D.; Vollmer, B.; Kempler, S.; Deshong, B.; Greene, M.

    2015-01-01

    The NASA Goddard Earth Sciences (GES) Data and Information Services Center (DISC) hosts and distributes GPM data within the NASA Earth Observation System Data Information System (EOSDIS). The GES DISC is also home to the data archive for the GPM predecessor, the Tropical Rainfall Measuring Mission (TRMM). Over the past 17 years, the GES DISC has served the scientific as well as other communities with TRMM data and user-friendly services. During the GPM era, the GES DISC will continue to provide user-friendly data services and customer support to users around the world. GPM products currently or soon to be available include:
    - Level-1 GPM Microwave Imager (GMI) and partner radiometer products, and DPR products
    - Level-2 Goddard Profiling Algorithm (GPROF) GMI and partner products, and DPR products
    - Level-3 daily and monthly products, and DPR products
    - Integrated Multi-satellitE Retrievals for GPM (IMERG) products (early, late, and final)
    A dedicated Web portal (including user guides, etc.) has been developed for GPM data (http://disc.sci.gsfc.nasa.gov/gpm). Data services currently or soon to be available include the Google-like Mirador (http://mirador.gsfc.nasa.gov/) for data search and access; data access through various Web services (e.g., OPeNDAP, GDS, WMS, WCS); conversion into various formats (e.g., netCDF, HDF, KML (for Google Earth), ASCII); exploration, visualization, and statistical online analysis through Giovanni (http://giovanni.gsfc.nasa.gov); generation of value-added products; parameter and spatial subsetting; time aggregation; regridding; data version control and provenance; documentation; science support for proper data usage, FAQ, and help desk; and monitoring services (e.g., Current Conditions) for applications. The Unified User Interface (UUI) is the next step in the evolution of the GES DISC web site. It attempts to provide seamless access to data, information, and services through a single interface without sending the user to different applications or URLs (e.g., search, access

  11. Environmental Data Store (EDS): A multi-node Data Storage Facility for diverse sets of Geoscience Data

    Science.gov (United States)

    Piasecki, M.; Ji, P.

    2014-12-01

    Geoscience data come in many flavors determined by the type of data: continuous data on a grid or mesh, or discrete data collected at points, either as one-time samples or as a stream coming off sensors; they can also encompass digital files of any type, such as text files, WORD or EXCEL documents, or audio and video files. We present a storage facility that is comprised of six nodes, each specialized to host a certain data type: grid-based data (netCDF on a THREDDS server), GIS data (shapefiles using GeoServer), point time series data (CUAHSI ODM), sample data (EDBS), and any digital data (RAMADDA), plus a server for remote sensing data and its products. While there is overlap in data type storage capabilities (rasters can go into several of these nodes), we prefer to use dedicated storage facilities that a) are freeware, b) have a good degree of maturity, and c) have shown their utility for storing a certain type. In addition, this allows us to place these commonly used software stacks and storage solutions side-by-side to develop interoperability strategies. We have used a DRUPAL-based system to handle user registration and authentication, and also use the system for data submission and data search. In support of this system we developed an extensive controlled vocabulary system that is an amalgamation of various CVs used in the geoscience community, in order to achieve as high a degree of recognition as possible, such as the CF conventions, CUAHSI CVs, NASA (GCMD), EPA and USGS taxonomies, and GEMET, in addition to ontological representations such as SWEET.

  12. Polar2Grid 2.0: Reprojecting Satellite Data Made Easy

    Science.gov (United States)

    Hoese, D.; Strabala, K.

    2015-12-01

    Polar-orbiting multi-band meteorological sensors such as those on the Suomi National Polar-orbiting Partnership (SNPP) satellite pose substantial challenges for taking imagery the last mile to forecast offices, scientific analysis environments, and the general public. To do this quickly and easily, the Cooperative Institute for Meteorological Satellite Studies (CIMSS) at the University of Wisconsin has created an open-source, modular application system, Polar2Grid. This bundled solution automates tools for converting various satellite products, like those from VIIRS and MODIS, into a variety of output formats, including GeoTIFFs, AWIPS-compatible NetCDF files, and NinJo forecasting workstation compatible TIFF images. In addition to traditional visible and infrared imagery, Polar2Grid includes three perceptual enhancements for the VIIRS Day-Night Band (DNB), as well as providing the capability to create sharpened true color, sharpened false color, and user-defined RGB images. Polar2Grid performs conversions and projections in seconds on large swaths of data. Polar2Grid is currently providing VIIRS imagery over the Continental United States, as well as Alaska and Hawaii, from various Direct-Broadcast antennas to operational forecasters at the NOAA National Weather Service (NWS) offices in their AWIPS terminals, within minutes of an overpass of the Suomi NPP satellite. Three years after Polar2Grid development started, the Polar2Grid team is now releasing version 2.0 of the software: supporting more sensors, generating more products, and providing all of its features in an easy-to-use command-line interface.

  13. Active Storage with Analytics Capabilities and I/O Runtime System for Petascale Systems

    Energy Technology Data Exchange (ETDEWEB)

    Choudhary, Alok [Northwestern Univ., Evanston, IL (United States)

    2015-03-18

    Computational scientists must understand results from experimental, observational and computational simulation generated data to gain insights and perform knowledge discovery. As systems approach the petascale range, problems that were unimaginable a few years ago are within reach. With the increasing volume and complexity of data produced by ultra-scale simulations and high-throughput experiments, understanding the science is largely hampered by the lack of comprehensive I/O, storage, acceleration of data manipulation, analysis, and mining tools. Scientists require techniques, tools and infrastructure to facilitate better understanding of their data, in particular the ability to effectively perform complex data analysis, statistical analysis and knowledge discovery. The goal of this work is to enable more effective analysis of scientific datasets through the integration of enhancements in the I/O stack, from active storage support at the file system layer to the MPI-IO and high-level I/O library layers. We propose to provide software components to accelerate data analytics, mining, I/O, and knowledge discovery for large-scale scientific applications, thereby increasing the productivity of both scientists and the systems. Our approaches include 1) designing the interfaces in high-level I/O libraries, such as parallel netCDF, for applications to activate data mining operations at the lower I/O layers; 2) enhancing MPI-IO runtime systems to incorporate the functionality developed as part of the runtime system design; 3) developing parallel data mining programs as part of the runtime library for the server-side file system in the PVFS file system; and 4) prototyping an active storage cluster, which will utilize multicore CPUs, GPUs, and FPGAs to carry out the data mining workload.

  14. The Convergence of High Performance Computing and Large Scale Data Analytics

    Science.gov (United States)

    Duffy, D.; Bowen, M. K.; Thompson, J. H.; Yang, C. P.; Hu, F.; Wills, B.

    2015-12-01

    As the combination of remote sensing observations and model outputs has grown, scientists are increasingly burdened with both the necessity and the complexity of large-scale data analysis. Scientists are increasingly applying traditional high performance computing (HPC) solutions to solve their "Big Data" problems. While this approach has the benefit of limiting data movement, the HPC system is not optimized to run analytics, which can create problems that permeate throughout the HPC environment. To solve these issues and to alleviate some of the strain on the HPC environment, the NASA Center for Climate Simulation (NCCS) has created the Advanced Data Analytics Platform (ADAPT), which combines both HPC and cloud technologies to create an agile system designed for analytics. Large, commonly used data sets, such as Landsat, MODIS, MERRA, and NGA, are stored in this system in a write-once/read-many file system. High performance virtual machines are deployed and scaled according to the individual scientist's requirements, specifically for data analysis. On the software side, the NCCS and GMU are working with emerging commercial technologies and applying them to structured, binary scientific data in order to expose the data in new ways. Native NetCDF data are stored within a Hadoop Distributed File System (HDFS), enabling storage-proximal processing through MapReduce while continuing to provide accessibility of the data to traditional applications. Once the data are stored within HDFS, an additional indexing scheme is built on top of the data and placed into a relational database. This spatiotemporal index enables extremely fast mappings of queries to data locations to dramatically speed up analytics. These are some of the first steps toward a single unified platform that optimizes for both HPC and large-scale data analysis, and this presentation will elucidate the resulting and necessary exascale architectures required for future systems.
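
    An illustrative sketch (the schema is invented here, not NCCS's actual one) of how such a relational spatiotemporal index maps a query box to data locations before any file bytes are read:

        import sqlite3

        db = sqlite3.connect("st_index.db")
        db.execute("""CREATE TABLE IF NOT EXISTS chunks
                      (file TEXT, offset INTEGER, length INTEGER,
                       t0 REAL, t1 REAL, lat0 REAL, lat1 REAL, lon0 REAL, lon1 REAL)""")

        def locate(t0, t1, lat0, lat1, lon0, lon1):
            """Return (file, offset, length) rows intersecting the query box."""
            return db.execute(
                """SELECT file, offset, length FROM chunks
                   WHERE t1 >= ? AND t0 <= ? AND lat1 >= ? AND lat0 <= ?
                     AND lon1 >= ? AND lon0 <= ?""",
                (t0, t1, lat0, lat1, lon0, lon1)).fetchall()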

  15. Development of hi-resolution regional climate scenarios in Japan by statistical downscaling

    Science.gov (United States)

    Dairaku, K.

    2016-12-01

    Climate information and services for Impacts, Adaptation and Vulnerability (IAV) assessments are of great concern. To meet the needs of stakeholders such as local governments, a Japanese national project, the Social Implementation Program on Climate Change Adaptation Technology (SI-CAT), was launched in December 2015. It develops reliable technologies for near-term climate change prediction. Multi-model ensemble regional climate scenarios with 1 km horizontal grid spacing over Japan are developed using CMIP5 GCMs and a statistical downscaling method, to support various municipal adaptation measures appropriate for possible regional climate changes. A statistical downscaling method, Bias Correction Spatial Disaggregation (BCSD), is employed to develop regional climate scenarios based on five CMIP5 RCP8.5 GCMs (MIROC5, MRI-CGCM3, GFDL-CM3, CSIRO-Mk3-6-0, HadGEM2-ES) for the periods of historical climate (1970-2005) and near-future climate (2020-2055). Downscaled variables are monthly/daily precipitation and temperature. The file format is NetCDF4 (conforming to CF 1.6, with HDF5 compression). The developed regional climate scenarios will be expanded to meet stakeholder needs, and interface applications to access and download the data are under development. A statistical downscaling method does not necessarily represent well locally forced nonlinear phenomena and extreme events such as heavy rain and heavy snow. To complement the statistical method, a dynamical downscaling approach is also applied to some specific regions where stakeholders have identified needs. The added value of the statistical/dynamical downscaling methods compared with the parent GCMs is investigated.
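
    The bias-correction half of BCSD can be sketched as quantile mapping of model values onto an observed distribution; the numpy toy below illustrates the idea and is not the SI-CAT implementation.

        import numpy as np

        def quantile_map(model_hist, obs_hist, model_future):
            """Map future model values through model quantiles onto observed quantiles."""
            q = np.linspace(0, 1, 101)
            model_q = np.quantile(model_hist, q)        # model climatology quantiles
            obs_q = np.quantile(obs_hist, q)            # observed climatology quantiles
            ranks = np.interp(model_future, model_q, q) # value -> model-space quantile
            return np.interp(ranks, q, obs_q)           # quantile -> observed value

        rng = np.random.default_rng(0)
        corrected = quantile_map(rng.normal(1.5, 2.0, 1000),  # biased model "historical"
                                 rng.normal(0.0, 1.0, 1000),  # observations
                                 rng.normal(2.0, 2.0, 100))   # model "future"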

  16. Building the Petascale National Environmental Research Interoperability Data Platform (NERDIP): Minimizing the 'Trough of Disillusionment' and Accelerating Pathways to the 'Plateau of Productivity'

    Science.gov (United States)

    Wyborn, L. A.; Evans, B. J. K.

    2015-12-01

    The National Computational Infrastructure (NCI) at the Australian National University (ANU) has evolved to become Australia's peak computing centre for national computational and data-intensive Earth system science. More recently, NCI collocated 10 petabytes of 34 major national and international environmental, climate, Earth system, geophysics and astronomy data collections to create the National Environmental Research Interoperability Data Platform (NERDIP). Spatial scales of the collections range from global to local ultra-high resolution, whilst sizes range from 3 PB down to a few GB. The data are highly connected to both NCI HPC and cloud resources via low-latency internal networks with massive bandwidth. Now that the collections are collocated on a single data platform, the 'Hype' and expectations around potential use cases for the NERDIP are high. Issues that are not unexpected are emerging, such as access, licensing, ownership, and incompatible data standards. Many communities are standardised within their domain, but achieving true interdisciplinary science will require all communities to move towards open, interoperable data formats such as NetCDF4/HDF5; this transition will impact software using proprietary or non-open standards. But before we reach the 'Plateau of Productivity', there needs to be greater 'Enlightenment' of users, encouraging them to realise that this unprecedented Earth system science platform provides a rich mine of opportunities for discovery and innovation for a diverse range of both domain-specific and interdisciplinary investigations, including climate and weather research, impact analysis, environment, remote sensing and geophysics, and to develop new and innovative interdisciplinary use cases that will guide those architecting the system. This will help minimise the amplitude of the 'Trough of Disillusionment' and ensure greater productivity and uptake of the collections that make NERDIP unique in the next generation of data-intensive science.

  17. An open-access CMIP5 pattern library for temperature and precipitation: description and methodology

    Science.gov (United States)

    Lynch, Cary; Hartin, Corinne; Bond-Lamberty, Ben; Kravitz, Ben

    2017-05-01

    Pattern scaling is used to efficiently emulate general circulation models and explore uncertainty in climate projections under multiple forcing scenarios. Pattern scaling methods assume that local climate changes scale with a global mean temperature increase, allowing spatial patterns to be generated for multiple models for any future emission scenario. For uncertainty quantification and probabilistic statistical analysis, a library of patterns with descriptive statistics for each file would be beneficial, but such a library does not presently exist. Of the possible techniques used to generate patterns, the two most prominent are the delta and least squares regression methods. We explore the differences and statistical significance between patterns generated by each method and assess the performance of the generated patterns across methods and scenarios. Differences in patterns across seasons between methods and epochs were largest at high latitudes (60-90° N/S). Bias and mean errors between modeled and pattern-predicted output were smaller for the least squares regression method than for the delta method. Across scenarios, differences in the linear regression method patterns were more statistically significant, especially at high latitudes. We found that pattern generation methodologies were able to approximate the forced signal of change to within ≤ 0.5 °C, but the choice of pattern generation methodology for pattern scaling purposes should be informed by user goals and criteria. This paper describes our library of least squares regression patterns from all CMIP5 models for temperature and precipitation on an annual and sub-annual basis, along with the code used to generate these patterns. The dataset and netCDF data generation code are available at doi:10.5281/zenodo.495632.
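
    The least squares regression method amounts to regressing, at every grid cell, the local anomaly on global mean temperature change, so that local_change(x, t) is approximately pattern(x) * dT_global(t); a numpy sketch under assumed array shapes:

        import numpy as np

        def ls_pattern(field, t_global):
            """field: (ntime, nlat, nlon) anomalies; t_global: (ntime,) global-mean dT."""
            nt = field.shape[0]
            X = np.column_stack([np.ones(nt), t_global])    # intercept + slope design
            flat = field.reshape(nt, -1)                    # (ntime, ncell)
            coef, *_ = np.linalg.lstsq(X, flat, rcond=None) # per-cell least squares
            return coef[1].reshape(field.shape[1:])         # slopes = scaling pattern

        # e.g. pattern = ls_pattern(tas_anomalies, tas_global); units are K per K.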

  18. Comparative analysis of sea surface height from retracked SGDR data and GDR data of the Jason-2 altimetry satellite in 2011 (case study: the southern coast of Java)

    Directory of Open Access Journals (Sweden)

    Dewangga Eka Mahardian

    2015-02-01

    Indonesia is a maritime country, with about 70% of its territory consisting of ocean, so nearly all activities are influenced by sea conditions. Sea surface height (SSH) is the height of the sea surface above the reference ellipsoid. Studies of SSH are very important in Indonesia in order to obtain spatial information about the condition of its waters. A satellite system dedicated to observing ocean conditions has been developed: the Jason-2 altimetry satellite. The comparison of retracked sea surface heights from SGDR data with non-retracked sea surface heights from GDR data uses the Offset Center of Gravity (OCOG) retracking method; this method is used to obtain the retracked sea surface height. The netCDF data of the Jason-2 altimetry satellite were processed using the Radar Altimetry Toolbox (BRAT) and MATLAB software. Sea surface height data were processed per pass and per month for the year 2011. The 2011 results give onboard SSH values of 6.0430 m to 28.1084 m. Many factors cause high or low SSH in coastal areas: sea level, coastal morphology, and climate and weather. Of these factors, sea level can be derived directly from the altimetry satellite; onboard sea levels ranged from 5.7611 m to 28.2212 m. The results of the OCOG SSH retracking show that SSH values on satellite tracks running from land to sea (even passes) are larger than on tracks running from sea to land (odd passes), which is caused by the relatively large reflections and noise from land. In addition, the plotted OCOG SSH is still noisier than the onboard (non-retracked) SSH, but the retracked SSH has the advantage of covering coastal areas more extensively than the onboard SSH.

  19. A new CM SAF Solar Surface Radiation Climate Data Set derived from Meteosat Satellite Observations

    Science.gov (United States)

    Trentmann, J.; Mueller, R. W.; Pfeifroth, U.; Träger-Chatterjee, C.; Cremer, R.

    2014-12-01

    The incoming surface solar radiation has been defined as an essential climate variable by GCOS. It is mandatory to monitor this part of the earth's energy balance, and thus gain insight into the state and variability of the climate system. In addition, data sets of the surface solar radiation have received increased attention over recent years as an important source of information for the planning of solar energy applications. The EUMETSAT Satellite Application Facility on Climate Monitoring (CM SAF) is deriving surface solar radiation from geostationary and polar-orbiting satellite instruments. While CM SAF is focusing on the generation of high-quality long-term climate data records, data are also provided operationally with a short latency of 8 weeks. Here we present SARAH (Solar Surface Radiation Dataset - Heliosat), the new CM SAF solar surface radiation data set based on Meteosat satellite observations. SARAH provides instantaneous, daily- and monthly-averaged data of the effective cloud albedo (CAL), the direct normalized solar radiation (DNI) and the solar irradiance (SIS) from 1983 to 2013 for the full view of the Meteosat satellite (i.e., Europe, Africa, parts of South America, and the Atlantic Ocean). The data sets are generated with a high spatial resolution of 0.05 deg, allowing for detailed regional studies, and are available in netCDF format at no cost and without restrictions at www.cmsaf.eu. We provide an overview of the data sets, including a validation against reference measurements from the BSRN and GEBA surface station networks.

  20. The Earth System Grid Center for Enabling Technologies: Focusing Technologies on Climate Datasets and Resource Needs

    Energy Technology Data Exchange (ETDEWEB)

    Williams, Dean N. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2007-09-26

    This report discusses a project that used prototyping technology to access and analyze climate data. This project was initially funded under the DOE’s Next Generation Internet (NGI) program, with follow-on support from BER and the Mathematical, Information, and Computational Sciences (MICS) office. In this prototype, we developed Data Grid technologies for managing the movement and replication of large datasets, and applied these technologies in a practical setting (i.e., an ESG-enabled data browser based on current climate data analysis tools), achieving cross-country transfer rates of more than 500 Mb/s. Having demonstrated the potential for remotely accessing and analyzing climate data located at sites across the U.S., we won the “Hottest Infrastructure” award in the Network Challenge event. While the ESG I prototype project substantiated a proof of concept (“Turning Climate Datasets into Community Resources”), the SciDAC Earth System Grid (ESG) II project made this a reality. Our efforts targeted the development of metadata technologies (standard schema, XML metadata extraction based on netCDF, and a Metadata Catalog Service), security technologies (Web-based user registration and authentication, and community authorization), data transport technologies (GridFTP-enabled OPeNDAP-G for high-performance access, robust multiple file transport and integration with mass storage systems, and support for dataset aggregation and subsetting), as well as web portal technologies to provide interactive access to climate data holdings. At this point, the technology was in place and assembled, and ESG II was poised to make a substantial impact on the climate modelling community.

  1. Access to VIRTIS / Venus-Express post-operations data archive

    Science.gov (United States)

    Erard, Stéphane; Drossart, Pierre; Piccioni, Giuseppe; Henry, Florence; Politi, Romolo

    2016-10-01

    All data acquired during the Venus-Express mission are publicly available on ESA's Planetary Science Archive (PSA). The PSA itself is being redesigned to provide more comprehensive access to its content, and a new interface is expected to be ready in the coming months. However, alternative access to the VIRTIS/VEx dataset is also provided at the PI institutes as part of the Europlanet-2020 European programme. The VESPA user interface (http://vespa.obspm.fr) provides a query mechanism based on observational conditions and instrument parameters to select data cubes of interest in the PSA and to connect them to standard plotting and analysis tools. VESPA queries will also identify related data in other datasets responsive to this mechanism, e.g., contextual images or dynamic simulations of the atmosphere, including outcomes of the EuroVenus programme funded by the EU. A specific on-line spectral cube viewer has been developed at Paris Observatory (http://voplus.obspm.fr/apericubes/js9/demo.php). Alternative ways to access the VIRTIS data are being considered, including Python access to PDS3 data (https://github.com/VIRTIS-VEX/VIRTISpy) and distribution in NetCDF format on the IAPS website (http://planetcdf.iaps.inaf.it). In the near future, an extended data service will provide direct access to individual spectra on the basis of viewing angles, time, and location. The next step will be to distribute products derived from data analysis, such as surface and wind maps, atmospheric profiles, and movies of the polar vortices or the O2 emission on the night side. Such products will be accessed in a similar way, and will make VIRTIS results readily available for future Venus studies. Similar actions are being taken in the frame of Europlanet concerning atmospheric data from the Mars-Express mission and Cassini observations of Titan.

  2. GRRATS: A New Approach to Inland Altimetry Processing for Major World Rivers

    Science.gov (United States)

    Coss, S. P.

    2016-12-01

    Sutcliffe efficiency (NSE) values, and the median NSE value is 0.73. The median standard deviation of error (STDE) is 0.92 m. GRRATS will soon be publicly available in NetCDF format with CF compliant metadata.
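
    For reference, the Nash-Sutcliffe efficiency and standard deviation of error quoted here are conventionally computed as in the short Python sketch below (a generic formulation, not the GRRATS processing code):

        import numpy as np

        def nse(sim, obs):
            """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 is no better than the mean of obs."""
            sim, obs = np.asarray(sim), np.asarray(obs)
            return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

        def stde(sim, obs):
            """Standard deviation of the error, i.e., error spread with the bias removed."""
            return np.std(np.asarray(sim) - np.asarray(obs))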

  3. Cyberinfrastructure to support Real-time, End-to-End, High Resolution, Localized Forecasting

    Science.gov (United States)

    Ramamurthy, M. K.; Lindholm, D.; Baltzer, T.; Domenico, B.

    2004-12-01

    From natural disasters such as flooding and forest fires to man-made disasters such as toxic gas releases, the impact of weather-influenced severe events on society can be profound. Understanding, predicting, and mitigating such local, mesoscale events calls for a cyberinfrastructure to integrate multidisciplinary data, tools, and services as well as the capability to generate and use high resolution data (such as wind and precipitation) from localized models. The need for such end-to-end systems -- including data collection, distribution, integration, assimilation, regionalized mesoscale modeling, analysis, and visualization -- has been realized to some extent in many academic and quasi-operational environments, especially for atmospheric sciences data. However, many challenges still remain in the integration and synthesis of data from multiple sources and the development of interoperable data systems and services across those disciplines. Over the years, the Unidata Program Center has developed several tools that have either directly or indirectly facilitated these local modeling activities. For example, the community is using Unidata technologies such as the Internet Data Distribution (IDD) system, Local Data Manager (LDM), decoders, netCDF libraries, Thematic Realtime Environmental Distributed Data Services (THREDDS), and the Integrated Data Viewer (IDV) in their real-time prediction efforts. In essence, these technologies for data reception and processing, local and remote access, cataloging, and analysis and visualization, coupled with technologies from others in the community, are becoming the foundation of a cyberinfrastructure to support an end-to-end regional forecasting system. To build on these capabilities, the Unidata Program Center is pleased to be a significant contributor to the Linked Environments for Atmospheric Discovery (LEAD) project, an NSF-funded, large multi-institutional Information Technology Research effort. The goal of LEAD is to create an
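
    As a minimal illustration of the remote access these technologies enable, the netCDF library can open an OPeNDAP URL served by a THREDDS Data Server directly, provided it was built with DAP support; the URL and variable name below are placeholders, not actual Unidata endpoints.

        from netCDF4 import Dataset  # requires a netCDF-C build with OPeNDAP support

        # Placeholder OPeNDAP endpoint on a THREDDS Data Server.
        url = "https://thredds.example.edu/thredds/dodsC/forecasts/latest.nc"

        with Dataset(url) as ds:       # only the requested slices cross the network
            t2m = ds.variables["t2m"]  # hypothetical 2-m temperature variable
            print(t2m[0, :10, :10])    # first time step, 10x10 corner of the grid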

  4. ESA BRAT (Broadview Radar Altimetry Toolbox) and GUT (GOCE User Toolbox) toolboxes

    Science.gov (United States)

    Benveniste, J.; Ambrozio, A.; Restano, M.

    2016-12-01

    The Broadview Radar Altimetry Toolbox (BRAT) is a collection of tools designed to facilitate the processing of radar altimetry data from previous and current altimetry missions, including the upcoming Sentinel-3A L1 and L2 products. A tutorial is included providing plenty of use cases. BRAT's next release (4.0.0) is planned for September 2016. Based on community feedback, the frontend has been further improved and simplified, whereas the capability to use BRAT in conjunction with MATLAB/IDL or C/C++/Python/Fortran, allowing users to obtain the desired data while bypassing the data-formatting hassle, remains unchanged. Several kinds of computations can be done within BRAT involving the combination of data fields, which can be saved for future use, either by using embedded formulas, including those from oceanographic altimetry, or by implementing ad-hoc Python modules created by users to meet their needs. BRAT can also be used to quickly visualise data, or to translate data into other formats, e.g., from NetCDF to raster images. The GOCE User Toolbox (GUT) is a compilation of tools for the use and analysis of GOCE gravity field models. It facilitates using, viewing and post-processing GOCE L2 data and allows gravity field data, in conjunction and consistently with any other auxiliary data set, to be pre-processed by beginners in gravity field processing, for oceanographic and hydrologic as well as for solid earth applications at both regional and global scales. Hence, GUT facilitates the extensive use of data acquired during the GRACE and GOCE missions. In the current 3.0 version, GUT has been outfitted with a graphical user interface allowing users to visually program data processing workflows. Further enhancements aiming at facilitating the use of gradients, anisotropic diffusive filtering, and the computation of Bouguer and isostatic gravity anomalies have been introduced. Packaged with GUT is also GUT's VCM (Variance-Covariance Matrix) tool for analysing GOCE

  5. Web-based Data Visualization of the MGClimDeX Climate Model Output: An Integrated Perspective of Climate Change Impact on Natural Resources in Highly Vulnerable Regions.

    Science.gov (United States)

    Martinez-Rey, J.; Brockmann, P.; Cadule, P.; Nangini, C.

    2016-12-01

    Earth System Models allow us to understand the interactions between climate and biogeological processes. These models generate very large amounts of data, which are usually reduced to a small number of static figures shown in highly specialized scientific publications. However, the potential impacts of climate change demand a broader perspective on the ways in which climate model results of this kind are disseminated, particularly regarding the amount and variety of data and the target audience. This issue is particularly important for scientific projects that seek broad dissemination of their key results to different audiences. The MGClimDeX project, which assesses the climate change impact on Martinique in the Lesser Antilles, will provide tools and means to help the key stakeholders, responsible for addressing the critical social, economic, and environmental issues, to take the appropriate adaptation and mitigation measures in order to prevent future risks associated with climate variability and change and their impact on human activities. The MGClimDeX project will do so by using model output and data visualization techniques within the next year, showing the cross-connected impacts of climate change on various sectors (agriculture, forestry, ecosystems, water resources and fisheries). To address the challenge of representing large sets of model output data, we use back-end data processing and front-end web-based visualization techniques, going from conventional netCDF model output stored on hub servers to highly interactive, data-powered web visualizations in the browser. We use the well-known JavaScript library D3.js, extended with DC.js (a dimensional charting library for all the front-end interactive filtering), in combination with Bokeh, a Python library used to synthesize the data, all framed in the essential HTML+CSS scripts. The resulting websites exist as standalone information units or embedded into journals or scientific
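
    To make the back-end/front-end split concrete, the hedged Python sketch below reduces a netCDF model output to an area-mean time series and renders it with Bokeh as a standalone interactive HTML chart; the file name and the variable name tas are invented for illustration.

        import xarray as xr
        from bokeh.plotting import figure, output_file, save

        # Hypothetical MGClimDeX output file and variable name.
        ds = xr.open_dataset("mgclimdex_output.nc")
        ts = ds["tas"].mean(dim=("lat", "lon")).to_series()  # area-mean temperature

        p = figure(x_axis_type="datetime", title="Area-mean temperature",
                   x_axis_label="time", y_axis_label="K")
        p.line(ts.index, ts.values)

        output_file("timeseries.html")  # a standalone unit, embeddable in a web page
        save(p)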

  6. Web Services in Earth Science Data Systems: Realities of Brokering, Chaining and Federating Services

    Science.gov (United States)

    Burnett, M.; Seablom, M.; Pfister, R.

    2004-12-01

    The key to the next generation of science will be unlocked through synergy between resources held by many differing individuals and organizations. These resources include missions, information, data, models, algorithms and related operations/services. Key infrastructure technology can enable this synergy, allowing scientists to discover and utilize these resources in innovative and efficient ways. Currently, the process of invoking data operations without the use of service brokering capabilities can be laborious. Scientists must first find data of interest, locate appropriate service(s) (e.g., subsetting), then download, manage and execute the service on their own server before they can begin the actual science analysis or research. The vision of the future is that a scientist sits in front of a laptop computer thinking about a science problem. Scenario: A scientist in Illinois wants to study the re-vegetation process over recently active volcanoes and needs satellite images that are 1) mostly cloud-free, 2) exist in the green, red, and near-infrared portions of the spectrum, and 3) occur over specific, discrete regions of the Northern Hemisphere. The multi-instrument data need to be co-registered, re-projected, and delivered in NetCDF format. In this scenario, the user interacts with information about the data and services to identify the data resources needed to analyze the science question. Of the petabytes of data and hundreds of data services available, the scientist is able to rapidly get a comprehensive, seamless view tailored to her research and analyze the issue. It is not necessary for the scientist to know that behind the scenes exists an enterprise of seamless, distributed data and services that find the data and stream it to a series of chained services in physically disparate locations to apply data transformation algorithms that meet the user's specifications and ultimately deliver the data to the user's laptop. However, the scientist

  7. Long-Term Oceanographic Observations in Western Massachusetts Bay Offshore of Boston, Massachusetts: Data Report for 1989-2002

    Science.gov (United States)

    Butman, Bradford; Bothner, Michael H.; Alexander, P. Soupy; Lightsom, Frances L.; Martini, Marinna A.; Gutierrez, Benjamin T.; Strahle, William S.

    2004-01-01

    summary plots and statistics, and the data in NetCDF and ASCII format for the period December 1989 through December 2002 for Site A and October 1997 through December 2002 for Site B. The objective of this report is to make the data available in digital form and to provide summary plots and statistics to facilitate browsing of the long-term data set.

  8. Performance and quality assessment of the global ocean eddy-permitting physical reanalysis GLORYS2V4.

    Science.gov (United States)

    Garric, Gilles; Parent, Laurent; Greiner, Eric; Drévillon, Marie; Hamon, Mathieu; Lellouche, Jean-Michel; Régnier, Charly; Desportes, Charles; Le Galloudec, Olivier; Bricaud, Clement; Drillet, Yann; Hernandez, Fabrice; Le Traon, Pierre-Yves

    2017-04-01

    the variability of global heat and salt content and associated steric sea level in the last two decades. The dataset is available in NetCDF format, and the GLORYS2V4 best-analysis products are distributed on the CMEMS data portal.

  9. A big data approach for climate change indicators processing in the CLIP-C project

    Science.gov (United States)

    D'Anca, Alessandro; Conte, Laura; Palazzo, Cosimo; Fiore, Sandro; Aloisio, Giovanni

    2016-04-01

    interface. In order to address the challenges of the use cases, the implemented data analytics workflows include parallel data analysis, metadata management, virtual file system tasks, map generation, rolling of datasets, and import/export of datasets in NetCDF format. The use cases have been implemented on an 8-node HPC cluster (16 cores/node) of the Athena system available at the CMCC Supercomputing Centre. Benchmark results will also be presented during the talk.

  10. Easy research data handling with an OpenEarth DataLab for geo-monitoring research

    Science.gov (United States)

    Vanderfeesten, Maurice; van der Kuil, Annemiek; Prinčič, Alenka; den Heijer, Kees; Rombouts, Jeroen

    2015-04-01

    OpenEarth DataLab is an open-source-based collaboration and processing platform that enables streamlined research data management from raw data ingest and transformation to interoperable distribution. It enables geo-scientists to easily synchronise, share, compute and visualise the dynamic and most up-to-date research data, scripts and models in multi-stakeholder geo-monitoring programs. The DataLab was developed by the Research Data Services team of TU Delft Library and 3TU.Datacentrum together with coastal engineers of Delft University of Technology and Deltares. Based on the OpenEarth software stack, an environment has been developed to orchestrate numerous geo-related open source software components that can empower researchers and increase overall research quality by managing research data; enabling automatic and interoperable data workflows between all the components with track & trace; hit & run data transformation processing in a cloud infrastructure using MATLAB and Python; synchronisation of data and scripts (SVN); and much more. Transformed interoperable data products (KML, NetCDF, PostGIS) can be used by ready-made OpenEarth tools for further analyses and visualisation, and can be distributed via interoperable channels such as THREDDS (OPeNDAP) and GeoServer. An example of a successful application of the OpenEarth DataLab is the Sand Motor, an innovative method for coastal protection in the Netherlands. The Sand Motor is a huge volume of sand that has been applied along the coast to be spread naturally by wind, waves and currents. Different research disciplines are involved, concerned with: weather, waves and currents; sand distribution; water table and water quality; flora and fauna; recreation; and management. Researchers share and transform their data in the OpenEarth DataLab, which makes it possible to combine their data and to see the influence of different aspects of the coastal protection on their models. During the project the data are available only for the

  11. GNU Data Language (GDL) - a free and open-source implementation of IDL

    Science.gov (United States)

    Arabas, Sylwester; Schellens, Marc; Coulais, Alain; Gales, Joel; Messmer, Peter

    2010-05-01

    GNU Data Language (GDL) is developed with the aim of providing an open-source drop-in replacement for ITT VIS's Interactive Data Language (IDL). It is free software developed by an international team of volunteers led by Marc Schellens, the project's founder (a list of contributors is available on the project's website). The development is hosted on SourceForge, where GDL continuously ranks in the 99th percentile of most active projects. GDL with its library routines is designed as a tool for numerical data analysis and visualisation. Like its proprietary counterparts (IDL and PV-WAVE), GDL is used particularly in geosciences and astronomy. GDL is dynamically typed, vectorized, and has object-oriented programming capabilities. The library routines handle numerical calculations, data visualisation, signal/image processing, interaction with the host OS, and data input/output. GDL supports several data formats, such as netCDF, HDF4, HDF5, GRIB, PNG, TIFF, and DICOM. Graphical output is handled by X11, PostScript, SVG or z-buffer terminals, the last of which allows output to be saved in a variety of raster graphics formats. GDL is an incremental compiler with integrated debugging facilities. It is written in C++ using the ANTLR language-recognition framework. Most of the library routines are implemented as interfaces to open-source packages such as the GNU Scientific Library, PLplot, FFTW, ImageMagick, and others. GDL features a Python bridge (Python code can be called from GDL; GDL can be compiled as a Python module). Extensions to GDL can be written in C++, GDL, and Python. A number of open software libraries written in IDL, such as the NASA Astronomy Library, MPFIT, CMSVLIB and TeXtoIDL, are fully or partially functional under GDL. Packaged versions of GDL are available for several Linux distributions and Mac OS X. The source code compiles on some other UNIX systems, including BSD and OpenSolaris. The presentation will cover the current status of the project, the key

  12. Service architecture challenges in building the KNMI Data Centre

    Science.gov (United States)

    Som de Cerff, Wim; van de Vegte, John; Plieger, Maarten; de Vreede, Ernst; Sluiter, Raymond; Willem Noteboom, Jan; van der Neut, Ian; Verhoef, Hans; van Versendaal, Robert; van Binnendijk, Martin; Kalle, Henk; Knopper, Arthur; Calis, Gijs; Ha, Siu Siu; van Moosel, WIm; Klein Ikkink, Henk-Jan; Tosun, Tuncay

    2013-04-01

    combines Open Source software components (e.g., GeoNetwork, Magnolia, MongoDB, MySQL) with in-house built software (ADAGUC, NADC) and newly developed software. Challenges faced and solved are: How to deal with the different file formats used at KNMI (e.g., NetCDF, GRIB, BUFR, ASCII)? How to deal with the different metadata profiles while hiding this complexity from the user? How to incorporate the existing archives? And how should KDC operate as a node in several networks (WMO WIS, INSPIRE, Open Data)? In the presentation/poster we will describe what has been done for each of these challenges and how it is implemented in KDC.

  13. Implementing a Data Quality Strategy to Simplify Access to Data

    Science.gov (United States)

    Druken, K. A.; Trenham, C. E.; Evans, B. J. K.; Richards, C. J.; Wang, J.; Wyborn, L. A.

    2016-12-01

    To ensure seamless programmatic access for data analysis (including machine learning), standardization of both data and services is vital. At the Australian National Computational Infrastructure (NCI) we have developed a Data Quality Strategy (DQS) that currently provides processes for: (1) the consistency of data structures in the underlying High Performance Data (HPD) platform; (2) quality control through compliance with recognized community standards; and (3) data quality assurance through demonstrated functionality across common platforms, tools and services. NCI hosts one of Australia's largest repositories (10+ PBytes) of research data collections, spanning datasets from climate, coasts, oceans and geophysics through to astronomy, bioinformatics and the social sciences. A key challenge is the application of community-agreed data standards to the broad set of Earth systems and environmental data that are being used. Within these disciplines, data span a wide range of gridded, ungridded (i.e., line surveys, point clouds), and raster image types, as well as diverse coordinate reference projections and resolutions. By implementing our DQS we have seen progressive improvement in the quality of the datasets across the different subject domains, and through this, the ease with which users can programmatically access the data, either in situ or via web services. As part of its quality control procedures, NCI has developed a compliance checker based upon existing domain standards. The DQS also includes extensive functionality testing, which includes readability by commonly used libraries (e.g., netCDF, HDF, GDAL); accessibility by data servers (e.g., THREDDS, Hyrax, GeoServer); validation against scientific analysis and programming platforms (e.g., Python, MATLAB, QGIS); and visualization tools (e.g., ParaView, NASA Web World Wind). These tests ensure smooth interoperability between products and services as well as exposing unforeseen requirements and
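
    A toy version of such a compliance check, assuming CF-style attributes, might look like the sketch below; it is illustrative only, not NCI's actual checker.

        from netCDF4 import Dataset

        def check_cf_basics(path):
            """Report a few CF-style compliance problems in a netCDF file."""
            problems = []
            with Dataset(path) as ds:
                if "Conventions" not in ds.ncattrs():
                    problems.append("missing global 'Conventions' attribute")
                for name, var in ds.variables.items():
                    if "units" not in var.ncattrs():
                        problems.append(f"variable '{name}' has no 'units' attribute")
            return problems

        for issue in check_cf_basics("example.nc"):  # hypothetical file name
            print("FAIL:", issue)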

  14. Scientific Platform as a Service - Tools and solutions for efficient access to and analysis of oceanographic data

    Science.gov (United States)

    Vines, Aleksander; Hansen, Morten W.; Korosov, Anton

    2017-04-01

    Existing international and Norwegian infrastructure projects, e.g., NorDataNet, NMDC and NORMAP, provide open data access through the OPeNDAP protocol following the Climate and Forecast (CF) metadata conventions, designed to promote the processing and sharing of files created with the NetCDF application programming interface (API). This approach is now also being implemented in the Norwegian Sentinel Data Hub (satellittdata.no) to provide satellite EO data to the user community. Simultaneously with providing simplified and unified data access, these projects also seek to use and establish common standards for use and discovery metadata. This in turn allows the development of standardized tools for data search and (subset) streaming over the internet to perform actual scientific analysis. A combination of software tools, which we call a Scientific Platform as a Service (SPaaS), will take advantage of these opportunities to harmonize and streamline the search, retrieval and analysis of integrated satellite and auxiliary observations of the oceans in a seamless system. The SPaaS is a cloud solution for the integration of analysis tools with scientific datasets via an API. The core part of the SPaaS is a distributed metadata catalog storing granular metadata that describes the structure, location and content of available satellite, model, and in situ datasets. The analysis tools include software for visualization (also online), interactive in-depth analysis, and server-based processing chains. The API conveys search requests between system nodes (i.e., interactive and server tools) and provides easy access to the metadata catalog, data repositories, and the tools. The SPaaS components are integrated in virtual machines, whose provisioning and deployment are automated using existing state-of-the-art open-source tools (e.g., Vagrant, Ansible, Docker). The open-source code for the scientific tools and virtual machine configurations is under version control at https

  15. A Shallow Layer Approach for Geo-flow emplacement

    Science.gov (United States)

    Costa, A.; Folch, A.; Mecedonio, G.

    2009-04-01

    Geophysical flows such as lahars or lava flows severely threaten the communities located on or near volcano flanks. The risks and damage caused by the propagation of flows of this kind require a quantitative description of the phenomenon and reliable tools for forecasting their emplacement. Computational models are a valuable tool for planning risk mitigation countermeasures, such as human intervention to force flow diversion or artificial barriers, and allow for significant economic and social benefits. A FORTRAN 90 code based on a Shallow Layer Approach for Geo-flows (SLAG), for describing the transport and emplacement of diluted lahars, water and lava, was developed in both serial and parallel versions. Three rheological models, describing (i) viscous, (ii) turbulent, and (iii) dilatant flows respectively, were implemented in order to describe the transport of lavas, water and diluted lahars. The code was made user-friendly through interfaces that allow the user to easily define the problem and to extract and interpolate the topography of the simulation domain. Moreover, SLAG outputs can be written in GRD format (e.g., Surfer) or NetCDF format, or visualized directly in Google Earth. In SLAG the governing equations are treated using a Godunov splitting method following the George (2008) algorithm, based on a Riemann solver for the shallow water equations that decomposes an augmented state variable (depth, momentum, momentum flux, and bathymetry) into four propagating discontinuities or waves. For our application, the algorithm was generalized to solve the energy equation as well. To validate the code for simulating real geophysical flows, we performed a few simulations: the lava flow event of 3-4 January 1992 at Etna, the July 2001 Etna lava flows, the January 2002 Nyiragongo lava flows, and a few test cases simulating the transport of diluted lahars. Ref: George, D.L. (2008), Augmented Riemann Solvers for the Shallow Water Equations over Variable
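
    For context, the one-dimensional shallow-water system that Godunov-type solvers of this kind approximate can be written in the standard form below (with a bathymetry source term; SLAG additionally solves an energy equation and applies the rheological closures listed above):

        \partial_t h + \partial_x (h u) = 0,
        \partial_t (h u) + \partial_x \left( h u^2 + \tfrac{1}{2} g h^2 \right) = - g h \, \partial_x b,

    where h is the flow depth, u the depth-averaged velocity, g the gravitational acceleration, and b(x) the bottom topography.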

  16. Data Immersion for CCNY Undergraduate Summer Interns at the IEDA Geoinformatics Facility

    Science.gov (United States)

    Uribe, R.; Van Wert, T.; Alabi, T.

    2016-12-01

    National Science Foundation (NSF) funded programs that provide grants and resources to enhance undergraduate learning and provide a pathway to future career opportunities in the geosciences by increasing retention and broadening participation. In an increasingly digital world, geoinformatics and the importance of large-scale data storage and accessibility constitute a rapidly expanding field in the geosciences. The NSF-funded Interdisciplinary Earth Data Alliance (IEDA) - City College of New York (CCNY) summer internship program aims to provide diverse undergraduates from CCNY with data processing experience within the IEDA facility at Columbia University's Lamont-Doherty Earth Observatory (LDEO). CCNY interns worked alongside IEDA mentors and were immersed in the day-to-day operations of the IEDA facility. Skills necessary to work with geoscience data were developed throughout the internship, and participation with the broader cohort of Lamont summer interns was promoted. Summer lectures delivered by researchers at LDEO provided interns with cutting-edge geoscience content from experts across a wide range of fields in the Earth sciences. CCNY undergraduate interns undertook two data compilation projects. First, interns compiled LiDAR land elevation data to enhance the land-ocean base map used across IEDA map-based resources. For that, the interns downloaded and classified one- and three-meter resolution LiDAR topographic data from the USGS The National Map for the lower 48 states. Second, computer-derived regional and global seismic tomography models from the Incorporated Research Institutions for Seismology (IRIS) were compiled and processed for integration with GeoMapApp, a free mapping application developed at LDEO (www.geomapapp.org). Interns established a data processing workflow to extract tomographic depth slices from dozens of tomographic grids. Executing Linux commands and shell scripts, the native-format binary netCDF files were resampled and reformatted and compared to

  17. A High-Resolution 3D Weather Radar, MSG, and Lightning Sensor Observation Composite

    Science.gov (United States)

    Diederich, Malte; Senf, Fabian; Wapler, Kathrin; Simmer, Clemens

    2013-04-01

    Within the research group 'Object-based Analysis and SEamless prediction' (OASE) of the Hans Ertel Centre for Weather Research programme (HErZ), a data composite containing weather radar, lightning sensor, and Meteosat Second Generation observations is being developed for use in object-based weather analysis and nowcasting. At present, a 3D merging scheme combines measurements of the Bonn and Jülich dual-polarimetric weather radar systems (data provided by the TR32 and TERENO projects) into a 3-dimensional polar-stereographic volume grid, with 500 meters horizontal and 250 meters vertical resolution. The merging takes into account and compensates for various observational error sources, such as attenuation through hydrometeors, beam blockage through topography and buildings, minimum detectable signal as a function of noise threshold, non-hydrometeor echoes such as insects, and interference from other radar systems. In addition to this, the effect of convection during the radar's 5-minute volume scan pattern is mitigated through the calculation of advection vectors from subsequent scans and their use for advection correction when projecting the measurements into space for any desired timestamp. The Meteosat Second Generation rapid scan service provides a scan in 12 spectral visual and infrared wavelengths every 5 minutes over Germany and Europe. These scans, together with the derived microphysical cloud parameters, are projected onto the same polar-stereographic grid used for the radar data. Lightning counts from the LINET lightning sensor network are also provided for every 2D grid pixel. The combined 3D radar and 2D MSG/LINET data are stored in a fully documented netCDF file for every 5-minute interval, ready for tracking and object-based weather analysis. At the moment, the 3D data only cover the Bonn and Jülich area, but the algorithms are planned to be adapted to the newly conceived DWD polarimetric C-band 5-minute-interval volume scan strategy. An
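
    The advection correction mentioned above amounts to displacing each observation along the estimated motion field to a common analysis time; the following minimal Python sketch illustrates the idea for a locally constant motion vector (illustrative only, not the OASE composite code, and the sign convention depends on the grid orientation).

        import numpy as np
        from scipy.ndimage import shift

        def advect_to_timestamp(field, u, v, dt, dx=500.0):
            """Shift a 2-D radar field by (u, v) * dt, expressed in grid cells.

            u, v: advection speed in m/s (assumed constant here for simplicity);
            dt:   offset to the common analysis time in s;
            dx:   grid spacing in m (500 m in the composite described above).
            """
            rows = v * dt / dx  # displacement in grid cells, north-south direction
            cols = u * dt / dx  # displacement in grid cells, east-west direction
            return shift(field, (rows, cols), order=1, mode="nearest")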

  18. Latest developments for the IAGOS database: Interoperability and metadata

    Science.gov (United States)

    Boulanger, Damien; Gautron, Benoit; Thouret, Valérie; Schultz, Martin; van Velthoven, Peter; Broetz, Bjoern; Rauthe-Schöch, Armin; Brissebrat, Guillaume

    2014-05-01

    In-service Aircraft for a Global Observing System (IAGOS, http://www.iagos.org) aims at the provision of long-term, frequent, regular, accurate, and spatially resolved in situ observations of atmospheric composition. IAGOS observation systems are deployed on a fleet of commercial aircraft. The IAGOS database is an essential part of the global atmospheric monitoring network. Data access is handled by an open access policy based on the submission of research requests, which are reviewed by the PIs. Users can access the data through the following web sites: http://www.iagos.fr or http://www.pole-ether.fr, as the IAGOS database is part of the French atmospheric chemistry data centre ETHER (CNES and CNRS). The database is under continuous development and improvement. In the framework of the IGAS project (IAGOS for GMES/COPERNICUS Atmospheric Service), major achievements will be reached, such as metadata and format standardisation in order to interoperate with international portals and other databases, QA/QC procedures and traceability, CARIBIC (Civil Aircraft for the Regular Investigation of the Atmosphere Based on an Instrument Container) data integration within the central database, and real-time data transmission. IGAS work package 2 aims at providing the IAGOS data to users in a standardized format, including the necessary metadata and information on data processing, data quality and uncertainties. We are currently redefining and standardizing the IAGOS metadata for interoperable use within GMES/Copernicus. The metadata are compliant with the ISO 19115, INSPIRE and NetCDF-CF conventions. IAGOS data will be provided to users in NetCDF or NASA Ames format. We are also implementing interoperability between all the involved IAGOS data services, including the central IAGOS database, the former MOZAIC and CARIBIC databases, the DLR Aircraft Research database, and the Jülich WCS web application JOIN (Jülich OWS Interface), which combines model outputs with in situ data for

  19. Between a Map and a Data Rod

    Science.gov (United States)

    Teng, W. L.; Rui, H.; Strub, R. F.; Vollmer, B.

    2015-12-01

    A "Digital Divide" has long stood between how NASA and other satellite-derived data are typically archived (time-step arrays or "maps") and how hydrology and other point-time series oriented communities prefer to access those data. In essence, the desired method of data access is orthogonal to the way the data are archived. Our approach to bridging the Divide is part of a larger NASA-supported "data rods" project to enhance access to and use of NASA and other data by the Consortium of Universities for the Advancement of Hydrologic Science, Inc. (CUAHSI) Hydrologic Information System (HIS) and the larger hydrology community. Our main objective was to determine a way to reorganize data that is optimal for these communities. Two related objectives were to optimally reorganize data in a way that (1) is operational and fits in and leverages the existing Goddard Earth Sciences Data and Information Services Center (GES DISC) operational environment and (2) addresses the scaling up of data sets available as time series from those archived at the GES DISC to potentially include those from other Earth Observing System Data and Information System (EOSDIS) data archives. Through several prototype efforts and lessons learned, we arrived at a non-database solution that satisfied our objectives/constraints. We describe, in this presentation, how we implemented the operational production of pre-generated data rods and, considering the tradeoffs between length of time series (or number of time steps), resources needed, and performance, how we implemented the operational production of on-the-fly ("virtual") data rods. For the virtual data rods, we leveraged a number of existing resources, including the NASA Giovanni Cache and NetCDF Operators (NCO) and used data cubes processed in parallel. Our current benchmark performance for virtual generation of data rods is about a year's worth of time series for hourly data (~9,000 time steps) in ~90 seconds. Our approach is a specific

  20. Modern Era Retrospective-Analysis for Research and Applications (MERRA) Data and Services at the GES DISC

    Science.gov (United States)

    Berrick, Stephen W.; Shen, Suhung; Ostrenga, Dana

    2008-01-01

    keywords, location names or latitude/longitude box, and date/time, with responses within a few seconds. (2) Giovanni is a GES DISC-developed Web application that provides data visualization and analysis online. Giovanni features popular visualizations such as latitude-longitude maps, animations, cross sections, profiles, time series, etc., and some basic statistical analysis functions such as scatter plots and correlation coefficient maps. Users are able to download results in several different formats, including Google Earth. (3) On-the-fly parameter subsetting of data within a spatial/temporal window is provided through a simple select-and-click Web page. (4) MERRA data are also available via OPeNDAP and the GrADS Data Server (GDS), and can be converted to netCDF on the fly.

  1. Expanding the use of Scientific Data through Maps and Apps

    Science.gov (United States)

    Shrestha, S. R.; Zimble, D. A.; Herring, D.; Halpert, M.

    2014-12-01

    The importance of making scientific data more available cannot be overstated. There is a wealth of useful scientific data available, and demand for these data is only increasing; however, applying scientific data towards practical uses poses several technical challenges. These challenges can arise from difficulty in handling the data, due largely to (1) the complexity, variety and volume of scientific data and (2) applying and operating the techniques and tools needed to visualize and analyze the data. As a result, the combined knowledge required to take advantage of these data demands highly specialized skill sets that, in total, prevent scientific data from being used in more practical day-to-day decision making activities. While these challenges are daunting, information technologies do exist that can help mitigate some of these issues. Many organizations have for years been enjoying the benefits of modern service-oriented architectures (SOAs) for everyday enterprise tasks. We can use this approach to modernize how we share and access our scientific data, where much of the specialized tooling needed to handle and present scientific data can be automated and executed by servers, and done so in an appropriate way. We will discuss and show an approach for preparing file-based scientific data (e.g., GRIB, netCDF) for use in standards-based scientific web services. These scientific web services are able to encapsulate the logic needed to handle and describe scientific data through a variety of service types, including image, map, feature, and geoprocessing services, and their respective service methods. By combining these types of services and leveraging well-documented and modern web development APIs, we can afford to focus our attention on the design and development of user-friendly maps and apps. Our scenario will include developing online maps through these services by integrating various forecast data from the Climate Forecast System (CFSv2). This

  2. HadISD: a quality-controlled global synoptic report database for selected variables at long-term stations from 1973–2011

    Directory of Open Access Journals (Sweden)

    D. E. Parker

    2012-10-01

    Full Text Available This paper describes the creation of HadISD: an automatically quality-controlled synoptic resolution dataset of temperature, dewpoint temperature, sea-level pressure, wind speed, wind direction and cloud cover from global weather stations for 1973–2011. The full dataset consists of over 6000 stations, with 3427 long-term stations deemed to have sufficient sampling and quality for climate applications requiring sub-daily resolution. As with other surface datasets, coverage is heavily skewed towards Northern Hemisphere mid-latitudes. The dataset is constructed from a large pre-existing ASCII flatfile data bank that represents over a decade of substantial effort at data retrieval, reformatting and provision. These raw data have had varying levels of quality control applied to them by individual data providers. The work proceeded in several steps: merging stations with multiple reporting identifiers; reformatting to netCDF; quality control; and then filtering to form a final dataset. Particular attention has been paid to maintaining true extreme values where possible within an automated, objective process. Detailed validation has been performed on a subset of global stations and also on UK data using known extreme events to help finalise the QC tests. Further validation was performed on a selection of extreme events world-wide (Hurricane Katrina in 2005, the cold snap in Alaska in 1989 and heat waves in SE Australia in 2009). Some initial analyses are performed to illustrate the types of problems to which the final data could be applied. Although the filtering has removed the poorest station records, no attempt has been made to homogenise the data thus far, due to the complexity of retaining the true distribution of high-resolution data when applying adjustments. Hence non-climatic, time-varying errors may still exist in many of the individual station records and care is needed in inferring long-term trends from these data. This

  3. WMT: The CSDMS Web Modeling Tool

    Science.gov (United States)

    Piper, M.; Hutton, E. W. H.; Overeem, I.; Syvitski, J. P.

    2015-12-01

    The Community Surface Dynamics Modeling System (CSDMS) has a mission to enable model use and development for research in earth surface processes. CSDMS strives to expand the use of quantitative modeling techniques, promotes best practices in coding, and advocates for the use of open-source software. To streamline and standardize access to models, CSDMS has developed the Web Modeling Tool (WMT), a RESTful web application with a client-side graphical interface and a server-side database and API that allows users to build coupled surface dynamics models in a web browser on a personal computer or a mobile device, and run them in a high-performance computing (HPC) environment. With WMT, users can: design a model from a set of components; edit component parameters; save models to a web-accessible server; share saved models with the community; submit runs to an HPC system; and download simulation results. The WMT client is an Ajax application written in Java with GWT, which allows developers to employ object-oriented design principles and development tools such as Ant, Eclipse and JUnit. For deployment on the web, the GWT compiler translates Java code to optimized and obfuscated JavaScript. The WMT client is supported on Firefox, Chrome, Safari, and Internet Explorer. The WMT server, written in Python and SQLite, is a layered system, with each layer exposing a web service API: wmt-db, the database of component, model, and simulation metadata and output; wmt-api, which configures and connects components; and wmt-exe, which launches simulations on remote execution servers. The database server provides, as JSON-encoded messages, the metadata for users to couple model components, including descriptions of component exchange items, uses and provides ports, and input parameters. Execution servers are network-accessible computational resources, ranging from HPC systems to desktop computers, containing the CSDMS software stack for running a simulation. Once a simulation completes, its output, in NetCDF, is packaged
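
    Because the client-server exchange is JSON over HTTP, submitting a model configuration to such an API can be sketched as below; the endpoint, route, and payload fields are invented for illustration and do not reflect the actual WMT API.

        import requests

        # Hypothetical WMT-style endpoint and payload; the real API differs.
        api = "https://csdms.example.edu/wmt-api"
        model = {
            "name": "delta-progradation",
            "components": ["cem", "waves"],          # coupled component ids
            "parameters": {"run_duration": 3650.0},  # e.g., days
        }

        resp = requests.post(f"{api}/models/new", json=model, timeout=30)
        resp.raise_for_status()
        print(resp.json())  # e.g., the id under which the model was saved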

  4. Combining ESGF Data node with its complementary data services at the NASA Center for Climate Simulation

    Science.gov (United States)

    Shen, Y.; Carriere, L.; Nadeau, D.; Potter, G. L.; Peters, J.; Winter, E.; Cinquini, L.; Blodgett, D. L.; McInerney, M.

    2014-12-01

    time. Future work includes incorporating a Web Map Service (WMS), either when it is enabled in ESGF or through a standalone WMS viewer accessing the complementary server, to visualize ana4MIPs and obs4MIPs datasets, thereby providing georeferenced images of climate data stored as netCDF files. Use cases for each of these projects will be presented.

  5. Data Access Tools And Services At The Goddard Distributed Active Archive Center (GDAAC)

    Science.gov (United States)

    Pham, Long; Eng, Eunice; Sweatman, Paul

    2003-01-01

    As one of the largest providers of Earth Science data from the Earth Observing System, the GDAAC provides the latest data from the Moderate Resolution Imaging Spectroradiometer (MODIS), Atmospheric Infrared Sounder (AIRS), and Solar Radiation and Climate Experiment (SORCE) data products via the GDAAC's data pool (50 TB of disk cache). In order to make this huge volume of data more accessible to the public and science communities, the GDAAC offers multiple data access tools and services: the Open-source Project for a Network Data Access Protocol (OPeNDAP), the GrADS/DODS Server (GDS), the Live Access Server (LAS), the OpenGIS Web Map Server (WMS) and Near Archive Data Mining (NADM). The objective is to assist users in retrieving electronically a smaller, usable portion of data for further analysis. The OPeNDAP server, formerly known as the Distributed Oceanographic Data System (DODS), allows the user to retrieve data without worrying about the data format. OPeNDAP is capable of server-side subsetting of HDF, HDF-EOS, netCDF, JGOFS, ASCII, DSP, FITS and binary data formats. The GrADS/DODS server is capable of serving the same data formats as OPeNDAP. GDS has the additional feature of server-side analysis: users can analyze the data on the server, thereby decreasing the computational load on their own systems. The LAS is a flexible server that allows users to graphically visualize data on the fly, to request different file formats and to compare variables from distributed locations. Users of LAS have the option of using other available graphics viewers such as IDL, Matlab or GrADS. WMS is based on OPeNDAP for serving geospatial information. WMS supports the OpenGIS protocol to provide data in GIS-friendly formats for analysis and visualization. NADM is another access point to the GDAAC's data pool. NADM gives users the capability to use a browser to upload their C, FORTRAN or IDL algorithms, test the algorithms, and mine data in the data pool. With NADM, the GDAAC provides an

  6. The MMI Device Ontology: Enabling Sensor Integration

    Science.gov (United States)

    Rueda, C.; Galbraith, N.; Morris, R. A.; Bermudez, L. E.; Graybeal, J.; Arko, R. A.; Mmi Device Ontology Working Group

    2010-12-01

    .g., SensorML, NetCDF). These identifiers can be resolved through a web browser, or by other client applications via HTTP, against the MMI Ontology Registry and Repository (ORR), where the ontology is maintained. SPARQL-based query capabilities, enhanced with reasoning and supporting several output formats, allow diverse client applications to interact effectively with the semantic information associated with the device ontology. In this presentation we describe the process of developing the MMI Device Ontology and illustrate extensions and applications that demonstrate the benefits of adopting this semantic approach, including example queries involving inference. We also highlight the issues encountered and future work.
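
    As an illustration of the SPARQL-based query capability described here, a client might interrogate the ontology endpoint roughly as follows; the endpoint URL, namespace, and terms are placeholders rather than actual MMI ORR identifiers.

        from SPARQLWrapper import SPARQLWrapper, JSON

        # Placeholder endpoint; the ORR exposes its own SPARQL service URL.
        sparql = SPARQLWrapper("https://mmisw.example.org/sparql")
        sparql.setQuery("""
            PREFIX dev: <http://example.org/device#>   # hypothetical namespace
            SELECT ?sensor ?label WHERE {
                ?sensor a dev:Sensor ;
                        dev:hasLabel ?label .
            } LIMIT 10
        """)
        sparql.setReturnFormat(JSON)
        for row in sparql.query().convert()["results"]["bindings"]:
            print(row["sensor"]["value"], row["label"]["value"])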

  7. Integrating sea floor observatory data: the EMSO data infrastructure

    Science.gov (United States)

    Huber, Robert; Azzarone, Adriano; Carval, Thierry; Doumaz, Fawzi; Giovanetti, Gabriele; Marinaro, Giuditta; Rolin, Jean-Francois; Beranzoli, Laura; Waldmann, Christoph

    2013-04-01

    interoperability of the EMSO data infrastructure. Besides common standards for metadata exchange such as OpenSearch or OAI-PMH, EMSO has chosen to implement core standards of the Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) suite, such as the Catalogue Service for the Web (CS-W), the Sensor Observation Service (SOS) and Observations and Measurements (O&M). Further, strong integration efforts are currently being undertaken to harmonize data formats, e.g., NetCDF, as well as the ontologies and terminologies used. The presentation will also inform users about the discovery and visualization procedures for the EMSO data presently available.

  8. A central repository for gridded data in the MeteoSwiss Data Warehouse

    Science.gov (United States)

    Grueter, E.

    2010-09-01

    The significance of gridded data in meteorology and climatology has increased remarkably over recent years. New products of observing systems such as radars, improved interpolation techniques, spatial analyses and modeling procedures have already led to a proliferating amount of available grid data within the meteorological and climatological community. Since these products are generated by different systems, the formats in which they are delivered can vary considerably. To facilitate the combined use of different gridded data sets (e.g., rain accumulation based on radar data and interpolated rainfall based on observations), MeteoSwiss decided to incorporate them in one central data repository. It has been the strategy of MeteoSwiss over the last ten years to store and manage all of its data in a single central data platform - the MeteoSwiss Data Warehouse - which is completely metadata driven. After the integration of all kinds of historical and current surface data, the system was extended in 2009 to store different types of upper air data. The last release of this Data Warehouse project focuses on grid data to complete MeteoSwiss' data integration strategy. This release comprises both the storage of different types of gridded datasets, delivered in various data formats, in one single grid database, and its data management. From here, datasets which were originally created in different data formats (e.g., GIF and netCDF) can be exported in whatever format is supported by the system. This greatly facilitates the combined analysis of grid data originating from different data sources. Additionally, interfaces to other software packages such as R allow direct access to the grid database. Web applications are implemented to allow users to carry out predefined spatial analyses, such as spatial aggregation for a user-specified extent of the dataset. After evaluating different solutions, MeteoSwiss decided to implement its system using existing GIS

  9. Global Precipitation Measurement (GPM) Mission Products and Services at the NASA Goddard Earth Sciences (GES) Data and Information Services Center (DISC)

    Science.gov (United States)

    Liu, Zhong; Ostrenga, D.; Vollmer, B.; Deshong, B.; Greene, M.; Teng, W.; Kempler, S. J.

    2015-01-01

    On February 27, 2014, the NASA Global Precipitation Measurement (GPM) mission was launched to provide the next-generation global observations of rain and snow (http://pmm.nasa.gov/GPM). The GPM mission consists of an international network of satellites in which a GPM Core Observatory satellite carries both active and passive microwave instruments to measure precipitation and serve as a reference standard to unify precipitation measurements from a constellation of other research and operational satellites. The NASA Goddard Earth Sciences (GES) Data and Information Services Center (DISC) hosts and distributes GPM data within the NASA Earth Observation System Data Information System (EOSDIS). The GES DISC is home to the data archive for the GPM predecessor, the Tropical Rainfall Measuring Mission (TRMM). Over the past 16 years, the GES DISC has served the scientific as well as other communities with TRMM data and user-friendly services. During the GPM era, the GES DISC will continue to provide user-friendly data services and customer support to users around the world. GPM products currently and soon to be available include the following: 1. Level-1 GPM Microwave Imager (GMI) and partner radiometer products. 2. Goddard Profiling Algorithm (GPROF) GMI and partner products. 3. Integrated Multi-satellitE Retrievals for GPM (IMERG) products (early, late, and final). A dedicated Web portal (including user guides, etc.) has been developed for GPM data (http://disc.sci.gsfc.nasa.gov/gpm). Data services that are currently and soon to be available include the Google-like Mirador (http://mirador.gsfc.nasa.gov) for data search and access; data access through various Web services (e.g., OPeNDAP, GDS, WMS, WCS); conversion into various formats (e.g., netCDF, HDF, KML (for Google Earth), ASCII); exploration, visualization, and statistical online analysis through Giovanni (http://giovanni.gsfc.nasa.gov); generation of value-added products; parameter and spatial subsetting; time aggregation; regridding; data

  10. Improving Data Catalogs with Free and Open Source Software

    Science.gov (United States)

    Schweitzer, R.; Hankin, S.; O'Brien, K.

    2013-12-01

    The Global Earth Observation Integrated Data Environment (GEO-IDE) is NOAA's effort to successfully integrate data and information with partners in the national US-Global Earth Observation System (US-GEO) and the international Global Earth Observation System of Systems (GEOSS). As part of the GEO-IDE, the Unified Access Framework (UAF) is working to build momentum towards the goal of increased data integration and interoperability. The UAF project is moving towards this goal with an approach that includes leveraging well-known and widely used standards, as well as free and open source software. The UAF project shares the widely held conviction that the use of data standards is a key ingredient necessary to achieve interoperability. Many community-based consensus standards fail, though, due to poor compliance. Compliance problems emerge for many reasons: because the standards evolve through versions, because documentation is ambiguous, or because individual data providers find the standard inadequate as-is to meet their special needs. In addition, minimalist use of standards will lead to a compliant service, but one which is of low quality. In this presentation, we will discuss the UAF effort to build a catalog cleaning tool which is designed to crawl THREDDS catalogs, analyze the data available, and then build a 'clean' catalog of data which is standards compliant and has a uniform set of data access services available. These data services include, among others, OPeNDAP, the Web Coverage Service (WCS) and the Web Mapping Service (WMS). We will also discuss how we are utilizing free and open source software and services to crawl and analyze catalogs and to build the clean data catalog, as well as our efforts to help data providers improve their data catalogs. We'll discuss the use of open source software such as DataNucleus, Thematic Realtime Environmental Distributed Data Services (THREDDS), ncISO and the netCDF Java Common Data Model (CDM). We'll also demonstrate how we are

  11. Oceanotron, Scalable Server for Marine Observations

    Science.gov (United States)

    Loubrieu, T.; Bregent, S.; Blower, J. D.; Griffiths, G.

    2013-12-01

    Ifremer, the French marine institute, is deeply involved in data management for different ocean in-situ observation programs (ARGO, OceanSites, GOSUD, ...) and other European programs aiming at networking ocean in-situ observation data repositories (myOcean, seaDataNet, Emodnet). To capitalize on the effort of implementing advanced data dissemination services (visualization, download with subsetting) for these programs, and for water-column observation repositories in general, Ifremer decided in 2010 to develop the oceanotron server. Given the diversity of data repository formats (RDBMS, netCDF, ODV, ...) and the temperamental nature of the standard interoperability interface profiles (OGC/WMS, OGC/WFS, OGC/SOS, OPeNDAP, ...), the server is designed around plugins: StorageUnits, which read specific data repository formats (netCDF/OceanSites, RDBMS schema, ODV binary format), and FrontDesks, which receive external requests and send results for interoperable protocols (OGC/WMS, OGC/SOS, OPeNDAP). In between, a third type of plugin may be inserted: TransformationUnits, which enable ocean-business-related transformations of the features (for example, conversion of vertical coordinates from pressure in decibars to metres below the sea surface). The server is released under an open-source license so that partners can develop their own plugins. Within the MyOcean project, the University of Reading has plugged in a WMS implementation as an oceanotron FrontDesk. The modules are connected together by sharing the same information model for marine observations (or sampling features: vertical profiles, point series and trajectories), dataset metadata and queries. The shared information model is based on the OGC Observations & Measurements and Unidata Common Data Model initiatives. The model is implemented in Java (http://www.ifremer.fr/isi/oceanotron/javadoc/). This inner interoperability level makes it possible to capitalize on ocean business expertise in software development without being indentured to
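
    The plugin chain described here can be pictured as a small pipeline; the conceptual Python sketch below mirrors the StorageUnit/TransformationUnit/FrontDesk separation (oceanotron itself is implemented in Java, and these class bodies are only illustrative).

        class StorageUnit:
            """Reads one repository format and yields features (profiles, series, ...)."""
            def read(self, query):
                raise NotImplementedError

        class TransformationUnit:
            """Optional ocean-specific transformation, e.g., pressure to depth."""
            def apply(self, feature):
                return feature

        class FrontDesk:
            """Serves features through one interoperable protocol (WMS, SOS, OPeNDAP)."""
            def __init__(self, storage, transforms=()):
                self.storage, self.transforms = storage, transforms

            def handle(self, query):
                for feature in self.storage.read(query):
                    for t in self.transforms:
                        feature = t.apply(feature)
                    yield feature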

  12. Providing Data Access for Interdisciplinary Research

    Science.gov (United States)

    Hooper, R. P.; Couch, A.

    2012-12-01

    , such as gridded data, have standard storage formats (e.g., netCDF), but their native format is not convenient for water research. Some progress has been made to "transpose" these data sets from gridded data to a grid of virtual gages with time series. Such a format is more convenient for research of limited spatial extent through time. Advances in relational database structure now make it possible to serve very large data sets, such as radar-based precipitation grids, through HIS. Expanding the use of a standards-based services-oriented architecture will enable interdisciplinary research to proceed far more rapidly by putting data onto scientists' computers with a fraction of the effort previously required.

  13. Putting User Stories First: Experiences Adapting the Legacy Data Models and Information Architecture at NASA JPL's PO.DAAC to Accommodate the New Information Lifecycle Required by SWOT

    Science.gov (United States)

    McGibbney, L. J.; Hausman, J.; Laurencelle, J. C.; Toaz, R., Jr.; McAuley, J.; Freeborn, D. J.; Stoner, C.

    2016-12-01

    Standards such as CF Conventions, NetCDF, HDF and ISO Metadata, etc. Beyond SWOT… what choices were made so that the new PO.DAAC IA will be flexible enough and adequately designed such that future missions with even more advanced requirements can be accommodated within PO.DAAC.

  14. NCEI-TSG: A Global in situ Sea-surface Salinity and Temperature Database of Thermosalinograph (TSG) Observations

    Science.gov (United States)

    Zhang, H. M.; Wang, Z.; Boyer, T.; Bayler, E. J.; Biddle, M.; Baker-Yeboah, S.; Zhang, Y.

    2016-12-01

    The NOAA National Centers for Environmental Information (NCEI) has constructed a Global Thermosalinograph Database (NCEI-TSG) to facilitate access to these in situ sea-surface salinity and temperature measurements. This database provides a comprehensive set of quality-controlled in situ sea-surface salinity (SSS) and temperature (SST) measurements collected from over 200 vessels from 1989 to the present. Compared to other TSG datasets, these data have several advantages. 1) The NCEI-TSG is the world's most complete TSG dataset, containing all data from the different TSG data assembly centers, e.g. the Shipboard Automated Meteorological and Oceanographic System (SAMOS), Global Ocean Surface Underway Data (GOSUD) and the Atlantic Oceanographic and Meteorological Laboratory (AOML), with more historical data from NCEI's archive to be added. 2) When different versions of a dataset are available, the dataset with the highest resolution is always chosen. 3) All data are converted to a common NetCDF format, employing enhanced metadata following the Attribute Convention for Dataset Discovery (ACDD) and the Climate and Forecast (CF) conventions, to increase the overall quality and searchability of both the data and metadata. 4) All data are processed using the same 11-step quality control procedures and criteria and flagged using a two-level flag system to provide a well-organized, uniformly quality-controlled TSG dataset for the user community. The NCEI-TSG, a unique dataset for in situ sea-surface observations, serves as a significant resource for establishing match-ups with satellite SST and SSS observations for validation and comparisons. The NCEI-TSG database will significantly contribute to the in situ component of the NOAA Satellite SSS Quality Monitor (4SQM) project (under development). This dataset facilitates assessments of global SST and SSS variability and the analysis of patterns and trends at various regional and temporal scales, enabling new insights in climate
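    Converting heterogeneous TSG feeds into one convention-following netCDF layout is mostly a matter of writing consistent dimensions, variables and attributes. A minimal sketch using the netCDF4 library, with illustrative attribute values (the actual NCEI-TSG layout is not reproduced here):

```python
# Sketch: write an ACDD/CF-style netCDF file for TSG data (illustrative values only).
from netCDF4 import Dataset
import numpy as np

nc = Dataset("tsg_example.nc", "w")
nc.createDimension("time", 3)

t = nc.createVariable("time", "f8", ("time",))
t.units = "seconds since 1989-01-01 00:00:00"   # CF time convention
t.standard_name = "time"
t[:] = [0.0, 60.0, 120.0]

sst = nc.createVariable("sst", "f4", ("time",))
sst.standard_name = "sea_surface_temperature"   # CF standard name
sst.units = "degree_Celsius"
sst[:] = np.array([15.2, 15.3, 15.1], dtype="f4")

# ACDD global attributes support discovery/search across archives.
nc.title = "Example thermosalinograph transect"
nc.summary = "Illustrative SST record in a CF/ACDD-following layout."
nc.Conventions = "CF-1.6, ACDD-1.3"
nc.close()
```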

  15. Using R for analysing spatio-temporal datasets: a satellite-based precipitation case study

    Science.gov (United States)

    Zambrano-Bigiarini, Mauricio

    2017-04-01

    Increasing computer power and the availability of remote-sensing data measuring different environmental variables have led to unprecedented opportunities for Earth sciences in recent decades. However, dealing with hundreds or thousands of files, usually in different vector and raster formats and measured at different temporal frequencies, imposes high computational challenges when trying to take full advantage of all the available data. R is a language and environment for statistical computing and graphics which includes several functions for data manipulation, calculation and graphical display that are particularly well suited to Earth sciences. In this work I describe how R was used to exhaustively evaluate seven state-of-the-art satellite-based rainfall estimate (SRE) products (TMPA 3B42v7, CHIRPSv2, CMORPH, PERSIANN-CDR, PERSIANN-CCS-adj, MSWEPv1.1 and PGFv3) over the complex topography and diverse climatic gradients of Chile. First, built-in functions were used to automatically download the satellite images in different raster formats and spatial resolutions and to clip them to the Chilean spatial extent where necessary. Second, the raster package was used to read, plot, and conduct an exploratory data analysis on selected files of each SRE product, in order to detect unexpected problems (rotated spatial domains, order of variables in NetCDF files, etc.). Third, raster was used along with the hydroTSM package to aggregate SRE files into different temporal scales (daily, monthly, seasonal, annual). Finally, the hydroTSM and hydroGOF packages were used to carry out a point-to-pixel comparison between precipitation time series measured at 366 stations and the corresponding grid cell of each SRE. The modified Kling-Gupta index of model performance was used to identify possible sources of systematic errors in each SRE, while five categorical indices (PC, POD, FAR, ETS, fBIAS) were used to assess the ability of each SRE to correctly identify different precipitation intensities
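    For reference, the modified Kling-Gupta efficiency used for such point-to-pixel comparisons combines correlation, a bias ratio and a variability ratio. A compact NumPy version (following Kling et al., 2012; offered here only as an illustration, not the paper's actual code, which used hydroGOF in R) might look like:

```python
# Sketch: modified Kling-Gupta efficiency (KGE'), after Kling et al. (2012).
import numpy as np

def kge_prime(sim: np.ndarray, obs: np.ndarray) -> float:
    r = np.corrcoef(sim, obs)[0, 1]        # linear correlation
    beta = sim.mean() / obs.mean()         # bias ratio
    gamma = (sim.std() / sim.mean()) / (obs.std() / obs.mean())  # variability (CV) ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (beta - 1) ** 2 + (gamma - 1) ** 2)

obs = np.array([1.0, 3.0, 2.0, 4.0, 0.5])
sim = np.array([0.8, 2.9, 2.4, 3.6, 0.7])
print(round(kge_prime(sim, obs), 3))       # 1.0 would be a perfect match
```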

  16. The IAGOS Information System

    Science.gov (United States)

    Boulanger, Damien; Thouret, Valérie; Brissebrat, Guillaume

    2017-04-01

    IAGOS (In-service Aircraft for a Global Observing System) is a European Research Infrastructure which aims to provide long-term, regular and spatially resolved in situ observations of atmospheric composition. IAGOS observation systems are deployed on a fleet of commercial aircraft and measure aerosols, cloud particles, greenhouse gases, ozone, water vapor and nitrogen oxides from the surface to the lower stratosphere. The IAGOS database is an essential part of the global atmospheric monitoring network. It contains IAGOS-core data and IAGOS-CARIBIC (Civil Aircraft for the Regular Investigation of the Atmosphere Based on an Instrument Container) data. The IAGOS Data Portal (http://www.iagos.org) is part of the French atmospheric chemistry data center AERIS (http://www.aeris-data.fr). In 2016 the new IAGOS Data Portal was released. In addition to data download, the portal provides improved and new services such as download in NetCDF or NASA Ames formats and plotting tools (maps, time series, vertical profiles, etc.). New added-value products are, or will soon be, available through the portal: back trajectories, origin of air masses, co-location with satellite data, etc. Web services allow downloading of IAGOS metadata such as flight and airport information. Administration tools have been implemented for user management and instrument monitoring. A major improvement is the interoperability with international portals and other databases in order to improve IAGOS data discovery. In the frame of the IGAS project (IAGOS for the Copernicus Atmospheric Service), a data network has been set up. It is composed of three data centers: the IAGOS database in Toulouse, the HALO research aircraft database at DLR (https://halo-db.pa.op.dlr.de) and the CAMS (Copernicus Atmosphere Monitoring Service) data center in Jülich (http://join.iek.fz-juelich.de). The link with the CAMS data center, through the JOIN interface, allows to

  17. A WPS Based Architecture for Climate Data Analytic Services (CDAS) at NASA

    Science.gov (United States)

    Maxwell, T. P.; McInerney, M.; Duffy, D.; Carriere, L.; Potter, G. L.; Doutriaux, C.

    2015-12-01

    Faced with unprecedented growth in the Big Data domain of climate science, NASA has developed the Climate Data Analytic Services (CDAS) framework. This framework enables scientists to execute trusted and tested analysis operations in a high-performance environment close to the massive data stores at NASA. The data are accessed in standard (NetCDF, HDF, etc.) formats in a POSIX file system and processed using trusted climate data analysis tools (ESMF, CDAT, NCO, etc.). The framework is structured as a set of interacting modules allowing maximal flexibility in deployment choices. The current set of module managers includes: Staging Manager: runs the computation locally on the WPS server or remotely using tools such as Celery or SLURM. Compute Engine Manager: runs the computation serially or distributed over nodes using a parallelization framework such as Celery or Spark. Decomposition Manager: manages strategies for distributing the data over nodes. Data Manager: handles the import of domain data from long-term storage and manages the in-memory and disk-based caching architectures. Kernel Manager: a kernel is an encapsulated computational unit which executes a processor's compute task. Each kernel is implemented in Python exploiting existing analysis packages (e.g. CDAT) and is compatible with all CDAS compute engines and decompositions. CDAS services are accessed via a WPS API being developed in collaboration with the ESGF Compute Working Team to support server-side analytics for ESGF. The API can be executed using direct web service calls, a Python script or application, or a JavaScript-based web application. Client packages in Python or JavaScript contain everything needed to make CDAS requests. The CDAS architecture brings together the tools, data storage, and high-performance computing required for timely analysis of large-scale data sets, where the data resides, to ultimately produce societal benefits. It is currently deployed at NASA in support of the
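    WPS requests of the kind CDAS accepts can be issued with plain HTTP. The sketch below shows a generic OGC WPS 1.0.0 key-value-pair Execute call via the requests library; the server URL, process identifier and inputs are hypothetical, not the actual CDAS endpoint:

```python
# Sketch: generic OGC WPS 1.0.0 Execute request (hypothetical endpoint and process).
import requests

params = {
    "service": "WPS",
    "version": "1.0.0",
    "request": "Execute",
    "identifier": "average",                     # hypothetical kernel/process name
    "datainputs": "variable=tas;domain=global",  # hypothetical inputs
}
resp = requests.get("https://example.nasa.gov/wps", params=params, timeout=60)
print(resp.status_code, resp.text[:200])         # WPS returns an XML response document
```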

  18. Implementation of Web Processing Services (WPS) over IPSL Earth System Grid Federation (ESGF) node

    Science.gov (United States)

    Kadygrov, Nikolay; Denvil, Sebastien; Carenton, Nicolas; Levavasseur, Guillaume; Hempelmann, Nils; Ehbrecht, Carsten

    2016-04-01

    The Earth System Grid Federation (ESGF) aims to provide access to climate data for the international climate community. ESGF is a system of distributed and federated nodes that dynamically interact with each other. ESGF users may search and download climate data, geographically distributed around the world, from one common web interface and through a standardized API. With the continuous development of climate models and the beginning of the sixth phase of the Coupled Model Intercomparison Project (CMIP6), the amount of data available from ESGF will continuously increase during the next 5 years. IPSL holds a replica of the different global and regional climate model outputs, observations and reanalysis data (CMIP5, CORDEX, obs4MIPs, etc.) that are available on the IPSL ESGF node. In order to let scientists perform analysis of the models without downloading vast amounts of data, Web Processing Services (WPS) were installed at the IPSL compute node. The work is part of the CONVERGENCE project funded by the French National Research Agency (ANR). The PyWPS implementation of the Web Processing Service standard from the Open Geospatial Consortium (OGC) is used in the framework of the birdhouse software. Processes can be run remotely by users through a web-based WPS client or a command-line tool. All calculations are performed on the server side, close to the data. If the models/observations are not available at IPSL, they are downloaded and cached by a WPS process from the ESGF network using the synda tool. The outputs of the WPS processes are available for download as plots, tar archives or NetCDF files. We present the architecture of WPS at IPSL along with the processes for evaluation of model performance, on-site diagnostics and post-analysis processing of the model output, e.g.: - regridding/interpolation/aggregation - ocgis (OpenClimateGIS) based polygon subsetting of the data - average seasonal cycle, multimodel mean, multimodel mean bias - calculation of the

  19. Modern Era Retrospective-analysis for Research and Applications (MERRA) Data and Services at the GES DISC

    Science.gov (United States)

    Berrick, S. W.; Ostrenga, D.; Shen, S.

    2008-12-01

    provided through a simple "select and click" Web page. (4) MERRA data are also available via OPeNDAP, GrADS Data Server (GDS) and can be converted to netCDF "on the fly". Detailed MERRA data access information is available at the MDISC portal: http://disc.gsfc.nasa.gov/MDISC

  20. CM-DataONE: A Framework for collaborative analysis of climate model output

    Science.gov (United States)

    Xu, Hao; Bai, Yuqi; Li, Sha; Dong, Wenhao; Huang, Wenyu; Xu, Shiming; Lin, Yanluan; Wang, Bin

    2015-04-01

    CM-DataONE is a distributed collaborative analysis framework for climate model data which aims to break through the data access barriers of increasing file size and to accelerate the research process. As the data volume involved in projects such as the fifth Coupled Model Intercomparison Project (CMIP5) has reached petabytes, conventional methods for analysis and diagnosis of model outputs have become rather time-consuming and redundant. CM-DataONE is developed for data publishers and researchers from relevant areas. It enables easy access to distributed data and provides extensible analysis functions based on tools such as the NCAR Command Language, NetCDF Operators (NCO) and Climate Data Operators (CDO). CM-DataONE can be easily installed, configured, and maintained. The main web application has two separate parts which communicate with each other through HTTP-based APIs. The analytic server is designed to be installed in each data node while a data portal can be configured anywhere and connect to the nearest node. Functions such as data query, analytic task submission, status monitoring, visualization and product downloading are provided to end users by the data portal. Data conforming to the CMIP5 Model Output Format in each peer node can be scanned by the server and mapped to a global information database. A scheduler included in the server is responsible for task decomposition, distribution and consolidation. Analysis functions are always executed where the data are located. The analysis function package included in the server provides commonly used functions such as EOF analysis, trend analysis and time-series analysis. Functions are coupled with data by XML descriptions and can be easily extended. Various types of results can be obtained by users for further studies. This framework has significantly decreased the amount of data to be transmitted and improved efficiency in model intercomparison jobs by supporting online analysis and multi-node collaboration. To end users, data query is
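    Analysis frameworks of this kind typically shell out to tools like CDO or NCO next to the data. As a hedged illustration (file names hypothetical, and not CM-DataONE's actual dispatch code), running a CDO operator server-side from Python might look like:

```python
# Sketch: execute a climate-data operator where the data live (hypothetical file names).
import subprocess

# 'cdo timmean' computes the time mean of all fields in a netCDF file.
subprocess.run(
    ["cdo", "timmean", "tas_Amon_model_historical.nc", "tas_timmean.nc"],
    check=True,
)
```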

  1. Grid Data Management and Customer Demands at MeteoSwiss

    Science.gov (United States)

    Rigo, G.; Lukasczyk, Ch.

    2010-09-01

    Data grids constitute the required input form for a variety of applications. Therefore, customers increasingly expect climate services not only to provide measured data, but also grids of these with the required configurations on an operational basis. Currently, MeteoSwiss is establishing a production chain for delivering data grids by subscription directly from the data warehouse in order to meet the demand for precipitation data grids by governmental, business and science customers. The MeteoSwiss data warehouse runs on an Oracle database linked with an ArcGIS Standard edition geodatabase. The grids are produced by Unix-based software written in R called GRIDMCH, which extracts the station data from the data warehouse and stores the files in the file system. Scripts import the netCDF-4 files into the database via an FME interface. Currently daily and monthly deliveries of daily precipitation grids are available from MeteoSwiss with a spatial resolution of 2.2 km x 2.2 km. The daily deliveries are preliminary grids based on 100 measuring sites, whilst the monthly delivery of daily sums is calculated from about 430 stations. Crucial for uptake by customers is understanding of, and trust in, the new grid product. Customers, even when clearly stating needs that grid products can cover, require a certain lead time to develop applications making use of a particular grid. Therefore, early contact and continuous attention, as well as flexibility in adjusting the production process to fulfill emerging customer needs, are important during the introduction period. Gridding over complex terrain can lead to temporarily elevated uncertainties in certain areas depending on the weather situation and coverage of measurements. Therefore, careful guidance on quality and use, and the ability to communicate the uncertainties of gridded data, proved to be essential, especially to the business and science customers who require

  2. SWIFT2: Software for continuous ensemble short-term streamflow forecasting for use in research and operations

    Science.gov (United States)

    Perraud, Jean-Michel; Bennett, James C.; Bridgart, Robert; Robertson, David E.

    2016-04-01

    Research undertaken through the Water Information Research and Development Alliance (WIRADA) has laid the foundations for continuous deterministic and ensemble short-term forecasting services. One output of this research is the software Short-term Water Information Forecasting Tools version 2 (SWIFT2). SWIFT2 is developed for use in research on short-term streamflow forecasting techniques as well as operational forecasting services at the Australian Bureau of Meteorology. The variety of uses in research and operations requires a modular software system whose components can be arranged in applications that are fit for each particular purpose, without unnecessary software duplication. SWIFT2 modelling structures consist of sub-areas of hydrologic models, nodes and links with in-stream routing and reservoirs. While this modelling structure is customary, SWIFT2 is built from the ground up for computational and data-intensive applications such as the ensemble forecasts necessary for estimating the uncertainty in forecasts. Support for parallel computation on multiple processors or on a compute cluster is a primary use case. A convention is defined to store large multi-dimensional forecasting data and its metadata using the netCDF library. SWIFT2 is written in modern C++ with state-of-the-art software engineering techniques and practices. A salient technical feature is a well-defined application programming interface (API) to facilitate access from different applications and technologies. SWIFT2 is already seamlessly accessible on Windows and Linux via packages in R, Python, Matlab and .NET languages such as C# and F#. Command-line or graphical front-end applications are also feasible. This poster gives an overview of the technology stack, and illustrates the resulting features of SWIFT2 for users. Research and operational uses share the same common core C++ modelling shell for consistency, but augmented by different software modules suitable for each context. The
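    A netCDF convention for ensemble forecast series essentially fixes dimension names and ordering. The sketch below stores streamflow forecasts as (ens_member, lead_time, station); the dimension and variable names are hypothetical, not SWIFT2's actual convention:

```python
# Sketch: a possible netCDF layout for ensemble streamflow forecasts (hypothetical names).
from netCDF4 import Dataset
import numpy as np

nc = Dataset("ensemble_forecast.nc", "w")
nc.createDimension("ens_member", 10)
nc.createDimension("lead_time", 48)   # e.g. hourly steps
nc.createDimension("station", 3)

q = nc.createVariable("q_fcast", "f4", ("ens_member", "lead_time", "station"))
q.units = "m3 s-1"
q.long_name = "ensemble streamflow forecast"
q[:] = np.random.default_rng(0).gamma(2.0, 5.0, size=(10, 48, 3)).astype("f4")
nc.close()
```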

  3. Using the STOQS Web Application for Access to in situ Oceanographic Data

    Science.gov (United States)

    McCann, M. P.

    2012-12-01

    With the increasing measurement and sampling capabilities of autonomous oceanographic platforms (e.g. gliders, autonomous underwater vehicles, Wavegliders), the need to efficiently access and visualize the data they collect is growing. The Monterey Bay Aquarium Research Institute has designed and built the Spatial Temporal Oceanographic Query System (STOQS) specifically to address this issue. The need for STOQS arises from inefficiencies discovered when using CF-NetCDF point observation conventions for these data. The problem is that access efficiency decreases with decreasing dimension of CF-NetCDF data. For example, the Trajectory Common Data Model feature type has only one coordinate dimension, usually Time - positions of the trajectory (Depth, Latitude, Longitude) are stored as non-indexed record variables within the NetCDF file. If client software needs to access data between two depth values or from a bounded geographic area, then the whole data set must be read and the selection made within the client software. This is very inefficient. What is needed is a way to easily select data of interest from an archive given any number of spatial, temporal, or other constraints. Geospatial relational database technology provides this capability. The full STOQS application consists of a Postgres/PostGIS database, Mapserver, and Python-Django running on a server, and Web 2.0 technology (jQuery, OpenLayers, Twitter Bootstrap) running in a modern web browser. The web application provides faceted search capabilities allowing a user to quickly drill into the data of interest. Data selection can be constrained by spatial, temporal, and depth selections as well as by parameter value and platform name. The web application layer also provides a REST (Representational State Transfer) Application Programming Interface allowing tools such as the Matlab stoqstoolbox to retrieve data

  4. Medclic: the Mediterranean in one click

    Science.gov (United States)

    Troupin, Charles; Frontera, Biel; Sebastián, Kristian; Pau Beltran, Joan; Krietemeyer, Andreas; Gómara, Sonia; Gomila, Mikel; Escudier, Romain; Juza, Mélanie; Mourre, Baptiste; Garau, Angels; Cañellas, Tomeu; Tintoré, Joaquín

    2016-04-01

    "Medclic: the Mediterranean in one click" is a research and dissemination project focused on the scientific, technological and societal approaches of the Balearic Islands Coastal Observing and Forecasting System ({SOCIB}{www.socib.es}) in a collaboration with "la Caixa" Foundation. SOCIB aims at research excellence and the development of technology which enables progress toward the sustainable management of coastal and marine environments, providing solutions to meet the needs of society. Medclic goes one step forward and has two main goals: at the scientific level, to advance in establishing and understanding the mesoscale variability at the regional scale and its interaction, and thus improving the characterisation of the "oceanic weather" in the Mediterranean; at the outreach level: to bring SOCIB and the new paradigm of multi-platform observation in real time closer to society, through scientific outreach. SOCIB Data Centre is the core of the new multi-platform and real time oceanography and is responsible for directing the different stages of data management, ranging from data acquisition to its distribution and visualization through web applications. The system implemented relies on open source solutions and provides data in line with international standards and conventions (INSPIRE, netCDF Climate and Forecast, ldots). In addition, the Data Centre has implemented a REST web service, called Data Discovery. This service allows data generated by SOCIB to be integrated into applications developed by the Data Centre itself or by third parties, as it is the case with Medclic. Relying on this data distribution, the new web Medclic, www.medclic.es, constitutes an interactive scientific and educational area of communication that contributes to the rapprochement of the general public with the new marine and coastal observing technologies. Thanks to the Medclic web, data coming from new observing technologies in oceanography are available in real time and in one clic

  5. A Scalable Cloud Library Empowering Big Data Management, Diagnosis, and Visualization of Cloud-Resolving Models

    Science.gov (United States)

    Zhou, S.; Tao, W. K.; Li, X.; Matsui, T.; Sun, X. H.; Yang, X.

    2015-12-01

    A cloud-resolving model (CRM) is an atmospheric numerical model that can numerically resolve clouds and cloud systems at 0.25-5 km horizontal grid spacings. The main advantage of the CRM is that it allows explicit interactive processes between microphysics, radiation, turbulence, surface, and aerosols without subgrid cloud fraction, overlap, or convective parameterizations. Because of their fine resolution and complex physical processes, it is challenging for the CRM community to i) visualize/inter-compare CRM simulations, ii) diagnose key processes for cloud-precipitation formation and intensity, and iii) evaluate against NASA's field campaign data and L1/L2 satellite data products, due to the large data volume (~10 TB) and complexity of the CRM's physical processes. We have been building the Super Cloud Library (SCL) upon a Hadoop framework, capable of CRM database management, distribution, visualization, subsetting, and evaluation in a scalable way. The current SCL capability includes (1) an SCL data model that enables various CRM simulation outputs in NetCDF, including the NASA-Unified Weather Research and Forecasting (NU-WRF) and Goddard Cumulus Ensemble (GCE) models, to be accessed and processed by Hadoop, (2) a parallel NetCDF-to-CSV converter that supports NU-WRF and GCE model outputs, (3) a technique to visualize Hadoop-resident data with IDL, (4) a technique to subset Hadoop-resident data, compliant with the SCL data model, with Hive or Impala via HUE's web interface, (5) a prototype that enables a Hadoop MapReduce application to dynamically access and process data residing in a parallel file system, PVFS2 or CephFS, where high-performance computing (HPC) simulation outputs such as NU-WRF's and GCE's are located. We are testing Apache Spark to speed up SCL data processing and analysis. With the SCL capabilities, SCL users can conduct large-domain on-demand tasks without downloading voluminous CRM datasets and various observations from NASA Field Campaigns and Satellite data to a
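    A NetCDF-to-CSV converter boils down to flattening each variable's grid into rows; parallelism then comes from running one such task per file or per variable. A single-file sketch (file and variable names are assumptions, not the SCL's actual code):

```python
# Sketch: flatten one netCDF variable into CSV rows (hypothetical file/variable names).
import csv
from netCDF4 import Dataset

nc = Dataset("nuwrf_output.nc")
rain = nc.variables["rainfall"][:]   # assume dimensions (time, y, x)

with open("rainfall.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["time", "y", "x", "rainfall"])
    nt, ny, nx = rain.shape
    for t in range(nt):
        for j in range(ny):
            for i in range(nx):
                writer.writerow([t, j, i, float(rain[t, j, i])])
nc.close()
```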

  6. Community-Based Services that Facilitate Interoperability and Intercomparison of Precipitation Datasets from Multiple Sources

    Science.gov (United States)

    Liu, Zhong; Kempler, Steven; Teng, William; Leptoukh, Gregory; Ostrenga, Dana

    2010-01-01

    perform the complicated data access and match-up processes. In addition, PDISC tool and service capabilities being adapted for GPM data will be described, including the Google-like Mirador data search and access engine; semantic technology to help manage large amounts of multi-sensor data and their relationships; data access through various Web services (e.g., OPeNDAP, GDS, WMS, WCS); conversion to various formats (e.g., netCDF, HDF, KML (for Google Earth)); visualization and analysis of Level 2 data profiles and maps; parameter and spatial subsetting; time and temporal aggregation; regridding; data version control and provenance; continuous archive verification; and expertise in data-related standards and interoperability. The goal of providing these services is to further the progress towards a common framework by which data analysis/validation can be more easily accomplished.

  7. Distributed Computation Resources for Earth System Grid Federation (ESGF)

    Science.gov (United States)

    Duffy, D.; Doutriaux, C.; Williams, D. N.

    2014-12-01

    The Intergovernmental Panel on Climate Change (IPCC), prompted by the United Nations General Assembly, has published a series of papers in their Fifth Assessment Report (AR5) on processes, impacts, and mitigations of climate change in 2013. The science used in these reports was generated by an international group of domain experts. They studied various scenarios of climate change through the use of highly complex computer models to simulate the Earth's climate over long periods of time. The resulting total data of approximately five petabytes are stored in a distributed data grid known as the Earth System Grid Federation (ESGF). Through the ESGF, consumers of the data can find and download data with limited capabilities for server-side processing. The Sixth Assessment Report (AR6) is already in the planning stages and is estimated to create as much as two orders of magnitude more data than the AR5 distributed archive. It is clear that data analysis capabilities currently in use will be inadequate to allow for the necessary science to be done with AR6 data—the data will just be too big. A major paradigm shift from downloading data to local systems to perform data analytics must evolve to moving the analysis routines to the data and performing these computations on distributed platforms. In preparation for this need, the ESGF has started a Compute Working Team (CWT) to create solutions that allow users to perform distributed, high-performance data analytics on the AR6 data. The team will be designing and developing a general Application Programming Interface (API) to enable highly parallel, server-side processing throughout the ESGF data grid. This API will be integrated with multiple analysis and visualization tools, such as the Ultrascale Visualization Climate Data Analysis Tools (UV-CDAT), netCDF Operator (NCO), and others. This presentation will provide an update on the ESGF CWT's overall approach toward enabling the necessary storage proximal computational

  8. An open-source distributed mesoscale hydrologic model (mHM)

    Science.gov (United States)

    Samaniego, Luis; Kumar, Rohini; Zink, Matthias; Thober, Stephan; Mai, Juliane; Cuntz, Matthias; Schäfer, David; Schrön, Martin; Musuuza, Jude; Prykhodko, Vladyslav; Dalmasso, Giovanni; Attinger, Sabine; Spieler, Diana; Rakovec, Oldrich; Craven, John; Langenberg, Ben

    2014-05-01

    The mesoscale hydrological model (mHM) is based on numerical approximations of dominant hydrological processes that have been tested in various hydrological models such as HBV and VIC. In general, mHM simulates the following processes: canopy interception, snow accumulation and melting, soil moisture dynamics (n horizons), infiltration and surface runoff, evapotranspiration, subsurface storage and discharge generation, deep percolation and baseflow, and discharge attenuation and flood routing. The main characteristic of mHM is the treatment of the sub-grid variability of input variables and model parameters, which clearly distinguishes this model from existing precipitation-runoff models or land surface models. It uses a Multiscale Parameter Regionalization (MPR) to account for the sub-grid variability and to avoid continuous re-calibration. Effective model parameters are location- and time-dependent (e.g., soil porosity). They are estimated through upscaling operators that link sub-grid morphologic information (e.g., soil texture) with global transfer-function parameters, which, in turn, are found through multi-basin optimization. Global parameters estimated with the MPR technique are quasi-scale-invariant and guarantee flux matching across scales. mHM is open-source, written in standard Fortran 2003, fully modular, computationally efficient, and parallelized. It is portable to multiple platforms (Linux, OS X, Windows) and includes a number of algorithms for sensitivity analysis, analysis of parameter uncertainty (MCMC), and optimization (DDS, SA, SCE). All simulated state variables and outputs can be stored as netCDF files for further analysis and visualization. mHM has been evaluated in all major river basins in Germany and over 80 US and 250 European river basins. The model efficiency (NSE) during validation at proxy locations is on average greater than 0.6. In recent years, mHM has been used for a number of hydrologic applications such as
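    The MPR idea can be illustrated in a few lines: a transfer function maps sub-grid soil texture to an effective parameter via global coefficients, and an upscaling operator aggregates to the model grid. The functional form and coefficients below are hypothetical, purely to show the mechanics, not mHM's actual parameterization:

```python
# Sketch: Multiscale Parameter Regionalization mechanics (hypothetical transfer function).
import numpy as np

# Global transfer-function coefficients (would be found by multi-basin optimization).
B0, B1, B2 = 0.35, 0.002, -0.001

def porosity_subgrid(sand_pct: np.ndarray, clay_pct: np.ndarray) -> np.ndarray:
    """Transfer function: sub-grid soil texture -> sub-grid effective porosity."""
    return B0 + B1 * sand_pct + B2 * clay_pct

# Four sub-grid cells inside one model grid cell.
sand = np.array([60.0, 55.0, 70.0, 40.0])
clay = np.array([10.0, 20.0, 5.0, 30.0])

# Upscaling operator: here a simple arithmetic mean to the model grid cell.
effective_porosity = porosity_subgrid(sand, clay).mean()
print(round(float(effective_porosity), 3))
```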

  9. NCI HPC Scaling and Optimisation in Climate, Weather, Earth system science and the Geosciences

    Science.gov (United States)

    Evans, B. J. K.; Bermous, I.; Freeman, J.; Roberts, D. S.; Ward, M. L.; Yang, R.

    2016-12-01

    in a coordinated way. This coordination has involved user stakeholders, the model developer community, and dependent software libraries. For example, we have spent significant time characterising I/O scalability and improving the use of libraries such as NetCDF and HDF5.
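    Much of that I/O tuning comes down to how netCDF-4/HDF5 variables are chunked and compressed. A small sketch with the netCDF4 Python library (the sizes are illustrative, not NCI's actual settings):

```python
# Sketch: netCDF-4/HDF5 chunking and compression choices that affect I/O scalability.
from netCDF4 import Dataset
import numpy as np

nc = Dataset("tuned.nc", "w", format="NETCDF4")
nc.createDimension("time", None)   # unlimited
nc.createDimension("y", 1024)
nc.createDimension("x", 1024)

# Chunk shape matched to the dominant access pattern (here: whole 2-D slices per step);
# zlib compression trades CPU for smaller, often faster-to-read files.
v = nc.createVariable("field", "f4", ("time", "y", "x"),
                      chunksizes=(1, 1024, 1024), zlib=True, complevel=4)
v[0, :, :] = np.zeros((1024, 1024), dtype="f4")
nc.close()
```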

  10. Best Practices for Preparing Interoperable Geospatial Data

    Science.gov (United States)

    Wei, Y.; Santhana Vannan, S.; Cook, R. B.; Wilson, B. E.; Beaty, T. W.

    2010-12-01

    mechanisms to advertise, visualize, and distribute the standardized geospatial data. In this presentation, we summarize the lessons learned and the best practices for geospatial data standardization. The presentation will describe how diverse and historical data archived in the ORNL DAAC were converted into standard and non-proprietary formats; what tools were used to make the conversion; how spatial and temporal information is properly captured in a consistent manner; how to name a data file or a variable to make it both human-friendly and semantically interoperable; how the NetCDF file format and the CF convention can promote data usage in the ecosystem modeling user community; how those standardized geospatial data can be fed into OGC Web Services to support on-demand data visualization and access; and how the metadata should be collected and organized so that they can be discovered through standard catalog services.

  11. Development of an Oceanographic Data Archiving and Service System for the Korean Researchers

    Science.gov (United States)

    Kim, Sung Dae; Park, Hyuk Min; Baek, Sang Ho

    2014-05-01

    was used as the data retrieval service of the TS DB, which uses a GIS interface built with open-source GIS software. We also installed the Live Access Server developed by the US PMEL to serve the satellite netCDF data files; it supports on-the-fly visualization and an OPeNDAP (Open-source Project for a Network Data Access Protocol) service for remote connection and sub-setting of large data sets
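    OPeNDAP's appeal is exactly this remote sub-setting: a client fetches only the requested slice. A hedged sketch with xarray (the URL is a placeholder, not this system's actual endpoint):

```python
# Sketch: remote subsetting over OPeNDAP; only the selected slice is transferred.
import xarray as xr

url = "http://example.org/opendap/satellite_sst.nc"   # placeholder OPeNDAP endpoint
ds = xr.open_dataset(url)                             # reads metadata only at first
subset = ds["sst"].sel(lat=slice(30, 40), lon=slice(120, 130)).isel(time=0)
subset.to_netcdf("sst_subset.nc")                     # data pulled on demand here
```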

  12. SOCIB applications for oceanographic data management

    Science.gov (United States)

    Troupin, Charles; Pau Beltran, Joan; Frontera, Biel; Gómara, Sonia; Lora, Sebastian; March, David; Sebastian, Kristian; Tintoré, Joaquin

    2015-04-01

    The Balearic Islands Coastal Ocean Observing and Forecasting System (SOCIB, http://www.socib.es) is a multi-platform Marine Research Infrastructure that provides free, open and quality-controlled data from near-shore to the open sea. To collect the necessary data, the SOCIB system is made up of: a research vessel, a high-frequency (HF) radar system, weather stations, tide gauges, moorings, drifting buoys, ARGO profilers, and gliders (autonomous underwater vehicles). In addition, the system has recently begun incorporating oceanographic sensors attached to sea turtles. High-resolution numerical models provide forecasts for hydrodynamics (ROMS) and waves (SAPO). According to SOCIB principles, data have to be: discoverable and accessible; freely available; interoperable; quality-controlled and standardized. The Data Centre (DC) manages the different steps of data processing, including acquisition from SOCIB platforms (gliders, drifters, HF radar, ...), numerical models (hydrodynamics, waves, ...) or other data sources, and distribution through dedicated web and mobile applications with dynamic visualisation. The SOCIB DC constitutes an example of marine information systems within the framework of new coastal ocean observatories. In this work we present some of the applications developed for specific types of users, as well as the technologies used for their implementation: DAPP (Deployments application, http://apps.socib.es/dapp/), a web application to display information related to mobile platform trajectories. LW4NC2 (http://thredds.socib.es/lw4nc2), a web application for multidimensional (grid) data from NetCDF files (numerical models, HF radar). SACOSTA (http://gis.socib.es/sacosta), a viewer for cartographic data such as environmental sensitivity of the coastline. SEABOARD (http://seaboard.socib.es), a tool to disseminate SOCIB real-time data to different types of users. Smart-phone apps to access data, platform trajectories and forecasts in real

  13. Nimbus Satellite Data Rescue Project for Sea Ice Extent: Data Processing

    Science.gov (United States)

    Campbell, G. G.; Sandler, M.; Moses, J. F.; Gallaher, D. W.

    2011-12-01

    Early Nimbus satellites collected both visible and infrared observations of the Earth at high resolution. Nimbus I operated in September 1964. Nimbus II operated from April to November 1966 and Nimbus III operated from May 1969 to November 1969. We will discuss our procedures to recover these data into a modern digital archive useful for scientific analysis. The Advanced Vidicon Camera System (AVCS) data was transmitted as an analog signal proportional to the brightness detected by a video camera. This was archived on black-and-white film. At NSIDC we are scanning and digitizing the film images using equipment derived from the motion picture industry. The High Resolution Infrared Radiance data was originally recorded in 36-bit words on 7-track digital tapes. The HRIR data were recently recovered from the tapes and TAP (tape file format from 1966) files were placed in EOSDIS archives for online access. The most interesting parts of the recovery project were the additional processing required to rectify and navigate the raw digital files. One of the artifacts we needed to identify and remove was the fiducial marks representing latitude and longitude lines added to the film for users in the 1960s. The IR data recording inserted an artificial random jitter in the alignment of individual scan lines. We will describe our procedures to navigate, remap, detect noise and remove artifacts in the data. Beyond cleaning up the HRIR swath data or the AVCS picture data, we are remapping the data into standard grids for comparisons in time. A first run of all the Nimbus 2 HRIR data into EASE grids in NetCDF format has been completed. This turned up interesting problems with overlaps and missing data. Some of these processes require extensive computer resources and we have established methods for using the 'Elastic Compute Cloud' facility at Amazon.com to run the many processes in parallel. In addition we have set up procedures at the University of Colorado to monitor the ongoing
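    Swath-to-grid remapping of this kind is commonly done with nearest-neighbour resampling. A hedged sketch with the pyresample library; the grid definition, data and search radius are illustrative, not the project's actual EASE-Grid configuration:

```python
# Sketch: remap swath data to a regular grid by nearest neighbour (illustrative setup).
import numpy as np
from pyresample import geometry, kd_tree

# Fake swath geometry and data, one scanline after another.
lons = np.linspace(-80, -60, 500).reshape(50, 10)
lats = np.linspace(60, 80, 500).reshape(50, 10)
data = np.random.default_rng(0).random((50, 10))

swath = geometry.SwathDefinition(lons=lons, lats=lats)
grid = geometry.AreaDefinition(
    "ease_like", "Illustrative polar grid", "ease_like",
    {"proj": "laea", "lat_0": 90, "lon_0": 0, "units": "m"},
    100, 100, (-3000000, -3000000, 3000000, 3000000),
)
remapped = kd_tree.resample_nearest(swath, data, grid, radius_of_influence=50000)
print(remapped.shape)   # (100, 100)
```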

  14. Adding Data Management Services to Parallel File Systems

    Energy Technology Data Exchange (ETDEWEB)

    Brandt, Scott [Univ. of California, Santa Cruz, CA (United States)

    2015-03-04

    The objective of this project, called DAMASC for “Data Management in Scientific Computing”, is to coalesce data management with parallel file system management to present a declarative interface to scientists for managing, querying, and analyzing extremely large data sets efficiently and predictably. Managing extremely large data sets is a key challenge of exascale computing. The overhead, energy, and cost of moving massive volumes of data demand designs where computation is close to storage. In current architectures, compute/analysis clusters access data in a physically separate parallel file system and largely leave it to scientists to reduce data movement. Over the past decades the high-end computing community has adopted middleware with multiple layers of abstractions and specialized file formats such as NetCDF-4 and HDF5. These abstractions provide a limited set of high-level data processing functions, but have inherent functionality and performance limitations: middleware that provides access to the highly structured contents of scientific data files stored in the (unstructured) file systems can only optimize to the extent that file system interfaces permit; the highly structured formats of these files often impede native file system performance optimizations. We are developing Damasc, an enhanced high-performance file system with native rich data management services. Damasc will enable efficient queries and updates over files stored in their native byte-stream format while retaining the inherent performance of file system data storage, via declarative queries and updates over views of underlying files. Damasc has four key benefits for the development of data-intensive scientific code: (1) applications can use important data-management services, such as declarative queries, views, and provenance tracking, that are currently available only within database systems; (2) the use of these services becomes easier, as they are provided within a familiar file

  15. "One-Stop Shopping" for Ocean Remote-Sensing and Model Data

    Science.gov (United States)

    Li, P. Peggy; Vu, Quoc; Chao, Yi; Li, Zhi-Jin; Choi, Jei-Kook

    2006-01-01

    OurOcean Portal 2.0 (http://ourocean.jpl.nasa.gov) is a software system designed to enable users to easily gain access to ocean observation data, both remote-sensing and in-situ, configure and run an ocean model with observation data assimilated on a remote computer, and visualize both the observation data and the model outputs. At present, the observation data and models focus on the California coastal regions and Prince William Sound in Alaska. This system can be used to perform both real-time and retrospective analyses of remote-sensing data and model outputs. OurOcean Portal 2.0 incorporates state-of-the-art information technologies (IT) such as a MySQL database, a Java web server (Apache/Tomcat), the Live Access Server (LAS), interactive graphics with a Java applet on the client side and MatLab/GMT on the server side, and distributed computing. OurOcean currently serves over 20 real-time or historical ocean data products. The data are served as pre-generated plots or in their native data format. For some of the datasets, users can choose different plotting parameters and produce customized graphics. OurOcean also serves 3D ocean model outputs generated by ROMS (Regional Ocean Model System) using LAS. The Live Access Server (LAS) software, developed by the Pacific Marine Environmental Laboratory (PMEL) of the National Oceanic and Atmospheric Administration (NOAA), is a configurable web-server program designed to provide flexible access to geo-referenced scientific data. The model output can be viewed as plots of horizontal slices, depth profiles or time sequences, or can be downloaded as raw data in different data formats, such as NetCDF, ASCII, binary, etc. The interactive visualization is provided by the graphics software Ferret, also developed by PMEL. In addition, OurOcean allows users with minimal computing resources to configure and run an ocean model with data assimilation on a remote computer. Users may select the forcing input, the data to be assimilated, the

  16. Monthly Water Balance Model Hydrology Futures

    Science.gov (United States)

    Bock, Andy; Hay, Lauren E.; Markstrom, Steven; Atkinson, R. Dwight

    2016-01-01

    A monthly water balance model (MWBM) was driven with precipitation and temperature using a station-based dataset for current conditions (1950 to 2010) and selected statistically-downscaled general circulation models (GCMs) for current and future conditions (1950 to 2099) across the conterminous United States (CONUS) using hydrologic response units from the Geospatial Fabric for National Hydrologic Modeling (http://dx.doi.org/doi:10.5066/F7542KMD). Six MWBM output variables (actual evapotranspiration (AET), potential evapotranspiration (PET), runoff (RO), streamflow (STRM), soil moisture storage (SOIL), and snow water equivalent (SWE)) and the two MWBM input variables (atmospheric temperature (TAVE) and precipitation (PPT)) were summarized for hydrologic response units and aggregated at points of interest on a stream network. Results were then organized into the Monthly Water Balance Hydrology Futures database, an open-access database using netCDF format (http://cida-eros-mows1.er.usgs.gov/thredds/dodsC/nwb_pub/). Methods used to calibrate and parameterize the MWBM are detailed in the Hydrology and Earth System Sciences (HESS) paper "Parameter regionalization of a monthly water balance model for the conterminous United States" by Bock and others (2016). See the discussion paper link in the "Related External Resources" section for access. Supplemental data files related to the plots and data analysis in Bock and others (2016) can be found in the HESS-2015-325.zip folder in the "Attached Files" section. Detailed information on the files and data can be found in the ReadMe.txt contained within the zipped folder. Recommended citation of discussion paper: Bock, A.R., Hay, L.E., McCabe, G.J., Markstrom, S.L., and Atkinson, R.D., 2016, Parameter regionalization of a monthly water balance model for the conterminous United States: Hydrology and Earth System Sciences, v. 20, 2861-2876, doi:10.5194/hess-20-2861-2016

  17. Multiscale Hy3S: Hybrid stochastic simulation for supercomputers

    Directory of Open Access Journals (Sweden)

    Kaznessis Yiannis N

    2006-02-01

    Background: Stochastic simulation has become a useful tool for both studying natural biological systems and designing new synthetic ones. By capturing the intrinsic molecular fluctuations of "small" systems, these simulations produce a more accurate picture of single-cell dynamics, including interesting phenomena missed by deterministic methods, such as noise-induced oscillations and transitions between stable states. However, the computational cost of the original stochastic simulation algorithm can be high, motivating the use of hybrid stochastic methods. Hybrid stochastic methods partition the system into multiple subsets and describe each subset with a different representation, such as a jump Markov, Poisson, continuous Markov, or deterministic process. By applying valid approximations and self-consistently merging disparate descriptions, a method can be considerably faster while retaining accuracy. In this paper, we describe Hy3S, a collection of multiscale simulation programs. Results: Building on our previous work on developing novel hybrid stochastic algorithms, we have created the Hy3S software package to enable scientists and engineers to both study and design extremely large well-mixed biological systems with many thousands of reactions and chemical species. We have added adaptive stochastic numerical integrators to permit the robust simulation of dynamically stiff biological systems. In addition, Hy3S has many useful features, including embarrassingly parallelized simulations with MPI; special discrete events, such as transcriptional and translational elongation and cell division; mid-simulation perturbations in both the number of molecules of species and reaction kinetic parameters; combinatorial variation of both initial conditions and kinetic parameters to enable sensitivity analysis; use of the NetCDF optimized binary format to quickly read and write large datasets; and a simple graphical user interface, written in Matlab, to help users

  18. The Mars Analysis Correction Data Assimilation (MACDA): A reference atmospheric reanalysis

    Science.gov (United States)

    Montabone, Luca; Lewis, Stephen R.; Steele, Liam J.; Holmes, James; Read, Peter L.; Valeanu, Alexandru; Smith, Michael D.; Kass, David; Kleinboehl, Armin; LMD Team, MGS/TES Team, MRO/MCS Team

    2016-10-01

    The Mars Analysis Correction Data Assimilation (MACDA) dataset version 1.0 contains the reanalysis of fundamental atmospheric and surface variables for the planet Mars covering a period of about three Martian years (late MY 24 to early MY 27). This four-dimensional dataset has been produced by data assimilation of retrieved thermal profiles and column dust optical depths from NASA's Mars Global Surveyor/Thermal Emission Spectrometer (MGS/TES), which have been assimilated into a Mars global climate model (MGCM) using the Analysis Correction scheme developed at the UK Meteorological Office. The MACDA v1.0 reanalysis is publicly available, and the NetCDF files can be downloaded from the archive at the Centre for Environmental Data Analysis/British Atmospheric Data Centre (CEDA/BADC). The variables included in the dataset can be visualised using an ad-hoc graphical user interface (the "MACDA Plotter") located at the following URL: http://macdap.physics.ox.ac.uk/ The first paper about MACDA reanalysis of TES retrievals appeared in 2006, although the acronym MACDA was not yet used at that time. Ten years later, MACDA v1.0 has been used by several researchers worldwide and has contributed to the advancement of knowledge about the martian atmosphere in critical areas such as the radiative impact of water ice clouds, the solsticial pause in baroclinic wave activity, and the climatology and dynamics of polar vortices, to cite only a few. It is therefore timely to review the scientific results obtained by using such a Mars reference atmospheric reanalysis, in order to understand what priorities the user community should focus on in the next decade. MACDA is an ongoing collaborative project, and work funded by the NASA MDAP Programme is currently under way to produce version 2.0 of the Mars atmospheric reanalysis. One of the key improvements is the extension of the reanalysis period to nine martian years (MY 24 through MY 32), with the assimilation of NASA's Mars Reconnaissance

  19. Incorporating Brokers within Collaboration Environments

    Science.gov (United States)

    Rajasekar, A.; Moore, R.; de Torcy, A.

    2013-12-01

    A collaboration environment, such as the integrated Rule Oriented Data System (iRODS - http://irods.diceresearch.org), provides interoperability mechanisms for accessing storage systems, authentication systems, messaging systems, information catalogs, networks, and policy engines from a wide variety of clients. The interoperability mechanisms function as brokers, translating actions requested by clients to the protocol required by a specific technology. The iRODS data grid is used to enable collaborative research within hydrology, seismology, earth science, climate, oceanography, plant biology, astronomy, physics, and genomics disciplines. Although each domain has unique resources, data formats, semantics, and protocols, the iRODS system provides a generic framework that is capable of managing collaborative research initiatives that span multiple disciplines. Each interoperability mechanism (broker) is linked to a name space that enables unified access across the heterogeneous systems. The collaboration environment provides not only support for brokers, but also support for virtualization of name spaces for users, files, collections, storage systems, metadata, and policies. The broker enables access to data or information in a remote system using the appropriate protocol, while the collaboration environment provides a uniform naming convention for accessing and manipulating each object. Within the NSF DataNet Federation Consortium project (http://www.datafed.org), three basic types of interoperability mechanisms have been identified and applied: 1) drivers for managing manipulation at the remote resource (such as data subsetting), 2) micro-services that execute the protocol required by the remote resource, and 3) policies for controlling the execution. For example, drivers have been written for manipulating NetCDF and HDF formatted files within THREDDS servers. Micro-services have been written that manage interactions with the CUAHSI data repository, the Data

  20. Brokering technologies to realize the hydrology scenario in NSF BCube

    Science.gov (United States)

    Boldrini, Enrico; Easton, Zachary; Fuka, Daniel; Pearlman, Jay; Nativi, Stefano

    2015-04-01

    In the National Science Foundation (NSF) BCube project, an international team composed of cyber infrastructure experts, geoscientists, social scientists and educators is working together to explore the use of brokering technologies, initially focusing on four domains: hydrology, oceans, polar, and weather. In the hydrology domain, environmental models are fundamental to understanding the behaviour of hydrological systems. A specific model usually requires datasets coming from different disciplines for its initialization (e.g. elevation models from Earth observation, weather data from the atmospheric sciences, etc.). Scientific datasets are usually available on heterogeneous publishing services, such as inventory and access services (e.g. OGC Web Coverage Service, THREDDS Data Server, etc.). Indeed, datasets are published according to different protocols; moreover, they usually come in different formats, resolutions, and Coordinate Reference Systems (CRSs): in short, different grid environments depending on the original data and the publishing service's processing capabilities. Scientists can thus be impeded by the burden of discovering, accessing and normalizing the desired datasets to the grid environment required by the model. These technological tasks of course divert scientists from their main scientific goals. The GI-axe brokering framework was tried out in a hydrology scenario where scientists needed to run a particular hydrological model with two different input datasets (digital elevation models): - the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) dataset, v.2. - the Shuttle Radar Topography Mission (SRTM) dataset, v.3. These datasets were published by means of Hyrax Server technology, which can provide NetCDF files at their original resolution and CRS. Scientists had their model running on ArcGIS, so the main goal was to import the datasets using the available ArcPy library and have EPSG:4326 with the same resolution grid as the

  1. Extending the GI Brokering Suite to Support New Interoperability Specifications

    Science.gov (United States)

    Boldrini, E.; Papeschi, F.; Santoro, M.; Nativi, S.

    2014-12-01

    The GI brokering suite provides the discovery, access, and semantic Brokers (i.e. GI-cat, GI-axe, GI-sem) that empower a Brokering framework for multi-disciplinary and multi-organizational interoperability. The GI suite has been successfully deployed in the framework of several programmes and initiatives, such as European Union funded projects, NSF BCube, and the intergovernmental coordinated effort Global Earth Observation System of Systems (GEOSS). Each GI suite Broker facilitates interoperability for a particular functionality (i.e. discovery, access, semantic extension) between a set of brokered resources published by autonomous providers (e.g. data repositories, web services, semantic assets) and a set of heterogeneous consumers (e.g. client applications, portals, apps). A wide set of data models, encoding formats, and service protocols are already supported by the GI suite, such as the ones defined by international standardizing organizations like OGC and ISO (e.g. WxS, CSW, SWE, GML, netCDF) and by Community specifications (e.g. THREDDS, OpenSearch, OPeNDAP, ESRI APIs). Using the GI suite, resources published by a particular Community or organization through their specific technology (e.g. OPeNDAP/netCDF) can be transparently discovered, accessed, and used by different Communities utilizing their preferred tools (e.g. a GIS visualizing WMS layers). Since Information Technology is a moving target, new standards and technologies continuously emerge and are adopted in the Earth Science context too. Therefore, the GI Brokering suite was conceived to be flexible and to accommodate new interoperability protocols and data models. For example, the GI suite has recently added support for widely used specifications introduced to implement Linked Data, the Semantic Web and specific community needs. Among others, these include: DCAT, an RDF vocabulary designed to facilitate interoperability between Web data catalogs. CKAN, a data management system for data distribution, particularly used by

  2. SciSpark: In-Memory Map-Reduce for Earth Science Algorithms

    Science.gov (United States)

    Ramirez, P.; Wilson, B. D.; Whitehall, K. D.; Palamuttam, R. S.; Mattmann, C. A.; Shah, S.; Goodman, A.; Burke, W.

    2016-12-01

    We are developing a lightning-fast Big Data technology called SciSpark based on Apache Spark under a NASA AIST grant (PI Mattmann). Spark implements the map-reduce paradigm for parallel computing on a cluster, but emphasizes in-memory computation, "spilling" to disk only as needed, and so outperforms the disk-based Apache Hadoop by 100x in memory and by 10x on disk. SciSpark extends Spark to support Earth Science use in three ways: efficient ingest of N-dimensional geo-located arrays (physical variables) from netCDF3/4, HDF4/5, and/or OPeNDAP URLs; array operations for dense arrays in Scala and Java using the ND4S/ND4J or Breeze libraries; operations to "split" datasets across a Spark cluster by time or space or both. For example, a decade-long time series of geo-variables can be split across time to enable parallel "speedups" of analysis by day, month, or season. Similarly, very high-resolution climate grids can be partitioned into spatial tiles for parallel operations across rows, columns, or blocks. In addition, using Spark's gateway into Python, PySpark, one can utilize the entire ecosystem of numpy, scipy, etc. Finally, SciSpark Notebooks provide a modern eNotebook technology in which Scala, Python, or Spark SQL code is entered into cells in the Notebook and executed on the cluster, with results, plots, or graph visualizations displayed in "live widgets". We have exercised SciSpark by implementing three complex Use Cases: discovery and evolution of Mesoscale Convective Complexes (MCCs) in storms, yielding a graph of connected components; PDF clustering of atmospheric state using parallel K-Means; and statistical "rollups" of geo-variables or model-to-obs. differences (i.e. mean, stddev, skewness, & kurtosis) by day, month, season, year, and multi-year. Geo-variables are ingested and split across the cluster using methods on the sciSparkContext object including netCDFVariables() for spatial decomposition and wholeNetCDFVariables() for time-series. The
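    The split-then-aggregate pattern described above maps naturally onto Spark's RDD API. A hedged PySpark sketch (the file list and variable name are hypothetical, and this uses plain netCDF4 rather than SciSpark's own ingest methods) that computes a per-file mean in parallel:

```python
# Sketch: parallel per-file reduction of netCDF variables with PySpark (hypothetical names).
from pyspark import SparkContext
from netCDF4 import Dataset
import numpy as np

def daily_mean(path):
    """Runs on an executor: open one day's file, reduce one variable to its mean."""
    with Dataset(path) as nc:
        return path, float(np.mean(nc.variables["tas"][:]))

sc = SparkContext(appName="scispark-sketch")
files = ["day001.nc", "day002.nc", "day003.nc"]   # hypothetical daily files
means = sc.parallelize(files).map(daily_mean).collect()
print(means)
sc.stop()
```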

  3. A proposed-standard format to represent and distribute tomographic models and other earth spatial data

    Science.gov (United States)

    Postpischl, L.; Morelli, A.; Danecek, P.

    2009-04-01

    ) computational meshes. This approach can exploit the capabilities of the web browser as a computing platform: a series of in-page quick tools for comparative analysis between models will be presented, as well as visualisation techniques for tomographic layers in Google Maps and Google Earth. We are working on tools for conversion into common scientific formats like netCDF, to allow easy visualisation in GEON-IDV or GMT.

  4. Implementing the HDF-EOS5 software library for data products in the UNAVCO InSAR archive

    Science.gov (United States)

    Baker, Scott; Meertens, Charles; Crosby, Christopher

    2017-04-01

    UNAVCO is a non-profit, university-governed consortium that operates the U.S. National Science Foundation (NSF) Geodesy Advancing Geosciences and EarthScope (GAGE) facility and provides operational support to the Western North America InSAR Consortium (WInSAR). The Seamless Synthetic Aperture Radar Archive (SSARA) is a seamless distributed access system for SAR data and higher-level data products. Under the NASA-funded SSARA project, a user-contributed InSAR archive for interferograms, time series, and other derived data products was developed at UNAVCO. The InSAR archive development has led to the adoption of the HDF-EOS5 data model, file format, and library. The HDF-EOS software library was designed to support NASA Earth Observing System (EOS) science data products and provides data structures for radar-geometry (Swath) and geocoded (Grid) data based on the HDF5 data model and file format provided by The HDF Group. HDF-EOS5 inherits the benefits of HDF5 (open-source software support, internal compression, portability, support for structured data, self-describing file metadata, enhanced performance, and XML support) and provides a way to standardize InSAR data products. Instrument- and datatype-independent services, such as subsetting, can be applied to files across a wide variety of data products through the same library interface. The library allows integration with GIS software packages such as ArcGIS and GDAL, conversion to other data formats like NetCDF and GeoTIFF, and is extensible with new data structures to support future requirements. UNAVCO maintains a GitHub repository that provides example software for creating data products from popular InSAR processing software packages like GMT5SAR and ISCE, as well as examples for reading and converting the data products into other formats. Digital object identifiers (DOIs) have been incorporated into the InSAR archive, allowing users to assign a permanent location for their processed result and easily reference the
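
    The GDAL integration mentioned above can be illustrated with a short Python sketch. The file name and subdataset path are hypothetical; the point is that GDAL exposes HDF5 fields as subdatasets and can translate them to GeoTIFF or netCDF.

        # Illustrative sketch (hypothetical file and subdataset names):
        # convert an HDF-EOS5 Grid field to GeoTIFF and netCDF via GDAL.
        from osgeo import gdal

        src = gdal.Open("insar_product.he5")
        # HDF5 files expose their fields as GDAL subdatasets.
        for name, desc in src.GetSubDatasets():
            print(name, "->", desc)

        grid = gdal.Open('HDF5:"insar_product.he5"://HDFEOS/GRIDS/unwrapped_phase')
        gdal.Translate("unwrapped_phase.tif", grid, format="GTiff")
        gdal.Translate("unwrapped_phase.nc", grid, format="netCDF")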

  5. MyOcean Central Information System - Achievements and Perspectives

    Science.gov (United States)

    Claverie, Vincent; Loubrieu, Thomas; Jolibois, Tony; de Dianous, Rémi; Blower, Jon; Romero, Laia; Griffiths, Guy

    2013-04-01

    Since 2009, MyOcean (http://www.myocean.eu) has been providing an operational service for forecasts, analyses, and expertise on ocean currents, temperature, salinity, sea level, primary ecosystems, and ice coverage. The production of observation and forecasting data is done by 42 Production Units (PU). Product download and visualisation are hosted by 25 Dissemination Units (DU). All these products and associated services are gathered in a single catalogue hiding the intricate distributed organization of PUs and DUs. Besides applying the INSPIRE directive and OGC recommendations, MyOcean faces specific technical choices and challenges. This presentation focuses on 3 specific issues met by MyOcean and relevant for many Spatial Data Infrastructures: user transaction accounting, large-volume download, and streamlining catalogue maintenance. Transaction accounting: set up powerful means to get detailed knowledge of system usage in order to subsequently improve the products (ocean observations, analysis and forecast datasets) and services (view, download) on offer. This subject drives the following ones: central authentication management for the distributed web service implementations (an add-on to the THREDDS Data Server for WMS and a NetCDF subsetting service, plus a specific FTP), and shared user management with co-funding projects. In addition to MyOcean, partner projects also need consolidated information about the use of the co-funded products; MyOcean therefore provides a central facility for user management, which grants users' rights to geographically distributed services and gathers transaction accounting history from these distributed services. Large-volume download: propose a user-friendly web interface to download large volumes of data (several gigabytes), as robust as basic FTP but intuitive and file/directory independent. This should rely on a web service drafting the INSPIRE to-be specification and OGC recommendations for download, taking into account that an FTP server is not friendly enough (need to know

  6. MyOcean Internal Information System (Dial-P)

    Science.gov (United States)

    Blanc, Frederique; Jolibois, Tony; Loubrieu, Thomas; Manzella, Giuseppe; Mazzetti, Paolo; Nativi, Stefano

    2010-05-01

    , trajectory, station, grid, etc., which will be implemented in netCDF format. SeaDataNet is recommending the ODV and NetCDF formats. Another problem related to data curation and interoperability is the possibility to use common vocabularies. Common vocabularies are developed in many international initiatives, such as GEMET (promoted by INSPIRE as a multilingual thesaurus), UNIDATA, SeaDataNet, and the Marine Metadata Initiative (MMI). MIS is considering the SeaDataNet vocabulary as a base for interoperability. Four layers of interoperability, at different abstraction levels, can be defined: - Technical/basic: this layer is implemented at each TAC or MFC through an internet connection and basic services for data transfer and browsing (e.g. FTP, HTTP, etc.). - Syntactic: allowing the interchange of metadata and protocol elements. This layer corresponds to the definition of a Core Metadata Set, the format of exchange/delivery for the data and associated metadata, and possible software. It is implemented by the DIAL-P logical interface (e.g. adoption of an INSPIRE-compliant metadata set and common data formats). - Functional/pragmatic: based on a common set of functional primitives or on a common set of service definitions. This layer refers to the definition of services based on Web services standards, and is likewise implemented by the DIAL-P logical interface (e.g. adoption of INSPIRE-compliant network services). - Semantic: allowing access to similar classes of objects and services across multiple sites, with multilingual content as one specific aspect. This layer corresponds to the MIS interface, terminology, and thesaurus. Given the above requirements, the proposed solution is a federation of systems, where the individual participants are self-contained autonomous systems, but together form a consistent wider picture. A mid-tier integration layer mediates between existing systems, adapting their data and service model schema to the MIS. The developed MIS is a read-only system, i.e. it does not allow

  7. OceanSITES format and Ocean Observatory Output harmonisation: past, present and future

    Science.gov (United States)

    Pagnani, Maureen; Galbraith, Nan; Diggs, Stephen; Lankhorst, Matthias; Hidas, Marton; Lampitt, Richard

    2015-04-01

    The Global Ocean Observing System (GOOS) initiative was launched in 1991 and was the first step in creating a global view of ocean observations. In 1999, oceanographers at the OceanObs conference envisioned a 'global system of Eulerian observatories', which evolved into the OceanSITES project. OceanSITES has been generously supported by individual oceanographic institutes and agencies across the globe, as well as by the WMO-IOC Joint Technical Commission for Oceanography and Marine Meteorology (under JCOMMOPS). The project is directed by the needs of research scientists, but has a strong data management component, with an international team developing content standards, metadata specifications, and NetCDF templates for many types of in situ oceanographic data. The OceanSITES NetCDF format specification is intended as a robust data exchange and archive format specifically for time-series observatory data from the deep ocean. First released in February 2006, it has evolved to build on and extend internationally recognised standards such as the Climate and Forecast (CF) standard, BODC vocabularies, ISO formats and vocabularies, and, in version 1.3, released in 2014, ACDD (the Attribute Convention for Dataset Discovery). The success of the OceanSITES format has inspired other observational groups, such as autonomous vehicles and ships of opportunity, to also use the format, and today it is fulfilling the original concept of providing a coherent set of data from Eulerian observatories. Data in the OceanSITES format are served by two Global Data Assembly Centres (GDACs): one at Coriolis, in France, at ftp://ftp.ifremer.fr/ifremer/oceansites/, and one at the US NDBC, at ftp://data.ndbc.noaa.gov/data/oceansites/. These two centres serve over 26,800 OceanSITES format data files from 93 moorings. The use of standardised and controlled features enables the files held at the OceanSITES GDACs to be electronically discoverable and ensures the widest access to the data. The Ocean
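
    A minimal netCDF4-python sketch of a CF/ACDD-style time-series file follows; the attribute values are illustrative only and do not reproduce the authoritative OceanSITES template.

        # Minimal sketch: a CF/ACDD-flavoured time-series file
        # (illustrative names and attributes, not the official template).
        import numpy as np
        from netCDF4 import Dataset

        with Dataset("OS_EXAMPLE_TS.nc", "w") as nc:
            # ACDD-style discovery metadata
            nc.title = "Example mooring temperature time series"
            nc.Conventions = "CF-1.6, ACDD-1.3"
            nc.summary = "Illustrative deep-ocean time-series record"

            nc.createDimension("TIME", None)      # unlimited record dimension
            time = nc.createVariable("TIME", "f8", ("TIME",))
            time.standard_name = "time"           # CF standard name
            time.units = "days since 1950-01-01T00:00:00Z"

            temp = nc.createVariable("TEMP", "f4", ("TIME",),
                                     fill_value=-9999.0)
            temp.standard_name = "sea_water_temperature"
            temp.units = "degree_Celsius"

            time[:] = np.arange(5, dtype="f8")
            temp[:] = [10.1, 10.2, 10.15, 10.3, 10.25]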

  8. Delft FEWS: an open interface that connects models and data streams for operational forecasting systems

    Science.gov (United States)

    de Rooij, Erik; Werner, Micha

    2010-05-01

    giving the flexibility required for a state-of-the-art operational forecasting service. While Delft-FEWS comes with a user-friendly GIS-based interface, a time series viewer and editor, and a wide range of tools for visualization, analysis, validation, and data conversion, the available graphical displays can be extended. New graphical components can be seamlessly integrated with the system through the SOAP service. Thanks to this open infrastructure, new models can easily be incorporated into an operational system without having to change the operational process. This allows the forecaster to focus on the science instead of having to worry about model details and data formats. Furthermore, all model formats introduced to the Delft-FEWS framework will in principle become available to the Delft-FEWS community (in some cases subject to the licence conditions of the model supplier). Currently, a wide range of models has been integrated and is being used operationally: Mike 11, HEC-RAS & HEC-ResSim, HBV, MODFLOW, SOBEK, and more. In this way Delft-FEWS not only provides a modelling interface but also a platform for model inter-comparison or multi-model ensembles, as well as a knowledge interface that allows forecasters throughout the world to exchange their views and ideas on operational forecasting. Keywords: FEWS; forecasting; modelling; timeseries; data; XML; NetCDF; interface; SOAP

  9. The Earth Information Exchange: A Portal for Earth Science From the ESIP Federation

    Science.gov (United States)

    Wertz, R.; Hutchinson, C.; Hardin, D.

    2006-12-01

    current working groups are focused on the issues of Air Quality, Coastal Management, Disaster Management, Ecological Forecasting, Public Health, and Water Management. Initially, the Exchange will be linked to USGS's Geospatial One Stop portal, NASA's Earth Science Gateway, the Global Change Master Directory (GCMD), and the EOS Clearinghouse (ECHO). The Earth Information Exchange will be an integrated system of distributed components that work together to expedite the process of Earth science and to increase the effective application of its results to benefit the public. Specifically, the EIE is designed to provide: a comprehensive inventory of Earth observation metadata organized by GEOSS and other commonly used issue-area categories; ready access to metadata over the web, via URLs, for researchers, educators, and policy makers; access to data in common scientific data formats such as netCDF and HDF-EOS and common scientific data models such as swath, point, and grid; an e-commerce marketplace where advanced data products (analysis tools, models, simulations, decision support products) can be found and acquired by policy makers and others; and a broad inventory of the human resources associated with the Federation and its partners.

  10. A Flexible Component based Access Control Architecture for OPeNDAP Services

    Science.gov (United States)

    Kershaw, Philip; Ananthakrishnan, Rachana; Cinquini, Luca; Lawrence, Bryan; Pascoe, Stephen; Siebenlist, Frank

    2010-05-01

    These components filter requests to the service they protect and apply the required authentication and authorisation schemes. Filters have been developed for OpenID and SSL client-based authentication, the latter enabling access with MyProxy-issued credentials. By preserving a clear separation between the security and application functionality, multiple authentication technologies may be supported without the need for modification to the underlying OPeNDAP application. The software has been developed in the Python programming language, securing the Python-based OPeNDAP implementation, PyDAP. It utilises the Python WSGI (Web Server Gateway Interface) specification to create distinct security filter components. Work is also currently underway to develop a parallel Java-based filter implementation to secure the THREDDS Data Server. Whilst the ability to apply this flexible approach to the server-side security layer is important, the development of compatible client software is vital to the take-up of these services across a wide user base. To date, PyDAP- and wget-based clients have been tested, and work is planned to integrate the required security interface into the netCDF API. This forms part of ongoing collaboration with the OPeNDAP user and development community to ensure interoperability.
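
    A schematic Python sketch of this WSGI filter pattern follows. All names are hypothetical, and the toy credential check stands in for the project's actual OpenID and SSL client authentication schemes.

        # Illustrative sketch: a security middleware wrapped around a
        # DAP-style WSGI application, keeping security and application
        # functionality separate.
        def dap_app(environ, start_response):
            """Stand-in for a PyDAP-like WSGI application."""
            start_response("200 OK", [("Content-Type", "text/plain")])
            return [b"dataset contents"]

        class AuthFilter:
            """Reject requests lacking a credential; else pass through."""
            def __init__(self, app):
                self.app = app

            def __call__(self, environ, start_response):
                if "HTTP_AUTHORIZATION" not in environ:
                    start_response("401 Unauthorized",
                                   [("Content-Type", "text/plain")])
                    return [b"credentials required"]
                return self.app(environ, start_response)

        # Filters compose without modifying the underlying application.
        application = AuthFilter(dap_app)

        if __name__ == "__main__":
            from wsgiref.simple_server import make_server
            make_server("localhost", 8001, application).serve_forever()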

  11. Data management in Oceanography at SOCIB

    Science.gov (United States)

    Joaquin, Tintoré; March, David; Lora, Sebastian; Sebastian, Kristian; Frontera, Biel; Gómara, Sonia; Pau Beltran, Joan

    2014-05-01

    SOCIB, the Balearic Islands Coastal Ocean Observing and Forecasting System (http://www.socib.es), is a Marine Research Infrastructure: a multiplatform, distributed, and integrated system, a facility of facilities that extends from the nearshore to the open sea and provides free, open, and quality-controlled data. SOCIB has three major infrastructure components: (1) a distributed multiplatform observing system, (2) a numerical forecasting system, and (3) a data management and visualization system. We present the spatial data infrastructure and applications developed at SOCIB. One of the major goals of the SOCIB Data Centre is to provide users with a system to locate and download the data of interest (near real-time and delayed mode) and to visualize and manage the information. Following SOCIB principles, data need to be (1) discoverable and accessible, (2) freely available, and (3) interoperable and standardized. Consequently, the SOCIB Data Centre Facility is implementing a general data management system to guarantee international standards, quality assurance, and interoperability. The combination of different sources and types of information requires appropriate methods to ingest, catalogue, display, and distribute this information. The SOCIB Data Centre is responsible for directing the different stages of data management, ranging from data acquisition to its distribution and visualization through web applications. The system implemented relies on open-source solutions. The data life cycle comprises the following stages: • Acquisition: the data managed by SOCIB mostly come from its own observation platforms, numerical models, or information generated from the activities in the SIAS Division. • Processing: applications developed at SOCIB deal with all collected platform data, performing data calibration, derivation, quality control, and standardization. • Archival: storage in netCDF and spatial databases. • Distribution
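
    One processing-stage step, quality control, can be sketched as below; the range-test thresholds, file name, and flag values are hypothetical, chosen only to show QC flags stored beside the data in netCDF.

        # Illustrative sketch: a simple range test assigning QC flags,
        # archived alongside the measurements in a netCDF file.
        import numpy as np
        from netCDF4 import Dataset

        temps = np.array([13.2, 13.4, 99.9, 13.1])   # 99.9 is an outlier
        flags = np.where((temps > -2.0) & (temps < 35.0), 1, 4)  # 1 good, 4 bad

        with Dataset("socib_platform_ts.nc", "w") as nc:
            nc.createDimension("time", len(temps))
            t = nc.createVariable("TEMP", "f4", ("time",))
            t[:] = temps
            q = nc.createVariable("TEMP_QC", "i1", ("time",))
            q.flag_values = np.array([1, 4], dtype="i1")
            q.flag_meanings = "good_data bad_data"
            q[:] = flags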

  12. Developing a Metadata Infrastructure to facilitate data driven science gateway and to provide Inspire/GEMINI compliance for CLIPC

    Science.gov (United States)

    Mihajlovski, Andrej; Plieger, Maarten; Som de Cerff, Wim; Page, Christian

    2016-04-01

    indicators. Key is the availability of standardized metadata describing indicator data and services. This will enable standardization and interoperability between the different distributed services of CLIPC. To disseminate CLIPC indicator data, transformed data products for impact assessments, and climate change impact indicators, a standardized metadata infrastructure is provided. The challenge is that compliance of existing metadata with INSPIRE ISO standards and GEMINI standards needs to be extended to further allow the web portal to be generated from the available metadata blueprint. The information provided in the headers of netCDF files available through multiple catalogues allows us to generate ISO-compliant metadata, which is in turn used to generate web-based interface content, as well as OGC-compliant web services such as WCS and WMS for the front end, and WPS interactions for the scientific users to combine and generate new datasets. The goal of the metadata infrastructure is to provide a blueprint for creating a data-driven science portal, generated from the underlying GIS data, web services, and processing infrastructure. In the presentation we will present the results and lessons learned.
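
    The harvesting step can be sketched in Python: read netCDF global attributes and map them onto a few ISO 19115 elements. The attribute names and mapping below are illustrative assumptions, not the CLIPC implementation.

        # Hypothetical sketch: harvest netCDF header attributes into a
        # minimal ISO 19115-shaped record for the catalogue.
        from netCDF4 import Dataset

        ISO_MAPPING = {          # netCDF attribute -> ISO 19115 element
            "title": "identificationInfo/citation/title",
            "summary": "identificationInfo/abstract",
            "institution": "contact/organisationName",
        }

        def harvest(path):
            record = {}
            with Dataset(path) as nc:
                for attr, iso_field in ISO_MAPPING.items():
                    if attr in nc.ncattrs():
                        record[iso_field] = getattr(nc, attr)
            return record

        # Such a record would then be serialized to ISO XML and used to
        # drive the portal content and OGC service configuration.
        print(harvest("clipc_indicator.nc"))   # hypothetical file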

  13. NW-MILO Acoustic Data Collection

    Energy Technology Data Exchange (ETDEWEB)

    Matzner, Shari; Myers, Joshua R.; Maxwell, Adam R.; Jones, Mark E.

    2010-02-17

    signatures of small vessels. The sampling rate of 8 kHz and low-pass filtering to 2 kHz result in an alias-free signal in the frequency band that is appropriate for small vessels. Calibration was performed using a Lubell underwater speaker so that the raw data signal levels can be converted to sound pressure. Background noise is present due to a nearby pump and as a result of tidal currents. More study is needed to fully characterize the noise, but it does not pose an obstacle to using the acoustic data for the purposes of vessel detection and signature analysis. The detection range for a small vessel was estimated using the calibrated voltage response of the system and a cylindrical spreading model for transmission loss. The sound pressure of a typical vessel with an outboard motor was found to be around 140 dB re 1 µPa, and could theoretically be detected from 10 km away. In practical terms, a small vessel could reliably be detected from 3-5 km away. The data is archived in netCDF files, a standard scientific file format that is "self-describing". This means that each data file contains the metadata (timestamps, units, origin, etc.) needed to make the data meaningful and portable. Other file formats, such as XML, are also supported. A visualization tool has been developed to view the acoustic data in the form of spectrograms, along with the coincident radar track data and camera images.
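
    The detection-range estimate can be reproduced from the cylindrical-spreading model cited above. The source level comes from the text; the detection threshold below is an assumed value, chosen only so the worked numbers match the stated 10 km theoretical range.

        # Worked example: received level under cylindrical spreading,
        # RL = SL - 10*log10(r). The threshold is an assumption.
        import math

        source_level_db = 140.0   # typical small vessel (from the text)
        threshold_db = 100.0      # assumed minimum detectable level

        def received_level(range_m):
            return source_level_db - 10.0 * math.log10(range_m)

        for r in (1000, 3000, 5000, 10000):
            rl = received_level(r)
            flag = "detectable" if rl >= threshold_db else "below threshold"
            print("%5.0f km: RL = %5.1f dB re 1 uPa (%s)"
                  % (r / 1000.0, rl, flag))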

  14. Ocean Acidification Scientific Data Stewardship: An approach for end-to-end data management and integration

    Science.gov (United States)

    Arzayus, K. M.; Garcia, H. E.; Jiang, L.; Michael, P.

    2012-12-01

    As the designated Federal permanent oceanographic data center in the United States, NOAA's National Oceanographic Data Center (NODC) has been providing scientific stewardship for national and international marine environmental and ecosystem data for over 50 years. NODC is supporting NOAA's Ocean Acidification Program and the science community by providing end-to-end scientific data management of ocean acidification (OA) data, dedicated online data discovery, and user-friendly access to a diverse range of historical and modern OA and other chemical, physical, and biological oceanographic data. This effort is being catalyzed by the NOAA Ocean Acidification Program, but the intended reach is the broader scientific ocean acidification community. The first three years of the project will be focused on infrastructure building. A complete ocean acidification data content standard is being developed to ensure that a full spectrum of ocean acidification data and metadata can be stored and utilized for optimal data discovery and access in usable data formats. We plan to develop a data access interface capable of allowing users to constrain their search based on real-time and delayed-mode measured variables, scientific data quality, observation types, temporal coverage, methods, instruments, standards, collecting institutions, and spatial coverage. In addition, NODC seeks to utilize the existing suite of international standards (including ISO 19115-2 and CF-compliant netCDF) to help our data producers use those standards for their data, and to help our data consumers make use of the well-standardized, metadata-rich data sets. These tools will be available through our NODC Ocean Acidification Scientific Data Stewardship (OADS) web page at http://www.nodc.noaa.gov/oceanacidification. NODC also has a goal to provide each archived dataset with a unique ID, to ensure a means of providing credit to the data provider. Working with partner institutions, such as the

  15. Application of WRF - SWAT OpenMI 2.0 based models integration for real time hydrological modelling and forecasting

    Science.gov (United States)

    Bugaets, Andrey; Gonchukov, Leonid

    2014-05-01

    The uptake of deterministic distributed hydrological models into operational water management requires intensive collection and input of spatially distributed climatic information in a timely manner, which is both time-consuming and laborious. The lead time of the data pre-processing stage could be essentially reduced by coupling hydrological and numerical weather prediction models. This is especially important for regions such as the south of the Russian Far East, where the geographical position, combined with a monsoon climate affected by typhoons and extremely heavy rains, causes rapid rises in mountain river water levels, leading to flash flooding and enormous damage. The objective of this study is the development of an end-to-end workflow that executes, in a loosely coupled mode, an integrated modeling system comprising the Weather Research and Forecasting (WRF) atmospheric model and the Soil and Water Assessment Tool (SWAT 2012) hydrological model, using OpenMI 2.0 and web-service technologies. Migrating SWAT to OpenMI compliance involves reorganizing the model into separate initialization, timestep, and finalization functions that can be accessed from outside. To preserve SWAT's normal behavior, the source code was separated from the OpenMI-specific implementation into a static library. The modified code was assembled into a dynamic library and wrapped in a C# class implementing the OpenMI ILinkableComponent interface. The development of the WRF OpenMI-compliant component is based on the idea of wrapping web-service clients into a linkable component, with seamless access to output netCDF files without an actual model connection. The weather state variables (precipitation, wind, solar radiation, air temperature, and relative humidity) are processed by an automatic input selection algorithm to single out the most relevant values used by the SWAT model to yield climatic data at the subbasin scale. Spatial interpolation between the WRF regular grid and SWAT subbasin centroids (which are
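
    The initialization/timestep/finalization separation can be outlined schematically. The actual wrapper is a C# class implementing OpenMI's ILinkableComponent, so the Python outline below is illustrative only, with invented state and forcing names.

        # Schematic sketch of the separated model phases described above.
        class LinkableModel:
            def initialize(self, config):
                """Read configuration and allocate model state."""
                self.time = 0
                self.state = {}

            def perform_time_step(self, inputs):
                """Advance one step using externally supplied forcing,
                e.g. WRF precipitation interpolated to subbasin scale."""
                self.time += 1
                self.state["runoff"] = 0.8 * inputs.get("precipitation", 0.0)
                return self.state

            def finalize(self):
                """Flush outputs and release resources."""
                self.state.clear()

        model = LinkableModel()
        model.initialize(config={})
        print(model.perform_time_step({"precipitation": 12.5}))
        model.finalize()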

  16. Sentinel-3 SAR Altimetry Toolbox - Scientific Exploitation of Operational Missions (SEOM) Program Element

    Science.gov (United States)

    Benveniste, Jérôme; Lucas, Bruno; Dinardo, Salvatore

    2014-05-01

    The prime objective of the SEOM (Scientific Exploitation of Operational Missions) element is to federate, support, and expand the large international research community that the ERS, ENVISAT, and Envelope programmes have built up over the last 20 years, for the future European operational Earth Observation missions, the Sentinels. Sentinel-3 builds directly on a proven heritage pioneered by ERS-1, ERS-2, Envisat, and CryoSat-2, with a dual-frequency (Ku and C band) advanced Synthetic Aperture Radar Altimeter (SRAL) that provides measurements at a resolution of ~300 m in SAR mode along track. Sentinel-3 will provide exact measurements of sea-surface height along with accurate topography measurements over sea ice, ice sheets, rivers, and lakes. The first of the Sentinel-3 series is planned for launch in early 2015. The current universal altimetry toolbox is BRAT (Basic Radar Altimetry Toolbox), which can read all previous and current altimetry missions' data, but it does not have the capability to read the upcoming Sentinel-3 L1 and L2 products. ESA will endeavour to develop and supply this capability to support the users of the future Sentinel-3 SAR Altimetry Mission. BRAT is a collection of tools and tutorial documents designed to facilitate the processing of radar altimetry data. This project started in 2005 from the joint efforts of ESA (European Space Agency) and CNES (Centre National d'Etudes Spatiales, the French Space Agency), and it is freely available at http://earth.esa.int/brat. The tools enable users to interact with the most common altimetry data formats; BratGUI is the front end for the powerful command-line tools that are part of the BRAT suite. BRAT can also be used in conjunction with Matlab/IDL (via reading routines) or in C/C++/Fortran via a programming API, allowing the user to obtain desired data while bypassing the data-formatting hassle. BRAT can be used simply to visualise data quickly, or to translate the data into other formats such as netCDF

  17. Sentinel-3 SAR Altimetry Toolbox

    Science.gov (United States)

    Benveniste, Jerome; Lucas, Bruno; DInardo, Salvatore

    2015-04-01

    The prime objective of the SEOM (Scientific Exploitation of Operational Missions) element is to federate, support, and expand the large international research community that the ERS, ENVISAT, and Envelope programmes have built up over the last 20 years, for the future European operational Earth Observation missions, the Sentinels. Sentinel-3 builds directly on a proven heritage of ERS-2, Envisat, and CryoSat-2, with a dual-frequency (Ku and C band) advanced Synthetic Aperture Radar Altimeter (SRAL) that provides measurements at a resolution of ~300 m in SAR mode along track. Sentinel-3 will provide exact measurements of sea-surface height along with accurate topography measurements over sea ice, ice sheets, rivers, and lakes. The first of the two Sentinels is expected to be launched in early 2015. The current universal altimetry toolbox is BRAT (Basic Radar Altimetry Toolbox), which can read all previous and current altimetry missions' data, but it does not have the capability to read the upcoming Sentinel-3 L1 and L2 products. ESA will endeavour to develop and supply this capability to support the users of the future Sentinel-3 SAR Altimetry Mission. BRAT is a collection of tools and tutorial documents designed to facilitate the processing of radar altimetry data. This project started in 2005 from the joint efforts of ESA (European Space Agency) and CNES (Centre National d'Etudes Spatiales), and it is freely available at http://earth.esa.int/brat. The tools enable users to interact with the most common altimetry data formats; BratGUI is the front end for the powerful command-line tools that are part of the BRAT suite. BRAT can also be used in conjunction with Matlab/IDL (via reading routines) or in C/C++/Fortran via a programming API, allowing the user to obtain desired data while bypassing the data-formatting hassle. BRAT can be used simply to visualise data quickly, or to translate the data into other formats such as netCDF, ASCII text files, KML (Google Earth

  18. Large-Scale, Parallel, Multi-Sensor Data Fusion in the Cloud

    Science.gov (United States)

    Wilson, B. D.; Manipon, G.; Hua, H.

    2012-12-01

    NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the "A-Train" platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over periods of years to decades. However, moving from predominantly single-instrument studies to a multi-sensor, measurement-based model for long-duration analysis of important climate variables presents serious challenges for large-scale data mining and data fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another instrument (MODIS), and to a model (ECMWF), stratify the comparisons using a classification of the "cloud scenes" from CloudSat, and repeat the entire analysis over years of AIRS data. To perform such an analysis, one must discover and access multiple datasets from remote sites, find the space/time "matchups" between instrument swaths and model grids, understand the quality flags and uncertainties for retrieved physical variables, assemble merged datasets, and compute fused products for further scientific and statistical analysis. To efficiently assemble such decade-scale datasets in a timely manner, we are utilizing Elastic Computing in the Cloud and parallel map/reduce-based algorithms. "SciReduce" is a Hadoop-like parallel analysis system, programmed in parallel Python, that is designed from the ground up for Earth science. SciReduce executes inside VMware images and scales to any number of nodes in the Cloud. Unlike Hadoop, in which simple tuples (keys & values) are passed between the map and reduce functions, SciReduce operates on bundles of named numeric arrays, which can be passed in memory or serialized to disk in netCDF4 or HDF5. Thus, SciReduce uses the native datatypes (geolocated grids, swaths, and points) that geo-scientists are familiar with. We are deploying within Sci
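
    The bundle-of-named-arrays exchange can be sketched with netCDF4-python. All names below are hypothetical; this illustrates the serialization idea rather than SciReduce's internal API.

        # Illustrative sketch: a "bundle" of named numeric arrays written
        # to and read back from a netCDF4 file between map/reduce stages.
        import numpy as np
        from netCDF4 import Dataset

        def write_bundle(path, arrays):
            """Serialize a dict of named 1-D arrays to netCDF4."""
            with Dataset(path, "w", format="NETCDF4") as nc:
                for name, data in arrays.items():
                    dim = "n_" + name
                    nc.createDimension(dim, len(data))
                    nc.createVariable(name, data.dtype, (dim,))[:] = data

        def read_bundle(path):
            with Dataset(path) as nc:
                return {name: var[:] for name, var in nc.variables.items()}

        bundle = {"temperature": np.array([287.1, 288.4, 290.0]),
                  "water_vapor": np.array([0.012, 0.014, 0.011])}
        write_bundle("map_output_0001.nc", bundle)
        print(read_bundle("map_output_0001.nc"))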

  19. Large-Scale, Parallel, Multi-Sensor Atmospheric Data Fusion Using Cloud Computing

    Science.gov (United States)

    Wilson, B. D.; Manipon, G.; Hua, H.; Fetzer, E. J.

    2013-12-01

    NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the 'A-Train' platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over decades. Moving to multi-sensor, long-duration analyses of important climate variables presents serious challenges for large-scale data mining and fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another (MODIS), and to a model (MERRA), stratify the comparisons using a classification of the 'cloud scenes' from CloudSat, and repeat the entire analysis over 10 years of data. To efficiently assemble such datasets, we are utilizing Elastic Computing in the Cloud and parallel map/reduce-based algorithms. However, these problems are data-intensive, so data transfer times and storage costs (for caching) are key issues. SciReduce is a Hadoop-like parallel analysis system, programmed in parallel Python, that is designed from the ground up for Earth science. SciReduce executes inside VMware images and scales to any number of nodes in the Cloud. Unlike Hadoop, SciReduce operates on bundles of named numeric arrays, which can be passed in memory or serialized to disk in netCDF4 or HDF5. Figure 1 shows the architecture of the full computational system, with SciReduce at the core. Multi-year datasets are automatically 'sharded' by time and space across a cluster of nodes so that years of data (millions of files) can be processed in a massively parallel way. Input variables (arrays) are pulled on demand into the Cloud using OPeNDAP URLs or other subsetting services, thereby minimizing the size of the cached input and intermediate datasets. We are using SciReduce to automate the production of multiple versions of a ten-year A-Train water vapor climatology under a NASA MEaSUREs grant. We will
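
    The on-demand input pattern can be illustrated with netCDF4-python, which (when built with OPeNDAP support) opens a DAP URL directly, so that slicing transfers only the requested subset. The URL and variable names below are made up.

        # Hypothetical sketch: pull just one slab of a remote variable
        # through OPeNDAP instead of downloading the whole file.
        from netCDF4 import Dataset

        url = "http://example.org/opendap/airs/water_vapor_2007.nc"
        with Dataset(url) as nc:
            h2o = nc.variables["h2o_vap"]    # remote; nothing fetched yet
            tile = h2o[0, 100:110, 200:210]  # only this slab is transferred
        print(tile.shape)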

  20. Air Quality uFIND: User-oriented Tool Set for Air Quality Data Discovery and Access

    Science.gov (United States)

    Hoijarvi, K.; Robinson, E. M.; Husar, R. B.; Falke, S. R.; Schultz, M. G.; Keating, T. J.

    2012-12-01

    Historically, there have been major impediments to seamless and effective data usage encountered by both data providers and users. Over the last five years, the international Air Quality (AQ) Community has worked through forums such as the Group on Earth Observations AQ Community of Practice, the ESIP AQ Working Group, and the Task Force on Hemispheric Transport of Air Pollution to converge on data format standards (e.g., netCDF), data access standards (e.g., Open Geospatial Consortium Web Coverage Services), metadata standards (e.g., ISO 19115), and other conventions (e.g., the CF Naming Convention) in order to build an Air Quality Data Network. The centerpiece of the AQ Data Network is the web service-based tool set user-oriented Filtering and Identification of Networked Data (uFIND). The purpose of uFIND is to provide rich and powerful facilities for the user to: a) discover and choose a desired dataset by navigating the multi-dimensional metadata space using faceted search, b) seamlessly access and browse datasets, and c) use uFIND's facilities as a web service for mashups with other AQ applications and portals. In a user-centric information system such as uFIND, the user experience is improved by metadata that includes the general fields for discovery as well as community-specific metadata to narrow the search beyond space, time, and generic keyword searches. However, even with the community-specific additions, the ISO 19115 records were formed in compliance with the standard, so that other standards-based search interfaces could leverage this additional information. To identify the fields necessary for metadata discovery, we started with the ISO 19115 Core Metadata fields and the fields needed for a Catalog Service for the Web (CSW) Record. This fulfilled two goals: to create valid ISO 19115 records, and to be able to retrieve the records through a Catalog Service for the Web query. Beyond the required set of fields, the AQ Community added
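
    A schematic Python sketch of faceted discovery over such metadata records follows; the records and field names are illustrative, not actual uFIND metadata.

        # Illustrative sketch: count facet values across discovery records,
        # then narrow the record set by a facet selection.
        from collections import Counter

        records = [
            {"parameter": "ozone", "platform": "station", "format": "netCDF"},
            {"parameter": "PM2.5", "platform": "station", "format": "netCDF"},
            {"parameter": "ozone", "platform": "satellite", "format": "HDF"},
        ]

        def facet_counts(records, field):
            """How many datasets fall under each value of one facet."""
            return Counter(r[field] for r in records)

        def filter_by(records, **selection):
            """Narrow the record set by chosen facet values."""
            return [r for r in records
                    if all(r.get(k) == v for k, v in selection.items())]

        print(facet_counts(records, "parameter"))   # ozone: 2, PM2.5: 1
        print(filter_by(records, platform="station"))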