WorldWideScience

Sample records for data storage

  1. Data Centre Infrastructure & Data Storage @ Facebook

    CERN Multimedia

    CERN. Geneva; Garson, Matt; Kauffman, Mike

    2018-01-01

    Several speakers from the Facebook company will present their take on the infrastructure of their Data Center and Storage facilities, as follows: 10:00 - Facebook Data Center Infrastructure, by Delfina Eberly, Mike Kauffman and Veerendra Mulay Insight into how Facebook thinks about data center design, including electrical and cooling systems, and the technology and tooling used to manage data centers. 11:00 - Storage at Facebook, by Matt Garson An overview of Facebook infrastructure, focusing on different storage systems, in particular photo/video storage and storage for data analytics. About the speakers Mike Kauffman, Director, Data Center Site Engineering Delfina Eberly, Infrastructure, Site Services Matt Garson, Storage at Facebook Veerendra Mulay, Infrastructure

  2. The Fermilab data storage infrastructure

    International Nuclear Information System (INIS)

    Jon A Bakken et al.

    2003-01-01

    Fermilab, in collaboration with the DESY laboratory in Hamburg, Germany, has created a petabyte-scale data storage infrastructure to meet the requirements of experiments to store and access large data sets. The Fermilab data storage infrastructure consists of the following major storage and data transfer components: the Enstore mass storage system, the dCache distributed data cache, and FTP and GridFTP for primarily external data transfers. This infrastructure provides a data throughput sufficient for transferring data from experiments' data acquisition systems. It also allows access to data in the Grid framework.

  3. High Density Digital Data Storage System

    Science.gov (United States)

    Wright, Kenneth D., II; Gray, David L.; Rowland, Wayne D.

    1991-01-01

    The High Density Digital Data Storage System was designed to provide a cost-effective means for storing real-time data from the field-deployable digital acoustic measurement system. However, the high density data storage system is a standalone system that could provide a storage solution for many other real-time data acquisition applications. The storage system has inputs for up to 20 channels of 16-bit digital data. The high density tape recorders presently being used in the storage system are capable of storing over 5 gigabytes of data at overall transfer rates of 500 kilobytes per second. However, through the use of data compression techniques the system storage capacity and transfer rate can be doubled. Two tape recorders have been incorporated into the storage system to produce a backup tape of data in real time. An analog output is provided for each data channel as a means of monitoring the data as it is being recorded.
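
    A quick back-of-the-envelope check of the figures quoted above, sketched in Python (the capacity, transfer rate and channel count are taken from the abstract; the calculation itself is only illustrative):

        # Rough check of the recording figures quoted in the abstract.
        capacity_bytes = 5e9        # over 5 gigabytes per tape
        rate_bytes_per_s = 500e3    # 500 kilobytes per second overall
        channels, bytes_per_sample = 20, 2   # 20 channels of 16-bit data

        hours = capacity_bytes / rate_bytes_per_s / 3600
        per_channel_sps = rate_bytes_per_s / (channels * bytes_per_sample)
        print(f"recording time per tape : {hours:.1f} h")            # about 2.8 h
        print(f"aggregate sampling rate : {per_channel_sps:.0f} samples/s per channel")
        # 2:1 compression doubles both capacity and effective rate, so the
        # recording time per tape stays the same while throughput doubles.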

  4. Utilizing cloud storage architecture for long-pulse fusion experiment data storage

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Ming; Liu, Qiang [State Key Laboratory of Advanced Electromagnetic Engineering and Technology, Wuhan, Hubei (China); School of Electrical and Electronic Engineering, Huazhong University of Science and Technology, Wuhan, Hubei (China); Zheng, Wei, E-mail: zhenghaku@gmail.com [State Key Laboratory of Advanced Electromagnetic Engineering and Technology, Wuhan, Hubei (China); School of Electrical and Electronic Engineering, Huazhong University of Science and Technology, Wuhan, Hubei (China); Wan, Kuanhong; Hu, Feiran; Yu, Kexun [State Key Laboratory of Advanced Electromagnetic Engineering and Technology, Wuhan, Hubei (China); School of Electrical and Electronic Engineering, Huazhong University of Science and Technology, Wuhan, Hubei (China)

    2016-11-15

    Scientific data storage plays a significant role in research facilities. The explosion of data in recent years has made data access, acquisition and management increasingly difficult, especially in the fusion research field. Future long-pulse experiments such as ITER will generate extremely large volumes of data continuously over long periods, putting pressure on both write performance and scalability. Traditional databases also have drawbacks, such as inconvenient management and architectures that are hard to scale, so a new data storage system is essential. J-TEXTDB is a data storage and management system based on an application cluster and a storage cluster. J-TEXTDB is designed for big data storage and access, aiming at improving read–write speed and optimizing the data system structure. The application cluster of J-TEXTDB provides data management functions and handles data read and write operations from the users. The storage cluster provides the storage services. Both clusters are composed of general-purpose servers. Simply adding servers to a cluster improves the read–write performance, the storage space and the redundancy, making the whole data system highly scalable and available. In this paper, we propose a data system architecture and data model to manage data more efficiently. Benchmarks of J-TEXTDB performance, including read and write operations, are given.
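
    The abstract reports read and write benchmarks for J-TEXTDB. A minimal sketch of such a sequential write/read timing benchmark is shown below; the file name, block size and block count are illustrative choices, not part of the J-TEXTDB interface:

        import os, time

        def write_read_benchmark(path="bench.dat", block=4 * 1024 * 1024, blocks=64):
            """Time sequential writes and reads of `blocks` blocks of `block` bytes."""
            payload = os.urandom(block)

            t0 = time.perf_counter()
            with open(path, "wb") as f:
                for _ in range(blocks):
                    f.write(payload)
                f.flush()
                os.fsync(f.fileno())
            write_s = time.perf_counter() - t0

            t0 = time.perf_counter()
            with open(path, "rb") as f:
                while f.read(block):
                    pass
            read_s = time.perf_counter() - t0

            total_mb = block * blocks / 1e6
            print(f"write: {total_mb / write_s:.1f} MB/s, read: {total_mb / read_s:.1f} MB/s")
            os.remove(path)

        write_read_benchmark()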

  5. Utilizing cloud storage architecture for long-pulse fusion experiment data storage

    International Nuclear Information System (INIS)

    Zhang, Ming; Liu, Qiang; Zheng, Wei; Wan, Kuanhong; Hu, Feiran; Yu, Kexun

    2016-01-01

    Scientific data storage plays a significant role in research facilities. The explosion of data in recent years has made data access, acquisition and management increasingly difficult, especially in the fusion research field. Future long-pulse experiments such as ITER will generate extremely large volumes of data continuously over long periods, putting pressure on both write performance and scalability. Traditional databases also have drawbacks, such as inconvenient management and architectures that are hard to scale, so a new data storage system is essential. J-TEXTDB is a data storage and management system based on an application cluster and a storage cluster. J-TEXTDB is designed for big data storage and access, aiming at improving read–write speed and optimizing the data system structure. The application cluster of J-TEXTDB provides data management functions and handles data read and write operations from the users. The storage cluster provides the storage services. Both clusters are composed of general-purpose servers. Simply adding servers to a cluster improves the read–write performance, the storage space and the redundancy, making the whole data system highly scalable and available. In this paper, we propose a data system architecture and data model to manage data more efficiently. Benchmarks of J-TEXTDB performance, including read and write operations, are given.

  6. The Petascale Data Storage Institute

    Energy Technology Data Exchange (ETDEWEB)

    Gibson, Garth [Carnegie Mellon Univ., Pittsburgh, PA (United States); Long, Darrell [The Regents of the University of California, Santa Cruz, CA (United States); Honeyman, Peter [Univ. of Michigan, Ann Arbor, MI (United States); Grider, Gary [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Kramer, William [National Energy Research Scientific Computing Center, Berkeley, CA (United States); Shalf, John [National Energy Research Scientific Computing Center, Berkeley, CA (United States); Roth, Philip [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Felix, Evan [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Ward, Lee [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2013-07-01

    Petascale computing infrastructures for scientific discovery make petascale demands on information storage capacity, performance, concurrency, reliability, availability, and manageability. The Petascale Data Storage Institute focuses on the data storage problems found in petascale scientific computing environments, with special attention to community issues such as interoperability, community buy-in, and shared tools. The Petascale Data Storage Institute is a collaboration between researchers at Carnegie Mellon University, National Energy Research Scientific Computing Center, Pacific Northwest National Laboratory, Oak Ridge National Laboratory, Sandia National Laboratory, Los Alamos National Laboratory, University of Michigan, and the University of California at Santa Cruz.

  7. Unit 037 - Fundamentals of Data Storage

    OpenAIRE

    037, CC in GIScience; Jacobson, Carol R.

    2000-01-01

    This unit introduces the concepts and terms needed to understand storage of GIS data in a computer system, including the weaknesses of a discrete data model for representing the real world; an overview of data storage types and terminology; and a description of data storage issues.

  8. ALICE bags data storage accolades

    CERN Multimedia

    2007-01-01

    ComputerWorld has recognized CERN with an award for the 'Best Practices in Storage' for ALICE's data acquisition system, in the category of 'Systems Implementation'. The award was presented to the ALICE DAQ team on 18 April at a ceremony in San Diego, CA. (Top) ALICE physicist Ulrich Fuchs. (Bottom) Three of the five storage racks for the ALICE Data Acquisition system (Photo Antonio Saba). Between 16 and 19 April, one thousand people from data storage networks around the world gathered to attend the biannual Storage Networking World Conference. Twenty-five companies and organizations were celebrated as finalists, and five of those were given honorary awards, among them CERN, which tied for first place in the category of Systems Implementation for the success of the ALICE Data Acquisition System. CERN was one of five finalists in this category, which recognizes the winning facility for 'the successful design, implementation and management of an interoperable environment'. 'Successful' could include documentati...

  9. ICI optical data storage tape: An archival mass storage media

    Science.gov (United States)

    Ruddick, Andrew J.

    1993-01-01

    At the 1991 Conference on Mass Storage Systems and Technologies, ICI Imagedata presented a paper which introduced ICI Optical Data Storage Tape. That paper placed specific emphasis on the media characteristics, and initial data were presented illustrating the archival stability of the media. The more exhaustive analysis that was carried out on the chemical stability of the media is covered here. Equally important, it also addresses archive management issues associated with, for example, the benefits of reduced rewind requirements to accommodate tape relaxation effects, which result from careful tribology control in ICI Optical Tape media. ICI Optical Tape media was designed to meet the most demanding requirements of archival mass storage. It is envisaged that the volumetric data capacity, long-term stability and low maintenance characteristics demonstrated will have major benefits in increasing reliability and reducing the costs associated with archival storage of large data volumes.

  10. New data storage and retrieval systems for JET data

    Energy Technology Data Exchange (ETDEWEB)

    Layne, Richard E-mail: richard.layne@ukaea.org.uk; Wheatley, Martin E-mail: martin.wheatley@ukaea.org.uk

    2002-06-01

    Since the start of the Joint European Torus (JET), an IBM mainframe has been the main platform for data analysis and storage (J. Comput. Phys. 73 (1987) 85). The mainframe was removed in June 2001 and Solaris and Linux are now the main data storage and analysis platforms. New data storage and retrieval systems have therefore been developed: the Data Warehouse, the JET pulse file server, and the processed pulse file system. In this paper, the new systems will be described, and the design decisions that led to the final systems will be outlined.

  11. New data storage and retrieval systems for JET data

    International Nuclear Information System (INIS)

    Layne, Richard; Wheatley, Martin

    2002-01-01

    Since the start of the Joint European Torus (JET), an IBM mainframe has been the main platform for data analysis and storage (J. Comput. Phys. 73 (1987) 85). The mainframe was removed in June 2001 and Solaris and Linux are now the main data storage and analysis platforms. New data storage and retrieval systems have therefore been developed: the Data Warehouse, the JET pulse file server, and the processed pulse file system. In this paper, the new systems will be described, and the design decisions that led to the final systems will be outlined

  12. Recordable storage medium with protected data area

    NARCIS (Netherlands)

    2005-01-01

    The invention relates to a method of storing data on a rewritable data storage medium, to a corresponding storage medium, to a corresponding recording apparatus and to a corresponding playback apparatus. Copy-protective measures require that on rewritable storage media some data must be stored which

  13. Federated data storage and management infrastructure

    International Nuclear Information System (INIS)

    Zarochentsev, A; Kiryanov, A; Klimentov, A; Krasnopevtsev, D; Hristov, P

    2016-01-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. Computing models for the High Luminosity LHC era anticipate a growth of storage needs of at least an order of magnitude; this will require new approaches in data storage organization and data handling. In our project we address the fundamental problem of designing an architecture to integrate distributed heterogeneous disk resources for LHC experiments and other data-intensive science applications, and to provide access to data from heterogeneous computing facilities. We have prototyped a federated storage for Russian T1 and T2 centers located in Moscow, St. Petersburg and Gatchina, as well as a Russian/CERN federation. We have conducted extensive tests of the underlying network infrastructure and storage endpoints with synthetic performance measurement tools as well as with HENP-specific workloads, including ones running on supercomputing platforms, cloud computing and the Grid for the ALICE and ATLAS experiments. We present our current accomplishments with running LHC data analysis remotely and locally to demonstrate our ability to efficiently use federated data storage experiment-wide within national academic facilities for High Energy and Nuclear Physics as well as for other data-intensive science applications, such as bioinformatics. (paper)

  14. Compact Holographic Data Storage

    Science.gov (United States)

    Chao, T. H.; Reyes, G. F.; Zhou, H.

    2001-01-01

    NASA's future missions would require massive high-speed onboard data storage capability for Space Science missions. For Space Science, such as the Europa Lander mission, the onboard data storage requirements would be focused on maximizing the spacecraft's ability to survive fault conditions (i.e., no loss in stored science data when the spacecraft enters the 'safe mode') and autonomously recover from them during NASA's long-life and deep space missions. This would require the development of non-volatile memory. In order to survive in the stringent environment during space exploration missions, onboard memory requirements would also include: (1) survive a high radiation environment (1 Mrad), (2) operate effectively and efficiently for a very long time (10 years), and (3) sustain at least a billion write cycles. Therefore, the memory technology requirements of NASA's Earth Science and Space Science missions are large capacity, non-volatility, high transfer rate, high radiation resistance, high storage density, and high power efficiency. JPL, under current sponsorship from NASA Space Science and Earth Science Programs, is developing a high-density, nonvolatile and rad-hard Compact Holographic Data Storage (CHDS) system to enable large-capacity, high-speed, low-power-consumption read/write of data in a space environment. The entire read/write operation will be controlled with an electro-optic mechanism without any moving parts. This CHDS will consist of laser diodes, a photorefractive crystal, a spatial light modulator, a photodetector array, and an I/O electronic interface. In operation, pages of information would be recorded and retrieved with random access and at high speed. The nonvolatile, rad-hard characteristics of the holographic memory will provide a revolutionary memory technology meeting the high radiation challenge facing the Europa Lander mission. Additional information is contained in the original extended abstract.

  15. Liquid crystals for holographic optical data storage

    DEFF Research Database (Denmark)

    Matharu, Avtar; Jeeva, S.; Ramanujam, P.S.

    2007-01-01

    to the information storage demands of the 21st century is detailed. Holography is a small subset of the much larger field of optical data storage and similarly, the diversity of materials used for optical data storage is enormous. The theory of polarisation holography which produces holograms of constant intensity...

  16. ENERGY STAR Certified Data Center Storage

    Science.gov (United States)

    Certified models meet all ENERGY STAR requirements as listed in the Version 1.0 ENERGY STAR Program Requirements for Data Center Storage that are effective as of December 2, 2013. A detailed listing of key efficiency criteria are available at http://www.energystar.gov/certified-products/detail/data_center_storage

  17. Holographic Optical Data Storage

    Science.gov (United States)

    Timucin, Dogan A.; Downie, John D.; Norvig, Peter (Technical Monitor)

    2000-01-01

    Although the basic idea may be traced back to the earlier X-ray diffraction studies of Sir W. L. Bragg, the holographic method as we know it was invented by D. Gabor in 1948 as a two-step lensless imaging technique to enhance the resolution of electron microscopy, for which he received the 1971 Nobel Prize in physics. The distinctive feature of holography is the recording of the object phase variations that carry the depth information, which is lost in conventional photography where only the intensity (= squared amplitude) distribution of an object is captured. Since all photosensitive media necessarily respond to the intensity incident upon them, an ingenious way had to be found to convert object phase into intensity variations, and Gabor achieved this by introducing a coherent reference wave along with the object wave during exposure. Gabor's in-line recording scheme, however, required the object in question to be largely transmissive, and could provide only marginal image quality due to unwanted terms simultaneously reconstructed along with the desired wavefront. Further handicapped by the lack of a strong coherent light source, optical holography thus seemed fated to remain just another scientific curiosity, until the field was revolutionized in the early 1960s by some major breakthroughs: the proposition and demonstration of the laser principle, the introduction of off-axis holography, and the invention of volume holography. Consequently, the remainder of that decade saw an exponential growth in research on theory, practice, and applications of holography. Today, holography not only boasts a wide variety of scientific and technical applications (e.g., holographic interferometry for strain, vibration, and flow analysis, microscopy and high-resolution imagery, imaging through distorting media, optical interconnects, holographic optical elements, optical neural networks, three-dimensional displays, data storage, etc.), but has become a prominent am advertising

  18. Surface-enhanced raman optical data storage system

    Science.gov (United States)

    Vo-Dinh, Tuan

    1994-01-01

    An improved Surface-Enhanced Raman Optical Data Storage System (SERODS) is disclosed. In the improved system, entities capable of existing in multiple reversible states are present on the storage device. Such entities result in changed Surface-Enhanced Raman Scattering (SERS) when localized state changes are effected in less than all of the entities. Therefore, by changing the state of entities in localized regions of a storage device, the SERS emissions in such regions will be changed. When a write-on device is controlled by a data signal, such localized regions of changed SERS emissions will correspond to the data written on the device. The data may be read by illuminating the surface of the storage device with electromagnetic radiation of an appropriate frequency and detecting the corresponding SERS emissions. Data may be deleted by reversing the state changes of entities in regions where the data was initially written. In application, entities may be individual molecules, which allows for the writing of data at the molecular level. A read/write/delete head utilizing near-field quantum techniques can provide for a write/read/delete device capable of effecting state changes in individual molecules, thus providing for the effective storage of data at the molecular level.

  19. Heterogeneous Data Storage Management with Deduplication in Cloud Computing

    OpenAIRE

    Yan, Zheng; Zhang, Lifang; Ding, Wenxiu; Zheng, Qinghua

    2017-01-01

    Cloud storage as one of the most important services of cloud computing helps cloud users break the bottleneck of restricted resources and expand their storage without upgrading their devices. In order to guarantee the security and privacy of cloud users, data are always outsourced in an encrypted form. However, encrypted data could incur much waste of cloud storage and complicate data sharing among authorized users. We are still facing challenges on encrypted data storage and management with ...
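
    The record is truncated, but the underlying idea of storage deduplication can be illustrated with a toy content-addressed store in which identical chunks are kept only once. This is a sketch of plain (unencrypted) deduplication, not the authors' scheme for encrypted data:

        import hashlib

        class DedupStore:
            """Toy content-addressed store: identical chunks are stored only once."""
            def __init__(self, chunk_size=4096):
                self.chunk_size = chunk_size
                self.chunks = {}   # digest -> chunk bytes
                self.files = {}    # name   -> list of digests

            def put(self, name, data):
                digests = []
                for i in range(0, len(data), self.chunk_size):
                    chunk = data[i:i + self.chunk_size]
                    digest = hashlib.sha256(chunk).hexdigest()
                    self.chunks.setdefault(digest, chunk)   # stored once per unique chunk
                    digests.append(digest)
                self.files[name] = digests

            def get(self, name):
                return b"".join(self.chunks[d] for d in self.files[name])

        store = DedupStore()
        store.put("a.bin", b"x" * 10000)
        store.put("b.bin", b"x" * 10000)              # duplicate content
        assert store.get("b.bin") == b"x" * 10000
        print(len(store.chunks), "unique chunks stored for two files")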

  20. Utilizing ZFS for the Storage of Acquired Data

    International Nuclear Information System (INIS)

    Pugh, C.; Henderson, P.; Silber, K.; Carroll, T.; Ying, K.

    2009-01-01

    Every day, the amount of data that is acquired from plasma experiments grows dramatically. It has become difficult for systems administrators to keep up with the growing demand for hard drive storage space. In the past, project storage has been supplied using UNIX filesystem (ufs) partitions. In order to increase the size of the disks using this system, users were required to discontinue use of the disk so the existing data could be transferred to a disk of larger capacity, or to begin use of a completely new and separate disk, thus creating a segmentation of data storage. With the application of ZFS pools, the data capacity woes are over. ZFS provides simple administration that eliminates the need to unmount to resize, or to transfer data to a larger disk. With a storage limit of 16 exabytes (1 EB = 10^18 bytes), ZFS provides immense scalability. Utilizing ZFS as the new project disk file system, users and administrators can eliminate time wasted waiting for data to transfer from one hard drive to another; it also enables more efficient use of disk space, as system administrators need only allocate what is presently required. This paper will discuss the application and benefits of using ZFS as an alternative to traditional data access and storage in the fusion environment.
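
    The resize-without-unmounting workflow described above can be sketched with the standard zpool/zfs commands; the pool name, dataset name and device names below are placeholders, and the calls are only executed if DRY_RUN is disabled on a ZFS-capable host with root privileges:

        import subprocess

        DRY_RUN = True   # set to False only on a ZFS host with suitable spare devices

        def zfs(*args):
            print("#", " ".join(args))
            if not DRY_RUN:
                subprocess.run(args, check=True)

        # Create a mirrored pool and a project filesystem on it.
        zfs("zpool", "create", "projects", "mirror", "c1t0d0", "c1t1d0")
        zfs("zfs", "create", "projects/experiment")

        # Growing the pool later needs no unmount and no data migration:
        # attach another mirror and the extra capacity is available immediately.
        zfs("zpool", "add", "projects", "mirror", "c2t0d0", "c2t1d0")
        zfs("zpool", "list", "projects")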

  1. The Analysis of RDF Semantic Data Storage Optimization in Large Data Era

    Science.gov (United States)

    He, Dandan; Wang, Lijuan; Wang, Can

    2018-03-01

    With the continuous development of information technology and network technology in China, the Internet has ushered in the era of big data. In order to acquire information effectively in the big data era, it is necessary to optimize existing RDF semantic data storage and enable efficient querying of various kinds of data. This paper discusses the storage optimization of RDF semantic data in the big data era.

  2. The NOAO Data Lab virtual storage system

    Science.gov (United States)

    Graham, Matthew J.; Fitzpatrick, Michael J.; Norris, Patrick; Mighell, Kenneth J.; Olsen, Knut; Stobie, Elizabeth B.; Ridgway, Stephen T.; Bolton, Adam S.; Saha, Abhijit; Huang, Lijuan W.

    2016-07-01

    Collaborative research/computing environments are essential for working with the next generations of large astronomical data sets. A key component of them is a distributed storage system to enable data hosting, sharing, and publication. VOSpace is a lightweight interface providing network access to arbitrary backend storage solutions and endorsed by the International Virtual Observatory Alliance (IVOA). Although similar APIs exist, such as Amazon S3, WebDAV, and Dropbox, VOSpace is designed to be protocol agnostic, focusing on data control operations, and supports asynchronous and third-party data transfers, thereby minimizing unnecessary data transfers. It also allows arbitrary computations to be triggered as a result of a transfer operation: for example, a file can be automatically ingested into a database when put into an active directory, or a data reduction task, such as SExtractor, can be run on it. In this paper, we describe the VOSpace implementations that we have developed for the NOAO Data Lab. These offer both dedicated remote storage, accessible as a local file system via FUSE, and a local VOSpace service to easily enable data synchronization.
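
    The "active directory" behaviour mentioned above, where putting a file into a directory triggers a computation, can be illustrated with a simple polling watcher; the directory name, polling interval and ingest action are illustrative and do not reflect the Data Lab's actual VOSpace implementation:

        import pathlib, time

        def ingest(path):
            # Placeholder for the triggered computation, e.g. loading the file
            # into a database table or launching a reduction task on it.
            print(f"ingesting {path} ({path.stat().st_size} bytes)")

        def watch(directory="active", interval=5.0):
            """Poll `directory` and call ingest() once for each new file."""
            root = pathlib.Path(directory)
            root.mkdir(exist_ok=True)
            seen = set()
            while True:
                for path in root.iterdir():
                    if path.is_file() and path not in seen:
                        seen.add(path)
                        ingest(path)
                time.sleep(interval)

        # watch()   # runs until interrupted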

  3. Federated data storage system prototype for LHC experiments and data intensive science

    Science.gov (United States)

    Kiryanov, A.; Klimentov, A.; Krasnopevtsev, D.; Ryabinkin, E.; Zarochentsev, A.

    2017-10-01

    Rapid increase of data volume from the experiments running at the Large Hadron Collider (LHC) prompted physics computing community to evaluate new data handling and processing solutions. Russian grid sites and universities’ clusters scattered over a large area aim at the task of uniting their resources for future productive work, at the same time giving an opportunity to support large physics collaborations. In our project we address the fundamental problem of designing a computing architecture to integrate distributed storage resources for LHC experiments and other data-intensive science applications and to provide access to data from heterogeneous computing facilities. Studies include development and implementation of federated data storage prototype for Worldwide LHC Computing Grid (WLCG) centres of different levels and University clusters within one National Cloud. The prototype is based on computing resources located in Moscow, Dubna, Saint Petersburg, Gatchina and Geneva. This project intends to implement a federated distributed storage for all kind of operations such as read/write/transfer and access via WAN from Grid centres, university clusters, supercomputers, academic and commercial clouds. The efficiency and performance of the system are demonstrated using synthetic and experiment-specific tests including real data processing and analysis workflows from ATLAS and ALICE experiments, as well as compute-intensive bioinformatics applications (PALEOMIX) running on supercomputers. We present topology and architecture of the designed system, report performance and statistics for different access patterns and show how federated data storage can be used efficiently by physicists and biologists. We also describe how sharing data on a widely distributed storage system can lead to a new computing model and reformations of computing style, for instance how bioinformatics program running on supercomputers can read/write data from the federated storage.

  4. Data Storage and Management for Global Research Data Infrastructures - Status and Perspectives

    Directory of Open Access Journals (Sweden)

    Erwin Laure

    2013-07-01

    In the vision of Global Research Data Infrastructures (GRDIs), data storage and management plays a crucial role. A successful GRDI will require a common globally interoperable distributed data system, formed out of data centres, that incorporates emerging technologies and new scientific data activities. The main challenge is to define common certification and auditing frameworks that will allow storage providers and data communities to build a viable partnership based on trust. To achieve this, it is necessary to find a long-term commitment model that will give financial, legal, and organisational guarantees of digital information preservation. In this article we discuss the state of the art in data storage and management for GRDIs and point out future research directions that need to be tackled to implement GRDIs.

  5. Disk storage at CERN: Handling LHC data and beyond

    International Nuclear Information System (INIS)

    Espinal, X; Adde, G; Chan, B; Iven, J; Presti, G Lo; Lamanna, M; Mascetti, L; Pace, A; Peters, A; Ponce, S; Sindrilaru, E

    2014-01-01

    The CERN-IT Data Storage and Services (DSS) group stores and provides access to data coming from the LHC and other physics experiments. We implement specialised storage services to provide tools for optimal data management, based on the evolution of data volumes, the available technologies and the observed experiment and users' usage patterns. Our current solutions are CASTOR, for highly-reliable tape-backed storage for heavy-duty Tier-0 workflows, and EOS, for disk-only storage for full-scale analysis activities. CASTOR is evolving towards a simplified disk layer in front of the tape robotics, focusing on recording the primary data from the detectors. EOS is now a well-established storage service used intensively by the four big LHC experiments. Its conceptual design, based on multi-replica and in-memory namespace, makes it the perfect system for data-intensive workflows. The LHC Long Shutdown 1 (LS1) presents a window of opportunity to shape up both of our storage services and validate them against the ongoing analysis activity in order to successfully face the new LHC data taking period in 2015. In this paper, the current state and foreseen evolutions of CASTOR and EOS will be presented together with a study about the reliability of our systems.

  6. Data storage accounting and verification at LHC experiments

    Energy Technology Data Exchange (ETDEWEB)

    Huang, C. H. [Fermilab; Lanciotti, E. [CERN; Magini, N. [CERN; Ratnikova, N. [Moscow, ITEP; Sanchez-Hernandez, A. [CINVESTAV, IPN; Serfon, C. [Munich U.; Wildish, T. [Princeton U.; Zhang, X. [Beijing, Inst. High Energy Phys.

    2012-01-01

    All major experiments at the Large Hadron Collider (LHC) need to measure real storage usage at the Grid sites. This information is equally important for resource management, planning, and operations. To verify the consistency of central catalogs, experiments are asking sites to provide a full list of the files they have on storage, including size, checksum, and other file attributes. Such storage dumps, provided at regular intervals, give a realistic view of the storage resource usage by the experiments. Regular monitoring of the space usage and data verification serve as additional internal checks of the system integrity and performance. Both the importance and the complexity of these tasks increase with the constant growth of the total data volumes during the active data taking period at the LHC. The use of common solutions helps to reduce the maintenance costs, both at the large Tier1 facilities supporting multiple virtual organizations and at the small sites that often lack manpower. We discuss requirements and solutions to the common tasks of data storage accounting and verification, and present experiment-specific strategies and implementations used within the LHC experiments according to their computing models.
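
    A storage dump of the kind described above is essentially a walk over the namespace that records size and checksum per file. A minimal sketch is given below; the checksum algorithm and output format vary between experiments and are illustrative here:

        import hashlib, json, os, sys

        def storage_dump(root):
            """Walk `root` and emit one record per file: path, size, checksum."""
            for dirpath, _dirnames, filenames in os.walk(root):
                for name in filenames:
                    path = os.path.join(dirpath, name)
                    digest = hashlib.md5()
                    with open(path, "rb") as f:
                        for block in iter(lambda: f.read(1 << 20), b""):
                            digest.update(block)
                    yield {"path": path,
                           "size": os.path.getsize(path),
                           "checksum": digest.hexdigest()}

        if __name__ == "__main__":
            for record in storage_dump(sys.argv[1]):
                print(json.dumps(record))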

  7. Concept of data storage prototype for Super-C-Tau factory detector

    International Nuclear Information System (INIS)

    Maximov, D.A.

    2017-01-01

    The physics program of experiments at the Super c-τ factory with a peak luminosity of 10^35 cm^-2 s^-1 leads to high requirements for the Data Acquisition and Data Storage systems. Detector data storage is one of the key components of the detector infrastructure, so it must be a reliable, highly available and fault tolerant shared storage. It is mostly oriented (from the end user point of view) towards sequential but mixed read and write operations and is planned to store large data blocks (files). According to the CDR of the Super-C-Tau factory detector, the data storage must have very high performance (up to 1 Tbps in both directions simultaneously) and significant volume (tens to hundreds of petabytes). It was decided to build a series of prototypes with growing capabilities to investigate storage and neighboring technologies. The first data storage prototype is aimed at developing and testing the basic components of the detector data storage system, such as storage devices, networks and software. This prototype is designed to be capable of working with a data rate of order 10 Gbps. It is estimated that about 5 modern computers with about 50 disks in total should be enough to achieve the required performance. The prototype will be based on Ceph storage technology. Ceph is a distributed storage system which allows the creation of storage solutions with very flexible design, high availability and scalability.
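
    A rough sizing check of the prototype estimate above (10 Gbps spread over about 5 servers and 50 disks; the figures come from the abstract, the even split is an assumption):

        target_gbps = 10.0
        servers, disks = 5, 50

        per_server_MBps = target_gbps / servers * 1000 / 8    # ~250 MB/s per server
        per_disk_MBps = target_gbps / disks * 1000 / 8        # ~25 MB/s per disk
        print(f"per server: {per_server_MBps:.0f} MB/s, per disk: {per_disk_MBps:.0f} MB/s")
        # Both figures are well within commodity hardware limits, consistent with
        # the estimate that about 5 machines with about 50 disks are sufficient.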

  8. High density data storage principle, technology, and materials

    CERN Document Server

    Zhu, Daoben

    2009-01-01

    The explosive increase in information and the miniaturization of electronic devices demand new recording technologies and materials that combine high density, fast response, long retention time and rewriting capability. As predicted, the current silicon-based computer circuits are reaching their physical limits. Further miniaturization of the electronic components and increase in data storage density are vital for the next generation of IT equipment such as ultra high-speed mobile computing, communication devices and sophisticated sensors. This original book presents a comprehensive introduction to the significant research achievements on high-density data storage from the aspects of recording mechanisms, materials and fabrication technologies, which are promising for overcoming the physical limits of current data storage systems. The book serves as an useful guide for the development of optimized materials, technologies and device structures for future information storage, and will lead readers to the fascin...

  9. Damsel: A Data Model Storage Library for Exascale Science

    Energy Technology Data Exchange (ETDEWEB)

    Choudhary, Alok [Northwestern Univ., Evanston, IL (United States); Liao, Wei-keng [Northwestern Univ., Evanston, IL (United States)

    2014-07-11

    Computational science applications have been described as having one of seven motifs (the “seven dwarfs”), each having a particular pattern of computation and communication. From a storage and I/O perspective, these applications can also be grouped into a number of data model motifs describing the way data is organized and accessed during simulation, analysis, and visualization. Major storage data models developed in the 1990s, such as Network Common Data Format (netCDF) and Hierarchical Data Format (HDF) projects, created support for more complex data models. Development of both netCDF and HDF5 was influenced by multi-dimensional dataset storage requirements, but their access models and formats were designed with sequential storage in mind (e.g., a POSIX I/O model). Although these and other high-level I/O libraries have had a beneficial impact on large parallel applications, they do not always attain a high percentage of peak I/O performance due to fundamental design limitations, and they do not address the full range of current and future computational science data models. The goal of this project is to enable exascale computational science applications to interact conveniently and efficiently with storage through abstractions that match their data models. The project consists of three major activities: (1) identifying major data model motifs in computational science applications and developing representative benchmarks; (2) developing a data model storage library, called Damsel, that supports these motifs, provides efficient storage data layouts, incorporates optimizations to enable exascale operation, and is tolerant to failures; and (3) productizing Damsel and working with computational scientists to encourage adoption of this library by the scientific community. The product of this project, Damsel library, is openly available for download from http://cucis.ece.northwestern.edu/projects/DAMSEL. Several case studies and application programming interface

  10. Two-Level Verification of Data Integrity for Data Storage in Cloud Computing

    Science.gov (United States)

    Xu, Guangwei; Chen, Chunlin; Wang, Hongya; Zang, Zhuping; Pang, Mugen; Jiang, Ping

    Data storage in cloud computing can save capital expenditure and relieve the burden of storage management for users. As loss or corruption of stored files may happen, many researchers focus on the verification of data integrity. However, massive numbers of users often bring large numbers of verification tasks for the auditor. Moreover, users also need to pay an extra fee for these verification tasks beyond the storage fee. Therefore, we propose a two-level verification of data integrity to alleviate these problems. The key idea is to routinely verify the data integrity by users and arbitrate the challenge between the user and cloud provider by the auditor according to the MACs and ϕ values. The extensive performance simulations show that the proposed scheme obviously decreases the auditor's verification tasks and the ratio of wrong arbitration.
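
    A minimal sketch of the user-side routine verification is given below, with HMAC-SHA-256 standing in for the scheme's MAC construction; the ϕ values and the auditor's arbitration step are not modelled:

        import hashlib, hmac, os

        def make_tags(key, blocks):
            """Compute a per-block MAC tag before outsourcing the blocks."""
            return [hmac.new(key, i.to_bytes(8, "big") + b, hashlib.sha256).digest()
                    for i, b in enumerate(blocks)]

        def verify(key, blocks, tags):
            """Routine user-side check: recompute the MACs and compare."""
            return all(hmac.compare_digest(
                           t, hmac.new(key, i.to_bytes(8, "big") + b, hashlib.sha256).digest())
                       for i, (b, t) in enumerate(zip(blocks, tags)))

        key = os.urandom(32)
        blocks = [os.urandom(4096) for _ in range(4)]
        tags = make_tags(key, blocks)                 # kept by the user
        assert verify(key, blocks, tags)              # data intact
        blocks[2] = b"corrupted" + blocks[2][9:]      # simulated cloud-side corruption
        assert not verify(key, blocks, tags)          # a failure would be escalated to the auditor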

  11. Damsel: A Data Model Storage Library for Exascale Science

    Energy Technology Data Exchange (ETDEWEB)

    Koziol, Quincey [The HDF Group, Champaign, IL (United States)

    2014-11-26

    The goal of this project is to enable exascale computational science applications to interact conveniently and efficiently with storage through abstractions that match their data models. We will accomplish this through three major activities: (1) identifying major data model motifs in computational science applications and developing representative benchmarks; (2) developing a data model storage library, called Damsel, that supports these motifs, provides efficient storage data layouts, incorporates optimizations to enable exascale operation, and is tolerant to failures; and (3) productizing Damsel and working with computational scientists to encourage adoption of this library by the scientific community.

  12. Damsel - A Data Model Storage Library for Exascale Science

    Energy Technology Data Exchange (ETDEWEB)

    Samatova, Nagiza F

    2014-07-18

    The goal of this project is to enable exascale computational science applications to interact conveniently and efficiently with storage through abstractions that match their data models. We will accomplish this through three major activities: (1) identifying major data model motifs in computational science applications and developing representative benchmarks; (2) developing a data model storage library, called Damsel, that supports these motifs, provides efficient storage data layouts, incorporates optimizations to enable exascale operation, and is tolerant to failures; and (3) productizing Damsel and working with computational scientists to encourage adoption of this library by the scientific community.

  13. Data storage accounting and verification in LHC experiments

    CERN Document Server

    Ratnikova, Natalia

    2012-01-01

    All major experiments at Large Hadron Collider (LHC) need to measure real storage usage at the Grid sites. This information is equally important for the resource management, planning, and operations. To verify consistency of the central catalogs, experiments are asking sites to provide full list of files they have on storage, including size, checksum, and other file attributes. Such storage dumps provided at regular intervals give a realistic view of the storage resource usage by the experiments. Regular monitoring of the space usage and data verification serve as additional internal checks of the system integrity and performance. Both the importance and the complexity of these tasks increase with the constant growth of the total data volumes during the active data taking period at the LHC. Developed common solutions help to reduce the maintenance costs both at the large Tier-1 facilities supporting multiple virtual organizations, and at the small sites that often lack manpower. We discuss requirements...

  14. Enabling data-intensive science with Tactical Storage Systems

    CERN Multimedia

    CERN. Geneva; Marquina, Miguel Angel

    2006-01-01

    Large scale scientific computing requires the ability to share and consume data and storage in complex ways across multiple systems. However, conventional systems constrain users to the fixed abstractions selected by the local system administrator. The result is that users must either move data manually over the wide area or simply be satisfied with the resources of a single cluster. To remedy this situation, we introduce the concept of a tactical storage system (TSS) that allows users to create, reconfigure, and destroy distributed storage systems without special privileges or complex configuration. We have deployed a prototype TSS of 200 disks and 8 TB of storage at the University of Notre Dame and applied it to several problems in astrophysics, high energy physics, and bioinformatics. This talk will focus on novel system structures that support data-intensive science. About the speaker: Douglas Thain is an Assistant Professor of Computer Science and Engineering at the University of Notre Dame. He received ...

  15. Cost-effective data storage/archival subsystem for functional PACS

    Science.gov (United States)

    Chen, Y. P.; Kim, Yongmin

    1993-09-01

    Not the least of the requirements of a workable PACS is the ability to store and archive vast amounts of information. A medium-size hospital will generate between 1 and 2 TBytes of data annually on a fully functional PACS. A high-speed image transmission network coupled with a comparably high-speed central data storage unit can make local memory and magnetic disks in the PACS workstations less critical and, in an extreme case, unnecessary. Under these circumstances, the capacity and performance of the central data storage subsystem and database are critical in determining the response time at the workstations, thus significantly affecting clinical acceptability. The central data storage subsystem not only needs to provide sufficient capacity to store about ten days' worth of images (five days' worth of new studies and, on average, about one comparison study for each new study), but also must supply images to the requesting workstation in a timely fashion. The database must provide fast retrieval responses upon users' requests for images. This paper analyzes the advantages and disadvantages of multiple parallel transfer disks versus RAID disks for the short-term central data storage subsystem, as well as an optical disk jukebox versus a digital recorder tape subsystem for the long-term archive. Furthermore, an example of a high-performance, cost-effective storage subsystem which integrates both RAID disks and a high-speed digital tape subsystem as a PACS data storage/archival unit is presented.

  16. SERODS: a new medium for high-density optical data storage

    Science.gov (United States)

    Vo-Dinh, Tuan; Stokes, David L.

    1998-10-01

    A new optical data storage technology based on the surface-enhanced Raman scattering (SERS) effect has been developed for high-density optical memory and three-dimensional data storage. With the surface-enhanced Raman optical data storage (SERODS) technology, the molecular interactions between the optical layer molecules and the nanostructured metal substrate are modified by the writing laser, changing their SERS properties to encode information as bits. Since the SERS properties are extremely sensitive to molecular nano-environments, very small 'spectrochemical holes' approaching the diffraction limit can be produced in the writing process. The SERODS device uses a reading laser to induce the SERS emission of molecules on the disk and a photometric detector tuned to the frequency of the Raman spectrum to retrieve the stored information. The results illustrate that SERODS is capable of three-dimensional data storage and has the potential to achieve higher storage density than currently available optical data storage systems.

  17. ID based cryptography for secure cloud data storage

    OpenAIRE

    Kaaniche, Nesrine; Boudguiga, Aymen; Laurent, Maryline

    2013-01-01

    This paper addresses the security issues of storing sensitive data in a cloud storage service and the need for users to trust the commercial cloud providers. It proposes a cryptographic scheme for cloud storage, based on an original usage of ID-Based Cryptography. Our solution has several advantages. First, it provides secrecy for encrypted data which are stored in public servers. Second, it offers controlled data access and sharing among users, so that unauthorized us...

  18. The design of data storage system based on Lustre for EAST

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Feng, E-mail: wangfeng@ipp.ac.cn [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, Anhui (China); Chen, Ying; Li, Shi [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, Anhui (China); Yang, Fei [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, Anhui (China); Department of Computer Science, Anhui Medical University, Hefei, Anhui (China); Xiao, Bingjia [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, Anhui (China); School of Nuclear Science and Technology, University of Science and Technology of China, Hefei, Anhui (China)

    2016-11-15

    Highlights: • A high performance data storage system based on Lustre and an InfiniBand network has been designed and implemented on the EAST tokamak. • The acquired data are stored into the MDSplus database continuously on the Lustre storage system during the discharge. • The high performance computing clusters are interconnected with the data acquisition and storage system by Lustre and the InfiniBand network. - Abstract: Quasi-steady-state operation is one of the main purposes of the EAST tokamak, and discharge pulses of more than 400 s have been achieved in past campaigns. The amount of acquired data increases continuously with the discharge length. At the same time, to meet the requirements of the upgrade and improvement of the diagnostic systems, more and more data acquisition channels have come into service. Some new diagnostic systems require high sampling rate data acquisition of more than 10 MSPS. In the last campaign, in 2014, the data stream was about 2000 MB/s and the total data amount was more than 100 TB. How to store this huge volume of data continuously has become a big problem. A new data storage system based on Lustre has been designed to solve the problem. All the storage nodes and servers are connected to an InfiniBand FDR 56 Gbps network. The maximum parallel throughput of the total storage system is about 10 GB/s. It is easy to expand the storage system by adding I/O nodes when more capacity and performance are required in the future. The new data storage system will be applied in the next campaign of EAST. The system details are given in the paper.
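
    A quick check of the quoted rates (2000 MB/s acquisition stream, 400 s pulses, 10 GB/s Lustre throughput; all figures from the abstract):

        stream_MBps = 2000
        pulse_s = 400

        per_pulse_TB = stream_MBps * pulse_s / 1e6
        pulses_per_100TB = 100 / per_pulse_TB
        print(f"data per long pulse : {per_pulse_TB:.1f} TB")     # ~0.8 TB
        print(f"pulses per 100 TB   : {pulses_per_100TB:.0f}")    # ~125 pulses
        # The ~10 GB/s parallel Lustre throughput leaves roughly a factor of five
        # of headroom over the ~2 GB/s acquisition stream.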

  19. The design of data storage system based on Lustre for EAST

    International Nuclear Information System (INIS)

    Wang, Feng; Chen, Ying; Li, Shi; Yang, Fei; Xiao, Bingjia

    2016-01-01

    Highlights: • A high performance data storage system based on Lustre and an InfiniBand network has been designed and implemented on the EAST tokamak. • The acquired data are stored into the MDSplus database continuously on the Lustre storage system during the discharge. • The high performance computing clusters are interconnected with the data acquisition and storage system by Lustre and the InfiniBand network. - Abstract: Quasi-steady-state operation is one of the main purposes of the EAST tokamak, and discharge pulses of more than 400 s have been achieved in past campaigns. The amount of acquired data increases continuously with the discharge length. At the same time, to meet the requirements of the upgrade and improvement of the diagnostic systems, more and more data acquisition channels have come into service. Some new diagnostic systems require high sampling rate data acquisition of more than 10 MSPS. In the last campaign, in 2014, the data stream was about 2000 MB/s and the total data amount was more than 100 TB. How to store this huge volume of data continuously has become a big problem. A new data storage system based on Lustre has been designed to solve the problem. All the storage nodes and servers are connected to an InfiniBand FDR 56 Gbps network. The maximum parallel throughput of the total storage system is about 10 GB/s. It is easy to expand the storage system by adding I/O nodes when more capacity and performance are required in the future. The new data storage system will be applied in the next campaign of EAST. The system details are given in the paper.

  20. Security and efficiency data sharing scheme for cloud storage

    International Nuclear Information System (INIS)

    Han, Ke; Li, Qingbo; Deng, Zhongliang

    2016-01-01

    With the adoption and diffusion of the data sharing paradigm in cloud storage, there have been increasing demands and concerns for shared data security. Ciphertext-Policy Attribute-Based Encryption (CP-ABE) is becoming a promising cryptographic solution to the security problem of shared data in cloud storage. However, due to key escrow, backward security and inefficiency problems, existing CP-ABE schemes cannot be directly applied to cloud storage systems. In this paper, an effective and secure access control scheme for shared data is proposed to solve those problems. The proposed scheme refines the security of existing CP-ABE based schemes. Specifically, the key escrow and collusion problems are addressed by dividing the key generation center into several distributed semi-trusted parts. Moreover, a secrecy revocation algorithm is proposed to address not only the backward secrecy but also the efficiency problems in existing CP-ABE based schemes. Furthermore, security and performance analyses indicate that the proposed scheme is both secure and efficient for cloud storage.

  1. Could Blobs Fuel Storage-Based Convergence between HPC and Big Data?

    Energy Technology Data Exchange (ETDEWEB)

    Matri, Pierre; Alforov, Yevhen; Brandon, Alvaro; Kuhn, Michael; Carns, Philip; Ludwig, Thomas

    2017-09-05

    The increasingly large data sets processed on HPC platforms raise major challenges for the underlying storage layer. A promising alternative to POSIX-IO-compliant file systems are simpler blobs (binary large objects), or object storage systems. Such systems offer lower overhead and better performance at the cost of largely unused features such as file hierarchies or permissions. Similarly, blobs are increasingly considered for replacing distributed file systems for big data analytics or as a base for storage abstractions such as key-value stores or time-series databases. This growing interest in such object storage on HPC and big data platforms raises the question: Are blobs the right level of abstraction to enable storage-based convergence between HPC and Big Data? In this paper we study the impact of blob-based storage for real-world applications on HPC and cloud environments. The results show that blob-based storage convergence is possible, leading to a significant performance improvement on both platforms.
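
    The blob abstraction described above, a flat namespace of named binary objects without hierarchies or permissions, can be sketched with a toy in-memory store on top of which a key-value layer is trivially built; the class names and interface are illustrative, not those of any particular object store:

        class BlobStore:
            """Flat namespace of binary objects: no directories, no permissions."""
            def __init__(self):
                self._objects = {}

            def put(self, name: str, data: bytes) -> None:
                self._objects[name] = bytes(data)

            def get(self, name: str) -> bytes:
                return self._objects[name]

            def delete(self, name: str) -> None:
                del self._objects[name]

        class KVStore:
            """A key-value store is just a thin encoding layer over blobs."""
            def __init__(self, blobs: BlobStore):
                self._blobs = blobs

            def set(self, key: str, value: str) -> None:
                self._blobs.put("kv/" + key, value.encode())

            def get(self, key: str) -> str:
                return self._blobs.get("kv/" + key).decode()

        kv = KVStore(BlobStore())
        kv.set("run/1234/status", "done")
        print(kv.get("run/1234/status"))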

  2. PETASCALE DATA STORAGE INSTITUTE (PDSI) Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Gibson, Garth [Carnegie Mellon University

    2012-11-26

    Petascale computing infrastructures for scientific discovery make petascale demands on information storage capacity, performance, concurrency, reliability, availability, and manageability. The Petascale Data Storage Institute focuses on the data storage problems found in petascale scientific computing environments, with special attention to community issues such as interoperability, community buy-in, and shared tools. The Petascale Data Storage Institute is a collaboration between researchers at Carnegie Mellon University, National Energy Research Scientific Computing Center, Pacific Northwest National Laboratory, Oak Ridge National Laboratory, Sandia National Laboratory, Los Alamos National Laboratory, University of Michigan, and the University of California at Santa Cruz. Because the Institute focuses on low level files systems and storage systems, its role in improving SciDAC systems was one of supporting application middleware such as data management and system-level performance tuning. In retrospect, the Petascale Data Storage Institute’s most innovative and impactful contribution is the Parallel Log-structured File System (PLFS). Published in SC09, PLFS is middleware that operates in MPI-IO or embedded in FUSE for non-MPI applications. Its function is to decouple concurrently written files into a per-process log file, whose impact (the contents of the single file that the parallel application was concurrently writing) is determined on later reading, rather than during its writing. PLFS is transparent to the parallel application, offering a POSIX or MPI-IO interface, and it shows an order of magnitude speedup to the Chombo benchmark and two orders of magnitude to the FLASH benchmark. Moreover, LANL production applications see speedups of 5X to 28X, so PLFS has been put into production at LANL. Originally conceived and prototyped in a PDSI collaboration between LANL and CMU, it has grown to engage many other PDSI institutes, international partners like AWE
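
    The decoupling idea behind PLFS can be illustrated with a toy sketch: each writer appends to its own log and records (logical offset, length, log position) in an index, and reads are resolved through the merged index. This illustrates the concept only and is not PLFS's actual on-disk format:

        class ToyPLFS:
            """Toy log-structured decoupling of one logically shared file."""
            def __init__(self):
                self.logs = {}     # writer id -> bytearray (per-process data log)
                self.index = []    # (logical_offset, length, writer_id, log_offset)

            def write(self, writer, logical_offset, data):
                log = self.logs.setdefault(writer, bytearray())
                self.index.append((logical_offset, len(data), writer, len(log)))
                log.extend(data)   # always an append: no seeks, no lock contention

            def read(self, logical_offset, length):
                out = bytearray(length)
                for off, ln, writer, log_off in self.index:   # later entries win
                    lo, hi = max(off, logical_offset), min(off + ln, logical_offset + length)
                    if lo < hi:
                        src = self.logs[writer][log_off + (lo - off):log_off + (hi - off)]
                        out[lo - logical_offset:hi - logical_offset] = src
                return bytes(out)

        f = ToyPLFS()
        f.write("rank0", 0, b"AAAA")   # concurrent writers touch disjoint regions
        f.write("rank1", 4, b"BBBB")
        print(f.read(0, 8))            # b'AAAABBBB'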

  3. Distributed Storage Algorithm for Geospatial Image Data Based on Data Access Patterns.

    Directory of Open Access Journals (Sweden)

    Shaoming Pan

    Declustering techniques are widely used in distributed environments to reduce query response time through parallel I/O by splitting large files into several small blocks and then distributing those blocks among multiple storage nodes. Unfortunately, however, many small geospatial image data files cannot be further split for distributed storage. In this paper, we propose a complete theoretical system for the distributed storage of small geospatial image data files based on mining the access patterns of geospatial image data using their historical access log information. First, an algorithm is developed to construct an access correlation matrix based on the analysis of the log information, which reveals the patterns of access to the geospatial image data. Then, a practical heuristic algorithm is developed to determine a reasonable solution based on the access correlation matrix. Finally, a number of comparative experiments are presented, demonstrating that our algorithm displays a higher total parallel access probability than those of other algorithms by approximately 10-15% and that the performance can be further improved by more than 20% by simultaneously applying a copy storage strategy. These experiments show that the algorithm can be applied in distributed environments to help realize parallel I/O and thereby improve system performance.
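
    A minimal sketch of the first step described above, building an access correlation (co-access) matrix from a historical access log, is given below; the log format and the grouping of requests into sessions are assumptions for illustration, not the paper's exact definitions:

        from collections import defaultdict
        from itertools import combinations

        # Each log entry: (session id, image block id).  Blocks requested within
        # the same session are treated as correlated.
        access_log = [
            ("s1", "tile_A"), ("s1", "tile_B"),
            ("s2", "tile_A"), ("s2", "tile_B"), ("s2", "tile_C"),
            ("s3", "tile_C"),
        ]

        sessions = defaultdict(set)
        for session, tile in access_log:
            sessions[session].add(tile)

        correlation = defaultdict(int)   # (tile_i, tile_j) -> co-access count
        for tiles in sessions.values():
            for a, b in combinations(sorted(tiles), 2):
                correlation[(a, b)] += 1

        for pair, count in sorted(correlation.items(), key=lambda kv: -kv[1]):
            print(pair, count)
        # A placement heuristic would then put highly correlated blocks on
        # different storage nodes so they can be fetched in parallel.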

  4. Distributed Storage Algorithm for Geospatial Image Data Based on Data Access Patterns.

    Science.gov (United States)

    Pan, Shaoming; Li, Yongkai; Xu, Zhengquan; Chong, Yanwen

    2015-01-01

    Declustering techniques are widely used in distributed environments to reduce query response time through parallel I/O by splitting large files into several small blocks and then distributing those blocks among multiple storage nodes. Unfortunately, however, many small geospatial image data files cannot be further split for distributed storage. In this paper, we propose a complete theoretical system for the distributed storage of small geospatial image data files based on mining the access patterns of geospatial image data using their historical access log information. First, an algorithm is developed to construct an access correlation matrix based on the analysis of the log information, which reveals the patterns of access to the geospatial image data. Then, a practical heuristic algorithm is developed to determine a reasonable solution based on the access correlation matrix. Finally, a number of comparative experiments are presented, demonstrating that our algorithm displays a higher total parallel access probability than those of other algorithms by approximately 10-15% and that the performance can be further improved by more than 20% by simultaneously applying a copy storage strategy. These experiments show that the algorithm can be applied in distributed environments to help realize parallel I/O and thereby improve system performance.

  5. Technology Assessment of High Capacity Data Storage Systems: Can We Avoid a Data Survivability Crisis?

    Science.gov (United States)

    Halem, M.; Shaffer, F.; Palm, N.; Salmon, E.; Raghavan, S.; Kempster, L.

    1998-01-01

    This technology assessment of long-term high capacity data storage systems identifies an emerging crisis of severe proportions related to preserving important historical data in science, healthcare, manufacturing, finance and other fields. For the last 50 years, the information revolution, which has engulfed all major institutions of modern society, centered itself on data: their collection, storage, retrieval, transmission, analysis and presentation. The transformation of long-term historical data records into information concepts, according to Drucker, is the next stage in this revolution towards building the new information-based scientific and business foundations. For this to occur, data survivability, reliability and evolvability of long-term storage media and systems pose formidable technological challenges. Unlike the Y2K problem, where the clock is ticking and a crisis is set to go off at a specific time, large capacity data storage repositories face a crisis similar to the social security system in that the seriousness of the problem emerges after a decade or two. The essence of the storage crisis is as follows: since it could take a decade to migrate a petabyte of data to new media for preservation, and the life expectancy of the storage media itself is only a decade, then it may not be possible to complete the transfer before an irrecoverable data loss occurs. Over the last two decades, a number of anecdotal crises have occurred where vital scientific and business data were lost or would have been lost if not for major expenditures of resources and funds to save this data, much like what is happening today to solve the Y2K problem. A prime example was the joint NASA/NSF/NOAA effort to rescue eight years' worth of TOVS/AVHRR data from an obsolete system, without which the valuable 20-year-long satellite record of global warming would not exist. Current storage systems solutions to long-term data survivability rest on scalable architectures

  6. A Privacy-Preserving Outsourcing Data Storage Scheme with Fragile Digital Watermarking-Based Data Auditing

    Directory of Open Access Journals (Sweden)

    Xinyue Cao

    2016-01-01

    Full Text Available Cloud storage has been recognized as the popular solution to the rising storage costs that IT enterprises face on behalf of their users. However, outsourcing data to the cloud service providers (CSPs) may leak some sensitive privacy information, as the data is out of the user’s control. So how to ensure the integrity and privacy of outsourced data has become a big challenge. Encryption and data auditing provide a solution toward the challenge. In this paper, we propose a privacy-preserving and auditing-supporting outsourcing data storage scheme by using encryption and digital watermarking. A logistic map-based chaotic cryptography algorithm is used to preserve the privacy of outsourced data; it has a fast operation speed and a good encryption effect. A local histogram shifting digital watermark algorithm is used to protect data integrity; it has a high payload and allows the original image to be restored losslessly if the data is verified to be intact. Experiments show that our scheme is secure and feasible.
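
    The encryption step named in the abstract, logistic-map chaotic cryptography, is easy to sketch. The fragment below is a generic illustration of the idea (the initial value, control parameter and byte quantization are assumptions for illustration, not the paper's actual scheme, which targets images).

      def logistic_keystream(length, x0=0.654321, r=3.99):
          # Iterate the logistic map x_{n+1} = r * x_n * (1 - x_n) in its
          # chaotic regime (r close to 4) and quantize each state to a byte.
          x, out = x0, bytearray()
          for _ in range(length):
              x = r * x * (1.0 - x)
              out.append(int(x * 256) % 256)
          return bytes(out)

      def chaotic_xor(data, x0=0.654321, r=3.99):
          # XOR with the keystream; applying it twice with the same (x0, r)
          # key restores the original data.
          ks = logistic_keystream(len(data), x0, r)
          return bytes(d ^ k for d, k in zip(data, ks))

      ciphertext = chaotic_xor(b"outsourced record")
      assert chaotic_xor(ciphertext) == b"outsourced record"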

  7. Volume Holographic Storage of Digital Data Implemented in Photorefractive Media

    Science.gov (United States)

    Heanue, John Frederick

    A holographic data storage system is fundamentally different from conventional storage devices. Information is recorded in a volume, rather than on a two-dimensional surface. Data is transferred in parallel, on a page-by -page basis, rather than serially. These properties, combined with a limited need for mechanical motion, lead to the potential for a storage system with high capacity, fast transfer rate, and short access time. The majority of previous volume holographic storage experiments have involved direct storage and retrieval of pictorial information. Success in the development of a practical holographic storage device requires an understanding of the performance capabilities of a digital system. This thesis presents a number of contributions toward this goal. A description of light diffraction from volume gratings is given. The results are used as the basis for a theoretical and numerical analysis of interpage crosstalk in both angular and wavelength multiplexed holographic storage. An analysis of photorefractive grating formation in photovoltaic media such as lithium niobate is presented along with steady-state expressions for the space-charge field in thermal fixing. Thermal fixing by room temperature recording followed by ion compensation at elevated temperatures is compared to simultaneous recording and compensation at high temperature. In particular, the tradeoff between diffraction efficiency and incomplete Bragg matching is evaluated. An experimental investigation of orthogonal phase code multiplexing is described. Two unique capabilities, the ability to perform arithmetic operations on stored data pages optically, rather than electronically, and encrypted data storage, are demonstrated. A comparison of digital signal representations, or channel codes, is carried out. The codes are compared in terms of bit-error rate performance at constant capacity. A well-known one-dimensional digital detection technique, maximum likelihood sequence estimation, is

  8. Toward Transparent Data Management in Multi-layer Storage Hierarchy for HPC Systems

    Energy Technology Data Exchange (ETDEWEB)

    Wadhwa, Bharti [Virginia Polytechnic Inst. and State Univ. (Virginia Tech), Blacksburg, VA (United States). Dept. of Computer Science; Byna, Suren [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Butt, Ali R. [Virginia Polytechnic Inst. and State Univ. (Virginia Tech), Blacksburg, VA (United States). Dept. of Computer Science

    2018-04-17

    Upcoming exascale high performance computing (HPC) systems are expected to comprise a multi-tier storage hierarchy, and thus will necessitate innovative storage and I/O mechanisms. Traditional disk and block-based interfaces and file systems face severe challenges in utilizing the capabilities of storage hierarchies due to the lack of hierarchy support and semantic interfaces. Object-based and semantically rich data abstractions for scientific data management on large scale systems offer a sustainable solution to these challenges. Such data abstractions can also simplify users' involvement in data movement. Here, we take the first steps of realizing such an object abstraction and explore storage mechanisms for these objects to enhance I/O performance, especially for scientific applications. We explore how an object-based interface can facilitate next generation scalable computing systems by presenting the mapping of data I/O from two real world HPC scientific use cases: a plasma physics simulation code (VPIC) and a cosmology simulation code (HACC). Our storage model stores data objects in different physical organizations to support data movement across layers of the memory/storage hierarchy. Our implementation scales well to 16K parallel processes, and compared to the state of the art, such as MPI-IO and HDF5, our object-based data abstractions and data placement strategy in a multi-level storage hierarchy achieve up to 7X I/O performance improvement for scientific data.

  9. Scaling up DNA data storage and random access retrieval

    OpenAIRE

    Gopalan, Parikshit; Ceze, Luis; Nguyen, Bichlien; Takahashi, Christopher; Newman, Sharon; Parker, Hsing-Yeh; Rashtchian, Cyrus; Seelig, Georg; Stewart, Kendall; Gupta, Gagan; Carlson, Robert; Mulligan, John; Carmean, Douglas; Yekhanin, Sergey; Makarychev, Konstantin

    2017-01-01

    Current storage technologies can no longer keep pace with exponentially growing amounts of data. Synthetic DNA offers an attractive alternative due to its potential information density of ~10¹⁸ B/mm³, 10⁷ times denser than magnetic tape, and potential durability of thousands of years. Recent advances in DNA data storage have highlighted technical challenges, in particular, coding and random access, but have stored only modest amounts of data in synthetic DNA. This paper demonstrates an end-to...

  10. Failure Analysis of Storage Data Magnetic Systems

    Directory of Open Access Journals (Sweden)

    Ortiz–Prado A.

    2010-10-01

    Full Text Available This paper presents conclusions about the corrosion mechanics of magnetic data storage systems (hard disks). It is based on the inspection of 198 units that were in service in nine different climatic regions characteristic of Mexico. The results allow trends to be defined in the failure modes and the factors that affect them. In turn, this study has analyzed the causes that led to mechanical failure and those due to deterioration by atmospheric corrosion. The results obtained from the field sampling demonstrate that hard disk failure is fundamentally mechanical. Deterioration by environmental effects was found in read-write heads, integrated circuits, printed circuit boards and in some of the electronic components of the controller card of the device, but not on magnetic storage surfaces. Therefore, corrosion on the surface of the disk can be discarded as the main kind of failure due to environmental deterioration. To avoid problems in a magnetic data storage system, it is necessary to ensure that the system is sealed.

  11. Peptide oligomers for holographic data storage

    DEFF Research Database (Denmark)

    Berg, Rolf Henrik; Hvilsted, Søren; Ramanujam, P.S.

    1996-01-01

    Several classes of organic materials (such as photoanisotropic liquid-crystalline polymers(1-4) and photorefractive polymers(5-7)) are being investigated for the development of media for optical data storage. Here we describe a new family of organic materials: peptide oligomers containing azobenzene

  12. StorageTek T10000 Data Cartridge

    CERN Multimedia

    This data cartridge works on several StorageTek systems; the goal is to provide cartridge compatibility across systems. It has been designed as a space-saving, ultra-high-capacity tape, allowing it to fulfill high-volume backup, archiving, and disaster recovery needs.

  13. Antenna data storage concept for phased array radio astronomical instruments

    Science.gov (United States)

    Gunst, André W.; Kruithof, Gert H.

    2018-04-01

    Low frequency Radio Astronomy instruments like LOFAR and SKA-LOW use arrays of dipole antennas for the collection of radio signals from the sky. Due to the large number of antennas involved, the total data rate produced by all the antennas is enormous. Storage of the antenna data is both economically and technologically infeasible using the current state of the art storage technology. Therefore, real-time processing of the antenna voltage data using beam forming and correlation is applied to achieve a data reduction throughout the signal chain. However, most science could equally well be performed using an archive of raw antenna voltage data coming straight from the A/D converters instead of capturing and processing the antenna data in real time over and over again. Trends on storage and computing technology make such an approach feasible on a time scale of approximately 10 years. The benefits of such a system approach are more science output and a higher flexibility with respect to the science operations. In this paper we present a radically new system concept for a radio telescope based on storage of raw antenna data. LOFAR is used as an example for such a future instrument.

  14. Enhanced Obfuscation Technique for Data Confidentiality in Public Cloud Storage

    OpenAIRE

    Oli S. Arul; Arockiam L.

    2016-01-01

    With the advent of cloud computing, data storage has become a boon in information technology. At the same time, data storage in remote places has become an important issue. Many techniques are available to ensure the protection of data confidentiality, but these techniques do not completely serve the purpose of protecting data. Obfuscation techniques come to the rescue for protecting data from malicious attacks. This paper proposes an obfuscation technique to encrypt the desired data type on the clou...

  15. High Throughput WAN Data Transfer with Hadoop-based Storage

    Science.gov (United States)

    Amin, A.; Bockelman, B.; Letts, J.; Levshina, T.; Martin, T.; Pi, H.; Sfiligoi, I.; Thomas, M.; Wüerthwein, F.

    2011-12-01

    Hadoop distributed file system (HDFS) is becoming more popular in recent years as a key building block of integrated grid storage solution in the field of scientific computing. Wide Area Network (WAN) data transfer is one of the important data operations for large high energy physics experiments to manage, share and process datasets of PetaBytes scale in a highly distributed grid computing environment. In this paper, we present the experience of high throughput WAN data transfer with HDFS-based Storage Element. Two protocols, GridFTP and fast data transfer (FDT), are used to characterize the network performance of WAN data transfer.

  16. High Throughput WAN Data Transfer with Hadoop-based Storage

    International Nuclear Information System (INIS)

    Amin, A; Thomas, M; Bockelman, B; Letts, J; Martin, T; Pi, H; Sfiligoi, I; Wüerthwein, F; Levshina, T

    2011-01-01

    Hadoop distributed file system (HDFS) is becoming more popular in recent years as a key building block of integrated grid storage solution in the field of scientific computing. Wide Area Network (WAN) data transfer is one of the important data operations for large high energy physics experiments to manage, share and process datasets of PetaBytes scale in a highly distributed grid computing environment. In this paper, we present the experience of high throughput WAN data transfer with HDFS-based Storage Element. Two protocols, GridFTP and fast data transfer (FDT), are used to characterize the network performance of WAN data transfer.

  17. Cloud Data Storage Federation for Scientific Applications

    NARCIS (Netherlands)

    Koulouzis, S.; Vasyunin, D.; Cushing, R.; Belloum, A.; Bubak, M.; an Mey, D.; Alexander, M.; Bientinesi, P.; Cannataro, M.; Clauss, C.; Costan, A.; Kecskemeti, G.; Morin, C.; Ricci, L.; Sahuquillo, J.; Schulz, M.; Scarano, V.; Scott, S.L.; Weidendorfer, J.

    2014-01-01

    Nowadays, data-intensive scientific research needs storage capabilities that enable efficient data sharing. This is of great importance for many scientific domains such as the Virtual Physiological Human. In this paper, we introduce a solution that federates a variety of systems ranging from file

  18. Design of Dimensional Model for Clinical Data Storage and Analysis

    Directory of Open Access Journals (Sweden)

    Dipankar SENGUPTA

    2013-06-01

    Full Text Available Current research in the field of Life and Medical Sciences is generating large volumes of data on a daily basis. It has thus become a necessity to find solutions for the efficient storage of this data, and to correlate and extract knowledge from it. Clinical data generated in hospitals, clinics and diagnostic centers falls under a similar paradigm. Patient records in various hospitals are increasing at an exponential rate, thus adding to the problem of data management and storage. A major storage problem is the varied dimensionality of the data, ranging from images to numerical form. Therefore, there is a need to develop an efficient data model that can handle this multi-dimensionality issue and store the data with its historical aspect. For this clinical informatics problem, we propose a clinical dimensional model design that can be used for the development of a clinical data mart. The model has been designed with the temporal storage of patients' data in mind, covering all possible clinical parameters, including both textual and image-based data. The availability of these data for each patient can then be used to apply data mining techniques to find correlations among all the parameters at the level of the individual and of the population.
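
    A dimensional design of the kind described usually places a fact table of clinical observations at the center, keyed to patient, parameter and time dimensions. The sketch below illustrates that shape only; the table and column names are assumptions for illustration and not the authors' actual schema.

      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.executescript("""
      CREATE TABLE dim_patient   (patient_id INTEGER PRIMARY KEY, name TEXT, birth_date TEXT);
      CREATE TABLE dim_parameter (parameter_id INTEGER PRIMARY KEY, name TEXT, unit TEXT,
                                  modality TEXT);        -- 'numeric', 'text' or 'image'
      CREATE TABLE dim_time      (time_id INTEGER PRIMARY KEY, visit_date TEXT);
      CREATE TABLE fact_observation (
          patient_id   INTEGER REFERENCES dim_patient(patient_id),
          parameter_id INTEGER REFERENCES dim_parameter(parameter_id),
          time_id      INTEGER REFERENCES dim_time(time_id),
          value_num    REAL,    -- numerical results
          value_text   TEXT,    -- textual findings
          image_ref    TEXT     -- pointer to an image stored outside the mart
      );
      """)

      # A temporal, population-wide query then reduces to a star join.
      rows = conn.execute("""
          SELECT p.name, t.visit_date, d.name, f.value_num
          FROM fact_observation f
          JOIN dim_patient p   ON p.patient_id   = f.patient_id
          JOIN dim_parameter d ON d.parameter_id = f.parameter_id
          JOIN dim_time t      ON t.time_id      = f.time_id
      """).fetchall()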

  19. Analysis and Research on Spatial Data Storage Model Based on Cloud Computing Platform

    Science.gov (United States)

    Hu, Yong

    2017-12-01

    In this paper, the data processing and storage characteristics of cloud computing are analyzed and studied. On this basis, a cloud computing data storage model based on a BP neural network is proposed. In this model, the server cluster is chosen according to the attributes of the data, yielding a spatial data storage model with a load-balancing function that is feasible and offers practical advantages.

  20. The challenge of a data storage hierarchy

    Science.gov (United States)

    Ruderman, Michael

    1992-01-01

    A discussion of Mesa Archival Systems' data archiving system is presented. This data archiving system is strictly a software system, implemented on a mainframe, that manages the movement of data into permanent file storage. Emphasis is placed on the fact that any kind of client system on the network can be connected through the Unix interface of the data archiving system.

  1. Data Acquisition and Mass Storage

    Science.gov (United States)

    Vande Vyvre, P.

    2004-08-01

    The experiments performed at supercolliders will constitute a new challenge in several disciplines of High Energy Physics and Information Technology. This will definitely be the case for data acquisition and mass storage. The microelectronics, communication, and computing industries are maintaining an exponential increase of the performance of their products. The market of commodity products remains the largest and the most competitive market of technology products. This constitutes a strong incentive to use these commodity products extensively as components to build the data acquisition and computing infrastructures of the future generation of experiments. The present generation of experiments in Europe and in the US already constitutes an important step in this direction. The experience acquired in the design and the construction of the present experiments has to be complemented by a large R&D effort executed with good awareness of industry developments. The future experiments will also be expected to follow major trends of our present world: deliver physics results faster and become more and more visible and accessible. The present evolution of the technologies and the burgeoning of GRID projects indicate that these trends will be made possible. This paper includes a brief overview of the technologies currently used for the different tasks of the experimental data chain: data acquisition, selection, storage, processing, and analysis. The major trends of the computing and networking technologies are then indicated with particular attention paid to their influence on the future experiments. Finally, the vision of future data acquisition and processing systems and their promise for future supercolliders is presented.

  2. Managing high-bandwidth real-time data storage

    Energy Technology Data Exchange (ETDEWEB)

    Bigelow, David D. [Los Alamos National Laboratory; Brandt, Scott A [Los Alamos National Laboratory; Bent, John M [Los Alamos National Laboratory; Chen, Hsing-Bung [Los Alamos National Laboratory

    2009-09-23

    There exist certain systems which generate real-time data at high bandwidth, but do not necessarily require the long-term retention of that data in normal conditions. In some cases, the data may not actually be useful, and in others, there may be too much data to permanently retain in long-term storage whether it is useful or not. However, certain portions of the data may be identified as being vitally important from time to time, and must therefore be retained for further analysis or permanent storage without interrupting the ongoing collection of new data. We have developed a system, Mahanaxar, intended to address this problem. It provides quality of service guarantees for incoming real-time data streams and simultaneous access to already-recorded data on a best-effort basis utilizing any spare bandwidth. It has built in mechanisms for reliability and indexing, can scale upwards to meet increasing bandwidth requirements, and handles both small and large data elements equally well. We will show that a prototype version of this system provides better performance than a flat file (traditional filesystem) based version, particularly with regard to quality of service guarantees and hard real-time requirements.
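
    The behaviour described, continuous high-bandwidth capture with occasional promotion of vital segments to permanent retention, can be caricatured with a bounded buffer and a pin operation. The sketch below is only an analogy for the data-retention policy; it does not model Mahanaxar's block-level quality-of-service guarantees.

      from collections import deque

      class RingCapture:
          # Keep only the most recent records unless a segment is pinned.
          def __init__(self, capacity):
              self.recent = deque(maxlen=capacity)   # old data silently drops off
              self.retained = []                     # vital data kept permanently

          def ingest(self, record):
              self.recent.append(record)

          def pin_recent(self, predicate):
              # Promote records judged vital to permanent retention without
              # interrupting the ongoing ingest of new data.
              self.retained.extend(r for r in self.recent if predicate(r))

      cap = RingCapture(capacity=10_000)
      for i in range(25_000):
          cap.ingest(("sample", i))
      cap.pin_recent(lambda r: r[1] % 5_000 == 0)    # keep only flagged samples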

  3. Technology Assessment of High Capacity Data Storage Systems: Can We Avoid a Data Survivability Crisis

    Science.gov (United States)

    Halem, M.; Shaffer, F.; Palm, N.; Salmon, E.; Raghavan, S.; Kempster, L.

    1998-01-01

    The density of digital storage media in our information-intensive society increases by a factor of four every three years, while the rate at which this data can be migrated to viable long-term storage has been increasing by a factor of only four every nine years. Meanwhile, older data stored on increasingly obsolete media are at considerable risk. When the systems for which the media were designed are no longer serviced by their manufacturers (many of whom are out of business), the data will no longer be accessible. In some cases, older media suffer from a physical breakdown of components - tapes simply lose their magnetic properties after a long time in storage. The scale of the crisis is comparable to that facing the Social Security System. Greater financial and intellectual resources must be devoted to the development and refinement of new storage media and migration technologies in order to preserve as much data as possible.

  4. Multilevel recording of complex amplitude data pages in a holographic data storage system using digital holography.

    Science.gov (United States)

    Nobukawa, Teruyoshi; Nomura, Takanori

    2016-09-05

    A holographic data storage system using digital holography is proposed to record and retrieve multilevel complex amplitude data pages. Digital holographic techniques are capable of modulating and detecting complex amplitude distribution using current electronic devices. These techniques allow the development of a simple, compact, and stable holographic storage system that mainly consists of a single phase-only spatial light modulator and an image sensor. As a proof-of-principle experiment, complex amplitude data pages with binary amplitude and four-level phase are recorded and retrieved. Experimental results show the feasibility of the proposed holographic data storage system.
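
    If the two amplitude levels are both nonzero, binary amplitude combined with four-level phase gives eight complex symbols, i.e. three bits per data-page pixel. The sketch below uses an assumed two-ring constellation to make such an encoding concrete; it illustrates multilevel complex-amplitude modulation in general, not the authors' exact mapping.

      import numpy as np

      AMPLITUDES = (0.5, 1.0)                        # assumed binary amplitude levels
      PHASES = (0, np.pi / 2, np.pi, 3 * np.pi / 2)  # four-level phase

      def encode(bits):
          # Pack groups of 3 bits (1 amplitude bit + 2 phase bits) into
          # complex pixel values of the data page.
          symbols = []
          for i in range(0, len(bits), 3):
              amplitude = AMPLITUDES[bits[i]]
              phase = PHASES[bits[i + 1] * 2 + bits[i + 2]]
              symbols.append(amplitude * np.exp(1j * phase))
          return np.array(symbols)

      def decode(symbols):
          bits = []
          for s in symbols:
              a_bit = int(abs(s) > 0.75)                             # nearest amplitude ring
              p_idx = int(round((np.angle(s) % (2 * np.pi)) / (np.pi / 2))) % 4
              bits += [a_bit, p_idx // 2, p_idx % 2]
          return bits

      data = [1, 0, 1, 0, 1, 1]                      # length must be a multiple of 3
      assert decode(encode(data)) == data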

  5. Developments in data storage materials perspective

    CERN Document Server

    Chong, Chong Tow

    2011-01-01

    "The book covers the recent developments in the field of materials for advancing recording technology by experts worldwide. Chapters that provide sufficient information on the fundamentals will be also included, so that the book can be followed by graduate students or a beginner in the field of magnetic recording. The book also would have a few chapters related to optical data storage. In addition to helping a graduate student to quickly grasp the subject, the book also will serve as a useful reference material for the advanced researcher. The field of materials science related to data storage applications (especially hard disk drives) is rapidly growing. Several innovations take place every year in order to keep the growth trend in the capacity of the hard disk drives. Moreover, magnetic recording is very complicated that it is quite difficult for new engineers and graduate students in the field of materials science or electrical engineering to grasp the subject with a good understanding. There are no compet...

  6. Disk storage management for LHCb based on Data Popularity estimator

    Science.gov (United States)

    Hushchyn, Mikhail; Charpentier, Philippe; Ustyuzhanin, Andrey

    2015-12-01

    This paper presents an algorithm providing recommendations for optimizing the LHCb data storage. The LHCb data storage system is a hybrid system. All datasets are kept as archives on magnetic tapes. The most popular datasets are kept on disks. The algorithm takes the dataset usage history and metadata (size, type, configuration etc.) to generate a recommendation report. This article presents how we use machine learning algorithms to predict future data popularity. Using these predictions it is possible to estimate which datasets should be removed from disk. We use regression algorithms and time series analysis to find the optimal number of replicas for datasets that are kept on disk. Based on the data popularity and the number of replicas optimization, the algorithm minimizes a loss function to find the optimal data distribution. The loss function represents all requirements for data distribution in the data storage system. We demonstrate how our algorithm helps to save disk space and to reduce waiting times for jobs using this data.
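
    A hedged sketch of the kind of pipeline the abstract describes: predict future dataset popularity from past access counts with a regression model and derive the number of disk replicas from the prediction. The feature construction, model choice and replica thresholds below are illustrative assumptions, not LHCb's actual model or policy.

      import numpy as np
      from sklearn.ensemble import GradientBoostingRegressor

      # Toy usage history: one row per dataset, 26 weeks of access counts.
      rng = np.random.default_rng(0)
      history = rng.poisson(lam=rng.uniform(0.1, 20, size=(200, 1)), size=(200, 26))

      X = np.hstack([history[:, :-4],                           # older weeks as features
                     history[:, :-4].mean(axis=1, keepdims=True)])
      y = history[:, -4:].sum(axis=1)                           # "future" popularity target

      model = GradientBoostingRegressor().fit(X, y)
      predicted = model.predict(X)

      # Simple replica policy driven by predicted popularity: datasets predicted
      # to be unused stay on tape only, popular ones get one to three disk replicas.
      replicas = np.clip(np.ceil(predicted / predicted.max() * 3), 1, 3).astype(int)
      replicas[predicted < 1.0] = 0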

  7. Disk storage management for LHCb based on Data Popularity estimator

    International Nuclear Information System (INIS)

    Hushchyn, Mikhail; Charpentier, Philippe; Ustyuzhanin, Andrey

    2015-01-01

    This paper presents an algorithm providing recommendations for optimizing the LHCb data storage. The LHCb data storage system is a hybrid system. All datasets are kept as archives on magnetic tapes. The most popular datasets are kept on disks. The algorithm takes the dataset usage history and metadata (size, type, configuration etc.) to generate a recommendation report. This article presents how we use machine learning algorithms to predict future data popularity. Using these predictions it is possible to estimate which datasets should be removed from disk. We use regression algorithms and time series analysis to find the optimal number of replicas for datasets that are kept on disk. Based on the data popularity and the number of replicas optimization, the algorithm minimizes a loss function to find the optimal data distribution. The loss function represents all requirements for data distribution in the data storage system. We demonstrate how our algorithm helps to save disk space and to reduce waiting times for jobs using this data. (paper)

  8. Holographic memory for high-density data storage and high-speed pattern recognition

    Science.gov (United States)

    Gu, Claire

    2002-09-01

    As computers and the internet become faster and faster, more and more information is transmitted, received, and stored everyday. The demand for high density and fast access time data storage is pushing scientists and engineers to explore all possible approaches including magnetic, mechanical, optical, etc. Optical data storage has already demonstrated its potential in the competition against other storage technologies. CD and DVD are showing their advantages in the computer and entertainment market. What motivated the use of optical waves to store and access information is the same as the motivation for optical communication. Light or an optical wave has an enormous capacity (or bandwidth) to carry information because of its short wavelength and parallel nature. In optical storage, there are two types of mechanism, namely localized and holographic memories. What gives the holographic data storage an advantage over localized bit storage is the natural ability to read the stored information in parallel, therefore, meeting the demand for fast access. Another unique feature that makes the holographic data storage attractive is that it is capable of performing associative recall at an incomparable speed. Therefore, volume holographic memory is particularly suitable for high-density data storage and high-speed pattern recognition. In this paper, we review previous works on volume holographic memories and discuss the challenges for this technology to become a reality.

  9. Shipping and storage cask data for spent nuclear fuel

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, E.R.; Notz, K.J.

    1988-11-01

    This document is a compilation of data on casks used for the storage and/or transport of commercially generated spent fuel in the US based on publicly available information. In using the information contained in the following data sheets, it should be understood that the data have been assembled from published information, which in some instances was not internally consistent. Moreover, it was sometimes necessary to calculate or infer the values of some attributes from available information. Nor was there always a uniform method of reporting the values of some attributes; for example, an outside surface dose of the loaded cask was sometimes reported to be the maximum acceptable by NRC, while in other cases the maximum actual dose rate expected was reported, and in still other cases the expected average dose rate was reported. A summary comparison of the principal attributes of storage and transportable storage casks is provided and a similar comparison for shipping casks is also shown. References to source data are provided on the individual data sheets for each cask.

  10. Shipping and storage cask data for spent nuclear fuel

    International Nuclear Information System (INIS)

    Johnson, E.R.; Notz, K.J.

    1988-11-01

    This document is a compilation of data on casks used for the storage and/or transport of commercially generated spent fuel in the US based on publicly available information. In using the information contained in the following data sheets, it should be understood that the data have been assembled from published information, which in some instances was not internally consistent. Moreover, it was sometimes necessary to calculate or infer the values of some attributes from available information. Nor was there always a uniform method of reporting the values of some attributes; for example, an outside surface dose of the loaded cask was sometimes reported to be the maximum acceptable by NRC, while in other cases the maximum actual dose rate expected was reported, and in still other cases the expected average dose rate was reported. A summary comparison of the principal attributes of storage and transportable storage casks is provided and a similar comparison for shipping casks is also shown. References to source data are provided on the individual data sheets for each cask

  11. Disk storage management for LHCb based on Data Popularity estimator

    CERN Document Server

    INSPIRE-00545541; Charpentier, Philippe; Ustyuzhanin, Andrey

    2015-12-23

    This paper presents an algorithm providing recommendations for optimizing the LHCb data storage. The LHCb data storage system is a hybrid system. All datasets are kept as archives on magnetic tapes. The most popular datasets are kept on disks. The algorithm takes the dataset usage history and metadata (size, type, configuration etc.) to generate a recommendation report. This article presents how we use machine learning algorithms to predict future data popularity. Using these predictions it is possible to estimate which datasets should be removed from disk. We use regression algorithms and time series analysis to find the optimal number of replicas for datasets that are kept on disk. Based on the data popularity and the number of replicas optimization, the algorithm minimizes a loss function to find the optimal data distribution. The loss function represents all requirements for data distribution in the data storage system. We demonstrate how our algorithm helps to save disk space and to reduce waiting times ...

  12. Proactive replica checking to assure reliability of data in cloud storage with minimum replication

    Science.gov (United States)

    Murarka, Damini; Maheswari, G. Uma

    2017-11-01

    The two major issues for cloud storage systems are data reliability and storage costs. For data reliability protection, the multi-replica replication strategy mostly used in current clouds incurs huge storage consumption, leading to a large storage cost, specifically for applications within the cloud. This paper presents a cost-efficient data reliability mechanism named PRCR to cut back cloud storage consumption. PRCR ensures the data reliability of large cloud datasets with minimum replication, which can also serve as a cost-effective benchmark for replication. The evaluation shows that, compared to the conventional three-replica approach, PRCR can reduce consumption to one-third of the cloud storage, or even a smaller fraction, hence considerably minimizing the cloud storage cost.
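
    The trade-off PRCR targets can be made concrete with a small survival calculation. The sketch below uses an intentionally simple model with made-up failure numbers (it is not the paper's analysis): if lost replicas are proactively recreated at every check, data dies only when all replicas fail within one checking interval, so fewer replicas can reach the same reliability.

      def loss_probability(p_fail, replicas, periods):
          # Probability of losing the data over `periods` checking intervals,
          # assuming each replica independently fails within an interval with
          # probability p_fail and missing replicas are recreated at each check.
          per_period_loss = p_fail ** replicas
          return 1 - (1 - per_period_loss) ** periods

      # Example: 0.1% chance a replica is lost per weekly check, over ten years
      # (about 520 intervals). Two proactively checked replicas already push
      # the loss probability below one in a thousand.
      for n in (1, 2, 3):
          print(n, "replica(s): loss probability", loss_probability(1e-3, n, 520))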

  13. Computer system for environmental sample analysis and data storage and analysis

    International Nuclear Information System (INIS)

    Brauer, F.P.; Fager, J.E.

    1976-01-01

    A minicomputer-based environmental sample analysis and data storage system has been developed. The system is used for analytical data acquisition, computation, storage of analytical results, and tabulation of selected or derived results for data analysis, interpretation and reporting. This paper discusses the structure, performance and applications of the system

  14. Using RFID to Enhance Security in Off-Site Data Storage

    Directory of Open Access Journals (Sweden)

    Enrique de la Hoz

    2010-08-01

    Full Text Available Off-site data storage is one of the most widely used strategies in enterprises of all sizes to improve business continuity. In medium-to-large size enterprises, the off-site data storage processes are usually outsourced to specialized providers. However, outsourcing the storage of critical business information assets raises serious security considerations, some of which are usually either disregarded or incorrectly addressed by service providers. This article reviews these security considerations and presents a radio frequency identification (RFID-based, off-site, data storage management system specifically designed to address security issues. The system relies on a set of security mechanisms or controls that are arranged in security layers or tiers to balance security requirements with usability and costs. The system has been successfully implemented, deployed and put into production. In addition, an experimental comparison with classical bar-code-based systems is provided, demonstrating the system’s benefits in terms of efficiency and failure prevention.

  15. Using RFID to Enhance Security in Off-Site Data Storage

    Science.gov (United States)

    Lopez-Carmona, Miguel A.; Marsa-Maestre, Ivan; de la Hoz, Enrique; Velasco, Juan R.

    2010-01-01

    Off-site data storage is one of the most widely used strategies in enterprises of all sizes to improve business continuity. In medium-to-large size enterprises, the off-site data storage processes are usually outsourced to specialized providers. However, outsourcing the storage of critical business information assets raises serious security considerations, some of which are usually either disregarded or incorrectly addressed by service providers. This article reviews these security considerations and presents a radio frequency identification (RFID)-based, off-site, data storage management system specifically designed to address security issues. The system relies on a set of security mechanisms or controls that are arranged in security layers or tiers to balance security requirements with usability and costs. The system has been successfully implemented, deployed and put into production. In addition, an experimental comparison with classical bar-code-based systems is provided, demonstrating the system’s benefits in terms of efficiency and failure prevention. PMID:22163638

  16. Using RFID to enhance security in off-site data storage.

    Science.gov (United States)

    Lopez-Carmona, Miguel A; Marsa-Maestre, Ivan; de la Hoz, Enrique; Velasco, Juan R

    2010-01-01

    Off-site data storage is one of the most widely used strategies in enterprises of all sizes to improve business continuity. In medium-to-large size enterprises, the off-site data storage processes are usually outsourced to specialized providers. However, outsourcing the storage of critical business information assets raises serious security considerations, some of which are usually either disregarded or incorrectly addressed by service providers. This article reviews these security considerations and presents a radio frequency identification (RFID)-based, off-site, data storage management system specifically designed to address security issues. The system relies on a set of security mechanisms or controls that are arranged in security layers or tiers to balance security requirements with usability and costs. The system has been successfully implemented, deployed and put into production. In addition, an experimental comparison with classical bar-code-based systems is provided, demonstrating the system's benefits in terms of efficiency and failure prevention.

  17. Optimization of Comb-Drive Actuators [Nanopositioners for probe-based data storage and musical MEMS

    NARCIS (Netherlands)

    Engelen, Johannes Bernardus Charles

    2011-01-01

    The era of infinite storage seems near. To reach it, data storage capabilities need to grow, and new storage technologies must be developed. This thesis studies one aspect of one of the emergent storage technologies: optimizing electrostatic comb-drive actuation for a parallel probe-based data storage

  18. A Hybrid Multilevel Storage Architecture for Electric Power Dispatching Big Data

    Science.gov (United States)

    Yan, Hu; Huang, Bibin; Hong, Bowen; Hu, Jing

    2017-10-01

    Electric power dispatching is the center of the whole power system. Over its long run time, the power dispatching center has accumulated a large amount of data. These data are now stored in different specialized power systems and form many isolated islands of information. Integrating these data and performing comprehensive analysis can greatly improve the intelligence level of power dispatching. In this paper, a hybrid multilevel storage architecture for electric power dispatching big data is proposed. It introduces relational and NoSQL databases to establish a panoramic power grid data center that effectively meets the storage needs of power dispatching big data, including the unified storage of structured and unstructured data, fast access to massive real-time data, data version management, and so on. It provides a solid foundation for subsequent in-depth analysis of power dispatching big data.

  19. Towards Efficient Scientific Data Management Using Cloud Storage

    Science.gov (United States)

    He, Qiming

    2013-01-01

    A software prototype allows users to back up and restore data to/from both public and private cloud storage such as Amazon's S3 and NASA's Nebula. Unlike other off-the-shelf tools, this software ensures user data security in the cloud (through encryption), and minimizes users' operating costs by using space- and bandwidth-efficient compression and incremental backup. Parallel data processing utilities have also been developed by using massively scalable cloud computing in conjunction with cloud storage. One of the innovations in this software is using modified open source components to work with a private cloud like NASA Nebula. Another innovation is porting the complex backup-to-cloud software to embedded Linux, running on home networking devices, in order to benefit more users.

  20. Development of climate data storage and processing model

    Science.gov (United States)

    Okladnikov, I. G.; Gordov, E. P.; Titov, A. G.

    2016-11-01

    We present a storage and processing model for climate datasets elaborated in the framework of a virtual research environment (VRE) for climate and environmental monitoring and analysis of the impact of climate change on socio-economic processes on local and regional scales. The model is based on a «shared nothing» distributed computing architecture and assumes a computing network where each computing node is independent and self-sufficient. Each node holds dedicated software for the processing and visualization of geospatial data, providing programming interfaces to communicate with the other nodes. The nodes are interconnected by a local network or the Internet and exchange data and control instructions via SSH connections and web services. Geospatial data is represented by collections of netCDF files stored in a hierarchy of directories in the framework of a file system. To speed up data reading and processing, three approaches are proposed: a precalculation of intermediate products, a distribution of data across multiple storage systems (with or without redundancy), and caching and reuse of the previously obtained products. For a fast search and retrieval of the required data, according to the data storage and processing model, a metadata database is developed. It contains descriptions of the space-time features of the datasets available for processing, their locations, as well as descriptions and run options of the software components for data analysis and visualization. The model and the metadata database together will provide a reliable technological basis for the development of a high-performance virtual research environment for climatic and environmental monitoring.

  1. Integrated Storage and Management of Vector and Raster Data Based on Oracle Database

    Directory of Open Access Journals (Sweden)

    WU Zheng

    2017-05-01

    Full Text Available At present, there are many problems in the storage and management of multi-source heterogeneous spatial data, such as difficulty of transfer, lack of unified storage and low efficiency. By combining relational database and spatial data engine technology, an approach for the integrated storage and management of vector and raster data based on Oracle is proposed in this paper. This approach first establishes an integrated storage model for vector and raster data and optimizes the retrieval mechanism, then designs a framework for seamless data transfer, and finally realizes the unified storage and efficient management of multi-source heterogeneous data. By comparing experimental results with ArcSDE, the internationally leading software of this kind, it is shown that the proposed approach has higher data transfer performance and better query retrieval efficiency.

  2. Cloud and virtual data storage networking

    CERN Document Server

    Schulz, Greg

    2011-01-01

    The amount of data being generated, processed, and stored has reached unprecedented levels. Even during the recent economic crisis, there has been no slowdown or information recession. Instead, the need to process, move, and store data has only increased. Consequently, IT organizations are looking to do more with what they have while supporting growth along with new services without compromising on cost and service delivery. Cloud and Virtual Data Storage Networking, by savvy IT industry veteran Greg Schulz, looks at converging IT resources and management technologies for facilitating efficie

  3. Data storage as a service

    OpenAIRE

    Tomšič, Jan

    2016-01-01

    The purpose of the thesis was a comparison of interfaces to network-attached file systems and object storage. The thesis describes the network file system and the mounting procedure in the Linux operating system. Object storage and distributed storage systems are explained with examples of usage. Amazon S3 is an example of an object store accessed through a REST interface. Ceph, a system for distributed object storage, is explained in detail, and a Ceph cluster was deployed for the purpose of this thesis. Cep...

  4. Data storage and retrieval for long-term dog studies

    International Nuclear Information System (INIS)

    Watson, C.R.; Trauger, G.M.; McIntyre, J.M.; Slavich, A.L.; Park, J.F.

    1980-01-01

    Over half of the 500,000 records collected on dogs in the last 20 years in our laboratory have been converted from sequential storage on magnetic tape to direct-access disk storage on a PDP 11/70 minicomputer. An interactive storage and retrieval system, based on a commercially available query language, has been developed to make these records more accessible. Data entry and retrieval are now performed by scientists and technicians rather than by keypunch operators and computer specialists. Further conversion awaits scheduled computer enhancement

  5. Alternative Data Storage Solution for Mobile Messaging Services

    Directory of Open Access Journals (Sweden)

    David C. C. Ong

    2007-01-01

    Full Text Available In recent years, mobile devices have become relatively more powerful, with additional features that have the capability to provide multimedia streaming. With these improvements, better, faster and more reliable data storage solutions in the mobile messaging platform have become more essential. The existing mobile messaging infrastructure, in particular the data storage platform, has become less proficient in coping with the increased demand for its services. This demand, especially in the mobile messaging area (i.e. SMS – Short Messaging Service, MMS – Multimedia Messaging Service), which may well exceed 250,000 requests per second, means that evaluating competing data management systems has become not only necessary but essential. This paper presents an evaluation of SMS and MMS platforms using different database management systems (DBMS) and recommends the best data management strategies for these platforms.

  6. Archiving and retrieval of experimental data using SAN based centralized storage system for SST-1

    Energy Technology Data Exchange (ETDEWEB)

    Bhandarkar, Manisha, E-mail: manisha@ipr.res.in; Masand, Harish; Kumar, Aveg; Patel, Kirit; Dhongde, Jasraj; Gulati, Hitesh; Mahajan, Kirti; Chudasama, Hitesh; Pradhan, Subrata

    2016-11-15

    Highlights: • The SAN (Storage Area Network) based centralized data storage system of SST-1 has been envisaged to address the need for central availability of the SST-1 storage system to archive/retrieve experimental data for authenticated users 24 × 7. • The SAN based data storage system has been designed/configured with a 3-tiered architecture and a GFS cluster file system with multipath support. • The adopted SAN based data storage for SST-1 is modular, robust, and allows future expandability. • Important considerations have been taken into account, such as: handling of varied data writing speeds from different subsystems to central storage; simultaneous read access to the bulk experimental as well as essential diagnostic data; the life expectancy of the data; how often data will be retrieved and how fast it will be needed; and how much historical data should be maintained in storage. - Abstract: The SAN (Storage Area Network, a high-speed, block-level storage device) based centralized data storage system of SST-1 (Steady State Superconducting Tokamak) has been envisaged to address the need for central availability of SST-1 operation & experimental data for archival as well as retrieval [2]. Considering the initial data volume requirement, a ∼10 TB (Terabyte) SAN based data storage system has been configured/installed with an optical fiber backbone, with compatibility considerations for the existing Ethernet network of SST-1. The SAN based data storage system has been designed/configured with a 3-tiered architecture and a GFS (Global File System) cluster file system with multipath support. Tier-1, of ∼3 TB (frequent access and low data storage capacity), comprises Fibre Channel (FC) based hard disks for optimum throughput. Tier-2, of ∼6 TB (less frequent access and high data storage capacity), comprises SATA based hard disks. Tier-3 will be planned later to store offline historical data. In the SAN configuration, two tightly coupled storage servers (with cluster configuration) are

  7. Archiving and retrieval of experimental data using SAN based centralized storage system for SST-1

    International Nuclear Information System (INIS)

    Bhandarkar, Manisha; Masand, Harish; Kumar, Aveg; Patel, Kirit; Dhongde, Jasraj; Gulati, Hitesh; Mahajan, Kirti; Chudasama, Hitesh; Pradhan, Subrata

    2016-01-01

    Highlights: • The SAN (Storage Area Network) based centralized data storage system of SST-1 has been envisaged to address the need for central availability of the SST-1 storage system to archive/retrieve experimental data for authenticated users 24 × 7. • The SAN based data storage system has been designed/configured with a 3-tiered architecture and a GFS cluster file system with multipath support. • The adopted SAN based data storage for SST-1 is modular, robust, and allows future expandability. • Important considerations have been taken into account, such as: handling of varied data writing speeds from different subsystems to central storage; simultaneous read access to the bulk experimental as well as essential diagnostic data; the life expectancy of the data; how often data will be retrieved and how fast it will be needed; and how much historical data should be maintained in storage. - Abstract: The SAN (Storage Area Network, a high-speed, block-level storage device) based centralized data storage system of SST-1 (Steady State Superconducting Tokamak) has been envisaged to address the need for central availability of SST-1 operation & experimental data for archival as well as retrieval [2]. Considering the initial data volume requirement, a ∼10 TB (Terabyte) SAN based data storage system has been configured/installed with an optical fiber backbone, with compatibility considerations for the existing Ethernet network of SST-1. The SAN based data storage system has been designed/configured with a 3-tiered architecture and a GFS (Global File System) cluster file system with multipath support. Tier-1, of ∼3 TB (frequent access and low data storage capacity), comprises Fibre Channel (FC) based hard disks for optimum throughput. Tier-2, of ∼6 TB (less frequent access and high data storage capacity), comprises SATA based hard disks. Tier-3 will be planned later to store offline historical data. In the SAN configuration, two tightly coupled storage servers (with cluster configuration) are

  8. Using Cloud-based Storage Technologies for Earth Science Data

    Science.gov (United States)

    Michaelis, A.; Readey, J.; Votava, P.

    2016-12-01

    Cloud based infrastructure may offer several key benefits of scalability, built in redundancy and reduced total cost of ownership as compared with a traditional data center approach. However, most of the tools and software systems developed for NASA data repositories were not developed with a cloud based infrastructure in mind and do not fully take advantage of commonly available cloud-based technologies. Object storage services are provided through all the leading public (Amazon Web Service, Microsoft Azure, Google Cloud, etc.) and private (Open Stack) clouds, and may provide a more cost-effective means of storing large data collections online. We describe a system that utilizes object storage rather than traditional file system based storage to vend earth science data. The system described is not only cost effective, but shows superior performance for running many different analytics tasks in the cloud. To enable compatibility with existing tools and applications, we outline client libraries that are API compatible with existing libraries for HDF5 and NetCDF4. Performance of the system is demonstrated using clouds services running on Amazon Web Services.
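
    The object-storage access pattern the abstract refers to looks roughly like the fragment below, using the standard boto3 S3 client. Bucket, key and region names are placeholders; the HDF5- and NetCDF4-compatible client libraries the authors outline are not shown.

      import boto3

      # Each dataset chunk is a keyed object that can be fetched independently
      # and in parallel, instead of a byte range inside one POSIX file.
      s3 = boto3.client("s3", region_name="us-west-2")

      # Store one chunk of an array as an object (placeholder names).
      with open("chunk_000042.bin", "rb") as f:
          s3.put_object(Bucket="earth-science-data",
                        Key="merra2/t2m/2016/chunk_000042",
                        Body=f)

      # Range reads let a client pull only the bytes it needs from a chunk,
      # which is what makes object stores workable for array analytics.
      resp = s3.get_object(Bucket="earth-science-data",
                           Key="merra2/t2m/2016/chunk_000042",
                           Range="bytes=0-1048575")
      first_megabyte = resp["Body"].read()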

  9. Bio-Cryptography Based Secured Data Replication Management in Cloud Storage

    OpenAIRE

    Elango Pitchai

    2016-01-01

    Cloud computing is a new way of economical and efficient storage. A single data mart storage system is less secure because the data remains in a single data mart. This can lead to data loss due to different causes like hacking, server failure, etc. If an attacker chooses to attack a specific client, he can aim at a fixed cloud provider and try to gain access to the client’s information. This makes the attackers' job easy; both inside and outside attackers get the benefit of ...

  10. dCache: Big Data storage for HEP communities and beyond

    International Nuclear Information System (INIS)

    Millar, A P; Bernardt, C; Fuhrmann, P; Mkrtchyan, T; Petersen, A; Schwank, K; Behrmann, G; Litvintsev, D; Rossi, A

    2014-01-01

    With over ten years in production use, the dCache data storage system has evolved to match the ever-changing landscape of storage technologies with new solutions to both existing problems and new challenges. In this paper, we present three areas of innovation in dCache: providing efficient access to data with NFS v4.1 pNFS, adoption of CDMI and WebDAV as an alternative to SRM for managing data, and integration with alternative authentication mechanisms.

  11. ABOUT THE GENERAL CONCEPT OF THE UNIVERSAL STORAGE SYSTEM AND PRACTICE-ORIENTED DATA PROCESSING

    Directory of Open Access Journals (Sweden)

    L. V. Rudikova

    2017-01-01

    Full Text Available The evolution of approaches to, and the concept of, data accumulation in warehouses with subsequent Data Mining is promising, not least because a Belarusian segment of such IT developments is taking shape. The article describes a general concept for creating a system for the storage and practice-oriented analysis of data, based on data warehousing technology. The main aspect of the universal system design at the storage layer is an approach that uses an extended data warehouse, built on a universal platform for stored data, which grants access to storage and to the subsequent analysis of data of different structures and subject domains; it has compound points (nodes) and extended functionality with the option of choosing the data structure for storage and subsequent intrasystem integration. The general architecture of the universal system for the storage and analysis of practice-oriented data and its structural elements are described. The main components of the universal system for storing and processing practice-oriented data are: online data sources, the ETL process, the data warehouse, the analysis subsystem, and users. An important place in the system is occupied by analytical data processing, information search, document storage, and a software interface for accessing the functionality of the system from the outside. A universal system based on the described concept will allow information from different subject domains to be collected, analytical summaries to be obtained, data to be processed, and appropriate Data Mining methods and algorithms to be applied.

  12. Long-term data storage in diamond

    Science.gov (United States)

    Dhomkar, Siddharth; Henshaw, Jacob; Jayakumar, Harishankar; Meriles, Carlos A.

    2016-01-01

    The negatively charged nitrogen vacancy (NV−) center in diamond is the focus of widespread attention for applications ranging from quantum information processing to nanoscale metrology. Although most work so far has focused on the NV− optical and spin properties, control of the charge state promises complementary opportunities. One intriguing possibility is the long-term storage of information, a notion we hereby introduce using NV-rich, type 1b diamond. As a proof of principle, we use multicolor optical microscopy to read, write, and reset arbitrary data sets with two-dimensional (2D) binary bit density comparable to present digital-video-disk (DVD) technology. Leveraging on the singular dynamics of NV− ionization, we encode information on different planes of the diamond crystal with no cross-talk, hence extending the storage capacity to three dimensions. Furthermore, we correlate the center’s charge state and the nuclear spin polarization of the nitrogen host and show that the latter is robust to a cycle of NV− ionization and recharge. In combination with super-resolution microscopy techniques, these observations provide a route toward subdiffraction NV charge control, a regime where the storage capacity could exceed present technologies. PMID:27819045

  13. Long-term data storage in diamond.

    Science.gov (United States)

    Dhomkar, Siddharth; Henshaw, Jacob; Jayakumar, Harishankar; Meriles, Carlos A

    2016-10-01

    The negatively charged nitrogen vacancy (NV−) center in diamond is the focus of widespread attention for applications ranging from quantum information processing to nanoscale metrology. Although most work so far has focused on the NV− optical and spin properties, control of the charge state promises complementary opportunities. One intriguing possibility is the long-term storage of information, a notion we hereby introduce using NV-rich, type 1b diamond. As a proof of principle, we use multicolor optical microscopy to read, write, and reset arbitrary data sets with two-dimensional (2D) binary bit density comparable to present digital-video-disk (DVD) technology. Leveraging on the singular dynamics of NV− ionization, we encode information on different planes of the diamond crystal with no cross-talk, hence extending the storage capacity to three dimensions. Furthermore, we correlate the center's charge state and the nuclear spin polarization of the nitrogen host and show that the latter is robust to a cycle of NV− ionization and recharge. In combination with super-resolution microscopy techniques, these observations provide a route toward subdiffraction NV charge control, a regime where the storage capacity could exceed present technologies.

  14. Data storage and data access at H1

    International Nuclear Information System (INIS)

    Gerhards, R.; Kleinwort, C.; Kruener-Marquis, U.; Niebergall, F.

    1996-01-01

    The electron-proton collider HERA at the DESY laboratory in Hamburg and the H1 experiment have now been in successful operation for more than three years. The H1 experiment is logging data at an average rate of 500 KB/s, which results in a yearly raw data volume of several Terabytes. The data are reconstructed with a delay of only a few hours, also yielding several Terabytes of reconstructed data after physics-oriented event classification. Physics analysis is performed on an SGI Challenge computer, equipped with about 500 GB of disk and, for the last few months, direct access to a Storage Tek ACS 4400 silo. The disk space is mainly devoted to storing the reconstructed data in a very compressed format (typically 5 to 10 KB per event). This allows for very efficient and fast physics analysis. Monte Carlo data, on the other hand, are kept in the ACS silo and staged to disk on demand. (author)

  15. Enhanced Obfuscation Technique for Data Confidentiality in Public Cloud Storage

    Directory of Open Access Journals (Sweden)

    Oli S. Arul

    2016-01-01

    Full Text Available With the advent of cloud computing, data storage has become a boon in information technology. At the same time, storing data at remote locations has raised important issues. Many techniques are available to ensure the protection of data confidentiality, but they do not completely serve the purpose of protecting data. Obfuscation techniques come to the rescue for protecting data from malicious attacks. This paper proposes an obfuscation technique to encrypt the desired data type on the cloud, providing more protection from unknown hackers. The experimental results show that the time taken for obfuscation is low and the confidentiality percentage is high when compared with existing techniques.

  16. Edge-Based Efficient Search over Encrypted Data Mobile Cloud Storage.

    Science.gov (United States)

    Guo, Yeting; Liu, Fang; Cai, Zhiping; Xiao, Nong; Zhao, Ziming

    2018-04-13

    Smart sensor-equipped mobile devices sense, collect, and process data generated by the edge network to achieve intelligent control, but such mobile devices usually have limited storage and computing resources. Mobile cloud storage provides a promising solution owing to its rich storage resources, great accessibility, and low cost. But it also brings a risk of information leakage. The encryption of sensitive data is the basic step to resist the risk. However, deploying a high complexity encryption and decryption algorithm on mobile devices will greatly increase the burden of terminal operation and the difficulty to implement the necessary privacy protection algorithm. In this paper, we propose ENSURE (EfficieNt and SecURE), an efficient and secure encrypted search architecture over mobile cloud storage. ENSURE is inspired by edge computing. It allows mobile devices to offload the computation intensive task onto the edge server to achieve a high efficiency. Besides, to protect data security, it reduces the information acquisition of untrusted cloud by hiding the relevance between query keyword and search results from the cloud. Experiments on a real data set show that ENSURE reduces the computation time by 15% to 49% and saves the energy consumption by 38% to 69% per query.

  17. Edge-Based Efficient Search over Encrypted Data Mobile Cloud Storage

    Directory of Open Access Journals (Sweden)

    Yeting Guo

    2018-04-01

    Full Text Available Smart sensor-equipped mobile devices sense, collect, and process data generated by the edge network to achieve intelligent control, but such mobile devices usually have limited storage and computing resources. Mobile cloud storage provides a promising solution owing to its rich storage resources, great accessibility, and low cost. But it also brings a risk of information leakage. The encryption of sensitive data is the basic step to resist the risk. However, deploying a high complexity encryption and decryption algorithm on mobile devices will greatly increase the burden of terminal operation and the difficulty to implement the necessary privacy protection algorithm. In this paper, we propose ENSURE (EfficieNt and SecURE), an efficient and secure encrypted search architecture over mobile cloud storage. ENSURE is inspired by edge computing. It allows mobile devices to offload the computation intensive task onto the edge server to achieve a high efficiency. Besides, to protect data security, it reduces the information acquisition of untrusted cloud by hiding the relevance between query keyword and search results from the cloud. Experiments on a real data set show that ENSURE reduces the computation time by 15% to 49% and saves the energy consumption by 38% to 69% per query.

  18. Edge-Based Efficient Search over Encrypted Data Mobile Cloud Storage

    Science.gov (United States)

    Liu, Fang; Cai, Zhiping; Xiao, Nong; Zhao, Ziming

    2018-01-01

    Smart sensor-equipped mobile devices sense, collect, and process data generated by the edge network to achieve intelligent control, but such mobile devices usually have limited storage and computing resources. Mobile cloud storage provides a promising solution owing to its rich storage resources, great accessibility, and low cost. But it also brings a risk of information leakage. The encryption of sensitive data is the basic step to resist the risk. However, deploying a high complexity encryption and decryption algorithm on mobile devices will greatly increase the burden of terminal operation and the difficulty to implement the necessary privacy protection algorithm. In this paper, we propose ENSURE (EfficieNt and SecURE), an efficient and secure encrypted search architecture over mobile cloud storage. ENSURE is inspired by edge computing. It allows mobile devices to offload the computation intensive task onto the edge server to achieve a high efficiency. Besides, to protect data security, it reduces the information acquisition of untrusted cloud by hiding the relevance between query keyword and search results from the cloud. Experiments on a real data set show that ENSURE reduces the computation time by 15% to 49% and saves the energy consumption by 38% to 69% per query. PMID:29652810

  19. NEON's Eddy-Covariance Storage Exchange: from Tower to Data Portal

    Science.gov (United States)

    Durden, N. P.; Luo, H.; Xu, K.; Metzger, S.; Durden, D.

    2017-12-01

    NEON's eddy-covariance storage exchange system (ECSE) consists of a suite of sensors including temperature sensors, a CO2 and H2O gas analyzer, and isotopic CO2 and H2O analyzers. NEON's ECSE was developed to provide the vertical profile measurements of temperature, CO2 and H2O concentrations, the stable isotope ratios in CO2 (δ13C) and H2O (δ18O and δ2H) in the atmosphere. The profiles of temperature and concentrations of CO2 and H2O are key to calculating storage fluxes for eddy-covariance tower sites. Storage fluxes have a strong diurnal cycle and can be large in magnitude, especially at temporal scales less than one day. However, the storage term is often neglected in flux computations. To obtain accurate eddy-covariance fluxes, the storage fluxes are calculated and incorporated into the calculations of net surface-atmosphere ecosystem exchange of heat, CO2, and H2O for each NEON tower site. Once the ECSE raw data (Level 0, or L0) is retrieved at NEON's headquarters, it is preconditioned through a sequence of unit conversion, time regularization, and plausibility tests. By utilizing NEON's eddy4R framework (Metzger et al., 2017), higher-level data products are generated including: Level 1 (L1): Measurement-level specific averages of temperature and concentrations of CO2 and H2O. Level 2 (L2): Time rate of change of temperature and concentrations of CO2 and H2O over 30 min at each measurement level along the vertical tower profile. Level 3 (L3): Time rate of change of temperature and concentrations of CO2 and H2O over 30 min (L2), spatially interpolated along the vertical tower profile. Level 4 (L4): Storage fluxes of heat, CO2, and H2O calculated from the integrated time rate of change spatially interpolated profile (L3). The L4 storage fluxes are combined with turbulent fluxes to calculate the net surface-atmosphere ecosystem exchange of heat, CO2, and H2O. Moreover, a final quality flag and uncertainty budget are produced individually for each data stream
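
    The storage term described here is essentially the time rate of change of a scalar, integrated over the tower profile below the measurement height. The following minimal numerical sketch (with invented heights, CO2 values and a 30-minute interval, and with the conversion from mole fraction to molar density omitted) is not NEON's eddy4R code, but it shows the L2-to-L4 shape of the calculation:

```python
# Assumed profile levels (m) and CO2 dry mole fractions (umol mol-1) at two times.
heights = [0.5, 2.0, 8.0, 20.0]
c_start = [410.0, 405.0, 400.0, 398.0]   # profile at t0
c_end   = [412.0, 406.5, 400.8, 398.2]   # profile at t0 + 30 min
dt = 30 * 60.0                           # averaging period in seconds

# L2-like step: time rate of change at each measurement level.
rate = [(e - s) / dt for s, e in zip(c_start, c_end)]

# L4-like step: integrate the rate over the column (trapezoidal rule).
storage = sum(0.5 * (rate[i] + rate[i + 1]) * (heights[i + 1] - heights[i])
              for i in range(len(heights) - 1))

print(f"column-integrated storage term: {storage:.3e} (umol mol-1) m s-1")
```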

  20. A privacy-preserving solution for compressed storage and selective retrieval of genomic data.

    Science.gov (United States)

    Huang, Zhicong; Ayday, Erman; Lin, Huang; Aiyar, Raeka S; Molyneaux, Adam; Xu, Zhenyu; Fellay, Jacques; Steinmetz, Lars M; Hubaux, Jean-Pierre

    2016-12-01

    In clinical genomics, the continuous evolution of bioinformatic algorithms and sequencing platforms makes it beneficial to store patients' complete aligned genomic data in addition to variant calls relative to a reference sequence. Due to the large size of human genome sequence data files (varying from 30 GB to 200 GB depending on coverage), two major challenges facing genomics laboratories are the costs of storage and the efficiency of the initial data processing. In addition, privacy of genomic data is becoming an increasingly serious concern, yet no standard data storage solutions exist that enable compression, encryption, and selective retrieval. Here we present a privacy-preserving solution named SECRAM (Selective retrieval on Encrypted and Compressed Reference-oriented Alignment Map) for the secure storage of compressed aligned genomic data. Our solution enables selective retrieval of encrypted data and improves the efficiency of downstream analysis (e.g., variant calling). Compared with BAM, the de facto standard for storing aligned genomic data, SECRAM uses 18% less storage. Compared with CRAM, one of the most compressed nonencrypted formats (using 34% less storage than BAM), SECRAM maintains efficient compression and downstream data processing, while allowing for unprecedented levels of security in genomic data storage. Compared with previous work, the distinguishing features of SECRAM are that (1) it is position-based instead of read-based, and (2) it allows random querying of a subregion from a BAM-like file in an encrypted form. Our method thus offers a space-saving, privacy-preserving, and effective solution for the storage of clinical genomic data. © 2016 Huang et al.; Published by Cold Spring Harbor Laboratory Press.
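
    SECRAM's distinguishing properties are position-based organisation and random querying of an encrypted subregion. The sketch below does not reproduce the SECRAM format; it only illustrates the general idea of encrypting fixed-size position blocks independently (here with AES-GCM from the third-party cryptography package, an assumption of this example) so that a subregion can be decrypted without touching the rest of the file:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

BLOCK = 4096                        # assumed position-block size in bytes
key = AESGCM.generate_key(bit_length=128)
aead = AESGCM(key)

def encrypt_blocks(data: bytes):
    """Encrypt each fixed-size position block independently as (nonce, ciphertext)."""
    out = []
    for off in range(0, len(data), BLOCK):
        nonce = os.urandom(12)
        out.append((nonce, aead.encrypt(nonce, data[off:off + BLOCK], None)))
    return out

def read_region(blocks, start: int, length: int) -> bytes:
    """Decrypt only the blocks overlapping [start, start + length)."""
    first, last = start // BLOCK, (start + length - 1) // BLOCK
    plain = b"".join(aead.decrypt(nonce, ct, None) for nonce, ct in blocks[first:last + 1])
    return plain[start - first * BLOCK : start - first * BLOCK + length]

blocks = encrypt_blocks(os.urandom(3 * BLOCK + 100))   # stand-in for aligned read data
print(len(read_region(blocks, 5000, 200)))             # -> 200
```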

  1. Development of an integrated data storage and retrieval system for TEC

    International Nuclear Information System (INIS)

    Kemmerling, G.; Blom, H.; Busch, P.; Kooijman, W.; Korten, M.; Laat, C.T.A.M. de; Lourens, W.; Meer, E. van der; Nideroest, B.; Oomens, A.A.M.; Wijnoltz, F.; Zwoll, K.

    2000-01-01

    The database system for the storage and retrieval of experimental and technical data at TEXTOR-94 has to be revised. A new database has to be developed, which complies with future performance and multiplatform requirements. The concept, to be presented here, is based on the commercial object database Objectivity. Objectivity allows a flexible object oriented data design and is able to cope with the large amount of data, which is expected to be about 1 TByte per year. Furthermore, it offers the possibility of data distribution over several hosts. Thus, parallel data storage from the frontend to the database is possible and can be used to achieve the required storage performance of 200 MByte per min. In order to store configurational and experimental data, an object model is under design. It is aimed at describing the device specific information and the acquired data in a common way such that different approaches for data access may be applied. There are several methods foreseen for remote access. In addition to the C++ and Java interfaces already included in Objectivity/DB, CORBA and socket based C interfaces are currently under development. This could also allow access by non-supported platforms and enable existing legacy applications to integrate the database for the storage and retrieval of data with a minimum of code changes

  2. The Design and Application of Data Storage System in Miyun Satellite Ground Station

    Science.gov (United States)

    Xue, Xiping; Su, Yan; Zhang, Hongbo; Liu, Bin; Yao, Meijuan; Zhao, Shu

    2015-04-01

    China launched the Chang'E-3 satellite in 2013, achieving the first soft landing on the Moon by a Chinese lunar probe. The Miyun satellite ground station first used a SAN storage network system based on Stornext sharing software in the Chang'E-3 mission. System performance fully meets the application requirements of the Miyun ground station's data storage. The Stornext file system is a high-performance sharing file system; it supports multiple servers accessing the file system with different operating systems at the same time, and supports access to data over a variety of topologies, such as SAN and LAN. Stornext is focused on data protection and big data management. It is announced that Quantum has sold more than 70,000 licenses of the Stornext file system worldwide, and its customer base is growing, which marks its leading position in big data management. The responsibilities of the Miyun satellite ground station are the reception of Chang'E-3 satellite downlink data and the management of local data storage. The station mainly handles exploration mission management and the receiving and management of observation data, and provides comprehensive, centralized monitoring and control functions for the data receiving equipment. The ground station applied the SAN storage network system based on Stornext shared software to receive and manage data reliably. The computer system in the Miyun ground station is composed of business servers, application workstations and other storage equipment. Storage therefore needs a shared file system which supports heterogeneous operating systems. In practical applications, 10 nodes simultaneously write data to the file system through 16 channels, and the maximum data transfer rate of each channel is up to 15 MB/s. Thus the network throughput of the file system is not less than 240 MB/s. At the same time, the maximum capacity of each data file is up to 810 GB. The storage system planned requires that 10 nodes simultaneously write data to the file system through 16
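
    The stated throughput requirement follows directly from the per-channel rate quoted in the record; a quick arithmetic check:

```python
channels = 16
rate_per_channel_mb_s = 15
print(channels * rate_per_channel_mb_s)   # 240 MB/s, matching the stated requirement
```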

  3. Eigenmode multiplexing with SLM for volume holographic data storage

    Science.gov (United States)

    Chen, Guanghao; Miller, Bo E.; Takashima, Yuzuru

    2017-08-01

    The cavity supports the orthogonal reference beam families as its eigenmodes while enhancing the reference beam power. Such orthogonal eigenmodes are used as an additional degree of freedom to multiplex data pages, consequently increasing storage densities for volume Holographic Data Storage Systems (HDSS) when the maximum number of multiplexed data pages is limited by geometrical factors. Image-bearing holograms are multiplexed by orthogonal phase code multiplexing via Hermite-Gaussian eigenmodes in a Fe:LiNbO3 medium with a 532 nm laser at multiple Bragg angles by using Liquid Crystal on Silicon (LCOS) spatial light modulators (SLMs) in the reference arms. A total of nine holograms is recorded, with three Bragg angles and three eigenmodes.

  4. Adaptive data migration scheme with facilitator database and multi-tier distributed storage in LHD

    International Nuclear Information System (INIS)

    Nakanishi, Hideya; Masaki, Ohsuna; Mamoru, Kojima; Setsuo, Imazu; Miki, Nonomura; Kenji, Watanabe; Masayoshi, Moriya; Yoshio, Nagayama; Kazuo, Kawahata

    2008-01-01

    The recent 'data explosion' induces the demand for highly flexible storage extension and data migration. The data amount of LHD plasma diagnostics has grown 4.6 times bigger than that of three years before. Frequent migration or replication among many distributed storage systems becomes mandatory and thus increases the human operational costs. To reduce them computationally, a new adaptive migration scheme has been developed on LHD's multi-tier distributed storage. So-called HSM (Hierarchical Storage Management) software usually adopts a low-level cache mechanism or simple watermarks for triggering data stage-in and stage-out between two storage devices. The new scheme, however, can deal with a number of distributed storage systems through the facilitator database, which manages all data locations together with their access histories and retrieval priorities. Not only inter-tier migration but also intra-tier replication and moving are manageable, which is a big help in extending or replacing storage equipment. The access history of each data object is also utilized to optimize the volume size of fast and costly RAID, in addition to a normal cache effect for frequently retrieved data. The effectiveness of the new scheme has been verified, so that the LHD multi-tier distributed storage and other next-generation experiments can obtain such flexible expandability
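
    The facilitator database described above keeps, for every data object, its location together with its access history, so migration decisions can be made from metadata alone. The following sketch is hypothetical (the tier names, thresholds and in-memory catalogue are assumptions, not the LHD implementation) and only illustrates that decision step:

```python
import time

# Facilitator catalogue: object id -> current tier, access count, last access time.
catalog = {
    "shot100/diag_A": {"tier": "ssd",  "hits": 52, "last": time.time()},
    "shot031/diag_B": {"tier": "ssd",  "hits": 1,  "last": time.time() - 90 * 86400},
    "shot002/diag_C": {"tier": "tape", "hits": 40, "last": time.time() - 3600},
}

def plan_migrations(catalog, cold_days=30, hot_hits=20):
    """Stage rarely used objects down and frequently used objects up."""
    now, moves = time.time(), []
    for oid, meta in catalog.items():
        idle_days = (now - meta["last"]) / 86400
        if meta["tier"] == "ssd" and idle_days > cold_days and meta["hits"] < hot_hits:
            moves.append((oid, "ssd", "tape"))
        elif meta["tier"] == "tape" and meta["hits"] >= hot_hits:
            moves.append((oid, "tape", "ssd"))
    return moves

for oid, src, dst in plan_migrations(catalog):
    print(f"migrate {oid}: {src} -> {dst}")
```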

  5. AN APPROACH TO REDUCE THE STORAGE REQUIREMENT FOR BIOMETRIC DATA IN AADHAR PROJECT

    Directory of Open Access Journals (Sweden)

    T. Sivakumar

    2013-02-01

    Full Text Available AADHAR is an Indian Government project to provide unique identification to each citizen of India. The objective of the project is to collect all the personal details and the biometric traits from each individual. Biometric traits such as iris, face and fingerprint are being collected for authentication. All the information will be stored in a centralized data repository. Considering the storage requirement for the biometric data of the entire population of India, approximately 20,218 TB of storage space will be required. Since data for 10 fingerprints are stored, fingerprint details will take up most of the space. In this paper, the storage requirement for the biometric data in the AADHAR project is analyzed and a method is proposed to reduce the storage by cropping the original biometric image before storing. This method can reduce the storage space of the biometric data drastically. All the measurements given in this paper are approximate only.
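
    The storage figure quoted above is obtained by multiplying per-person biometric payloads by the enrolled population. The sketch below reproduces that style of estimate; the per-image sizes, population count and cropping ratio are illustrative assumptions, not the paper's measured values, so the totals differ from the 20,218 TB cited:

```python
population = 1.2e9                       # assumed enrolled population
KB = 1024
per_person = {
    "fingerprints": 10 * 150 * KB,       # assumed ~150 KB per fingerprint image
    "iris":          2 * 60 * KB,        # assumed ~60 KB per iris image
    "face":          1 * 50 * KB,        # assumed ~50 KB per face image
}

total_bytes = population * sum(per_person.values())
print(f"raw biometric storage: {total_bytes / 1e12:,.0f} TB")

crop_ratio = 0.6                         # assumed fraction of each fingerprint kept after cropping
saved = (1 - crop_ratio) * population * per_person["fingerprints"]
print(f"after cropping fingerprints: {(total_bytes - saved) / 1e12:,.0f} TB")
```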

  6. Biophotopol: A Sustainable Photopolymer for Holographic Data Storage Applications

    Directory of Open Access Journals (Sweden)

    Augusto Beléndez

    2012-05-01

    Full Text Available Photopolymers have proved to be useful for different holographic applications such as holographic data storage or holographic optical elements. However, most photopolymers have certain undesirable features, such as the toxicity of some of their components or their low environmental compatibility. For this reason, the Holography and Optical Processing Group at the University of Alicante developed a new dry photopolymer with low toxicity and high thickness called biophotopol, which is very adequate for holographic data storage applications. In this paper we describe our recent studies on biophotopol and the main characteristics of this material.

  7. Shift-Peristrophic Multiplexing for High Density Holographic Data Storage

    Directory of Open Access Journals (Sweden)

    Zenta Ushiyama

    2014-03-01

    Full Text Available Holographic data storage is a promising technology that provides very large data storage capacity, and the multiplexing method plays a significant role in increasing this capacity. Various multiplexing methods have been previously researched. In the present study, we propose a shift-peristrophic multiplexing technique that uses spherical reference waves, and experimentally verify that this method efficiently increases the data capacity. In the proposed method, a series of holograms is recorded with shift multiplexing, in which the recording material is rotated with its axis perpendicular to the material’s surface. By iterating this procedure, multiplicity is shown to improve. This method achieves more than 1 Tbits/inch2 data density recording. Furthermore, a capacity increase of several TB per disk is expected by maximizing the recording medium performance.

  8. RAIN: A Bio-Inspired Communication and Data Storage Infrastructure.

    Science.gov (United States)

    Monti, Matteo; Rasmussen, Steen

    2017-01-01

    We summarize the results and perspectives from a companion article, where we presented and evaluated an alternative architecture for data storage in distributed networks. We name the bio-inspired architecture RAIN, and it offers file storage service that, in contrast with current centralized cloud storage, has privacy by design, is open source, is more secure, is scalable, is more sustainable, has community ownership, is inexpensive, and is potentially faster, more efficient, and more reliable. We propose that a RAIN-style architecture could form the backbone of the Internet of Things that likely will integrate multiple current and future infrastructures ranging from online services and cryptocurrency to parts of government administration.

  9. Cavity enhanced eigenmode multiplexing for volume holographic data storage

    Science.gov (United States)

    Miller, Bo E.; Takashima, Yuzuru

    2017-08-01

    Previously, we proposed and experimentally demonstrated enhanced recording speeds by using a resonant optical cavity to semi-passively increase the reference beam power while recording image bearing holograms. In addition to enhancing the reference beam power the cavity supports the orthogonal reference beam families of its eigenmodes, which can be used as a degree of freedom to multiplex data pages and increase storage densities for volume Holographic Data Storage Systems (HDSS). While keeping the increased recording speed of a cavity enhanced reference arm, image bearing holograms are multiplexed by orthogonal phase code multiplexing via Hermite-Gaussian eigenmodes in a Fe:LiNbO3 medium with a 532 nm laser at two Bragg angles for expedited recording of four multiplexed holograms. We experimentally confirmed write rates are enhanced by an average factor of 1.1, and page crosstalk is about 2.5%. This hybrid multiplexing opens up a pathway to increase storage density while minimizing modifications to current angular multiplexing HDSS.

  10. EOS as the present and future solution for data storage at CERN

    CERN Document Server

    Peters, AJ; Adde, G

    2015-01-01

    EOS is an open source distributed disk storage system in production since 2011 at CERN. Development focus has been on low-latency analysis use cases for LHC(1) and non- LHC experiments and life-cycle management using JBOD(2) hardware for multi PB storage installations. The EOS design implies a split of hot and cold storage and introduced a change of the traditional HSM(3) functionality based workflows at CERN.The 2015 deployment brings storage at CERN to a new scale and foresees to breach 100 PB of disk storage in a distributed environment using tens of thousands of (heterogeneous) hard drives. EOS has brought to CERN major improvements compared to past storage solutions by allowing quick changes in the quality of service of the storage pools. This allows the data centre to quickly meet the changing performance and reliability requirements of the LHC experiments with minimal data movements and dynamic reconfiguration. For example, the software stack has met the specific needs of the dual computing centre set-...

  11. Secure data storage by three-dimensional absorbers in highly scattering volume medium

    International Nuclear Information System (INIS)

    Matoba, Osamu; Matsuki, Shinichiro; Nitta, Kouichi

    2008-01-01

    A novel data storage method in a volume medium with a high scattering coefficient is proposed for data security applications. Three-dimensional absorbers are used as data. These absorbers cannot be measured by an interferometer when the scattering in the volume medium is strong enough. We present a method to reconstruct the three-dimensional absorbers, along with numerical results showing the effectiveness of the proposed data storage.

  12. Revised cloud storage structure for light-weight data archiving in LHD

    International Nuclear Information System (INIS)

    Nakanishi, Hideya; Masaki, Ohsuna; Mamoru, Kojima; Setsuo, Imazu; Miki, Nonomura; Masahiko, Emoto; Takashi, Yamamoto; Yoshio, Nagayama; Takahisa, Ozeki; Noriyoshi, Nakajima; Katsumi, Ida; Osamu, Kaneko

    2014-01-01

    Highlights: • GlusterFS is adopted to replace the IznaStor cloud storage in LHD. • GlusterFS and OpenStack/Swift are compared. • An SSD-based GlusterFS distributed replicated volume is separated from normal RAID storage. • The LABCOM system changes its storage technology every 4 years for cost efficiency. - Abstract: The LHD data archiving system has newly selected the GlusterFS distributed filesystem as the replacement for the present cloud storage software named “IznaStor/dSS”. Even though the prior software provided many favorable functionalities, such as hot plug-and-play node insertion, internal auto-replication of data files, and symmetric load balancing between all member nodes, it performed poorly when recovering from an accidental malfunction of a storage node. Once a failure happened, the recovery process usually took at least several days, or sometimes more than a week, with a heavy CPU load. In some cases the nodes fell into the so-called “split-brain” or “amnesia” condition and could not recover from it. Since the recovery time depends strongly on the capacity of the faulty node, individual HDD management is more desirable than large volumes of HDD arrays. In addition, the dynamic mutual awareness of data location information may be removed if some other static data distribution method can be applied. In this study, the candidate middleware “OpenStack/Swift” and “GlusterFS” have been tested using the real mass of LHD data for more than half a year, and finally GlusterFS has been selected to replace the present IznaStor. It implements only very limited cloud storage functionality but has a simplified RAID10-like structure, which may consequently provide lighter-weight read/write ability. Since the LABCOM data system is implemented to be independent of the storage structure, it is easy to unplug IznaStor and plug in the new GlusterFS. The effective I/O speed is also confirmed to be on the same level as the estimated one from raw

  13. Revised cloud storage structure for light-weight data archiving in LHD

    Energy Technology Data Exchange (ETDEWEB)

    Nakanishi, Hideya, E-mail: nakanisi@nifs.ac.jp [National Institute for Fusion Science, 322-6 Oroshi-cho, Toki, Gifu 509-5292 (Japan); Masaki, Ohsuna; Mamoru, Kojima; Setsuo, Imazu; Miki, Nonomura; Masahiko, Emoto; Takashi, Yamamoto; Yoshio, Nagayama [National Institute for Fusion Science, 322-6 Oroshi-cho, Toki, Gifu 509-5292 (Japan); Takahisa, Ozeki [Japan Atomic Energy Agency, 801-1 Mukoyama, Naka, Ibaraki 311-0193 (Japan); Noriyoshi, Nakajima; Katsumi, Ida; Osamu, Kaneko [National Institute for Fusion Science, 322-6 Oroshi-cho, Toki, Gifu 509-5292 (Japan)

    2014-05-15

    Highlights: • GlusterFS is adopted to replace the IznaStor cloud storage in LHD. • GlusterFS and OpenStack/Swift are compared. • An SSD-based GlusterFS distributed replicated volume is separated from normal RAID storage. • The LABCOM system changes its storage technology every 4 years for cost efficiency. - Abstract: The LHD data archiving system has newly selected the GlusterFS distributed filesystem as the replacement for the present cloud storage software named “IznaStor/dSS”. Even though the prior software provided many favorable functionalities, such as hot plug-and-play node insertion, internal auto-replication of data files, and symmetric load balancing between all member nodes, it performed poorly when recovering from an accidental malfunction of a storage node. Once a failure happened, the recovery process usually took at least several days, or sometimes more than a week, with a heavy CPU load. In some cases the nodes fell into the so-called “split-brain” or “amnesia” condition and could not recover from it. Since the recovery time depends strongly on the capacity of the faulty node, individual HDD management is more desirable than large volumes of HDD arrays. In addition, the dynamic mutual awareness of data location information may be removed if some other static data distribution method can be applied. In this study, the candidate middleware “OpenStack/Swift” and “GlusterFS” have been tested using the real mass of LHD data for more than half a year, and finally GlusterFS has been selected to replace the present IznaStor. It implements only very limited cloud storage functionality but has a simplified RAID10-like structure, which may consequently provide lighter-weight read/write ability. Since the LABCOM data system is implemented to be independent of the storage structure, it is easy to unplug IznaStor and plug in the new GlusterFS. The effective I/O speed is also confirmed to be on the same level as the estimated one from raw

  14. A Toolkit For Storage Qos Provisioning For Data-Intensive Applications

    Directory of Open Access Journals (Sweden)

    Renata Słota

    2012-01-01

    Full Text Available This paper describes a programming toolkit developed in the PL-Grid project, named QStorMan, which supports storage QoS provisioning for data-intensive applications in distributed environments. QStorMan exploits knowledge-oriented methods for matching storage resources to non-functional requirements, which are defined for a data-intensive application. In order to support various usage scenarios, QStorMan provides two interfaces: programming libraries and a web portal. The interfaces allow the requirements to be defined either directly in the application source code or by using an intuitive graphical interface. The first way provides finer granularity, e.g., each portion of data processed by an application can define a different set of requirements. The second method is aimed at supporting legacy applications whose source code cannot be modified. The toolkit has been evaluated using synthetic benchmarks and the production infrastructure of PL-Grid, in particular its storage infrastructure, which utilizes the Lustre file system.

  15. Effective Data Backup System Using Storage Area Network Solution ...

    African Journals Online (AJOL)

    The primary cause of data loss is the lack or non-existence of data backup. Storage Area Network Solution (SANS) is internet-based software which collects clients' data and hosts them in several locations to forestall data loss in case of disaster in one location. The researcher used Adobe Dreamweaver (CSC3) embedded with ...

  16. Vector and Raster Data Storage Based on Morton Code

    Science.gov (United States)

    Zhou, G.; Pan, Q.; Yue, T.; Wang, Q.; Sha, H.; Huang, S.; Liu, X.

    2018-05-01

    Even though geomatics is highly developed nowadays, the integration of spatial data in vector and raster formats is still a very tricky problem in the geographic information system environment, and there is still no proper way to solve it. This article proposes a method to integrate vector data and raster data. In this paper, we saved the image data and building vector data of Guilin University of Technology to an Oracle database. We then use the ADO interface to connect the database to Visual C++ and, in the Visual C++ environment, convert the row and column numbers of raster data and the X/Y coordinates of vector data to Morton codes. This method stores vector and raster data in the Oracle database and uses the Morton code, instead of row/column or X/Y values, to mark the position information of vector and raster data. Using Morton codes to mark geographic information lets data storage make full use of the storage space, makes simultaneous analysis of vector and raster data more efficient, and makes visualization of vector and raster data more intuitive. This method is very helpful in situations that require analysing or displaying vector data and raster data at the same time.
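
    A Morton (Z-order) code interleaves the bits of a row/column or X/Y pair into a single integer key, which is what allows raster cell indices and gridded vector coordinates to share one key space in the database. A minimal pure-Python sketch (not the Oracle/Visual C++ implementation used in the paper):

```python
def morton_encode(x: int, y: int, bits: int = 16) -> int:
    """Interleave the bits of x and y (x in even positions, y in odd positions)."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)
        code |= ((y >> i) & 1) << (2 * i + 1)
    return code

def morton_decode(code: int, bits: int = 16):
    x = y = 0
    for i in range(bits):
        x |= ((code >> (2 * i)) & 1) << i
        y |= ((code >> (2 * i + 1)) & 1) << i
    return x, y

# A raster cell (row, col) and a vector vertex snapped to the same grid both map
# to one integer key, which can be stored and indexed in a single database column.
print(morton_encode(5, 9))                   # -> 147
print(morton_decode(morton_encode(5, 9)))    # -> (5, 9)
```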

  17. Encrypted Data Storage in EGEE

    CERN Document Server

    Frohner, Ákos

    2006-01-01

    The medical community is routinely using clinical images and associated medical data for diagnosis, intervention planning and therapy follow-up. Medical imaging is producing an increasing number of digital images for which computerized archiving, processing and analysis are needed. Grids are promising infrastructures for managing and analyzing the huge medical databases. Given the sensitive nature of medical images, practitioners are often reluctant to use distributed systems though. Security is often implemented by isolating the imaging network from the outside world inside hospitals. Given the wide-scale distribution of grid infrastructures and their multiple administrative entities, the level of security for manipulating medical data should be particularly high. In this presentation we describe the architecture of a solution, the gLite Encrypted Data Storage (EDS), which was developed in the framework of Enabling Grids for E-sciencE (EGEE), a project of the European Commission (contract number INFSO--508...

  18. Efficient and secure outsourcing of genomic data storage.

    Science.gov (United States)

    Sousa, João Sá; Lefebvre, Cédric; Huang, Zhicong; Raisaro, Jean Louis; Aguilar-Melchor, Carlos; Killijian, Marc-Olivier; Hubaux, Jean-Pierre

    2017-07-26

    Cloud computing is becoming the preferred solution for efficiently dealing with the increasing amount of genomic data. Yet, outsourcing storage and processing sensitive information, such as genomic data, comes with important concerns related to privacy and security. This calls for new sophisticated techniques that ensure data protection from untrusted cloud providers and that still enable researchers to obtain useful information. We present a novel privacy-preserving algorithm for fully outsourcing the storage of large genomic data files to a public cloud and enabling researchers to efficiently search for variants of interest. In order to protect data and query confidentiality from possible leakage, our solution exploits optimal encoding for genomic variants and combines it with homomorphic encryption and private information retrieval. Our proposed algorithm is implemented in C++ and was evaluated on real data as part of the 2016 iDash Genome Privacy-Protection Challenge. Results show that our solution outperforms the state-of-the-art solutions and enables researchers to search over millions of encrypted variants in a few seconds. As opposed to prior beliefs that sophisticated privacy-enhancing technologies (PETs) are unpractical for real operational settings, our solution demonstrates that, in the case of genomic data, PETs are very efficient enablers.

  19. A secure and efficient audit mechanism for dynamic shared data in cloud storage.

    Science.gov (United States)

    Kwon, Ohmin; Koo, Dongyoung; Shin, Yongjoo; Yoon, Hyunsoo

    2014-01-01

    With popularization of cloud services, multiple users easily share and update their data through cloud storage. For data integrity and consistency in the cloud storage, the audit mechanisms were proposed. However, existing approaches have some security vulnerabilities and require a lot of computational overheads. This paper proposes a secure and efficient audit mechanism for dynamic shared data in cloud storage. The proposed scheme prevents a malicious cloud service provider from deceiving an auditor. Moreover, it devises a new index table management method and reduces the auditing cost by employing less complex operations. We prove the resistance against some attacks and show less computation cost and shorter time for auditing when compared with conventional approaches. The results present that the proposed scheme is secure and efficient for cloud storage services managing dynamic shared data.
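
    Audit schemes of this kind let a verifier challenge the storage provider to prove that specific blocks are still held intact. The proposed scheme relies on its own index table management and cryptographic construction, which are not reproduced here; the sketch below only shows the general challenge-response shape using per-block HMAC tags, with the key, block size and sampling policy as assumptions of the illustration:

```python
import hashlib
import hmac
import os
import random

BLOCK = 1024
key = os.urandom(32)                  # held by the data owner / auditor, not the cloud

def tag_blocks(data: bytes):
    """Owner side: compute a keyed tag over (index, block) before upload."""
    blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
    tags = [hmac.new(key, str(i).encode() + b, hashlib.sha256).digest()
            for i, b in enumerate(blocks)]
    return blocks, tags

def audit(blocks, tags, sample=3):
    """Auditor side: spot-check a random subset of block indices."""
    for i in random.sample(range(len(blocks)), sample):
        expected = hmac.new(key, str(i).encode() + blocks[i], hashlib.sha256).digest()
        if not hmac.compare_digest(expected, tags[i]):
            return False
    return True

blocks, tags = tag_blocks(os.urandom(10 * BLOCK))
print(audit(blocks, tags))                    # True while the stored blocks are intact
blocks[4] = b"\x00" * BLOCK                   # simulate silent corruption at the provider
print(audit(blocks, tags, sample=10))         # False once the corrupted block is sampled
```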

  20. A Secure and Efficient Audit Mechanism for Dynamic Shared Data in Cloud Storage

    Science.gov (United States)

    2014-01-01

    With popularization of cloud services, multiple users easily share and update their data through cloud storage. For data integrity and consistency in the cloud storage, the audit mechanisms were proposed. However, existing approaches have some security vulnerabilities and require a lot of computational overheads. This paper proposes a secure and efficient audit mechanism for dynamic shared data in cloud storage. The proposed scheme prevents a malicious cloud service provider from deceiving an auditor. Moreover, it devises a new index table management method and reduces the auditing cost by employing less complex operations. We prove the resistance against some attacks and show less computation cost and shorter time for auditing when compared with conventional approaches. The results present that the proposed scheme is secure and efficient for cloud storage services managing dynamic shared data. PMID:24959630

  1. INFORMATION SECURITY AND SECURE SEARCH OVER ENCRYPTED DATA IN CLOUD STORAGE SERVICES

    OpenAIRE

    Mr. A Mustagees Shaikh *; Prof. Nitin B. Raut

    2016-01-01

    Cloud computing is widely used as the next-generation architecture of IT enterprises, providing convenient remote access to data storage and application services. Cloud storage can potentially bring great economic savings for data owners and users, but data owners are widely concerned that their private data may be exposed to or handled by cloud providers. Hence, end-to-end encryption techniques and the fuzzy fingerprint technique have been used as solutions for secure cloud data st...

  2. Data backup security in cloud storage system

    OpenAIRE

    Атаян, Борис Геннадьевич; Национальный политехнический университет Армении; Багдасарян, Татевик Араевна; Национальный политехнический университет Армении

    2016-01-01

    A cloud backup system is proposed, which provides means for the effective creation, secure storage and restoration of backups in the Cloud. For data archiving, a new efficient SGBP file format is used in the system, which is based on the DEFLATE compression algorithm. The proposed format provides means for the fast creation of archives that can contain significant amounts of data. Modern approaches to backup archive protection are described in the paper. The SGBP format is also compared to the heavily used ZIP format (both Z...

  3. Storage and Retrieval of Encrypted Data Blocks with In-Line Message Authentication Codes

    NARCIS (Netherlands)

    Bosch, H.G.P.; McLellan Jr, Hubert Rae; Mullender, Sape J.

    2007-01-01

    Techniques are disclosed for in-line storage of message authentication codes with respective encrypted data blocks. In one aspect, a given data block is encrypted and a message authentication code is generated for the encrypted data block. A target address is determined for storage of the encrypted
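
    The disclosed technique stores a message authentication code in line with each encrypted data block. The sketch below is not the patented design; it simply shows one conventional encrypt-then-MAC layout (AES-CTR plus HMAC via the third-party cryptography package, with the nonce | ciphertext | tag record format as an assumption) that yields an in-line tag per stored block:

```python
import hashlib
import hmac
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

enc_key = os.urandom(32)
mac_key = os.urandom(32)

def store_block(plaintext: bytes) -> bytes:
    """Encrypt a data block and append its MAC in line: nonce | ciphertext | tag."""
    nonce = os.urandom(16)
    enc = Cipher(algorithms.AES(enc_key), modes.CTR(nonce)).encryptor()
    ct = enc.update(plaintext) + enc.finalize()
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def load_block(record: bytes) -> bytes:
    """Verify the in-line MAC, then decrypt; fail if the stored block was altered."""
    nonce, ct, tag = record[:16], record[16:-32], record[-32:]
    check = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, check):
        raise ValueError("authentication failed for stored block")
    dec = Cipher(algorithms.AES(enc_key), modes.CTR(nonce)).decryptor()
    return dec.update(ct) + dec.finalize()

record = store_block(b"sensor frame 42")
assert load_block(record) == b"sensor frame 42"
```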

  4. Cloud Storage and Bioinformatics in a private cloud deployment: Lessons for Data Intensive research

    OpenAIRE

    Chang, Victor; Walters, Robert John; Wills, Gary

    2013-01-01

    This paper describes service portability for a private cloud deployment, including a detailed case study about Cloud Storage and bioinformatics services developed as part of the Cloud Computing Adoption Framework (CCAF). Our Cloud Storage design and deployment is based on Storage Area Network (SAN) technologies, details of which include functionalities, technical implementation, architecture and user support. Experiments for data services (backup automation, data recovery and data migration) ...

  5. Identifying Non-Volatile Data Storage Areas: Unique Notebook Identification Information as Digital Evidence

    Directory of Open Access Journals (Sweden)

    Nikica Budimir

    2007-03-01

    Full Text Available The research reported in this paper introduces new techniques to aid in the identification of recovered notebook computers so they may be returned to the rightful owner. We identify non-volatile data storage areas as a means of facilitating the safe storing of computer identification information. A forensic proof of concept tool has been designed to test the feasibility of several storage locations identified within this work to hold the data needed to uniquely identify a computer. The tool was used to perform the creation and extraction of created information in order to allow the analysis of the non-volatile storage locations as valid storage areas capable of holding and preserving the data created within them.  While the format of the information used to identify the machine itself is important, this research only discusses the insertion, storage and ability to retain such information.

  6. Hybrid Swarm Intelligence Optimization Approach for Optimal Data Storage Position Identification in Wireless Sensor Networks

    Science.gov (United States)

    Mohanasundaram, Ranganathan; Periasamy, Pappampalayam Sanmugam

    2015-01-01

    The current high-profile debate with regard to data storage and its growth has become a strategic task in the world of networking. It mainly depends on the sensor nodes called producers, the base stations, and also the consumers (users and sensor nodes) that retrieve and use the data. The main concern dealt with here is to find an optimal data storage position in wireless sensor networks. Earlier works did not utilize swarm intelligence based optimization approaches to find the optimal data storage positions. To achieve this goal, an efficient swarm intelligence approach is used to choose suitable positions for a storage node. Thus, a hybrid particle swarm optimization algorithm has been used to find suitable positions for storage nodes while the total energy cost of data transmission is minimized. Clustering-based distributed data storage is utilized to solve the clustering problem using the fuzzy C-means algorithm. This research work also considers the data rates and locations of multiple producers and consumers to find optimal data storage positions. The algorithm is implemented in a network simulator, and the experimental results show that the proposed clustering and swarm intelligence based ODS strategy is more effective than the earlier approaches. PMID:25734182
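
    The optimisation target here is a storage-node position that minimises the total energy cost of moving data between producers and consumers. The following is a deliberately simplified plain particle swarm sketch over a 2-D field with made-up coordinates and a distance-times-rate cost; the paper's hybrid algorithm and its fuzzy C-means clustering step are not reproduced:

```python
import math
import random

producers = [((10, 80), 5.0), ((90, 75), 3.0), ((50, 10), 4.0)]   # (position, data rate)
consumers = [((20, 20), 2.0), ((80, 30), 6.0)]

def cost(pos):
    """Energy proxy: data rate multiplied by Euclidean distance to the storage position."""
    return sum(rate * math.dist(pos, p) for p, rate in producers + consumers)

particles = [{"x": [random.uniform(0, 100), random.uniform(0, 100)], "v": [0.0, 0.0]}
             for _ in range(20)]
for p in particles:
    p["best"] = list(p["x"])
gbest = list(min((p["x"] for p in particles), key=cost))

for _ in range(100):
    for p in particles:
        for d in range(2):
            p["v"][d] = (0.7 * p["v"][d]
                         + 1.5 * random.random() * (p["best"][d] - p["x"][d])
                         + 1.5 * random.random() * (gbest[d] - p["x"][d]))
            p["x"][d] += p["v"][d]
        if cost(p["x"]) < cost(p["best"]):
            p["best"] = list(p["x"])
        if cost(p["best"]) < cost(gbest):
            gbest = list(p["best"])

print("storage position:", [round(c, 1) for c in gbest], "cost:", round(cost(gbest), 1))
```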

  7. Cavity-enhanced eigenmode and angular hybrid multiplexing in holographic data storage systems.

    Science.gov (United States)

    Miller, Bo E; Takashima, Yuzuru

    2016-12-26

    Resonant optical cavities have been demonstrated to improve energy efficiencies in Holographic Data Storage Systems (HDSS). The orthogonal reference beams supported as cavity eigenmodes can provide another multiplexing degree of freedom to push storage densities toward the limit of 3D optical data storage. While keeping the increased energy efficiency of a cavity enhanced reference arm, image bearing holograms are multiplexed by orthogonal phase code multiplexing via Hermite-Gaussian eigenmodes in a Fe:LiNbO3 medium with a 532 nm laser at two Bragg angles. We experimentally confirmed write rates are enhanced by an average factor of 1.1, and page crosstalk is about 2.5%. This hybrid multiplexing opens up a pathway to increase storage density while minimizing modification of current angular multiplexing HDSS.

  8. Searchable Data Vault: Encrypted Queries in Secure Distributed Cloud Storage

    Directory of Open Access Journals (Sweden)

    Geong Sen Poh

    2017-05-01

    Full Text Available Cloud storage services allow users to efficiently outsource their documents anytime and anywhere. Such convenience, however, leads to privacy concerns. While storage providers may not read users’ documents, attackers may possibly gain access by exploiting vulnerabilities in the storage system. Documents may also be leaked by curious administrators. A simple solution is for the user to encrypt all documents before submitting them. This method, however, makes it impossible to efficiently search for documents as they are all encrypted. To resolve this problem, we propose a multi-server searchable symmetric encryption (SSE) scheme and construct a system called the searchable data vault (SDV). A unique feature of the scheme is that it allows an encrypted document to be divided into blocks and distributed to different storage servers so that no single storage provider has a complete document. By incorporating the scheme, the SDV protects the privacy of documents while allowing for efficient private queries. It utilizes a web interface and a controller that manages user credentials, query indexes and submission of encrypted documents to cloud storage services. It is also the first system that enables a user to simultaneously outsource and privately query documents from a few cloud storage services. Our preliminary performance evaluation shows that this feature introduces acceptable computation overheads when compared to submitting documents directly to a cloud storage service.

  9. Researchers wrangle petabytes of data storage with NAS, tape

    CERN Multimedia

    Pariseau, Beth

    2007-01-01

    "Much is made in the enterprise data storage industry about the performance of disk systems over tape drives, but the managers of one data center that has eached the far limits of capacity say otherwise. Budget and performance demands forced them to build access protocols and data management tools for disk systems from scratch."

  10. Managing security and privacy concerns over data storage in healthcare research.

    Science.gov (United States)

    Mackenzie, Isla S; Mantay, Brian J; McDonnell, Patrick G; Wei, Li; MacDonald, Thomas M

    2011-08-01

    Issues surrounding data security and privacy are of great importance when handling sensitive health-related data for research. The emphasis in the past has been on balancing the risks to individuals with the benefit to society of the use of databases for research. However, a new way of looking at such issues is that by optimising procedures and policies regarding security and privacy of data to the extent that there is no appreciable risk to the privacy of individuals, we can create a 'win-win' situation in which everyone benefits, and pharmacoepidemiological research can flourish with public support. We discuss holistic measures, involving both information technology and people, taken to improve the security and privacy of data storage. After an internal review, we commissioned an external audit by an independent consultant with a view to optimising our data storage and handling procedures. Improvements to our policies and procedures were implemented as a result of the audit. By optimising our storage of data, we hope to inspire public confidence and hence cooperation with the use of health care data in research. Copyright © 2011 John Wiley & Sons, Ltd.

  11. BRISK--research-oriented storage kit for biology-related data.

    Science.gov (United States)

    Tan, Alan; Tripp, Ben; Daley, Denise

    2011-09-01

    In genetic science, large-scale international research collaborations represent a growing trend. These collaborations have demanding and challenging database, storage, retrieval and communication needs. These studies typically involve demographic and clinical data, in addition to the results from numerous genomic studies (omics studies) such as gene expression, eQTL, genome-wide association and methylation studies, which present numerous challenges, thus the need for data integration platforms that can handle these complex data structures. Inefficient methods of data transfer and access control still plague research collaboration. As science becomes more and more collaborative in nature, the need for a system that adequately manages data sharing becomes paramount. Biology-Related Information Storage Kit (BRISK) is a package of several web-based data management tools that provide a cohesive data integration and management platform. It was specifically designed to provide the architecture necessary to promote collaboration and expedite data sharing between scientists. The software, documentation, Java source code and demo are available at http://genapha.icapture.ubc.ca/brisk/index.jsp. BRISK was developed in Java, and tested on an Apache Tomcat 6 server with a MySQL database. denise.daley@hli.ubc.ca.

  12. Privacy-Preserving Outsourced Auditing Scheme for Dynamic Data Storage in Cloud

    OpenAIRE

    Tu, Tengfei; Rao, Lu; Zhang, Hua; Wen, Qiaoyan; Xiao, Jia

    2017-01-01

    As information technology develops, cloud storage has been widely accepted for keeping volumes of data. Remote data auditing scheme enables cloud user to confirm the integrity of her outsourced file via the auditing against cloud storage, without downloading the file from cloud. In view of the significant computational cost caused by the auditing process, outsourced auditing model is proposed to make user outsource the heavy auditing task to third party auditor (TPA). Although the first outso...

  13. A protect solution for data security in mobile cloud storage

    Science.gov (United States)

    Yu, Xiaojun; Wen, Qiaoyan

    2013-03-01

    It is popular to access cloud storage from mobile devices. However, this application suffers from data security risks, especially data leakage and privacy violation problems. These risks exist not only in the cloud storage system but also on the mobile client platform. To reduce the security risk, this paper proposes a new security solution. It makes full use of searchable encryption and trusted computing technology. Given the performance limits of mobile devices, it proposes a trusted-proxy-based protection architecture. The basic design idea, deployment model and key flows are detailed. Analysis of security and performance shows its advantages.

  14. Impact of Data Placement on Resilience in Large-Scale Object Storage Systems

    Energy Technology Data Exchange (ETDEWEB)

    Carns, Philip; Harms, Kevin; Jenkins, John; Mubarak, Misbah; Ross, Robert; Carothers, Christopher

    2016-05-02

    Distributed object storage architectures have become the de facto standard for high-performance storage in big data, cloud, and HPC computing. Object storage deployments using commodity hardware to reduce costs often employ object replication as a method to achieve data resilience. Repairing object replicas after failure is a daunting task for systems with thousands of servers and billions of objects, however, and it is increasingly difficult to evaluate such scenarios at scale on real-world systems. Resilience and availability are both compromised if objects are not repaired in a timely manner. In this work we leverage a high-fidelity discrete-event simulation model to investigate replica reconstruction on large-scale object storage systems with thousands of servers, billions of objects, and petabytes of data. We evaluate the behavior of CRUSH, a well-known object placement algorithm, and identify configuration scenarios in which aggregate rebuild performance is constrained by object placement policies. After determining the root cause of this bottleneck, we then propose enhancements to CRUSH and the usage policies atop it to enable scalable replica reconstruction. We use these methods to demonstrate a simulated aggregate rebuild rate of 410 GiB/s (within 5% of projected ideal linear scaling) on a 1,024-node commodity storage system. We also uncover an unexpected phenomenon in rebuild performance based on the characteristics of the data stored on the system.
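
    The rebuild behaviour studied above depends on how many surviving servers hold replicas of a failed node's objects and can therefore act as rebuild sources. The toy simulation below uses a hash-based placement as a stand-in for CRUSH (it is not the CRUSH algorithm) and invented node counts and per-node bandwidth, just to show the declustering effect:

```python
import collections
import hashlib

NODES, OBJECTS, REPLICAS = 64, 100_000, 3
PER_NODE_BW_GIB_S = 0.5                     # assumed per-node rebuild bandwidth

def place(obj_id):
    """Pseudo-random replica placement; a stand-in for CRUSH, not the real algorithm."""
    digest = hashlib.sha256(str(obj_id).encode()).digest()
    picks = []
    for byte in digest:
        node = byte % NODES
        if node not in picks:
            picks.append(node)
        if len(picks) == REPLICAS:
            break
    return picks

failed = 0
partners = collections.Counter()
for obj in range(OBJECTS):
    nodes = place(obj)
    if failed in nodes:
        partners.update(n for n in nodes if n != failed)

# Declustered placement spreads rebuild sources over many surviving nodes, so the
# idealised aggregate rebuild rate grows with the number of participating nodes.
print(f"{len(partners)} surviving nodes hold replicas of the failed node's objects")
print(f"idealised aggregate rebuild rate ~ {len(partners) * PER_NODE_BW_GIB_S:.1f} GiB/s")
```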

  15. 21 CFR 58.190 - Storage and retrieval of records and data.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 1 2010-04-01 2010-04-01 false Storage and retrieval of records and data. 58.190 Section 58.190 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES..., protocols, specimens, and interim and final reports. Conditions of storage shall minimize deterioration of...

  16. Analysing I/O bottlenecks in LHC data analysis on grid storage resources

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    We describe recent I/O testing frameworks that we have developed and applied within the UK GridPP Collaboration, the ATLAS experiment and the DPM team, for a variety of distinct purposes. These include benchmarking vendor supplied storage products, discovering scaling limits of SRM solutions, tuning of storage systems for experiment data analysis, evaluating file access protocols, and exploring IO read patterns of experiment software and their underlying event data models. With multiple grid sites now dealing with petabytes of data, such studies are becoming increasingly essential. We describe how the tests build, and improve, on previous work and contrast how the use-cases differ. We also detail the results obtained and the implications for storage hardware, middleware and experiment software.

  17. Analysing I/O bottlenecks in LHC data analysis on grid storage resources

    International Nuclear Information System (INIS)

    Bhimji, W; Clark, P; Doidge, M; Hellmich, M P; Skipsey, S; Vukotic, I

    2012-01-01

    We describe recent I/O testing frameworks that we have developed and applied within the UK GridPP Collaboration, the ATLAS experiment and the DPM team, for a variety of distinct purposes. These include benchmarking vendor supplied storage products, discovering scaling limits of SRM solutions, tuning of storage systems for experiment data analysis, evaluating file access protocols, and exploring I/O read patterns of experiment software and their underlying event data models. With multiple grid sites now dealing with petabytes of data, such studies are becoming essential. We describe how the tests build, and improve, on previous work and contrast how the use-cases differ. We also detail the results obtained and the implications for storage hardware, middleware and experiment software.

  18. Evaluation of the Huawei UDS cloud storage system for CERN specific data

    International Nuclear Information System (INIS)

    Resines, M Zotes; Hughes, J; Wang, L; Heikkila, S S; Duellmann, D; Adde, G; Toebbicke, R

    2014-01-01

    Cloud storage is an emerging architecture aiming to provide increased scalability and access performance, compared to more traditional solutions. CERN is evaluating this promise using Huawei UDS and OpenStack SWIFT storage deployments, focusing on the needs of high-energy physics. Both deployed setups implement S3, one of the protocols that are emerging as a standard in the cloud storage market. A set of client machines is used to generate I/O load patterns to evaluate the storage system performance. The presented read and write test results indicate scalability both in metadata and data perspectives. Further, the Huawei UDS cloud storage is shown to be able to recover from a major failure of losing 16 disks. Both cloud storages are finally demonstrated to function as back-end storage systems to a filesystem, which is used to deliver high energy physics software.
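
    Since both deployments expose the S3 protocol, load patterns can be generated with any S3 client. A minimal example with boto3 is sketched below; the endpoint URL, bucket name and credentials are placeholders rather than CERN's configuration, and the bucket is assumed to already exist:

```python
import os
import time

import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://s3.example.internal:8080",   # placeholder endpoint
    aws_access_key_id="ACCESS_KEY",                   # placeholder credentials
    aws_secret_access_key="SECRET_KEY",
)

payload = os.urandom(4 * 1024 * 1024)                 # 4 MiB test object

t0 = time.time()
for i in range(32):
    s3.put_object(Bucket="benchmark", Key=f"obj-{i}", Body=payload)
write_s = time.time() - t0

t0 = time.time()
for i in range(32):
    s3.get_object(Bucket="benchmark", Key=f"obj-{i}")["Body"].read()
read_s = time.time() - t0

print(f"write: {32 * len(payload) / write_s / 1e6:.1f} MB/s, "
      f"read: {32 * len(payload) / read_s / 1e6:.1f} MB/s")
```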

  19. Evaluation of the Huawei UDS cloud storage system for CERN specific data

    Science.gov (United States)

    Zotes Resines, M.; Heikkila, S. S.; Duellmann, D.; Adde, G.; Toebbicke, R.; Hughes, J.; Wang, L.

    2014-06-01

    Cloud storage is an emerging architecture aiming to provide increased scalability and access performance, compared to more traditional solutions. CERN is evaluating this promise using Huawei UDS and OpenStack SWIFT storage deployments, focusing on the needs of high-energy physics. Both deployed setups implement S3, one of the protocols that are emerging as a standard in the cloud storage market. A set of client machines is used to generate I/O load patterns to evaluate the storage system performance. The presented read and write test results indicate scalability both in metadata and data perspectives. Further, the Huawei UDS cloud storage is shown to be able to recover from a major failure of losing 16 disks. Both cloud storages are finally demonstrated to function as back-end storage systems to a filesystem, which is used to deliver high energy physics software.

  20. Move It or Lose It: Cloud-Based Data Storage

    Science.gov (United States)

    Waters, John K.

    2010-01-01

    There was a time when school districts showed little interest in storing or backing up their data to remote servers. Nothing seemed less secure than handing off data to someone else. But in the last few years the buzz around cloud storage has grown louder, and the idea that data backup could be provided as a service has begun to gain traction in…

  1. Rewritable 3D bit optical data storage in a PMMA-based photorefractive polymer

    Energy Technology Data Exchange (ETDEWEB)

    Day, D.; Gu, M. [Swinburne Univ. of Tech., Hawthorn, Vic. (Australia). Centre for Micro-Photonics; Smallridge, A. [Victoria Univ., Melbourne (Australia). School of Life Sciences and Technology

    2001-07-04

    A cheap, compact, and rewritable high-density optical data storage system for CD and DVD applications is presented by the authors. Continuous-wave illumination under two-photon excitation in a new poly(methylmethacrylate) (PMMA) based photorefractive polymer allows 3D bit storage of sub-Tbyte data. (orig.)

  2. Design and Verification of Remote Sensing Image Data Center Storage Architecture Based on Hadoop

    Science.gov (United States)

    Tang, D.; Zhou, X.; Jing, Y.; Cong, W.; Li, C.

    2018-04-01

    The data center is a concept of data processing and application proposed in recent years: a processing method built on data, parallel computing, and compatibility with different hardware clusters. While optimizing the data storage management structure, it fully utilizes cluster computing nodes and improves the efficiency of parallel data applications. This paper used mature Hadoop technology to build a large-scale distributed image management architecture for remote sensing imagery. Using MapReduce parallel processing, the system invokes many computing nodes to process image storage blocks and pyramids in the background, improving the efficiency of image reading and application and solving the need for concurrent, multi-user, high-speed access to remotely sensed data. The rationality, reliability and superiority of the system design were verified by building an actual Hadoop service system, testing the storage efficiency for different image data and multiple users, and analyzing how the distributed storage architecture improves the application efficiency of remote sensing images.
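    The record above describes the approach only at a high level; as a rough illustration of the MapReduce-style grouping of image blocks into pyramid tiles, the following hedged Python sketch could be run as a Hadoop Streaming mapper (the input record format, field names and tile keying are assumptions, not taken from the paper).

    #!/usr/bin/env python3
    # Illustrative Hadoop Streaming mapper: group remote-sensing image blocks by pyramid tile.
    # Assumed input lines: scene_id<TAB>zoom<TAB>row<TAB>col<TAB>block_path
    # Emitted output:      zoom/row/col<TAB>scene_id:block_path
    import sys

    def main():
        for line in sys.stdin:
            fields = line.rstrip("\n").split("\t")
            if len(fields) != 5:
                continue  # skip malformed records
            scene_id, zoom, row, col, block_path = fields
            # Key by (zoom, row, col): all blocks of one pyramid tile reach the same reducer.
            print(f"{zoom}/{row}/{col}\t{scene_id}:{block_path}")

    if __name__ == "__main__":
        main()

    Such a mapper would typically be submitted together with a reducer via the standard hadoop-streaming jar; the reducer then assembles or counts the blocks belonging to each tile.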

  3. Analyzing the Impact of Storage Shortage on Data Availability in Decentralized Online Social Networks

    Directory of Open Access Journals (Sweden)

    Songling Fu

    2014-01-01

    Full Text Available Maintaining data availability is one of the biggest challenges in decentralized online social networks (DOSNs). The existing work often assumes that the friends of a user can always contribute sufficient storage capacity to store all data. However, this assumption is not always true in today’s online social networks (OSNs), because users now often use smart mobile devices to access the OSNs. The limited storage capacity of mobile devices may jeopardize data availability. Therefore, it is desirable to know the relation between the storage capacity contributed by the OSN users and the level of data availability that the OSNs can achieve. This paper addresses this issue: a model of data availability over storage capacity is established, and a novel method is proposed to predict the data availability on the fly. Extensive simulation experiments have been conducted to evaluate the effectiveness of the data availability model and the on-the-fly prediction.

  4. Partial storage optimization and load control strategy of cloud data centers.

    Science.gov (United States)

    Al Nuaimi, Klaithem; Mohamed, Nader; Al Nuaimi, Mariam; Al-Jaroodi, Jameela

    2015-01-01

    We present a novel approach to solve cloud storage issues and provide a fast load-balancing algorithm. Our approach is based on partitioning and concurrent dual-direction download of files from multiple cloud nodes. Partitions of the files, rather than the full files, are saved on the cloud, which provides a good optimization of cloud storage usage. Only partial replication is used in this algorithm to ensure the reliability and availability of the data. Our focus is to improve performance and optimize storage usage by providing DaaS on the cloud. This algorithm solves the problem of having to fully replicate large data sets, which uses up a lot of precious space on the cloud nodes. Reducing the space needed will help in reducing the cost of providing such space. Moreover, performance is also increased, since multiple cloud servers collaborate to deliver the data to cloud clients faster.
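    The abstract does not give implementation details; the following Python sketch only illustrates the general idea of concurrent, dual-direction retrieval of file partitions from two mirror nodes using HTTP range requests (the mirror URLs, file size and partition size are hypothetical).

    # Sketch: fetch a file as partitions from two cloud mirrors, one working from the
    # front and one from the back, then reassemble the file in order.
    import concurrent.futures
    import requests

    MIRRORS = ["https://node-a.example.com/data.bin", "https://node-b.example.com/data.bin"]
    FILE_SIZE = 8 * 1024 * 1024   # assumed known, e.g. from an earlier HEAD request
    PART_SIZE = 1024 * 1024       # 1 MiB partitions

    def fetch_part(url, index):
        start = index * PART_SIZE
        end = min(start + PART_SIZE, FILE_SIZE) - 1
        resp = requests.get(url, headers={"Range": f"bytes={start}-{end}"}, timeout=30)
        resp.raise_for_status()
        return index, resp.content

    def download():
        n_parts = (FILE_SIZE + PART_SIZE - 1) // PART_SIZE
        # Dual direction: mirror 0 serves the first half front-to-back,
        # mirror 1 serves the second half back-to-front.
        jobs = [(MIRRORS[0], i) for i in range(n_parts // 2)]
        jobs += [(MIRRORS[1], i) for i in reversed(range(n_parts // 2, n_parts))]
        parts = {}
        with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
            for index, data in pool.map(lambda job: fetch_part(*job), jobs):
                parts[index] = data
        return b"".join(parts[i] for i in range(n_parts))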

  5. Partial Storage Optimization and Load Control Strategy of Cloud Data Centers

    Science.gov (United States)

    2015-01-01

    We present a novel approach to solve cloud storage issues and provide a fast load-balancing algorithm. Our approach is based on partitioning and concurrent dual-direction download of files from multiple cloud nodes. Partitions of the files, rather than the full files, are saved on the cloud, which provides a good optimization of cloud storage usage. Only partial replication is used in this algorithm to ensure the reliability and availability of the data. Our focus is to improve performance and optimize storage usage by providing DaaS on the cloud. This algorithm solves the problem of having to fully replicate large data sets, which uses up a lot of precious space on the cloud nodes. Reducing the space needed will help in reducing the cost of providing such space. Moreover, performance is also increased, since multiple cloud servers collaborate to deliver the data to cloud clients faster. PMID:25973444

  6. Dual-Wavelength Sensitized Photopolymer for Holographic Data Storage

    Science.gov (United States)

    Tao, Shiquan; Zhao, Yuxia; Wan, Yuhong; Zhai, Qianli; Liu, Pengfei; Wang, Dayong; Wu, Feipeng

    2010-08-01

    Novel photopolymers for holographic storage were investigated by combining acrylate monomers and/or vinyl monomers as recording media with liquid epoxy resins plus an amine hardener as binder. In order to improve the holographic performance of the material in the blue-green wavelength band, two novel dyes were used as sensitizers. The methods of evaluating the holographic performance of the material, including the shrinkage and noise characteristics, are described in detail. Preliminary experiments show that samples with an optimized composition have good holographic performance, and it is possible to record dual-wavelength holograms simultaneously in this photopolymer by sharing the same optical system; thus the storage density and data rate can be doubled.

  7. An intelligent data model for the storage of structured grids

    Science.gov (United States)

    Clyne, John; Norton, Alan

    2013-04-01

    With support from the U.S. National Science Foundation we have developed, and currently maintain, VAPOR: a geosciences-focused, open source visual data analysis package. VAPOR enables highly interactive exploration, as well as qualitative and quantitative analysis, of high-resolution simulation outputs using only a commodity desktop computer. The enabling technology behind VAPOR's ability to interact with a data set, whose size would overwhelm all but the largest analysis computing resources, is a progressive data access file format called the VAPOR Data Collection (VDC). The VDC is based on the discrete wavelet transform and its information compaction properties. Prior to analysis, raw data undergo a wavelet transform, concentrating the information content into a fraction of the coefficients. The coefficients are then sorted by their information content (magnitude) into a small number of bins. Data are reconstructed by applying an inverse wavelet transform. If all of the coefficient bins are used during reconstruction, the process is lossless (up to floating point round-off). If only a subset of the bins is used, an approximation of the original data is produced. A crucial point here is that the principal benefit of reconstruction from a subset of wavelet coefficients is a reduction in I/O. Further, if smaller coefficients are simply discarded, or perhaps stored on more capacious tertiary storage, secondary storage requirements (e.g. disk) can be reduced as well. In practice, these reductions in I/O or storage can be on the order of tens or even hundreds. This talk will briefly describe the VAPOR Data Collection, and will present real world success stories from the geosciences that illustrate how progressive data access enables highly interactive exploration of Big Data.
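    The VDC file format itself is not specified in this record; as a hedged, one-dimensional illustration of the mechanism it describes (wavelet transform, keep only the largest-magnitude coefficients, inverse transform), a sketch using the PyWavelets library follows; the wavelet choice and keep fraction are arbitrary.

    # Illustrative progressive reconstruction from a subset of wavelet coefficients
    # (not the actual VDC format).
    import numpy as np
    import pywt

    def compress(signal, keep_fraction, wavelet="db4"):
        coeffs = pywt.wavedec(signal, wavelet)            # forward DWT
        flat, slices = pywt.coeffs_to_array(coeffs)       # flatten for thresholding
        k = max(1, int(keep_fraction * flat.size))
        threshold = np.sort(np.abs(flat))[-k]             # magnitude of the k-th largest coefficient
        flat_kept = np.where(np.abs(flat) >= threshold, flat, 0.0)
        return flat_kept, slices, wavelet

    def reconstruct(flat_kept, slices, wavelet):
        coeffs = pywt.array_to_coeffs(flat_kept, slices, output_format="wavedec")
        return pywt.waverec(coeffs, wavelet)

    x = np.cumsum(np.random.randn(1024))                    # toy "simulation output"
    approx = reconstruct(*compress(x, keep_fraction=0.05))  # read only ~5% of the coefficients
    print("relative error:", np.linalg.norm(x - approx[:x.size]) / np.linalg.norm(x))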

  8. A Highly Scalable Data Service (HSDS) using Cloud-based Storage Technologies for Earth Science Data

    Science.gov (United States)

    Michaelis, A.; Readey, J.; Votava, P.; Henderson, J.; Willmore, F.

    2017-12-01

    Cloud based infrastructure may offer several key benefits of scalability, built-in redundancy, security mechanisms and reduced total cost of ownership as compared with a traditional data center approach. However, most of the tools and legacy software systems developed for online data repositories within the federal government were not developed with a cloud based infrastructure in mind and do not fully take advantage of commonly available cloud-based technologies. Moreover, services based on object storage are well established and provided by all the leading cloud service providers (Amazon Web Services, Microsoft Azure, Google Cloud, etc.), which can often provide unmatched "scale-out" capabilities and data availability to a large and growing consumer base at a price point unachievable with in-house solutions. We describe a system that utilizes object storage rather than traditional file system based storage to vend earth science data. The system described is not only cost effective, but shows a performance advantage for running many different analytics tasks in the cloud. To enable compatibility with existing tools and applications, we outline client libraries that are API compatible with existing libraries for HDF5 and NetCDF4. Performance of the system is demonstrated using cloud services running on Amazon Web Services.
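    The HSDS service itself exposes HDF5-compatible client APIs; underneath, dataset chunks live as objects in a store such as Amazon S3. The following hedged sketch shows only that underlying object-store access pattern with boto3 (bucket and key names are hypothetical, and this is not the HSDS API).

    # Minimal sketch of reading/writing dataset chunks as S3 objects with boto3.
    import boto3

    s3 = boto3.client("s3")

    def put_chunk(bucket, key, data):
        # Each dataset chunk is stored as an independent object, enabling scale-out reads.
        s3.put_object(Bucket=bucket, Key=key, Body=data)

    def get_chunk(bucket, key):
        resp = s3.get_object(Bucket=bucket, Key=key)
        return resp["Body"].read()

    put_chunk("my-earthdata-bucket", "dataset/temperature/chunk_0_0", b"\x00" * 1024)
    print(len(get_chunk("my-earthdata-bucket", "dataset/temperature/chunk_0_0")), "bytes")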

  9. Development of Data Storage System for Portable Multichannel Analyzer using S D Card

    International Nuclear Information System (INIS)

    Suksompong, Tanate; Ngernvijit, Narippawaj; Sudprasert, Wanwisa

    2009-07-01

    Full text: The development of a data storage system for a portable multichannel analyzer (MCA) focused on the application of an SD card as a storage device instead of older devices whose capacity could not easily be extended. The work consisted of two parts: the first part was the study of pulse detection by designing the input pulse detecting circuit. The second part dealt with the accuracy testing of the data storage system for the portable MCA, consisting of the design of the connecting circuit between the microcontroller and the SD card, the transfer of input pulse data onto the SD card, and the ability of the data storage system to support radiation detection. It was found that the input pulse detecting circuit could detect the input pulse at its maximum voltage, after which the signal was transferred to the microcontroller for data processing. The microcontroller could connect to the SD card via SPI mode. The portable MCA could correctly verify input signals ranging from 0.2 to 5.0 volts. The SD card could store the data as an .xls file, which can easily be accessed by compatible software such as Microsoft Excel

  10. Changes in cod muscle proteins during frozen storage revealed by proteome analysis and multivariate data analysis

    DEFF Research Database (Denmark)

    Kjærsgård, Inger Vibeke Holst; Nørrelykke, M.R.; Jessen, Flemming

    2006-01-01

    Multivariate data analysis has been combined with proteomics to enhance the recovery of information from 2-DE of cod muscle proteins during different storage conditions. Proteins were extracted according to 11 different storage conditions and samples were resolved by 2-DE. Data generated by 2-DE...... was subjected to principal component analysis (PCA) and discriminant partial least squares regression (DPLSR). Applying PCA to 2-DE data revealed the samples to form groups according to frozen storage time, whereas differences due to different storage temperatures or chilled storage in modified atmosphere...... light chain 1, 2 and 3, triose-phosphate isomerase, glyceraldehyde-3-phosphate dehydrogenase, aldolase A and two ?-actin fragments, and a nuclease diphosphate kinase B fragment to change in concentration, during frozen storage. Application of proteomics, multivariate data analysis and MS/MS to analyse...

  11. Systematic Parameterization, Storage, and Representation of Volumetric DICOM Data.

    Science.gov (United States)

    Fischer, Felix; Selver, M Alper; Gezer, Sinem; Dicle, Oğuz; Hillen, Walter

    Tomographic medical imaging systems produce hundreds to thousands of slices, enabling three-dimensional (3D) analysis. Radiologists process these images through various tools and techniques in order to generate 3D renderings for various applications, such as surgical planning, medical education, and volumetric measurements. To save and store these visualizations, current systems use snapshots or video exporting, which prevents further optimizations and requires the storage of significant additional data. The Grayscale Softcopy Presentation State extension of the Digital Imaging and Communications in Medicine (DICOM) standard resolves this issue for two-dimensional (2D) data by introducing an extensive set of parameters, namely 2D Presentation States (2DPR), that describe how an image should be displayed. 2DPR allows storing these parameters instead of storing parameter applied images, which cause unnecessary duplication of the image data. Since there is currently no corresponding extension for 3D data, in this study, a DICOM-compliant object called 3D presentation states (3DPR) is proposed for the parameterization and storage of 3D medical volumes. To accomplish this, the 3D medical visualization process is divided into four tasks, namely pre-processing, segmentation, post-processing, and rendering. The important parameters of each task are determined. Special focus is given to the compression of segmented data, parameterization of the rendering process, and DICOM-compliant implementation of the 3DPR object. The use of 3DPR was tested in a radiology department on three clinical cases, which require multiple segmentations and visualizations during the workflow of radiologists. The results show that 3DPR can effectively simplify the workload of physicians by directly regenerating 3D renderings without repeating intermediate tasks, increase efficiency by preserving all user interactions, and provide efficient storage as well as transfer of visualized data.

  12. Empirical Analysis of Using Erasure Coding in Outsourcing Data Storage With Provable Security

    Science.gov (United States)

    2016-06-01

    Naval Postgraduate School, Monterey, California: thesis, "Empirical Analysis of Using Erasure Coding in Outsourcing Data Storage with Provable Security" (2015 to 06-17-2016). As computing and communication technologies become powerful and advanced, people are exchanging a huge amount of data, and they are demanding more storage...

  13. A novel data storage logic in the cloud.

    Science.gov (United States)

    Mátyás, Bence; Szarka, Máté; Járvás, Gábor; Kusper, Gábor; Argay, István; Fialowski, Alice

    2016-01-01

    Databases which store and manage long-term scientific information related to the life sciences are used to store huge amounts of quantitative attributes. Introducing a new entity attribute normally requires modification of the existing data tables and of the programs that use them. The proposed solution is to increase the number of virtual data tables while the number of screens remains the same. The main objective of the present study was to introduce a logic called Joker Tao (JT), which provides universal data storage for cloud-based databases: all types of input data can be interpreted as both an entity and an attribute at the same time, in the same data table.
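    The record does not include the JT schema itself; as a rough illustration of the underlying idea of a single universal table in which anything can appear as an entity or as an attribute, here is a minimal SQLite sketch (table and column names are illustrative, not taken from the paper).

    # Sketch of a single generic storage table: new attributes need no schema change,
    # and an attribute can itself be described as an entity in later rows.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE universal_store (
            entity    TEXT NOT NULL,
            attribute TEXT NOT NULL,
            value     TEXT,
            PRIMARY KEY (entity, attribute)
        )
    """)

    def put(entity, attribute, value):
        conn.execute("INSERT OR REPLACE INTO universal_store VALUES (?, ?, ?)",
                     (entity, attribute, value))

    put("sample_42", "ph", "6.8")
    put("sample_42", "storage_temp_c", "-80")
    put("ph", "unit", "dimensionless")   # metadata about the attribute itself

    for row in conn.execute("SELECT * FROM universal_store ORDER BY entity"):
        print(row)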

  14. Land Water Storage within the Congo Basin Inferred from GRACE Satellite Gravity Data

    Science.gov (United States)

    Crowley, John W.; Mitrovica, Jerry X.; Bailey, Richard C.; Tamisiea, Mark E.; Davis, James L.

    2006-01-01

    GRACE satellite gravity data are used to estimate terrestrial (surface plus ground) water storage within the Congo Basin in Africa for the period April 2002 - May 2006. These estimates exhibit significant seasonal variations (30 +/- 6 mm of equivalent water thickness) and long-term trends, the latter yielding a total loss of approximately 280 km(exp 3) of water over the 50-month span of data. We also combine GRACE with precipitation data sets (CMAP, TRMM) to explore the relative contribution of the source term to the seasonal hydrological balance within the Congo Basin. We find that the seasonal water storage tends to saturate for anomalies greater than 30-44 mm of equivalent water thickness. Furthermore, precipitation contributed roughly three times the peak water storage after anomalously rainy seasons in early 2003 and 2005, implying an approximately 60-70% loss from runoff and evapotranspiration. Finally, a comparison of residual land water storage (monthly estimates minus best-fitting trends) in the Congo and Amazon Basins shows an anticorrelation, in agreement with the 'see-saw' variability inferred by others from runoff data.

  15. The Grid Enabled Mass Storage System (GEMMS): the Storage and Data management system used at the INFN Tier1 at CNAF.

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    The storage solution currently used in production at the INFN Tier-1 at CNAF, is the result of several years of case studies, software development and tests. This solution, called the Grid Enabled Mass Storage System (GEMSS), is based on a custom integration between a fast and reliable parallel filesystem (IBM GPFS), with a complete integrated tape backend based on TIVOLI TSM Hierarchical storage management (HSM) and the Storage Resource Manager (StoRM), providing access to grid users through a standard SRM interface. Since the start of the operations of the Large Hadron Collider (LHC), all the LHC experiments have been using GEMMS at CNAF for both the fast access to data on disk and the long-term tape archive. Moreover, during the last year, GEMSS has become the standard solution for all the other experiments hosted at CNAF, allowing the definitive consolidation of the data storage layer. Our choice has proved to be successful in the last two years of production with constant enhancements in the software re...

  16. Selective phase masking to reduce material saturation in holographic data storage systems

    Science.gov (United States)

    Phillips, Seth; Fair, Ivan

    2014-09-01

    Emerging networks and applications require enormous data storage. Holographic techniques promise high-capacity storage, given resolution of a few remaining technical issues. In this paper, we propose a technique to overcome one such issue: mitigation of large magnitude peaks in the stored image that cause material saturation resulting in readout errors. We consider the use of ternary data symbols, with modulation in amplitude and phase, and use a phase mask during the encoding stage to reduce the probability of large peaks arising in the stored Fourier domain image. An appropriate mask is selected from a predefined set of pseudo-random masks by computing the Fourier transform of the raw data array as well as the data array multiplied by each mask. The data array or masked array with the lowest Fourier domain peak values is recorded. On readout, the recorded array is multiplied by the mask used during recording to recover the original data array. Simulations are presented that demonstrate the benefit of this approach, and provide insight into the appropriate number of phase masks to use in high capacity holographic data storage systems.
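    The mask-selection step described above lends itself to a compact numerical illustration; the following hedged numpy sketch (page size, number of masks and the ternary alphabet are arbitrary choices) applies each pseudo-random 0/π mask, takes the 2-D FFT, and keeps the candidate whose Fourier-domain peak is lowest.

    # Sketch of selective phase masking: record the data array or masked array whose
    # Fourier-domain peak magnitude is lowest, to reduce material saturation.
    import numpy as np

    rng = np.random.default_rng(0)
    page = rng.choice([-1.0, 0.0, 1.0], size=(64, 64))    # ternary data page
    masks = [np.exp(1j * rng.choice([0.0, np.pi], size=page.shape)) for _ in range(8)]

    def fourier_peak(arr):
        return float(np.max(np.abs(np.fft.fft2(arr))))

    candidates = [page] + [page * m for m in masks]       # the unmasked page is also a candidate
    best = int(np.argmin([fourier_peak(c) for c in candidates]))
    print("selected:", "no mask" if best == 0 else f"mask {best - 1}",
          "peak =", fourier_peak(candidates[best]))
    # On readout, multiplying the recovered array by the same 0/pi mask (which is its
    # own inverse, since its values are +1/-1) restores the original data page.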

  17. Structured storage in ATLAS Distributed Data Management: use cases and experiences

    International Nuclear Information System (INIS)

    Lassnig, Mario; Garonne, Vincent; Beermann, Thomas; Dimitrov, Gancho; Canali, Luca; Molfetas, Angelos; Zang Donal; Azzurra Chinzer, Lisa

    2012-01-01

    The distributed data management system of the high-energy physics experiment ATLAS has a critical dependency on the Oracle Relational Database Management System. Recently, however, the increased appearance of data warehouse-like workloads in the experiment has put considerable and increasing strain on the Oracle database. In particular, the analysis of archived data and the aggregation of data for summary purposes have been especially demanding. For this reason, structured storage systems were evaluated to offload the Oracle database and to handle processing of data in a non-transactional way. This includes distributed file systems like HDFS that support parallel execution of computational tasks on distributed data, as well as non-relational databases like HBase, Cassandra, or MongoDB. In this paper, the most important analysis and aggregation use cases of the data management system are presented, along with how structured storage systems were established to process them.

  18. Holographic data storage: science fiction or science fact?

    Science.gov (United States)

    Anderson, Ken; Ayres, Mark; Askham, Fred; Sissom, Brad

    2014-09-01

    To compete in the archive and backup industries, holographic data storage must be highly competitive in four critical areas: total cost of ownership (TCO), cost/TB, capacity/footprint, and transfer rate. New holographic technology advancements by Akonia Holographics have enabled the potential for ultra-high capacity holographic storage devices that are capable of world record bit densities of over 2-4Tbit/in2, up to 200MB/s transfer rates, and media costs less than $10/TB in the next few years. Additional advantages include more than a 3x lower TCO than LTO, a 3.5x decrease in volumetric footprint, 30ms random access times, and 50 year archive life. At these bit densities, 4.5 Petabytes of uncompressed user data could be stored in a 19" rack system. A demonstration platform based on these new advances has been designed and built by Akonia to progressively demonstrate bit densities of 2Tb/in2, 4Tb/in2, and 8Tb/in2 over the next year. Keywords: holographic

  19. ESGF and WDCC: The Double Structure of the Digital Data Storage at DKRZ

    Science.gov (United States)

    Toussaint, F.; Höck, H.

    2016-12-01

    For several years now, digital repositories in climate science have faced new challenges: international projects are global collaborations, and data storage has in parallel moved to federated, distributed storage systems like ESGF. For long term archival (LTA) storage, on the other hand, communities, funders, and data users make stronger demands on data and metadata quality to facilitate data use and reuse. At DKRZ, this situation led to a twofold data dissemination system - a situation which influences administration, workflows, and the sustainability of the data. The ESGF system is focused on the needs of users as partners in global projects. It includes replication tools, detailed global project standards, and efficient search for the data to download. In contrast, DKRZ's classical CERA LTA storage aims for long term data holding and data curation as well as for data reuse, requiring high metadata quality standards. In addition, for LTA data a Digital Object Identifier publication service for the direct integration of research data in scientific publications has been implemented. The editorial process at DKRZ-LTA ensures the quality of metadata and research data. The DOI and a citation code are provided and afterwards registered under DataCite's (datacite.org) regulations. In the overall data life cycle, continuous reliability of the data and metadata quality is essential to allow for data handling at the petabyte level, long term usability of the data, and adequate publication of the results. These considerations lead to the question "What is quality?" - with respect to the data, to the repository itself, to the publisher, and to the user. Global consensus is needed for these assessments as the phases of the end-to-end workflow gear into each other: for data and metadata, checks need to go hand in hand with the processes of production and storage. The results can be judged following a Quality Maturity Matrix (QMM). Repositories can be certified according to their trustworthiness

  20. A Survey on the Architectures of Data Security in Cloud Storage Infrastructure

    OpenAIRE

    T.Brindha; R.S.Shaji; G.P.Rajesh

    2013-01-01

    Cloud computing is a most alluring technology that facilitates convenient, on-demand network access based on the requirements of users, with nominal effort on management and interaction among cloud providers. The cloud storage serves as a dependable platform for long term storage needs which enables the users to move the data to the cloud in a rapid and secure manner. It helps activities and government agencies considerably decrease their economic overhead of data organization, as they can sto...

  1. Multiplexed optical data storage and vectorial ray tracing

    Directory of Open Access Journals (Sweden)

    Foreman M.R.

    2010-06-01

    Full Text Available With the motivation of creating a terabyte-sized optical disk, a novel imaging technique is implemented. This technique merges two existing technologies: confocal microscopy and Mueller matrix imaging. Mueller matrix images from a high numerical aperture system are obtained. The acquisition of these images makes the exploration of polarisation properties in a sample possible. The particular case of optical data storage is used as an example in this presentation. Since we encode information into asymmetric data pits (see Figure 1), the study of the polarisation of the scattered light can then be used to recover the orientation of the pit. It is thus possible to multiplex information by changing the angle of the mark. The storage capacity of the system is hence limited by the number of distinct angles that the optical system can resolve. This presentation thus answers the question: what is the current storage capacity of a polarisation-sensitive optical disk? After a brief introduction to polarisation, the decoding method and experimental results are presented so as to provide an answer to this question. With the aim of understanding high NA focusing, an introduction to vectorial ray tracing is then given.

  2. 10 CFR 95.25 - Protection of National Security Information and Restricted Data in storage.

    Science.gov (United States)

    2010-01-01

    Title 10 (Energy), 2010 edition, Nuclear Regulatory Commission (continued), Facility Security, Section 95.25 - Protection of National Security Information and Restricted Data in storage. (a) Secret matter, while...

  3. Analysis Report for Exascale Storage Requirements for Scientific Data.

    Energy Technology Data Exchange (ETDEWEB)

    Ruwart, Thomas M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2018-02-01

    Over the next 10 years, the Department of Energy will be transitioning from Petascale to Exascale Computing, resulting in data storage, networking, and infrastructure requirements increasing by three orders of magnitude. The technologies and best practices used today are the result of a relatively slow evolution of ancestral technologies developed in the 1950s and 1960s. These include magnetic tape, magnetic disk, networking, databases, file systems, and operating systems. These technologies will continue to evolve over the next 10 to 15 years on a reasonably predictable path. Experience with the challenges involved in transitioning these fundamental technologies from Terascale to Petascale computing systems has raised questions about how they will scale another 3 or 4 orders of magnitude to meet the requirements imposed by Exascale computing systems. This report is focused on the most concerning scaling issues with data storage systems as they relate to High Performance Computing, and presents options for a path forward. Given the ability to store exponentially increasing amounts of data, far more advanced concepts and use of metadata will be critical to managing data in Exascale computing systems.

  4. Hierarchical storage of large volumes of multidetector CT data using distributed servers

    Science.gov (United States)

    Ratib, Osman; Rosset, Antoine; Heuberger, Joris; Bandon, David

    2006-03-01

    Multidetector scanners and hybrid multimodality scanners can generate large numbers of high-resolution images, resulting in very large data sets. In most cases, these datasets are generated for the sole purpose of producing secondary processed images and 3D rendered images, as well as oblique and curved multiplanar reformatted images. It is therefore not essential to archive the original images after they have been processed. We have developed an architecture of distributed archive servers for temporary storage of large image datasets for 3D rendering and image processing, without the need for long term storage in the PACS archive. With the relatively low cost of storage devices it is possible to configure these servers to hold several months or even years of data, long enough to allow subsequent re-processing if required by specific clinical situations. We tested the latest generation of RAID servers provided by Apple with a capacity of 5 TBytes. We implemented peer-to-peer data access software based on our open-source image management software, OsiriX, allowing remote workstations to directly access DICOM image files located on the server through a technology called "Bonjour". This architecture offers a seamless integration of multiple servers and workstations without the need for a central database or complex workflow management tools. It allows efficient access to image data from multiple workstations for image analysis and visualization without the need for image data transfer. It provides a convenient alternative to a centralized PACS architecture while avoiding complex and time-consuming data transfer and storage.

  5. Integrated data acquisition, storage, retrieval and processing using the COMPASS DataBase (CDB)

    Energy Technology Data Exchange (ETDEWEB)

    Urban, J., E-mail: urban@ipp.cas.cz [Institute of Plasma Physics AS CR, v.v.i., Za Slovankou 3, 182 00 Praha 8 (Czech Republic); Pipek, J.; Hron, M. [Institute of Plasma Physics AS CR, v.v.i., Za Slovankou 3, 182 00 Praha 8 (Czech Republic); Janky, F.; Papřok, R.; Peterka, M. [Institute of Plasma Physics AS CR, v.v.i., Za Slovankou 3, 182 00 Praha 8 (Czech Republic); Department of Surface and Plasma Science, Faculty of Mathematics and Physics, Charles University in Prague, V Holešovičkách 2, 180 00 Praha 8 (Czech Republic); Duarte, A.S. [Instituto de Plasmas e Fusão Nuclear, Instituto Superior Técnico, Universidade Técnica de Lisboa, 1049-001 Lisboa (Portugal)

    2014-05-15

    Highlights: • CDB is used as a new data storage solution for the COMPASS tokamak. • The software is lightweight, open, fast and easily extensible and scalable. • CDB seamlessly integrates with any data acquisition system. • Rich metadata are stored for physics signals. • Data can be processed automatically, based on dependence rules. - Abstract: We present a complex data handling system for the COMPASS tokamak, operated by IPP ASCR Prague, Czech Republic [1]. The system, called CDB (COMPASS DataBase), integrates different data sources, as an assortment of data acquisition hardware and software from different vendors is used. Based on widely available open source technologies wherever possible, CDB is vendor and platform independent and it can be easily scaled and distributed. The data is directly stored and retrieved using a standard NAS (Network Attached Storage), hence independent of the particular technology; the description of the data (the metadata) is recorded in a relational database. The database structure is general and enables the inclusion of multi-dimensional data signals in multiple revisions (no data is overwritten). This design is inherently distributed as the work is off-loaded to the clients. Both NAS and database can be implemented and optimized for fast local access as well as secure remote access. CDB is implemented in the Python language; bindings for Java, C/C++, IDL and Matlab are provided. Independent data acquisition systems as well as nodes managed by FireSignal [2] are all integrated using CDB. An automated data post-processing server is a part of CDB. Based on dependency rules, the server executes, in parallel if possible, prescribed post-processing tasks.
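    CDB's actual API is not reproduced in this record; the following hedged Python sketch only illustrates the general pattern it describes - bulk signal data written as files on shared storage, never overwritten, with metadata and revision numbers kept in a relational database (paths, table and column names are invented for the example).

    # Minimal sketch of the "files on NAS + metadata in a relational database" pattern.
    import sqlite3, time
    import numpy as np
    from pathlib import Path

    NAS_ROOT = Path("/tmp/cdb_demo")              # stands in for the NAS mount point
    NAS_ROOT.mkdir(exist_ok=True)
    db = sqlite3.connect(NAS_ROOT / "metadata.sqlite")
    db.execute("""CREATE TABLE IF NOT EXISTS signals (
        name TEXT, shot INTEGER, revision INTEGER, units TEXT,
        file_path TEXT, created REAL, PRIMARY KEY (name, shot, revision))""")

    def store_signal(name, shot, data, units):
        rev = db.execute("SELECT COALESCE(MAX(revision), 0) + 1 FROM signals "
                         "WHERE name = ? AND shot = ?", (name, shot)).fetchone()[0]
        path = NAS_ROOT / f"{name}_{shot}_r{rev}.npy"
        np.save(path, data)                       # data file on shared storage, never overwritten
        db.execute("INSERT INTO signals VALUES (?, ?, ?, ?, ?, ?)",
                   (name, shot, rev, units, str(path), time.time()))
        db.commit()

    store_signal("plasma_current", shot=12345, data=np.linspace(0, 1, 1000), units="A")
    print(db.execute("SELECT name, shot, revision, file_path FROM signals").fetchall())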

  6. Converged photonic data storage and switch platform for exascale disaggregated data centers

    Science.gov (United States)

    Pitwon, R.; Wang, K.; Worrall, A.

    2017-02-01

    We report on a converged optically enabled Ethernet storage, switch and compute platform, which could support future disaggregated data center architectures. The platform includes optically enabled Ethernet switch controllers, an advanced electro-optical midplane and optically interchangeable generic end node devices. We demonstrate system level performance using optically enabled Ethernet disk drives and micro-servers across optical links of varied lengths.

  7. The Grid Enabled Mass Storage System (GEMSS): the Storage and Data management system used at the INFN Tier1 at CNAF

    International Nuclear Information System (INIS)

    Ricci, Pier Paolo; Cavalli, Alessandro; Dell'Agnello, Luca; Gregori, Daniele; Prosperini, Andrea; Rinaldi, Lorenzo; Sapunenko, Vladimir; Bonacorsi, Daniele; Vagnoni, Vincenzo

    2012-01-01

    The storage system currently used in production at the INFN Tier1 at CNAF is the result of several years of case studies, software development and tests. This solution, called the Grid Enabled Mass Storage System (GEMSS), is based on a custom integration between a fast and reliable parallel filesystem (the IBM General Parallel File System, GPFS), with a complete integrated tape backend based on the Tivoli Storage Manager (TSM), which provides Hierarchical Storage Management (HSM) capabilities, and the Grid Storage Resource Manager (StoRM), providing access to grid users through a standard SRM interface. Since the start of the Large Hadron Collider (LHC) operation, all LHC experiments have been using GEMSS at CNAF for both disk data access and long-term archival on tape media. Moreover, during the last year, GEMSS has become the standard solution for all other experiments hosted at CNAF, allowing the definitive consolidation of the data storage layer. Our choice has proved to be very successful during the last two years of production with continuous enhancements, accurate monitoring and effective customizations according to end-user requests. In this paper a description of the system is reported, addressing recent developments and giving an overview of the administration and monitoring tools. We also discuss the solutions adopted in order to grant the maximum availability of the service and the latest optimization features within the data access process. Finally, we summarize the main results obtained during these last years of activity from the perspective of some of the end-users, showing the reliability and the high performances that can be achieved using GEMSS.

  8. Online data handling and storage at the CMS experiment

    CERN Document Server

    Andre, Jean-marc Olivier; Behrens, Ulf; Branson, James; Chaze, Olivier; Demiragli, Zeynep; Dobson, Marc; Dupont, Aymeric; Erhan, Samim; Gigi, Dominique; Glege, Frank; Gomez Ceballos, Guillelmo; Hegeman, Jeroen Guido; Holzner, Andre Georg; Jimenez Estupinan, Raul; Masetti, Lorenzo; Meijers, Franciscus; Meschi, Emilio; Mommsen, Remigius; Morovic, Srecko; Nunez Barranco Fernandez, Carlos; O'Dell, Vivian; Orsini, Luciano; Paus, Christoph Maria Ernst; Petrucci, Andrea; Pieri, Marco; Racz, Attila; Roberts, Penelope Amelia; Sakulin, Hannes; Schwick, Christoph; Stieger, Benjamin Bastian; Sumorok, Konstanty; Veverka, Jan; Zaza, Salvatore; Zejdl, Petr

    2015-01-01

    During the LHC Long Shutdown 1, the CMS Data Acquisition (DAQ) system underwent a partial redesign to replace obsolete network equipment, use more homogeneous switching technologies, and support new detector back-end electronics. The software and hardware infrastructure to provide input, execute the High Level Trigger (HLT) algorithms and deal with output data transport and storage has also been redesigned to be completely file-based. All the metadata needed for bookkeeping are stored in files as well, in the form of small 'documents' using the JSON encoding. The Storage and Transfer System (STS) is responsible for aggregating these files produced by the HLT, storing them temporarily and transferring them to the T0 facility at CERN for subsequent offline processing. The STS merger service aggregates the output files produced by the HLT from ~62 sources at an aggregate rate of ~2GB/s. An estimated bandwidth of 7GB/s in concurrent read/write mode is needed. Furthermore, the STS has to be able to store ...
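    The exact schema of the CMS bookkeeping documents is not given here; purely as an illustration of file-based bookkeeping with small JSON documents and a merger step, the following Python sketch writes one metadata document per source and aggregates them per luminosity section (all field and file names are hypothetical).

    # Sketch of file-based bookkeeping: one small JSON document per output file,
    # plus a merger that aggregates the documents for a given luminosity section.
    import json
    from pathlib import Path

    SPOOL = Path("/tmp/sts_demo")
    SPOOL.mkdir(exist_ok=True)

    def write_metadata(source_id, lumisection, n_events, data_file):
        doc = {"source": source_id, "ls": lumisection, "events": n_events, "file": data_file}
        (SPOOL / f"source{source_id}_ls{lumisection}.jsn").write_text(json.dumps(doc))

    def merge(lumisection):
        docs = [json.loads(p.read_text()) for p in SPOOL.glob(f"source*_ls{lumisection}.jsn")]
        return {"ls": lumisection,
                "events": sum(d["events"] for d in docs),
                "files": sorted(d["file"] for d in docs)}

    for src in range(3):
        write_metadata(src, lumisection=1, n_events=1000 + src, data_file=f"run1_src{src}.dat")
    print(merge(1))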

  9. Archiving and Managing Remote Sensing Data using State of the Art Storage Technologies

    Science.gov (United States)

    Lakshmi, B.; Chandrasekhara Reddy, C.; Kishore, S. V. S. R. K.

    2014-11-01

    Integrated Multi-mission Ground Segment for Earth Observation Satellites (IMGEOS) was established with an objective to eliminate human interaction to the maximum extent. All emergency data products will be delivered within an hour of acquisition through FTP delivery. All other standard data products will be delivered through FTP within a day. The IMGEOS activity was envisaged to reengineer the entire chain of operations at the ground segment facilities of NRSC at Shadnagar and Balanagar campuses to adopt an integrated multi-mission approach. To achieve this, the Information Technology Infrastructure was consolidated by implementing virtualized tiered storage and network computing infrastructure in a newly built Data Centre at Shadnagar Campus. One important activity that influences all other activities in the integrated multi-mission approach is the design of appropriate storage and network architecture for realizing all the envisaged operations in a highly streamlined, reliable and secure environment. Storage was consolidated based on the major factors like accessibility, long term data protection, availability, manageability and scalability. The broad operational activities are reception of satellite data, quick look, generation of browse, production of standard and valueadded data products, production chain management, data quality evaluation, quality control and product dissemination. For each of these activities, there are numerous other detailed sub-activities and pre-requisite tasks that need to be implemented to support the above operations. The IMGEOS architecture has taken care of choosing the right technology for the given data sizes, their movement and long-term lossless retention policies. Operational costs of the solution are kept to the minimum possible. Scalability of the solution is also ensured. The main function of the storage is to receive and store the acquired satellite data, facilitate high speed availability of the data for further

  10. Phase-image-based content-addressable holographic data storage

    Science.gov (United States)

    John, Renu; Joseph, Joby; Singh, Kehar

    2004-03-01

    We propose and demonstrate the use of phase images for content-addressable holographic data storage. The use of binary phase-based data pages with 0 and π phase changes produces a uniform spectral distribution at the Fourier plane. The absence of a strong DC component at the Fourier plane and the greater intensity of higher-order spatial frequencies facilitate better recording of higher spatial frequencies and improve the discrimination capability of the content-addressable memory. This improves the results of associative recall in a holographic memory system, and can give a low number of false hits even for small search arguments. The phase-modulated pixels also provide an opportunity for subtraction among data pixels, leading to better discrimination between similar data pages.
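    The claim about the suppressed DC component can be checked numerically; the following hedged numpy sketch (page size and bit pattern are arbitrary) compares the Fourier-plane peak of a conventional 0/1 amplitude page with a 0/π phase page.

    # Sketch: compare the Fourier spectrum of an amplitude data page (0/1) with a
    # binary phase page (0/pi), illustrating the suppressed DC component.
    import numpy as np

    rng = np.random.default_rng(1)
    bits = rng.integers(0, 2, size=(128, 128))

    amplitude_page = bits.astype(float)          # 0/1 amplitude encoding
    phase_page = np.exp(1j * np.pi * bits)       # 0/pi phase encoding (values +1/-1)

    for name, page in [("amplitude", amplitude_page), ("phase", phase_page)]:
        spectrum = np.abs(np.fft.fft2(page))
        print(f"{name:9s} DC = {spectrum[0, 0]:.0f}   max non-DC = {np.max(spectrum.flat[1:]):.0f}")
    # For the random 0/1 page roughly half of the total spectral energy sits in the DC
    # term, while the balanced +1/-1 phase page spreads it across the spatial frequencies.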

  11. StorNet: Integrated Dynamic Storage and Network Resource Provisioning and Management for Automated Data Transfers

    International Nuclear Information System (INIS)

    Gu Junmin; Natarajan, Vijaya; Shoshani, Arie; Sim, Alex; Katramatos, Dimitrios; Liu Xin; Yu Dantong; Bradley, Scott; McKee, Shawn

    2011-01-01

    StorNet is a joint project of Brookhaven National Laboratory (BNL) and Lawrence Berkeley National Laboratory (LBNL) to research, design, and develop an integrated end-to-end resource provisioning and management framework for high-performance data transfers. The StorNet framework leverages heterogeneous network protocols and storage types in a federated computing environment to provide the capability of predictable, efficient delivery of high-bandwidth data transfers for data intensive applications. The framework incorporates functional modules to perform such data transfers through storage and network bandwidth co-scheduling, storage and network resource provisioning, and performance monitoring, and is based on LBNL's BeStMan/SRM, BNL's TeraPaths, and ESNet's OSCARS systems.

  12. Effective grouping for energy and performance: Construction of adaptive, sustainable, and maintainable data storage

    Science.gov (United States)

    Essary, David S.

    The performance gap between processors and storage systems has been increasingly critical over the years. Yet the performance disparity remains, and further, storage energy consumption is rapidly becoming a new critical problem. While smarter caching and predictive techniques do much to alleviate this disparity, the problem persists, and data storage remains a growing contributor to latency and energy consumption. Attempts have been made at data layout maintenance, or intelligent physical placement of data, yet in practice, basic heuristics remain predominant. Problems that early studies sought to solve via layout strategies were proven to be NP-Hard, and data layout maintenance today remains more art than science. With unknown potential and a domain inherently full of uncertainty, layout maintenance persists as an area largely untapped by modern systems. But uncertainty in workloads does not imply randomness; access patterns have exhibited repeatable, stable behavior. Predictive information can be gathered, analyzed, and exploited to improve data layouts. Our goal is a dynamic, robust, sustainable predictive engine, aimed at improving existing layouts by replicating data at the storage device level. We present a comprehensive discussion of the design and construction of such a predictive engine, including workload evaluation, where we present and evaluate classical workloads as well as our own highly detailed traces collected over an extended period. We demonstrate significant gains through an initial static grouping mechanism, and compare against an optimal grouping method of our own construction, and further show significant improvement over competing techniques. We also explore and illustrate the challenges faced when moving from static to dynamic (i.e. online) grouping, and provide motivation and solutions for addressing these challenges. These challenges include metadata storage, appropriate predictive collocation, online performance, and physical placement

  13. Compressing Control System Data for Efficient Storage and Retrieval

    International Nuclear Information System (INIS)

    Christopher Larrieu

    2003-01-01

    The controls group at the Thomas Jefferson National Accelerator Facility (Jefferson Lab) acquires multiple terabytes of EPICS control system data per year via CZAR, its new archiving system. By heuristically applying a combination of rudimentary compression techniques, in conjunction with several specialized data transformations and algorithms, the CZAR storage engine reduces the size of this data by approximately 88 percent, without any loss of information. While the compression process requires significant memory and processor time, the decompression routine suffers only slightly in this regard
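    The specific transformations used by CZAR are not described in this summary; as a hedged illustration of the general recipe (a data transformation that makes values small and repetitive, followed by a general-purpose lossless compressor), here is a minimal delta-encoding sketch for slowly varying integer samples.

    # Illustrative lossless pipeline for slowly varying archiver data (not CZAR's
    # actual algorithms): delta-encode integer samples, then compress with zlib.
    import zlib
    import numpy as np

    def compress(samples):
        deltas = np.diff(samples, prepend=0)     # consecutive differences are small
        return zlib.compress(deltas.astype(np.int32).tobytes(), level=9)

    def decompress(blob):
        deltas = np.frombuffer(zlib.decompress(blob), dtype=np.int32)
        return np.cumsum(deltas)                 # invert the delta transform

    rng = np.random.default_rng(0)
    raw = (1000 + np.cumsum(rng.integers(-2, 3, 100_000))).astype(np.int32)
    blob = compress(raw)
    assert np.array_equal(decompress(blob), raw) # lossless round trip
    print(f"compression ratio: {raw.nbytes / len(blob):.1f}x")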

  14. An emerging network storage management standard: Media error monitoring and reporting information (MEMRI) - to determine optical tape data integrity

    Science.gov (United States)

    Podio, Fernando; Vollrath, William; Williams, Joel; Kobler, Ben; Crouse, Don

    1998-01-01

    Sophisticated network storage management applications are rapidly evolving to satisfy a market demand for highly reliable data storage systems with large data storage capacities and performance requirements. To preserve a high degree of data integrity, these applications must rely on intelligent data storage devices that can provide reliable indicators of data degradation. Error correction activity generally occurs within storage devices without notification to the host. Early indicators of degradation and media error monitoring and reporting (MEMR) techniques implemented in data storage devices allow network storage management applications to notify system administrators of these events and to take appropriate corrective actions before catastrophic errors occur. Although MEMR techniques have been implemented in data storage devices for many years, until 1996 no MEMR standards existed. In 1996 the American National Standards Institute (ANSI) approved the only known (world-wide) industry standard specifying MEMR techniques to verify stored data on optical disks. This industry standard was developed under the auspices of the Association for Information and Image Management (AIIM). A recently formed AIIM Optical Tape Subcommittee initiated the development of another data integrity standard specifying a set of media error monitoring tools and media error monitoring information (MEMRI) to verify stored data on optical tape media. This paper discusses the need for intelligent storage devices that can provide data integrity metadata, the content of the existing data integrity standard for optical disks, and the content of the MEMRI standard being developed by the AIIM Optical Tape Subcommittee.

  15. A Secure and Effective Anonymous Integrity Checking Protocol for Data Storage in Multicloud

    Directory of Open Access Journals (Sweden)

    Lingwei Song

    2015-01-01

    Full Text Available How to verify the integrity of outsourced data is an important problem in cloud storage. Most previous work focuses on three aspects: providing data dynamics, public verifiability, and privacy against verifiers with the help of a third-party auditor. In this paper, we propose an identity-based data storage and integrity verification protocol on untrusted cloud. The proposed protocol can guarantee fair results without any third-party verifying auditor. The theoretical analysis and simulation results show that our protocols are secure and efficient.
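    The identity-based protocol itself is not detailed in this record; to make the underlying challenge-response idea concrete, here is a hedged, deliberately simplified Python sketch of remote integrity spot-checking, in which the data owner keeps only per-block digests and challenges the storage server on a randomly chosen block (this is not the paper's scheme, which additionally provides anonymity and avoids retrieving full blocks).

    # Minimal illustration of remote integrity spot-checking with per-block digests.
    import hashlib
    import secrets

    BLOCK = 4096

    def blocks(data):
        return [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]

    def make_digests(data):
        # Owner-side state at upload time: ~32 bytes per block, not the data itself.
        return [hashlib.sha256(b).digest() for b in blocks(data)]

    def server_answer(stored, index):
        return blocks(stored)[index]             # the server must return the raw block

    def owner_check(digests, index, answer):
        return hashlib.sha256(answer).digest() == digests[index]

    data = secrets.token_bytes(10 * BLOCK)       # what the owner uploaded
    digests = make_digests(data)
    idx = secrets.randbelow(len(digests))        # random spot check
    print("block", idx, "intact:", owner_check(digests, idx, server_answer(data, idx)))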

  16. An evaluation of Oracle for persistent data storage and analysis of LHC physics data

    International Nuclear Information System (INIS)

    Grancher, E.; Marczukajtis, M.

    2001-01-01

    CERN's IT/DB group is currently exploring the possibility of using Oracle to store LHC physics data. This paper presents preliminary results from this work, concentrating on two aspects: the storage of RAW data and the analysis of TAG data. The RAW data part of the study discusses the throughput that one can achieve with the Oracle database system, the options for storing the data and an estimation of the associated overheads. The TAG data analysis focuses on the use of new and extended indexing features of Oracle to perform efficient cuts on the data. The tests were performed with Oracle 8.1.7

  17. Data needs for long-term dry storage of LWR fuel. Interim report

    International Nuclear Information System (INIS)

    Einziger, R.E.; Baldwin, D.L.; Pitman, S.G.

    1998-04-01

    The NRC approved dry storage of spent fuel in an inert environment for a period of 20 years pursuant to 10CFR72. However, at-reactor dry storage of spent LWR fuel may need to be implemented for periods of time significantly longer than the NRC's original 20-year license period, largely due to uncertainty as to the date the US DOE will begin accepting commercial spent fuel. This factor is leading utilities to plan not only for life-of-plant spent-fuel storage during reactor operation but also for the contingency of a lengthy post-shutdown storage. To meet NRC standards, dry storage must (1) maintain subcriticality, (2) prevent release of radioactive material above acceptable limits, (3) ensure that radiation rates and doses do not exceed acceptable limits, and (4) maintain retrievability of the stored radioactive material. In light of these requirements, this study evaluates the potential for storing spent LWR fuel for up to 100 years. It also identifies major uncertainties as well as the data required to eliminate them. Results show that the lower radiation fields and temperatures after 20 years of dry storage promote acceptable fuel behavior and the extension of storage for up to 100 years. Potential changes in the properties of dry storage system components, other than spent-fuel assemblies, must still be evaluated

  18. On-Chip Fluorescence Switching System for Constructing a Rewritable Random Access Data Storage Device.

    Science.gov (United States)

    Nguyen, Hoang Hiep; Park, Jeho; Hwang, Seungwoo; Kwon, Oh Seok; Lee, Chang-Soo; Shin, Yong-Beom; Ha, Tai Hwan; Kim, Moonil

    2018-01-10

    We report the development of an on-chip fluorescence switching system based on DNA strand displacement and DNA hybridization for the construction of a rewritable and randomly accessible data storage device. In this study, the feasibility and potential effectiveness of our proposed system was evaluated with a series of wet experiments involving 40 bits (5 bytes) of data encoding a 5-character text (KRIBB). Also, a flexible data rewriting function was achieved by converting fluorescence signals between "ON" and "OFF" through DNA strand displacement and hybridization events. In addition, the proposed system was successfully validated on a microfluidic chip, which could further facilitate the encoding and decoding process of data. To the best of our knowledge, this is the first report on the use of DNA hybridization and DNA strand displacement in the field of data storage devices. Taken together, our results demonstrate that DNA-based fluorescence switching could be applicable to constructing a rewritable and randomly accessible data storage device through controllable DNA manipulations.

  19. UV-Photodimerization in Uracil-substituted dendrimers for high density data storage

    DEFF Research Database (Denmark)

    Lohse, Brian; Vestberg, Robert; Ivanov, Mario Tonev

    2007-01-01

    Two series of uracil-functionalized dendritic macromolecules based on poly (amidoamine) PAMAM and 2,2-bis(hydroxymethylpropionic acid) bis-MPA backbones were prepared and their photoinduced (2 pi+2 pi) cycloaddition reactions upon exposure to UV light at 257 nm examined. Dendrimers up to 4th...... generation were synthesized and investigated as potential materials for high capacity optical data storage with their dimerization efficiency compared to uracil as a reference compound. This allows the impact of increasing the generation number of the dendrimers, both the number of chromophores, as well...... nm with an intensity of 70 mW/cm(2) could be obtained suggesting future use as recording media for optical data storage. (c) 2007 Wiley Periodicals, Inc....

  20. An Open-Source Data Storage and Visualization Back End for Experimental Data

    DEFF Research Database (Denmark)

    Nielsen, Kenneth; Andersen, Thomas; Jensen, Robert

    2014-01-01

    In this article, a flexible free and open-source software system for data logging and presentation will be described. The system is highly modular and adaptable and can be used in any laboratory in which continuous and/or ad hoc measurements require centralized storage. A presentation component...... for the data back end has furthermore been written that enables live visualization of data on any device capable of displaying Web pages. The system consists of three parts: data-logging clients, a data server, and a data presentation Web site. The logging of data from independent clients leads to high...... and to interfere with the experiment if needed. The data stored consist both of specific measurements and of continuously logged system parameters. The latter is crucial to a variety of automation and surveillance features, and three cases of such features are described: monitoring system health, getting status...

  1. Clone-based Data Index in Cloud Storage Systems

    Directory of Open Access Journals (Sweden)

    He Jing

    2016-01-01

    Full Text Available The storage systems have been challenged by the development of cloud computing. Traditional data indexes cannot satisfy the requirements of cloud computing because of the huge index volumes and the need for quick response times. Meanwhile, because of the increasing size of the data index and its dynamic characteristics, the previous approaches, which rebuild the index or fully back up the index before the data changes, cannot satisfy the needs of today's big data indexing. To solve these problems, we propose a double-layer index structure that overcomes the throughput limitation of a single point server. Then, a clone-based B+ tree structure is proposed to achieve high performance and adapt to a dynamic environment. The experimental results show that our clone-based solution has high efficiency.

  2. Ultrasonic identity data storage and archival system

    International Nuclear Information System (INIS)

    Mc Kenzie, J.M.; Self, B.G.; Walker, J.E.

    1987-01-01

    Ultrasonic seals are being used to determine if an underwater stored spent fuel container has been compromised and can be used to determine if a nuclear material container has been compromised. The Seal Pattern Reader (SPAR) is a microprocessor controlled instrument which interrogates an ultrasonic seal to obtain its identity. The SPAR can compare the present identity with a previous identity, which it obtains from a magnetic bubble cassette memory. A system has been developed which allows an IAEA inspector to transfer seal information obtained at a facility by the SPAR to an IAEA-based data storage and retrieval system, using the bubble cassette memory. Likewise, magnetic bubbles can be loaded at the IAEA with seal signature data needed at a facility for comparison purposes. The archived signatures can be retrieved from the data base for relevant statistical manipulation and for plotting

  3. Data storage and retrieval system abstract

    Science.gov (United States)

    Matheson, Barbara

    1992-09-01

    The STX mass storage system design is intended for environments requiring high speed access to large volumes of data (terabyte and greater). Prior to commitment to a product design plan, STX conducted an exhaustive study of the commercially available off-the-shelf hardware and software. STX also conducted research into the area of emerging technologies in networks and storage media so that the design could easily accommodate new interfaces and peripherals as they came on the market. All the selected system elements were brought together in a demo suite sponsored jointly by STX and ALLIANT, where the system elements were evaluated based on actual operation using a client-server mirror image configuration. Testing was conducted to assess the various component overheads and results were compared against vendor data claims. The resultant system, while adequate to meet our capacity requirements, fell short of transfer speed expectations. A product team led by STX was assembled and chartered with solving the bottleneck issues. Optimization efforts yielded a 60 percent improvement in throughput performance. The ALLIANT computer platform provided the I/O flexibility needed to accommodate a multitude of peripheral interfaces including the following: up to twelve 25MB/s VME I/O channels; up to five HiPPI I/O full duplex channels; IPI-s, SCSI, SMD, and RAID disk array support; standard networking software support for TCP/IP, NFS, and FTP; open architecture based on standard RISC processors; and a V.4/POSIX-based operating system (Concentrix). All components including the software are modular in design and can be reconfigured as needs and system uses change. Users can begin with a small system and add modules as needed in the field. Most add-ons can be accomplished seamlessly without revision, recompilation or re-linking of software.

  4. Distributed Scheme to Authenticate Data Storage Security in Cloud Computing

    OpenAIRE

    B. Rakesh; K. Lalitha; M. Ismail; H. Parveen Sultana

    2017-01-01

    Cloud Computing is the revolution in current generation IT enterprise. Cloud computing displaces databases and application software to large data centres, where the management of services and data may not be predictable, whereas the conventional solutions for IT services are under proper logical, physical and personal controls. This shift, however, brings different security challenges which have not been well understood. This work concentrates on cloud data storage security which h...

  5. Petaminer: Using ROOT for efficient data storage in MySQL database

    Science.gov (United States)

    Cranshaw, J.; Malon, D.; Vaniachine, A.; Fine, V.; Lauret, J.; Hamill, P.

    2010-04-01

    High Energy and Nuclear Physics (HENP) experiments store Petabytes of event data and Terabytes of calibration data in ROOT files. The Petaminer project is developing a custom MySQL storage engine to enable the MySQL query processor to directly access experimental data stored in ROOT files. Our project is addressing the problem of efficient navigation to PetaBytes of HENP experimental data described with event-level TAG metadata, which is required by data intensive physics communities such as the LHC and RHIC experiments. Physicists need to be able to compose a metadata query and rapidly retrieve the set of matching events, where improved efficiency will facilitate the discovery process by permitting rapid iterations of data evaluation and retrieval. Our custom MySQL storage engine enables the MySQL query processor to directly access TAG data stored in ROOT TTrees. As ROOT TTrees are column-oriented, reading them directly provides improved performance over traditional row-oriented TAG databases. Leveraging the flexible and powerful SQL query language to access data stored in ROOT TTrees, the Petaminer approach enables rich MySQL index-building capabilities for further performance optimization.
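
    The efficiency argument above, that reading column-oriented TTree branches beats scanning a row-oriented TAG table, can be illustrated with a trivial stand-in. The sketch below uses plain Python lists and invented attribute names rather than ROOT or MySQL, so it only shows the layout difference, not the actual Petaminer storage engine.

```python
# Illustrative only: why column-oriented TAG storage speeds up event selection.
# The real Petaminer work plugs a custom storage engine into MySQL; here we
# just contrast row-wise and column-wise layouts in plain Python.
import random

n_events = 100_000
# row-oriented layout: one dict per event (like a row-store TAG database)
rows = [{"run": 1, "event": i, "n_muon": random.randint(0, 4),
         "missing_et": random.uniform(0, 200)} for i in range(n_events)]

# column-oriented layout: one list per attribute (like a ROOT TTree branch)
cols = {k: [r[k] for r in rows] for k in ("run", "event", "n_muon", "missing_et")}

# Query: "events with at least 2 muons and missing ET > 100".
# A row store must touch every attribute of every row...
sel_rows = [r["event"] for r in rows if r["n_muon"] >= 2 and r["missing_et"] > 100]

# ...while a column store only reads the branches used in the predicate.
sel_cols = [e for e, nm, met in zip(cols["event"], cols["n_muon"], cols["missing_et"])
            if nm >= 2 and met > 100]

assert sel_rows == sel_cols
```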

  6. Petaminer: Using ROOT for efficient data storage in MySQL database

    International Nuclear Information System (INIS)

    Cranshaw, J; Malon, D; Vaniachine, A; Fine, V; Lauret, J; Hamill, P

    2010-01-01

    High Energy and Nuclear Physics (HENP) experiments store Petabytes of event data and Terabytes of calibration data in ROOT files. The Petaminer project is developing a custom MySQL storage engine to enable the MySQL query processor to directly access experimental data stored in ROOT files. Our project is addressing the problem of efficient navigation to PetaBytes of HENP experimental data described with event-level TAG metadata, which is required by data intensive physics communities such as the LHC and RHIC experiments. Physicists need to be able to compose a metadata query and rapidly retrieve the set of matching events, where improved efficiency will facilitate the discovery process by permitting rapid iterations of data evaluation and retrieval. Our custom MySQL storage engine enables the MySQL query processor to directly access TAG data stored in ROOT TTrees. As ROOT TTrees are column-oriented, reading them directly provides improved performance over traditional row-oriented TAG databases. Leveraging the flexible and powerful SQL query language to access data stored in ROOT TTrees, the Petaminer approach enables rich MySQL index-building capabilities for further performance optimization.

  7. Hybrid data storage system in an HPC exascale environment

    Science.gov (United States)

    Bent, John M.; Faibish, Sorin; Gupta, Uday K.; Tzelnic, Percy; Ting, Dennis P. J.

    2015-08-18

    A computer-executable method, system, and computer program product for managing I/O requests from a compute node in communication with a data storage system, including a first burst buffer node and a second burst buffer node, the computer-executable method, system, and computer program product comprising striping data on the first burst buffer node and the second burst buffer node, wherein a first portion of the data is communicated to the first burst buffer node and a second portion of the data is communicated to the second burst buffer node, processing the first portion of the data at the first burst buffer node, and processing the second portion of the data at the second burst buffer node.
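
    As a rough illustration of the striping described in the abstract, the sketch below alternates fixed-size chunks of an I/O request between two in-memory "burst buffer nodes" and reassembles them; the chunk size and the node representation are assumptions made for the example, not the patented system.

```python
# Minimal sketch of the striping idea from the abstract: alternate fixed-size
# chunks of an I/O request between two burst buffer nodes, then reassemble.
CHUNK = 4  # bytes per stripe unit (unrealistically small, for clarity)

def stripe(data: bytes, n_nodes: int = 2):
    """Return one list of chunks per burst buffer node."""
    nodes = [[] for _ in range(n_nodes)]
    for i in range(0, len(data), CHUNK):
        nodes[(i // CHUNK) % n_nodes].append(data[i:i + CHUNK])
    return nodes

def reassemble(nodes):
    """Interleave the per-node chunk lists back into the original byte order."""
    out = bytearray()
    for i in range(max(len(n) for n in nodes)):
        for n in nodes:
            if i < len(n):
                out += n[i]
    return bytes(out)

payload = b"compute-node checkpoint data ..."
first, second = stripe(payload)       # first portion -> node 1, second -> node 2
assert reassemble([first, second]) == payload
```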

  8. Technology Assessment of High Capacity Data Storage Systems: Can We Avoid a Data Survivability Crisis?

    Science.gov (United States)

    Halem, Milton

    1999-01-01

    In a recent address at the California Science Center in Los Angeles, Vice President Al Gore articulated a Digital Earth Vision. That vision spoke to developing a multi-resolution, three-dimensional visual representation of the planet into which we can roam and zoom into vast quantities of embedded geo-referenced data. The vision was not limited to moving through space, but also allowing travel over a time-line, which can be set for days, years, centuries, or even geological epochs. A working group of Federal Agencies, developing a coordinated program to implement the Vice President's vision, developed the definition of the Digital Earth as a visual representation of our planet that enables a person to explore and interact with the vast amounts of natural and cultural geo-referenced information gathered about the Earth. One of the challenges identified by the agencies was whether the technology existed that would be available to permanently store and deliver all the digital data that enterprises might want to save for decades and centuries. Satellite digital data is growing by Moore's Law as is the growth of computer generated data. Similarly, the density of digital storage media in our information-intensive society is also increasing by a factor of four every three years. The technological bottleneck is that the bandwidth for transferring data is only growing at a factor of four every nine years. This implies that the migration of data to viable long-term storage is growing more slowly. The implication is that older data stored on increasingly obsolete media are at considerable risk if they cannot be continuously migrated to media with longer life times. Another problem occurs when the software and hardware systems for which the media were designed are no longer serviced by their manufacturers. Many instances exist where support for these systems are phased out after mergers or even in going out of business. In addition, survivability of older media can suffer from

  9. Sensor data storage performance: SQL or NoSQL, physical or virtual

    NARCIS (Netherlands)

    Veen, J.S. van der; Waaij, B.D. van der; Meijer, R.J.

    2012-01-01

    Sensors are used to monitor certain aspects of the physical or virtual world and databases are typically used to store the data that these sensors provide. The use of sensors is increasing, which leads to an increasing demand on sensor data storage platforms. Some sensor monitoring applications need

  10. Benefits and Pitfalls of GRACE Terrestrial Water Storage Data Assimilation

    Science.gov (United States)

    Girotto, Manuela

    2018-01-01

    Satellite observations of terrestrial water storage (TWS) from the Gravity Recovery and Climate Experiment (GRACE) mission have a coarse resolution in time (monthly) and space (roughly 150,000 sq km at midlatitudes) and vertically integrate all water storage components over land, including soil moisture and groundwater. Nonetheless, data assimilation can be used to horizontally downscale and vertically partition GRACE-TWS observations. This presentation illustrates some of the benefits and drawbacks of assimilating TWS observations from GRACE into a land surface model over the continental United States and India. The assimilation scheme yields improved skill metrics for groundwater compared to the no-assimilation simulations. A smaller impact is seen for surface and root-zone soil moisture. Further, GRACE observes TWS depletion associated with anthropogenic groundwater extraction. Results from the assimilation emphasize the importance of representing anthropogenic processes in land surface modeling and data assimilation systems.

  11. Towards regional, error-bounded landscape carbon storage estimates for data-deficient areas of the world.

    Directory of Open Access Journals (Sweden)

    Simon Willcock

    Full Text Available Monitoring landscape carbon storage is critical for supporting and validating climate change mitigation policies. These may be aimed at reducing deforestation and degradation, or increasing terrestrial carbon storage at local, regional and global levels. However, due to data-deficiencies, default global carbon storage values for given land cover types such as 'lowland tropical forest' are often used, termed 'Tier 1 type' analyses by the Intergovernmental Panel on Climate Change (IPCC). Such estimates may be erroneous when used at regional scales. Furthermore uncertainty assessments are rarely provided leading to estimates of land cover change carbon fluxes of unknown precision which may undermine efforts to properly evaluate land cover policies aimed at altering land cover dynamics. Here, we present a repeatable method to estimate carbon storage values and associated 95% confidence intervals (CI) for all five IPCC carbon pools (aboveground live carbon, litter, coarse woody debris, belowground live carbon and soil carbon) for data-deficient regions, using a combination of existing inventory data and systematic literature searches, weighted to ensure the final values are regionally specific. The method meets the IPCC 'Tier 2' reporting standard. We use this method to estimate carbon storage over an area of 33.9 million hectares of eastern Tanzania, reporting values for 30 land cover types. We estimate that this area stored 6.33 (5.92-6.74) Pg C in the year 2000. Carbon storage estimates for the same study area extracted from five published Africa-wide or global studies show a mean carbon storage value of ∼50% of that reported using our regional values, with four of the five studies reporting lower carbon storage values. This suggests that carbon storage may have been underestimated for this region of Africa. Our study demonstrates the importance of obtaining regionally appropriate carbon storage estimates, and shows how such values can be produced

  12. The TDR: A Repository for Long Term Storage of Geophysical Data and Metadata

    Science.gov (United States)

    Wilson, A.; Baltzer, T.; Caron, J.

    2006-12-01

    For many years Unidata has provided easy, low cost data access to universities and research labs. Historically Unidata technology provided access to data in near real time. In recent years Unidata has additionally turned to providing middleware to serve longer term data and associated metadata via its THREDDS technology, the most recent offering being the THREDDS Data Server (TDS). The TDS provides middleware for metadata access and management, OPeNDAP data access, and integration with the Unidata Integrated Data Viewer (IDV), among other benefits. The TDS was designed to support rolling archives of data, that is, data that exist only for a relatively short, predefined time window. Now we are creating an addition to the TDS, called the THREDDS Data Repository (TDR), which allows users to store and retrieve data and other objects for an arbitrarily long time period. Data in the TDR can also be served by the TDS. The TDR performs important functions of locating storage for the data, moving the data to and from the repository, assigning unique identifiers, and generating metadata. The TDR framework supports pluggable components that allow tailoring an implementation for a particular application. The Linked Environments for Atmospheric Discovery (LEAD) project provides an excellent use case for the TDR. LEAD is a multi-institutional Large Information Technology Research project funded by the National Science Foundation (NSF). The goal of LEAD is to create a framework based on Grid and Web Services to support mesoscale meteorology research and education. This includes capabilities such as launching forecast models, mining data for meteorological phenomena, and dynamic workflows that are automatically reconfigurable in response to changing weather. LEAD presents unique challenges in managing and storing large data volumes from real-time observational systems as well as data that are dynamically created during the execution of adaptive workflows. For example, in order to

  13. Compression and decompression of digital seismic waveform data for storage and communication

    International Nuclear Information System (INIS)

    Bhadauria, Y.S.; Kumar, Vijai

    1991-01-01

    Two different classes of data compression schemes, namely physical data compression schemes and logical data compression schemes, are examined for their use in storage and communication of digital seismic waveform data. In physical data compression schemes, the physical size of the waveform is reduced. One therefore gets only a broad picture of the original waveform when the data are retrieved and the waveform is reconstituted. Correlation between the original and decompressed waveforms varies inversely with the data compression ratio. In the logical data compression schemes, the data are stored in a logically encoded form. Storage of unnecessary characters like blank space is avoided. On decompression, the original data are retrieved and the compression error is nil. Three algorithms of the logical data compression class have been developed and studied. These are: 1) optimum formatting scheme, 2) differential bit reduction scheme, and 3) six bit compression scheme. Results of the above three logical compression algorithms are compared with those of physical compression schemes reported in the literature. It is found that for all types of data, the six bit compression scheme gives the highest value of data compression ratio. (author). 6 refs., 8 figs., 1 appendix, 2 tabs
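
    As a loose illustration of the logical-compression idea, in the spirit of differential bit reduction, where small first differences replace raw samples and the original data are recovered exactly, consider the sketch below; it is not the paper's actual algorithm.

```python
# Illustrative lossless "differential" coding of integer waveform samples:
# store the first sample plus first differences, which are typically small and
# therefore pack into fewer bits. This only sketches the idea behind logical
# compression; it is not the six-bit or optimum-formatting scheme of the paper.
def compress(samples):
    return [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]

def decompress(diffs):
    out = [diffs[0]]
    for d in diffs[1:]:
        out.append(out[-1] + d)
    return out

waveform = [1200, 1203, 1207, 1206, 1199, 1190, 1185]
packed = compress(waveform)
assert decompress(packed) == waveform           # compression error is nil
# raw samples need ~11 bits each; the differences fit in far fewer
print(max(abs(d) for d in packed[1:]))          # -> 9
```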

  14. Statistical analyses of the magnet data for the advanced photon source storage ring magnets

    International Nuclear Information System (INIS)

    Kim, S.H.; Carnegie, D.W.; Doose, C.; Hogrefe, R.; Kim, K.; Merl, R.

    1995-01-01

    The statistics of the measured magnetic data of 80 dipole, 400 quadrupole, and 280 sextupole magnets of conventional resistive designs for the APS storage ring are summarized. In order to accommodate the vacuum chamber, the curved dipole has a C-type cross section, and the quadrupole and sextupole cross sections have 180 degree and 120 degree symmetries, respectively. The data statistics include the integrated main fields, multipole coefficients, magnetic and mechanical axes, and roll angles of the main fields. The average and rms values of the measured magnet data meet the storage ring requirements.

  15. Towards Blockchain-based Auditable Storage and Sharing of IoT Data

    OpenAIRE

    Shafagh , Hossein; Hithnawi , Anwar; Duquennoy , Simon

    2017-01-01

    International audience; Today the cloud plays a central role in storing, processing, and distributing data. Despite contributing to the rapid development of various applications, including the IoT, the current centralized storage architecture has led to a myriad of isolated data silos and is preventing the full potential of holistic data-driven analytics for IoT data from being realized. In this abstract, we advocate a data-centric design for IoT with focus on resilience, sharing, and auditable protection of ...

  16. Mahanaxar: quality of service guarantees in high-bandwidth, real-time streaming data storage

    Energy Technology Data Exchange (ETDEWEB)

    Bigelow, David [Los Alamos National Laboratory; Bent, John [Los Alamos National Laboratory; Chen, Hsing-Bung [Los Alamos National Laboratory; Brandt, Scott [UCSC

    2010-04-05

    Large radio telescopes, cyber-security systems monitoring real-time network traffic, and others have specialized data storage needs: guaranteed capture of an ultra-high-bandwidth data stream, retention of the data long enough to determine what is 'interesting,' retention of interesting data indefinitely, and concurrent read/write access to determine what data is interesting, without interrupting the ongoing capture of incoming data. Mahanaxar addresses this problem. Mahanaxar guarantees streaming real-time data capture at (nearly) the full rate of the raw device, allows concurrent read and write access to the device on a best-effort basis without interrupting the data capture, and retains data as long as possible given the available storage. It has built-in mechanisms for reliability and indexing, can scale to meet arbitrary bandwidth requirements, and handles both small and large data elements equally well. Results from our prototype implementation show that Mahanaxar provides both better guarantees and better performance than traditional file systems.
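
    A toy model of the central guarantee, capture always succeeds while reads are best effort and "interesting" data can be retained, might look like the sketch below; the capacity, record format and pinning interface are invented for the example and are not Mahanaxar's actual design.

```python
from collections import deque

# Toy model: capture always wins. New records are always written (the oldest
# non-interesting data is dropped when space runs out), while readers get
# best-effort access to whatever is still retained.
class StreamBuffer:
    def __init__(self, capacity):
        self.buf = deque(maxlen=capacity)   # full buffer silently drops oldest
        self.pinned = {}                    # records marked "interesting"

    def capture(self, seq, record):
        """Guaranteed-rate path: never blocks on readers."""
        self.buf.append((seq, record))

    def read(self, seq):
        """Best-effort path: may miss data that was already overwritten."""
        for s, rec in self.buf:
            if s == seq:
                return rec
        return self.pinned.get(seq)         # interesting data is kept longer

    def pin(self, seq):
        """Retain an interesting record beyond the rolling capture window."""
        rec = self.read(seq)
        if rec is not None:
            self.pinned[seq] = rec


sb = StreamBuffer(capacity=1000)
for i in range(5000):
    sb.capture(i, b"\x00" * 64)
    if i == 4200:
        sb.pin(4200)                        # analysis decided this was interesting
assert sb.read(10) is None                  # long since overwritten
assert sb.read(4200) is not None            # retained beyond the capture window
```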

  17. Computer program for storage and retrieval of thermal-stability data for explosives

    International Nuclear Information System (INIS)

    Ashcraft, R.W.

    1981-06-01

    A computer program for storage and retrieval of thermal stability data has been written in HP Basic for the HP-9845 system. The data library is stored on a 9885 flexible disk. A program listing and sample outputs are included as appendices

  18. The Earthscope USArray Array Network Facility (ANF): Evolution of Data Acquisition, Processing, and Storage Systems

    Science.gov (United States)

    Davis, G. A.; Battistuz, B.; Foley, S.; Vernon, F. L.; Eakins, J. A.

    2009-12-01

    Since April 2004 the Earthscope USArray Transportable Array (TA) network has grown to over 400 broadband seismic stations that stream multi-channel data in near real-time to the Array Network Facility in San Diego. In total, over 1.7 terabytes per year of 24-bit, 40 samples-per-second seismic and state of health data is recorded from the stations. The ANF provides analysts access to real-time and archived data, as well as state-of-health data, metadata, and interactive tools for station engineers and the public via a website. Additional processing and recovery of missing data from on-site recorders (balers) at the stations is performed before the final data is transmitted to the IRIS Data Management Center (DMC). Assembly of the final data set requires additional storage and processing capabilities to combine the real-time data with baler data. The infrastructure supporting these diverse computational and storage needs currently consists of twelve virtualized Sun Solaris Zones executing on nine physical server systems. The servers are protected against failure by redundant power, storage, and networking connections. Storage needs are provided by a hybrid iSCSI and Fiber Channel Storage Area Network (SAN) with access to over 40 terabytes of RAID 5 and 6 storage. Processing tasks are assigned to systems based on parallelization and floating-point calculation needs. On-site buffering at the data-loggers provide protection in case of short-term network or hardware problems, while backup acquisition systems at the San Diego Supercomputer Center and the DMC protect against catastrophic failure of the primary site. Configuration management and monitoring of these systems is accomplished with open-source (Cfengine, Nagios, Solaris Community Software) and commercial tools (Intermapper). In the evolution from a single server to multiple virtualized server instances, Sun Cluster software was evaluated and found to be unstable in our environment. Shared filesystem

  19. Dendronized macromonomers for three-dimensional data storage

    DEFF Research Database (Denmark)

    Khan, A.; Daugaard, Anders Egede; Bayles, A.

    2009-01-01

    A series of dendritic macromonomers have been synthesized and utilized as the photoactive component in holographic storage systems leading to high performance, low shrinkage materials.

  20. Large-scale electrophysiology: acquisition, compression, encryption, and storage of big data.

    Science.gov (United States)

    Brinkmann, Benjamin H; Bower, Mark R; Stengel, Keith A; Worrell, Gregory A; Stead, Matt

    2009-05-30

    The use of large-scale electrophysiology to obtain high spatiotemporal resolution brain recordings (>100 channels) capable of probing the range of neural activity from local field potential oscillations to single-neuron action potentials presents new challenges for data acquisition, storage, and analysis. Our group is currently performing continuous, long-term electrophysiological recordings in human subjects undergoing evaluation for epilepsy surgery using hybrid intracranial electrodes composed of up to 320 micro- and clinical macroelectrode arrays. DC-capable amplifiers, sampling at 32kHz per channel with 18-bits of A/D resolution are capable of resolving extracellular voltages spanning single-neuron action potentials, high frequency oscillations, and high amplitude ultra-slow activity, but this approach generates 3 terabytes of data per day (at 4 bytes per sample) using current data formats. Data compression can provide several practical benefits, but only if data can be compressed and appended to files in real-time in a format that allows random access to data segments of varying size. Here we describe a state-of-the-art, scalable, electrophysiology platform designed for acquisition, compression, encryption, and storage of large-scale data. Data are stored in a file format that incorporates lossless data compression using range-encoded differences, a 32-bit cyclically redundant checksum to ensure data integrity, and 128-bit encryption for protection of patient information.
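
    The file format described above, lossless compression of sample differences, a per-block checksum, and appendable blocks that can be read independently, can be approximated with standard-library tools. The sketch below substitutes zlib and CRC32 for the paper's range-encoded differences and omits the 128-bit encryption layer, so it is only an analogue of the real format, not the authors' implementation.

```python
import struct, zlib

# Rough analogue of the block-based format described above: each block holds
# losslessly compressed sample differences plus a checksum, and blocks can be
# appended and read back independently (random access by block).
def pack_block(samples):
    diffs = [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]
    comp = zlib.compress(struct.pack("<%di" % len(diffs), *diffs))
    return struct.pack("<II", len(comp), zlib.crc32(comp)) + comp

def unpack_block(blob, offset=0):
    size, crc = struct.unpack_from("<II", blob, offset)
    comp = blob[offset + 8: offset + 8 + size]
    assert zlib.crc32(comp) == crc, "block corrupted"
    raw = zlib.decompress(comp)
    diffs = struct.unpack("<%di" % (len(raw) // 4), raw)
    samples = [diffs[0]]
    for d in diffs[1:]:
        samples.append(samples[-1] + d)
    return samples, offset + 8 + size

channel = [(i % 7) - 3 for i in range(32_000)]          # fake sample stream
blob = pack_block(channel[:16_000]) + pack_block(channel[16_000:])
first, next_offset = unpack_block(blob)
second, _ = unpack_block(blob, next_offset)
assert first + second == channel                        # lossless round trip
```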

  1. Monitoring of large-scale federated data storage: XRootD and beyond

    International Nuclear Information System (INIS)

    Andreeva, J; Beche, A; Arias, D Diguez; Giordano, D; Saiz, P; Tuckett, D; Belov, S; Oleynik, D; Petrosyan, A; Tadel, M; Vukotic, I

    2014-01-01

    The computing models of the LHC experiments are gradually moving from hierarchical data models with centrally managed data pre-placement towards federated storage, which provides seamless access to data files independently of their location and dramatically improves recovery thanks to fail-over mechanisms. Construction of the data federations and understanding the impact of the new approach to data management on user analysis require complete and detailed monitoring. Monitoring functionality should cover the status of all components of the federated storage, measuring data traffic and data access performance, as well as being able to detect any kind of inefficiencies and to provide hints for resource optimization and effective data distribution policy. Data mining of the collected monitoring data provides a deep insight into new usage patterns. In the WLCG context, there are several federations currently based on the XRootD technology. This paper will focus on monitoring for the ATLAS and CMS XRootD federations implemented in the Experiment Dashboard monitoring framework. Both federations consist of many dozens of sites accessed by many hundreds of clients and they continue to grow in size. Handling of the monitoring flow generated by these systems has to be well optimized in order to achieve the required performance. Furthermore, this paper demonstrates that the XRootD monitoring architecture is sufficiently generic to be easily adapted for other technologies, such as HTTP/WebDAV dynamic federations.

  2. Privacy-Preserving Outsourced Auditing Scheme for Dynamic Data Storage in Cloud

    Directory of Open Access Journals (Sweden)

    Tengfei Tu

    2017-01-01

    Full Text Available As information technology develops, cloud storage has been widely accepted for keeping volumes of data. A remote data auditing scheme enables a cloud user to confirm the integrity of her outsourced file via auditing against the cloud storage, without downloading the file from the cloud. In view of the significant computational cost caused by the auditing process, an outsourced auditing model has been proposed in which the user hands the heavy auditing task to a third party auditor (TPA). Although the first outsourced auditing scheme can protect against a malicious TPA, it gives the TPA read access to the user’s outsourced data, which is a potential risk for user data privacy. In this paper, we introduce the notion of User Focus for outsourced auditing, which emphasizes the idea of letting the user remain in control of her own data. Based on User Focus, our proposed scheme not only prevents the user’s data from leaking to the TPA, without depending on data encryption, but also avoids the use of an additional independent random source, which is very difficult to obtain in practice. We also describe how to make our scheme support dynamic updates. According to the security analysis and experimental evaluations, our proposed scheme is provably secure and significantly efficient.

  3. Optically Addressed Nanostructures for High Density Data Storage

    Science.gov (United States)

    2005-10-14

    ... beam to sub-wavelength resolutions. Refereed journal publications (excerpt): M. D. Stenner, D. J. Gauthier, and M. A. Neifeld, "The speed of information in a ..."; "... profiles for high-density optical data storage," Optics Communications, Vol. 253, pp. 56-69, 2005; M. D. Stenner, D. J. Gauthier, and M. A. Neifeld, "Fast ... causal information transmission in a medium with a slow group velocity," Physical Review Letters, Vol. 94, February 2005.

  4. Challenges for data storage in medical imaging research.

    Science.gov (United States)

    Langer, Steve G

    2011-04-01

    Researchers in medical imaging face multiple challenges in storing, indexing, maintaining the viability of, and sharing their data. Addressing all these concerns requires a constellation of tools, but not all of them need to be local to the site. In particular, the data storage challenges faced by researchers can begin to require professional information technology skills. With limited human resources and funds, the medical imaging researcher may be better served with an outsourcing strategy for some management aspects. This paper outlines an approach to manage the main objectives faced by medical imaging scientists whose work includes processing and data mining on non-standard file formats, and relating those files to their DICOM standard descendants. The capacity of the approach scales as the researcher's need grows by leveraging the on-demand provisioning ability of cloud computing.

  5. Unleashed Microactuators electrostatic wireless actuation for probe-based data storage

    NARCIS (Netherlands)

    Hoexum, A.M.

    2007-01-01

    Summary A hierarchical overview of the currently available data storage systems for desktop computer systems can be visualised as a pyramid in which the height represents both the price per bit and the access rate. The width of the pyramid represents the capacity of the medium. At the bottom slow,

  6. A pre-research on GWAC massive catalog data storage and processing system

    NARCIS (Netherlands)

    M. Wan (Meng); C. Wu (Chao); Y. Zhang (Ying); Y. Xu (Yang); J. Wei (Jianyan)

    2016-01-01

    GWAC (Ground Wide Angle Camera) poses huge challenges in large-scale catalogue storage and real-time processing of quick search of transients among wide field-of-view time-series data. Firstly, this paper proposes the concept of using databases’ capabilities of fast data processing and

  7. Using Object Storage Technology vs Vendor Neutral Archives for an Image Data Repository Infrastructure.

    Science.gov (United States)

    Bialecki, Brian; Park, James; Tilkin, Mike

    2016-08-01

    The intent of this project was to use object storage and its database, which has the ability to add custom extensible metadata to an imaging object being stored within the system, to harness the power of its search capabilities, and to close the technology gap that healthcare faces. This creates a non-disruptive tool that can be used natively by both legacy systems and the healthcare systems of today which leverage more advanced storage technologies. The base infrastructure can be populated alongside current workflows without any interruption to the delivery of services. In certain use cases, this technology can be seen as a true alternative to the VNA (Vendor Neutral Archive) systems implemented by healthcare today. The scalability, security, and ability to process complex objects makes this more than just storage for image data and a commodity to be consumed by PACS (Picture Archiving and Communication System) and workstations. Object storage is a smart technology that can be leveraged to create vendor independence, standards compliance, and a data repository that can be mined for truly relevant content by adding additional context to search capabilities. This functionality can lead to efficiencies in workflow and a wealth of minable data to improve outcomes into the future.

  8. Data Storage for Social Networks A Socially Aware Approach

    CERN Document Server

    Tran, Duc A

    2012-01-01

    Evidenced by the success of Facebook, Twitter, and LinkedIn, online social networks (OSNs) have become ubiquitous, offering novel ways for people to access information and communicate with each other. As the increasing popularity of social networking is undeniable, scalability is an important issue for any OSN that wants to serve a large number of users. Storing user data for the entire network on a single server can quickly lead to a bottleneck, and, consequently, more servers are needed to expand storage capacity and lower data request traffic per server. Adding more servers is just one step

  9. Determination of the size of an imaging data storage device at a full PACS hospital

    International Nuclear Information System (INIS)

    Cha, S. J.; Kim, Y. H.; Hur, G.

    2000-01-01

    To determine the appropriate size of short- and long-term storage devices, bearing in mind the design factors involved and the installation costs. The number of radiologic studies quoted is the number undertaken during a one-year period at a university hospital with 650 beds, and reflects the actual number of each type of examination performed at a full PACS hospital. The average daily number of outpatients was 1586, while that of inpatients was 639.5. The numbers of radiologic studies performed were as follows: 378 among 189 outpatients, and 165 among 41 inpatients. The average daily number of examinations was 543, comprising 460 CR, 30 ultrasonograms, 25 CT, 8 MRI, and 20 others. The total amount of digital images was 17.4 GB per day, while the amount of short-term data with lossless compression was 6.7 GB per day. For 14 days of short-term storage, the amount of image data was 93.7 GB in the disk array. The amount of data stored mid-term (1 year), with lossy compression, was 369.1 GB. The amounts of data stored in the form of long-term cache and educational images were 38.7 GB and 30 GB, respectively. The total size of the disk array was 531.5 GB. A device suitable for the long-term storage of images, for at least five years, requires a capacity of 1845.5 GB. At a full PACS hospital with 600 beds, the minimum disk space required for the short- and mid-term storage of image data in a disk array is 540 GB. The capacity required for long-term storage (at least five years) is 1900 GB. (author)
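
    The sizing figures in the abstract follow from straightforward arithmetic; the short script below reproduces them, with all input values taken from the text and small differences attributable to rounding.

```python
# Back-of-the-envelope reconstruction of the sizing in the abstract; all input
# figures are taken from the text, and small differences come from rounding.
daily_lossless_gb = 6.7                      # short-term data after lossless compression

short_term_gb = daily_lossless_gb * 14       # 14-day short-term window
mid_term_gb = 369.1                          # 1 year with lossy compression
cache_gb, education_gb = 38.7, 30.0

disk_array_gb = short_term_gb + mid_term_gb + cache_gb + education_gb
long_term_gb = mid_term_gb * 5               # at least five years of lossy data

print(round(short_term_gb, 1))   # 93.8  (reported as 93.7 GB)
print(round(disk_array_gb, 1))   # 531.6 (reported as 531.5 GB)
print(round(long_term_gb, 1))    # 1845.5, matching the reported figure
```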

  10. Online data handling and storage at the CMS experiment

    Science.gov (United States)

    Andre, J.-M.; Andronidis, A.; Behrens, U.; Branson, J.; Chaze, O.; Cittolin, S.; Darlea, G.-L.; Deldicque, C.; Demiragli, Z.; Dobson, M.; Dupont, A.; Erhan, S.; Gigi, D.; Glege, F.; Gómez-Ceballos, G.; Hegeman, J.; Holzner, A.; Jimenez-Estupiñán, R.; Masetti, L.; Meijers, F.; Meschi, E.; Mommsen, RK; Morovic, S.; Nuñez-Barranco-Fernández, C.; O'Dell, V.; Orsini, L.; Paus, C.; Petrucci, A.; Pieri, M.; Racz, A.; Roberts, P.; Sakulin, H.; Schwick, C.; Stieger, B.; Sumorok, K.; Veverka, J.; Zaza, S.; Zejdl, P.

    2015-12-01

    During the LHC Long Shutdown 1, the CMS Data Acquisition (DAQ) system underwent a partial redesign to replace obsolete network equipment, use more homogeneous switching technologies, and support new detector back-end electronics. The software and hardware infrastructure to provide input, execute the High Level Trigger (HLT) algorithms and deal with output data transport and storage has also been redesigned to be completely file-based. All the metadata needed for bookkeeping are stored in files as well, in the form of small documents using the JSON encoding. The Storage and Transfer System (STS) is responsible for aggregating these files produced by the HLT, storing them temporarily and transferring them to the T0 facility at CERN for subsequent offline processing. The STS merger service aggregates the output files produced by the HLT from ∼62 sources at an aggregate rate of ∼2GB/s. An estimated bandwidth of 7GB/s in concurrent read/write mode is needed. Furthermore, the STS has to be able to store several days of continuous running, so an estimated 250TB of total usable disk space is required. In this article we present the various technological and implementation choices of the three components of the STS: the distributed file system, the merger service and the transfer system.
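
    As a simplified illustration of the file-based bookkeeping and merging described above, the sketch below aggregates per-source JSON metadata documents and concatenates the corresponding data files. The directory layout, file names and JSON fields are invented for this sketch and do not reflect the actual CMS formats or the STS merger implementation.

```python
import json, glob, os

# Toy illustration of file-based bookkeeping: each data-producing source writes
# a data file plus a small JSON metadata document; the merger sums the event
# counts and concatenates the data files into one merged output.
def merge_lumisection(in_dir, out_path):
    total_events, parts = 0, []
    for meta_path in sorted(glob.glob(os.path.join(in_dir, "*.jsn"))):
        with open(meta_path) as f:
            meta = json.load(f)              # e.g. {"events": 1234, "data": "x.dat"}
        total_events += meta["events"]
        parts.append(os.path.join(in_dir, meta["data"]))

    with open(out_path, "wb") as out:        # concatenate the per-source output files
        for p in parts:
            with open(p, "rb") as src:
                out.write(src.read())

    with open(out_path + ".jsn", "w") as f:  # merged metadata document
        json.dump({"events": total_events, "files_merged": len(parts)}, f)
    return total_events
```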

  11. Online Data Handling and Storage at the CMS Experiment

    Energy Technology Data Exchange (ETDEWEB)

    Andre, J. M.; et al.

    2015-12-23

    During the LHC Long Shutdown 1, the CMS Data Acquisition (DAQ) system underwent a partial redesign to replace obsolete network equipment, use more homogeneous switching technologies, and support new detector back-end electronics. The software and hardware infrastructure to provide input, execute the High Level Trigger (HLT) algorithms and deal with output data transport and storage has also been redesigned to be completely file-based. All the metadata needed for bookkeeping are stored in files as well, in the form of small documents using the JSON encoding. The Storage and Transfer System (STS) is responsible for aggregating these files produced by the HLT, storing them temporarily and transferring them to the T0 facility at CERN for subsequent offline processing. The STS merger service aggregates the output files produced by the HLT from ~62 sources at an aggregate rate of ~2GB/s. An estimated bandwidth of 7GB/s in concurrent read/write mode is needed. Furthermore, the STS has to be able to store several days of continuous running, so an estimated 250TB of total usable disk space is required. In this article we present the various technological and implementation choices of the three components of the STS: the distributed file system, the merger service and the transfer system.

  12. Water storage changes in North America retrieved from GRACE gravity and GPS data

    Directory of Open Access Journals (Sweden)

    Hansheng Wang

    2015-07-01

    Full Text Available As global warming continues, the monitoring of changes in terrestrial water storage becomes increasingly important since it plays a critical role in understanding global change and water resource management. In North America, as elsewhere in the world, changes in water resources strongly impact agriculture and animal husbandry. From a combination of Gravity Recovery and Climate Experiment (GRACE) gravity and Global Positioning System (GPS) data, it was recently found that water storage from August 2002 to March 2011 recovered after the extreme Canadian Prairies drought between 1999 and 2005. In this paper, we use GRACE monthly gravity data of Release 5 to track the water storage change from August 2002 to June 2014. In the Canadian Prairies and the Great Lakes areas, the total water storage is found to have increased during the last decade at a rate of 73.8 ± 14.5 Gt/a, which is larger than that found in the previous study due to the longer time span of GRACE observations used and the reduction of the leakage error. We also find a long-term decrease of water storage at a rate of −12.0 ± 4.2 Gt/a in the Ungava Peninsula, possibly due to permafrost degradation and less snow accumulation during the winter in the region. In addition, the effect of the total mass gain in the surveyed area on present-day sea level amounts to −0.18 mm/a, and thus should be taken into account in studies of global sea level change.

  13. Measuring Mangrove Type, Structure And Carbon Storage With UAVSAR And ALOS/PALSAR Data

    Science.gov (United States)

    Fatoyinbo, T. E.; Cornforth, W.; Pinto, N.; Simard, M.; Pettorelli, N.

    2011-12-01

    Mangrove forests provide a great number of ecosystem services ranging from shoreline protection (e.g. against erosion, tsunamis and storms), nutrient cycling, fisheries production, building materials and habitat. Mangrove forests have been shown to store very large amounts of carbon, both above and belowground, with storage capacities even greater than tropical rainforests. But as a result of their location and economic value, they are among the most rapidly changing landscapes in the world. Mangroves are limited 1) to tidally influenced coastal areas and 2) to tropical and subtropical regions. This can lead to difficulties mapping mangrove type (such as degraded vs non degraded, scrub vs tall, dense vs sparse) because of cloud cover and limited access to high-resolution optical data. To accurately quantify the effect of land use and climate change on tropical wetland ecosystems, we must develop effective mapping methodologies that take into account not only extent, but also the structure and health of the ecosystem. This must be done by including Synthetic Aperture Radar (SAR) data. In this research, we used L-band Synthetic Aperture Radar data from the ALOS/PALSAR and UAVSAR instruments over selected sites in the Americas (Sierpe, Costa Rica and Everglades, Florida) and Asia (Sundarbans). In particular, we used the SAR data in combination with other remotely sensed data and field data to 1) map mangrove extent, 2) determine mangrove type, health and adjacent land use, and 3) estimate aboveground biomass and carbon storage for entire mangrove systems. We used different classification methodologies such as polarimetric decomposition, unsupervised classification and image segmentation to map mangrove type. Because of the high resolution of the radar data, and its ability to interact with forest volume, we are able to identify mangrove zones and differentiate between mangroves and other forests/land uses. We also integrated InSAR data (SRTM

  14. Spatially coupled low-density parity-check error correction for holographic data storage

    Science.gov (United States)

    Ishii, Norihiko; Katano, Yutaro; Muroi, Tetsuhiko; Kinoshita, Nobuhiro

    2017-09-01

    Spatially coupled low-density parity-check (SC-LDPC) codes were considered for holographic data storage. The superiority of SC-LDPC was studied by simulation. The simulations show that the performance of SC-LDPC depends on the lifting number; when the lifting number is over 100, SC-LDPC shows better error correctability than irregular LDPC. SC-LDPC was applied to the 5:9 modulation code, which is one of the differential codes. The error-free point is near 2.8 dB, and error rates above 10^-1 can be corrected in simulation. Based on these simulation results, this error correction code can be applied to actual holographic data storage test equipment. Results showed that an error rate of 8 × 10^-2 can be corrected; furthermore, the code works effectively and shows good error correctability.

  15. The National Institute on Aging Genetics of Alzheimer’s Disease Data Storage Site (NIAGADS)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The National Institute on Aging Genetics of Alzheimer's Disease Data Storage Site (NIAGADS) is a national genetics data repository facilitating access to genotypic...

  16. Data systems and computer science space data systems: Onboard memory and storage

    Science.gov (United States)

    Shull, Tom

    1991-01-01

    The topics are presented in viewgraph form and include the following: technical objectives; technology challenges; state-of-the-art assessment; mass storage comparison; SODR drive and system concepts; program description; vertical Bloch line (VBL) device concept; relationship to external programs; and backup charts for memory and storage.

  17. Probe Storage

    NARCIS (Netherlands)

    Gemelli, Marcellino; Abelmann, Leon; Engelen, Johannes Bernardus Charles; Khatib, M.G.; Koelmans, W.W.; Zaboronski, Olog; Campardo, Giovanni; Tiziani, Federico; Laculo, Massimo

    2011-01-01

    This chapter gives an overview of probe-based data storage research over the last three decades, encompassing all aspects of a probe recording system. Following the division found in all mechanically addressed storage systems, the different subsystems (media, read/write heads, positioning, data

  18. [Carbon storage of forest stands in Shandong Province estimated by forestry inventory data].

    Science.gov (United States)

    Li, Shi-Mei; Yang, Chuan-Qiang; Wang, Hong-Nian; Ge, Li-Qiang

    2014-08-01

    Based on the 7th forestry inventory data of Shandong Province, this paper estimated the carbon storage and carbon density of forest stands, and analyzed their distribution characteristics according to dominant tree species, age groups and forest category using the volume-derived biomass method and average-biomass method. In 2007, the total carbon storage of the forest stands was 25.27 Tg, of which the coniferous forests, mixed conifer broad-leaved forests, and broad-leaved forests accounted for 8.6%, 2.0% and 89.4%, respectively. The carbon storage of forest age groups followed the sequence of young forests > middle-aged forests > mature forests > near-mature forests > over-mature forests. The carbon storage of young forests and middle-aged forests accounted for 69.3% of the total carbon storage. Timber forest, non-timber product forest and protection forests accounted for 37.1%, 36.3% and 24.8% of the total carbon storage, respectively. The average carbon density of forest stands in Shandong Province was 10.59 t·hm⁻², which was lower than the national average level. This phenomenon was attributed to the imperfect structure of forest types and age groups, i.e., the notably higher percentage of timber forests and non-timber product forest and the excessively higher percentage of young forests and middle-aged forest than mature forests.

  19. Development and evaluation of a low-cost and high-capacity DICOM image data storage system for research.

    Science.gov (United States)

    Yakami, Masahiro; Ishizu, Koichi; Kubo, Takeshi; Okada, Tomohisa; Togashi, Kaori

    2011-04-01

    Thin-slice CT data, useful for clinical diagnosis and research, is now widely available but is typically discarded in many institutions, after a short period of time due to data storage capacity limitations. We designed and built a low-cost high-capacity Digital Imaging and COmmunication in Medicine (DICOM) storage system able to store thin-slice image data for years, using off-the-shelf consumer hardware components, such as a Macintosh computer, a Windows PC, and network-attached storage units. "Ordinary" hierarchical file systems, instead of a centralized data management system such as relational database, were adopted to manage patient DICOM files by arranging them in directories enabling quick and easy access to the DICOM files of each study by following the directory trees with Windows Explorer via study date and patient ID. Software used for this system was open-source OsiriX and additional programs we developed ourselves, both of which were freely available via the Internet. The initial cost of this system was about $3,600 with an incremental storage cost of about $900 per 1 terabyte (TB). This system has been running since 7th Feb 2008 with the data stored increasing at the rate of about 1.3 TB per month. Total data stored was 21.3 TB on 23rd June 2009. The maintenance workload was found to be about 30 to 60 min once every 2 weeks. In conclusion, this newly developed DICOM storage system is useful for research due to its cost-effectiveness, enormous capacity, high scalability, sufficient reliability, and easy data access.
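
    The "ordinary hierarchical file system" organisation described above can be approximated with a few lines of Python using the pydicom library; the directory convention shown (year/month/day/patient ID/study UID) is an assumption made for illustration, since the authors' exact layout is not reproduced here.

```python
import os, shutil
import pydicom  # third-party library for reading DICOM headers

# Sketch of filing incoming DICOM objects into a study-date / patient-ID tree,
# so studies can be located simply by browsing directories. The layout below
# (YYYY/MM/DD/PatientID/StudyUID/) is an illustrative assumption.
def file_dicom(src_path, archive_root):
    ds = pydicom.dcmread(src_path, stop_before_pixels=True)
    date = ds.StudyDate                      # e.g. "20080207"
    dest_dir = os.path.join(archive_root,
                            date[:4], date[4:6], date[6:8],
                            str(ds.PatientID),
                            str(ds.StudyInstanceUID))
    os.makedirs(dest_dir, exist_ok=True)
    shutil.copy2(src_path, dest_dir)         # keep the original until verified
    return dest_dir
```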

  20. International Network Performance and Security Testing Based on Distributed Abyss Storage Cluster and Draft of Data Lake Framework

    Directory of Open Access Journals (Sweden)

    ByungRae Cha

    2018-01-01

    Full Text Available The megatrends and Industry 4.0 in ICT (Information Communication & Technology) are concentrated in IoT (Internet of Things), BigData, CPS (Cyber Physical System), and AI (Artificial Intelligence). These megatrends do not operate independently, and mass storage technology is essential, since large-scale computing technology is needed in the background to support them. In order to evaluate the performance of high-capacity storage based on open-source Ceph, we carry out network performance tests of Abyss storage between domestic and overseas sites using KOREN (Korea Advanced Research Network). Storage media and network bonding are also tested to evaluate the performance of the storage itself. Additionally, a security test is demonstrated with the Cuckoo sandbox and Yara malware detection across the Abyss storage cluster and overseas sites. Lastly, we propose a draft design of the Data Lake framework in order to solve the garbage dump problem.

  1. Data Blocks : Hybrid OLTP and OLAP on compressed storage using both vectorization and compilation

    NARCIS (Netherlands)

    Lang, Harald; Mühlbauer, Tobias; Funke, Florian; Boncz, Peter; Neumann, Thomas; Kemper, Alfons

    2016-01-01

    This work aims at reducing the main-memory footprint in high performance hybrid OLTP&OLAP databases, while retaining high query performance and transactional throughput. For this purpose, an innovative compressed columnar storage format for cold data, called Data Blocks is introduced. Data Blocks

  2. Online data handling and storage at the CMS experiment

    International Nuclear Information System (INIS)

    Andre, J-M; Andronidis, A; Chaze, O; Deldicque, C; Dobson, M; Dupont, A; Gigi, D; Glege, F; Hegeman, J; Jimenez-Estupiñán, R; Masetti, L; Meijers, F; Behrens, U; Branson, J; Cittolin, S; Holzner, A; Darlea, G-L; Demiragli, Z; Gómez-Ceballos, G; Erhan, S

    2015-01-01

    During the LHC Long Shutdown 1, the CMS Data Acquisition (DAQ) system underwent a partial redesign to replace obsolete network equipment, use more homogeneous switching technologies, and support new detector back-end electronics. The software and hardware infrastructure to provide input, execute the High Level Trigger (HLT) algorithms and deal with output data transport and storage has also been redesigned to be completely file-based. All the metadata needed for bookkeeping are stored in files as well, in the form of small documents using the JSON encoding. The Storage and Transfer System (STS) is responsible for aggregating these files produced by the HLT, storing them temporarily and transferring them to the T0 facility at CERN for subsequent offline processing. The STS merger service aggregates the output files produced by the HLT from ∼62 sources at an aggregate rate of ∼2GB/s. An estimated bandwidth of 7GB/s in concurrent read/write mode is needed. Furthermore, the STS has to be able to store several days of continuous running, so an estimated 250TB of total usable disk space is required. In this article we present the various technological and implementation choices of the three components of the STS: the distributed file system, the merger service and the transfer system. (paper)

  3. NOSQL FOR STORAGE AND RETRIEVAL OF LARGE LIDAR DATA COLLECTIONS

    Directory of Open Access Journals (Sweden)

    J. Boehm

    2015-08-01

    Full Text Available Developments in LiDAR technology over the past decades have made LiDAR a mature and widely accepted source of geospatial information. This in turn has led to an enormous growth in data volume. The central idea for a file-centric storage of LiDAR point clouds is the observation that large collections of LiDAR data are typically delivered as large collections of files, rather than single files of terabyte size. This split of the dataset, commonly referred to as tiling, was usually done to accommodate a specific processing pipeline. It therefore makes sense to preserve this split. A document-oriented NoSQL database can easily emulate this data partitioning, by representing each tile (file) in a separate document. The document stores the metadata of the tile. The actual files are stored in a distributed file system emulated by the NoSQL database. We demonstrate the use of MongoDB, a highly scalable document-oriented NoSQL database, for storing large LiDAR files. MongoDB, like any NoSQL database, allows for queries on the attributes of the document. As a specialty, MongoDB also allows spatial queries. Hence we can perform spatial queries on the bounding boxes of the LiDAR tiles. Inserting and retrieving files on a cloud-based database is compared to native file system and cloud storage transfer speeds.
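
    A minimal pymongo sketch of the file-centric approach, one document per tile carrying its metadata and a GeoJSON bounding box, with a geospatial index for spatial queries, is shown below. The collection and field names are invented for the example, and the tile files themselves could equally live in GridFS or on a distributed file system.

```python
from pymongo import MongoClient

# One document per LiDAR tile (file): metadata plus a GeoJSON footprint, with
# a 2dsphere index so tiles can be selected by region of interest.
db = MongoClient("mongodb://localhost:27017")["lidar"]
tiles = db["tiles"]
tiles.create_index([("bbox", "2dsphere")])

tiles.insert_one({
    "filename": "survey_tile_0421.laz",
    "points": 12_500_000,
    "srs": "EPSG:27700",
    "bbox": {  # GeoJSON polygon of the tile footprint (lon/lat)
        "type": "Polygon",
        "coordinates": [[[-0.14, 51.50], [-0.13, 51.50],
                         [-0.13, 51.51], [-0.14, 51.51], [-0.14, 51.50]]],
    },
})

# Spatial query: which tiles intersect a small area of interest?
aoi = {"type": "Polygon",
       "coordinates": [[[-0.135, 51.502], [-0.132, 51.502],
                        [-0.132, 51.505], [-0.135, 51.505], [-0.135, 51.502]]]}
for doc in tiles.find({"bbox": {"$geoIntersects": {"$geometry": aoi}}}):
    print(doc["filename"], doc["points"])
```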

  4. Nosql for Storage and Retrieval of Large LIDAR Data Collections

    Science.gov (United States)

    Boehm, J.; Liu, K.

    2015-08-01

    Developments in LiDAR technology over the past decades have made LiDAR a mature and widely accepted source of geospatial information. This in turn has led to an enormous growth in data volume. The central idea for a file-centric storage of LiDAR point clouds is the observation that large collections of LiDAR data are typically delivered as large collections of files, rather than single files of terabyte size. This split of the dataset, commonly referred to as tiling, was usually done to accommodate a specific processing pipeline. It therefore makes sense to preserve this split. A document-oriented NoSQL database can easily emulate this data partitioning, by representing each tile (file) in a separate document. The document stores the metadata of the tile. The actual files are stored in a distributed file system emulated by the NoSQL database. We demonstrate the use of MongoDB, a highly scalable document-oriented NoSQL database, for storing large LiDAR files. MongoDB, like any NoSQL database, allows for queries on the attributes of the document. As a specialty, MongoDB also allows spatial queries. Hence we can perform spatial queries on the bounding boxes of the LiDAR tiles. Inserting and retrieving files on a cloud-based database is compared to native file system and cloud storage transfer speeds.

  5. Data Storage and sharing for the long tail of science

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, B. [Purdue Univ., West Lafayette, IN (United States); Pouchard, L. [Purdue Univ., West Lafayette, IN (United States); Smith, P. M. [Purdue Univ., West Lafayette, IN (United States); Gasc, A. [Purdue Univ., West Lafayette, IN (United States); Pijanowski, B. C. [Purdue Univ., West Lafayette, IN (United States)

    2016-11-21

    Research data infrastructure such as storage must now accommodate new requirements resulting from trends in research data management that require researchers to store their data for the long term and make it available to other researchers. We propose Data Depot, a system and service that provides capabilities for shared space within a group, shared applications, flexible access patterns and ease of transfer at Purdue University. We evaluate Depot as a solution for storing and sharing multiterabytes of data produced in the long tail of science with a use case in soundscape ecology studies from the Human-Environment Modeling and Analysis Laboratory. We observe that with the capabilities enabled by Data Depot, researchers can easily deploy fine-grained data access control, manage data transfer and sharing, as well as integrate their workflows into a High Performance Computing environment.

  6. Data exchange system in cooler-storage-ring virtual accelerator

    International Nuclear Information System (INIS)

    Liu Wufeng; Qiao Weimin; Jing Lan; Guo Yuhui

    2009-01-01

    The data exchange system of the cooler-storage-ring (CSR) control system for heavy ion radiotherapy is introduced for the heavy ion CSR at Lanzhou (HIRFL-CSR). Using Java, component object model (COM), Oracle, DSP and FPGA techniques, this system achieves synchronous real-time control of the magnet power supplies, and controls beams and their switching across 256 energy levels. It has been used in the commissioning of slow extraction for the main CSR (CSRm), showing stable and reliable performance. (authors)

  7. A split-path schema-based RFID data storage model in supply chain management.

    Science.gov (United States)

    Fan, Hua; Wu, Quanyuan; Lin, Yisong; Zhang, Jianfeng

    2013-05-03

    In modern supply chain management systems, Radio Frequency IDentification (RFID) technology has become an indispensable sensor technology and massive RFID data sets are expected to become commonplace. More and more space and time are needed to store and process such huge amounts of RFID data, and there is an increasing realization that the existing approaches cannot satisfy the requirements of RFID data management. In this paper, we present a split-path schema-based RFID data storage model. With a data separation mechanism, the massive RFID data produced in supply chain management systems can be stored and processed more efficiently. Then a tree structure-based path splitting approach is proposed to intelligently and automatically split the movement paths of products. Furthermore, based on the proposed new storage model, we design the relational schema to store the path information and time information of tags, and some typical query templates and SQL statements are defined. Finally, we conduct various experiments to measure the effect and performance of our model and demonstrate that it performs significantly better than the baseline approach in both the data expression and path-oriented RFID data query performance.
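
    The tree-structure-based path splitting can be pictured as a prefix tree over movement paths, so that shared supply chain segments are stored only once. The sketch below is an illustrative toy with invented node and tag names, not the paper's actual schema or algorithm.

```python
# Toy prefix tree ("path tree") for RFID movement paths: shared prefixes
# (common supply chain segments) are stored once, and each tag is attached to
# the node where its path ends.
class PathNode:
    def __init__(self, location):
        self.location = location
        self.children = {}          # location -> PathNode
        self.tags = []              # tags whose path ends at this node

    def insert(self, tag_id, path):
        if not path:
            self.tags.append(tag_id)
            return
        head, rest = path[0], path[1:]
        self.children.setdefault(head, PathNode(head)).insert(tag_id, rest)

    def count_nodes(self):
        return 1 + sum(c.count_nodes() for c in self.children.values())


root = PathNode("ROOT")
root.insert("EPC-001", ["factory", "dc-east", "store-12"])
root.insert("EPC-002", ["factory", "dc-east", "store-41"])
root.insert("EPC-003", ["factory", "dc-west", "store-77"])
# Shared "factory" and "dc-east" segments are stored once each:
print(root.count_nodes())   # 7: root plus 6 path nodes, instead of 1 + 9 unshared
```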

  8. A Split-Path Schema-Based RFID Data Storage Model in Supply Chain Management

    Directory of Open Access Journals (Sweden)

    Jianfeng Zhang

    2013-05-01

    Full Text Available In modern supply chain management systems, Radio Frequency IDentification (RFID) technology has become an indispensable sensor technology and massive RFID data sets are expected to become commonplace. More and more space and time are needed to store and process such huge amounts of RFID data, and there is an increasing realization that the existing approaches cannot satisfy the requirements of RFID data management. In this paper, we present a split-path schema-based RFID data storage model. With a data separation mechanism, the massive RFID data produced in supply chain management systems can be stored and processed more efficiently. Then a tree structure-based path splitting approach is proposed to intelligently and automatically split the movement paths of products. Furthermore, based on the proposed new storage model, we design the relational schema to store the path information and time information of tags, and some typical query templates and SQL statements are defined. Finally, we conduct various experiments to measure the effect and performance of our model and demonstrate that it performs significantly better than the baseline approach in both the data expression and path-oriented RFID data query performance.

  9. GraphStore: A Distributed Graph Storage System for Big Data Networks

    Science.gov (United States)

    Martha, VenkataSwamy

    2013-01-01

    Networks, such as social networks, are a universal solution for modeling complex problems in real time, especially in the Big Data community. While previous studies have attempted to enhance network processing algorithms, none have paved a path for the development of a persistent storage system. The proposed solution, GraphStore, provides an…

  10. Heavy vehicle simulator operations: protocol for instrumentation, data collection and data storage - 2nd draft

    CSIR Research Space (South Africa)

    Jones, DJ

    2002-09-01

    Full Text Available The protocol discusses staffing, site selection and establishment, and data collection, analysis and storage; the instrumentation used is discussed under the relevant sections. Keywords: Accelerated Pavement Testing (APT), Heavy Vehicle Simulator (HVS). Proposals for implementation: follow the protocol in all future HVS testing and update as required. Accelerated Pavement Testing (APT) can be described as a controlled application...

  11. Comparison of Decadal Water Storage Trends from Global Hydrological Models and GRACE Satellite Data

    Science.gov (United States)

    Scanlon, B. R.; Zhang, Z. Z.; Save, H.; Sun, A. Y.; Mueller Schmied, H.; Van Beek, L. P.; Wiese, D. N.; Wada, Y.; Long, D.; Reedy, R. C.; Doll, P. M.; Longuevergne, L.

    2017-12-01

    Global hydrology is increasingly being evaluated using models; however, the reliability of these global models is not well known. In this study we compared decadal trends (2002-2014) in land water storage from 7 global models (WGHM, PCR-GLOBWB, and GLDAS: NOAH, MOSAIC, VIC, CLM, and CLSM) to storage trends from new GRACE satellite mascon solutions (CSR-M and JPL-M). The analysis was conducted over 186 river basins, representing about 60% of the global land area. Modeled total water storage trends agree with GRACE-derived trends that are within ±0.5 km3/yr but greatly underestimate large declining and rising trends outside this range. Large declining trends are found mostly in intensively irrigated basins and in some basins in northern latitudes. Rising trends are found in basins with little or no irrigation and are generally related to increasing trends in precipitation. The largest decline is found in the Ganges (-12 km3/yr) and the largest rise in the Amazon (43 km3/yr). Differences between models and GRACE are greatest in large basins (>0.5x10^6 km2) mostly in humid regions. There is very little agreement in storage trends between models and GRACE, or among the models themselves, with mostly low r2 values; the GRACE data suggest that landscapes store water over decadal timescales in a way that is underrepresented by the models. The storage capacity in the modeled soil and groundwater compartments may be insufficient to accommodate the range in water storage variations shown by GRACE data. The inability of the models to capture the large storage trends indicates that model projections of climate and human-induced changes in water storage may be mostly underestimated. Future GRACE and model studies should try to reduce the various sources of uncertainty in water storage trends and should consider expanding the modeled storage capacity of the soil profiles and their interaction with groundwater.

  12. Measuring and processing measured data in the MAW and HTR fuel element storage experiment. Pt. 2

    International Nuclear Information System (INIS)

    Henze, R.

    1987-01-01

    The central data collection plant for the MAW experimental storage in the Asse salt mine consists of 3 components: a) Front end computers assigned to the experiment for data collection, with few and simple components for the difficult ambient conditions underground. b) An overground central computer, which carries out the tasks of intermediate data storage, display at site, monitoring of the experiment, alarms and remote data transmission for final evaluation. c) A local network connects the front end computers to the central computer. It should take over network tasks (data transmission reports) from the front end computers and should make a flexible implementation of new experiments possible. (orig./RB) [de

  13. Data on the no-load performance analysis of a tomato postharvest storage system.

    Science.gov (United States)

    Ayomide, Orhewere B; Ajayi, Oluseyi O; Banjo, Solomon O; Ajayi, Adesola A

    2017-08-01

    In the present investigation, original and detailed empirical data on the transfer of heat in a tomato postharvest storage system are presented. No-load tests were performed for a period of 96 h. The heat distribution at different locations, namely the top, middle and bottom of the system, was acquired at a time interval of 30 min over the test period. The humidity inside the system was taken into consideration; thus, no-load tests with and without the introduction of humidity were carried out, and data showing the effect of a rise in humidity level on the temperature distribution were acquired. The temperatures at the external mechanical cooling components were also acquired and can be used for the performance analysis of the storage system.

  14. Use of DBMS-10 for storage and retrieval of evaluated nuclear data files

    International Nuclear Information System (INIS)

    Dunford, C.L.

    1977-01-01

    The use of a data base management system (DBMS) for storage of, and retrieval from, the many scientific data bases maintained by the National Nuclear Data Center is currently being investigated. It would appear that a commercially available DBMS package would save the Center considerable money and manpower when adding new data files to the library and in the long-term maintenance of current data files. Current DBMS technology and experience with an internal DBMS system suggests an inherent inefficiency in processing large data networks where significant portions are accessed in a sequential manner. Such a file is the Evaluated Nuclear Data File (ENDF/B), which contains many large data tables, each one normally accessed in a sequential manner. After gaining some experience and success in small applications of the commercially available DBMS package, DBMS-10, on the Center's DECsystem-10 computer, it was decided to select a large data base as a test case before making a final decision on the implementation of DBMS-10 for all data bases. The obvious approach is to utilize the DBMS to index a random-access file. In this way one is able to increase the storage and retrieval efficiency at the one-time cost of additional programming effort. 2 figures
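
    The suggested approach of using the DBMS to index a random-access file can be illustrated with a small sketch. The key names and file layout below are hypothetical, and SQLite merely stands in for DBMS-10; the point is only that the index holds offsets while the bulk tables stay in a flat random-access file.

        # Minimal sketch: keep bulk data in a flat file, keep only (key, offset, length) in the index DB.
        import sqlite3

        # Write two "data tables" sequentially into a random-access file and record their offsets.
        records = {"MAT1301/MF3": b"sequential cross-section table ...",
                   "MAT1301/MF4": b"angular distribution table ..."}

        index = sqlite3.connect(":memory:")
        index.execute("CREATE TABLE idx (key TEXT PRIMARY KEY, offset INTEGER, length INTEGER)")

        with open("endf_store.bin", "wb") as f:
            for key, blob in records.items():
                off = f.tell()
                f.write(blob)
                index.execute("INSERT INTO idx VALUES (?,?,?)", (key, off, len(blob)))

        # Retrieval: look up the offset, then seek directly instead of scanning sequentially.
        off, length = index.execute("SELECT offset, length FROM idx WHERE key=?",
                                    ("MAT1301/MF4",)).fetchone()
        with open("endf_store.bin", "rb") as f:
            f.seek(off)
            print(f.read(length))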

  15. Use of DBMS-10 for storage and retrieval of evaluated nuclear data files

    International Nuclear Information System (INIS)

    Dunford, C.L.

    1978-01-01

    The use of a data base management system (DBMS) for storage of, and retrieval from, the many scientific data bases maintained by the National Nuclear Data Center is currently being investigated. It would appear that a commercially available DBMS package would save the Center considerable money and manpower when adding new data files to our library and in the long-term maintenance of our current data files. Current DBMS technology and experience with our internal DBMS system suggests an inherent inefficiency in processing large data networks where significant portions are accessed in a sequential manner. Such a file is the Evaluated Nuclear Data File (ENDF/B) which contains many large data tables, each one normally accessed in a sequential manner. After gaining some experience and success in small applications of the commercially available DBMS package, DBMS-10, on the Center's DECsystem-10 computer, it was decided to select one of our large data bases as a test case before making a final decision on the implementation of DBMS-10 for all our data bases. The obvious approach is to utilize the DBMS to index a random access file. In this way one is able to increase the storage and retrieval efficiency at the one-time cost of additional programming effort

  16. nmrML: A Community Supported Open Data Standard for the Description, Storage, and Exchange of NMR Data.

    Science.gov (United States)

    Schober, Daniel; Jacob, Daniel; Wilson, Michael; Cruz, Joseph A; Marcu, Ana; Grant, Jason R; Moing, Annick; Deborde, Catherine; de Figueiredo, Luis F; Haug, Kenneth; Rocca-Serra, Philippe; Easton, John; Ebbels, Timothy M D; Hao, Jie; Ludwig, Christian; Günther, Ulrich L; Rosato, Antonio; Klein, Matthias S; Lewis, Ian A; Luchinat, Claudio; Jones, Andrew R; Grauslys, Arturas; Larralde, Martin; Yokochi, Masashi; Kobayashi, Naohiro; Porzel, Andrea; Griffin, Julian L; Viant, Mark R; Wishart, David S; Steinbeck, Christoph; Salek, Reza M; Neumann, Steffen

    2018-01-02

    NMR is a widely used analytical technique with a growing number of repositories available. As a result, demands for a vendor-agnostic, open data format for long-term archiving of NMR data have emerged with the aim to ease and encourage sharing, comparison, and reuse of NMR data. Here we present nmrML, an open XML-based exchange and storage format for NMR spectral data. The nmrML format is intended to be fully compatible with existing NMR data for chemical, biochemical, and metabolomics experiments. nmrML can capture raw NMR data, spectral data acquisition parameters, and where available spectral metadata, such as chemical structures associated with spectral assignments. The nmrML format is compatible with pure-compound NMR data for reference spectral libraries as well as NMR data from complex biomixtures, i.e., metabolomics experiments. To facilitate format conversions, we provide nmrML converters for Bruker, JEOL and Agilent/Varian vendor formats. In addition, easy-to-use Web-based spectral viewing, processing, and spectral assignment tools that read and write nmrML have been developed. Software libraries and Web services for data validation are available for tool developers and end-users. The nmrML format has already been adopted for capturing and disseminating NMR data for small molecules by several open source data processing tools and metabolomics reference spectral libraries, e.g., serving as storage format for the MetaboLights data repository. The nmrML open access data standard has been endorsed by the Metabolomics Standards Initiative (MSI), and we here encourage user participation and feedback to increase usability and make it a successful standard.
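
    As an illustration of XML-based storage and exchange of spectral data, the sketch below writes and re-reads a small document with Python's standard library. The element names are invented for the example and are not the official nmrML schema.

        # Illustrative only: element names below are NOT the official nmrML schema.
        import xml.etree.ElementTree as ET
        import base64

        root = ET.Element("nmrExperiment")
        acq = ET.SubElement(root, "acquisitionParameters",
                            {"spectrometerFrequencyMHz": "600.13", "solvent": "D2O"})
        raw = ET.SubElement(root, "rawFID", {"encoding": "base64"})
        raw.text = base64.b64encode(b"\x00\x01\x02\x03").decode()      # stand-in for binary FID points

        ET.ElementTree(root).write("example_spectrum.xml")

        # Read it back and recover the acquisition parameters and raw bytes.
        doc = ET.parse("example_spectrum.xml").getroot()
        print(doc.find("acquisitionParameters").attrib)
        print(base64.b64decode(doc.find("rawFID").text))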

  17. Spectroscopic Feedback for High Density Data Storage and Micromachining

    Science.gov (United States)

    Carr, Christopher W.; Demos, Stavros; Feit, Michael D.; Rubenchik, Alexander M.

    2008-09-16

    Optical breakdown by predetermined laser pulses in transparent dielectrics produces an ionized region of dense plasma confined within the bulk of the material. Such an ionized region is responsible for broadband radiation that accompanies a desired breakdown process. Spectroscopic monitoring of the accompanying light in real-time is utilized to ascertain the morphology of the radiated interaction volume. Such a method and apparatus as presented herein, provides commercial realization of rapid prototyping of optoelectronic devices, optical three-dimensional data storage devices, and waveguide writing.

  18. Mass storage for microprocessor farms

    International Nuclear Information System (INIS)

    Areti, H.

    1990-01-01

    Experiments in high energy physics require high density and high speed mass storage. Mass storage is needed for data logging during the online data acquisition, data retrieval and storage during the event reconstruction and data manipulation during the physics analysis. This paper examines the storage and speed requirements at the first two stages of the experiments and suggests a possible starting point to deal with the problem. 3 refs., 3 figs

  19. Evaluating water storage variations in the MENA region using GRACE satellite data

    KAUST Repository

    Lopez, Oliver

    2013-12-01

    Terrestrial water storage (TWS) variations over large river basins can be derived from temporal gravity field variations observed by the Gravity Recovery and Climate Experiment (GRACE) satellites. These signals are useful for determining accurate estimates of water storage and fluxes over areas covering a minimum of 150,000 km2 (length scales of a few hundred kilometers) and thus prove to be a valuable tool for regional water resources management, particularly for areas with a lack of in-situ data availability or inconsistent monitoring, such as the Middle East and North Africa (MENA) region. This already stressed arid region is particularly vulnerable to climate change and overdraft of its non-renewable freshwater sources, and thus direction in managing its resources is a valuable aid. An inter-comparison of different GRACE-derived TWS products was done in order to provide a quantitative assessment on their uncertainty and their utility for diagnosing spatio-temporal variability in water storage over the MENA region. Different processing approaches for the inter-satellite tracking data from the GRACE mission have resulted in the development of TWS products, with resolutions in time from 10 days to 1 month and in space from 0.5 to 1 degree global gridded data, while some of them use input from land surface models in order to restore the original signal amplitudes. These processing differences and the difficulties in recovering the mass change signals over arid regions will be addressed. Output from the different products will be evaluated and compared over basins inside the MENA region, and compared to output from land surface models.

  20. Evaluating Water Storage Variations in the MENA region using GRACE Satellite Data

    Science.gov (United States)

    Lopez, O.; Houborg, R.; McCabe, M. F.

    2013-12-01

    Terrestrial water storage (TWS) variations over large river basins can be derived from temporal gravity field variations observed by the Gravity Recovery and Climate Experiment (GRACE) satellites. These signals are useful for determining accurate estimates of water storage and fluxes over areas covering a minimum of 150,000 km2 (length scales of a few hundred kilometers) and thus prove to be a valuable tool for regional water resources management, particularly for areas with a lack of in-situ data availability or inconsistent monitoring, such as the Middle East and North Africa (MENA) region. This already stressed arid region is particularly vulnerable to climate change and overdraft of its non-renewable freshwater sources, and thus direction in managing its resources is a valuable aid. An inter-comparison of different GRACE-derived TWS products was done in order to provide a quantitative assessment on their uncertainty and their utility for diagnosing spatio-temporal variability in water storage over the MENA region. Different processing approaches for the inter-satellite tracking data from the GRACE mission have resulted in the development of TWS products, with resolutions in time from 10 days to 1 month and in space from 0.5 to 1 degree global gridded data, while some of them use input from land surface models in order to restore the original signal amplitudes. These processing differences and the difficulties in recovering the mass change signals over arid regions will be addressed. Output from the different products will be evaluated and compared over basins inside the MENA region, and compared to output from land surface models.

  1. Globally distributed software defined storage (proposal)

    Science.gov (United States)

    Shevel, A.; Khoruzhnikov, S.; Grudinin, V.; Sadov, O.; Kairkanov, A.

    2017-10-01

    The volume of incoming data in HEP is growing, as is the volume of data to be held for a long time. Large volumes of data - big data - are distributed around the planet, so methods and approaches for organizing and managing globally distributed data storage are required. Several examples of distributed storage exist for personal needs, such as own-cloud.org, pydio.com, seafile.com and sparkleshare.org. At the enterprise level there are a number of systems: SWIFT, the distributed storage system that is part of Openstack, CEPH and the like, which are mostly object storage. When the resources of several data centers are integrated, the organization of data links becomes a very important issue, especially if several parallel data links between data centers are used. The situation in data centers and in data links may vary each hour. This means that each part of the distributed data storage has to be able to rearrange its usage of data links and storage servers in each data center. In addition, different requirements may arise for each customer of the distributed storage. The above topics are discussed in this data storage proposal.

  2. Conceptual design report: Nuclear materials storage facility renovation. Part 7, Estimate data

    International Nuclear Information System (INIS)

    1995-01-01

    The Nuclear Materials Storage Facility (NMSF) at the Los Alamos National Laboratory (LANL) was a Fiscal Year (FY) 1984 line-item project completed in 1987 that has never been operated because of major design and construction deficiencies. This renovation project, which will correct those deficiencies and allow operation of the facility, is proposed as an FY 97 line item. The mission of the project is to provide centralized intermediate and long-term storage of special nuclear materials (SNM) associated with defined LANL programmatic missions and to establish a centralized SNM shipping and receiving location for Technical Area (TA)-55 at LANL. Based on current projections, existing storage space for SNM at other locations at LANL will be loaded to capacity by approximately 2002. This will adversely affect LANL's ability to meet its mission requirements in the future. The affected missions include LANL's weapons research, development, and testing (WRD&T) program; special materials recovery; stockpile surveillance/evaluation; advanced fuels and heat sources development and production; and safe, secure storage of existing nuclear materials inventories. The problem is further exacerbated by LANL's inability to ship any materials offsite because of the lack of receiver sites for material and regulatory issues. Correction of the current deficiencies and enhancement of the facility will provide centralized storage close to a nuclear materials processing facility. The project will enable long-term, cost-effective storage in a secure environment with reduced radiation exposure to workers, and eliminate potential exposures to the public. This report is organized according to the sections and subsections outlined by Attachment III-2 of DOE Document AL 4700.1, Project Management System. It is organized into seven parts. This document, Part VII - Estimate Data, contains the project cost estimate information

  3. Conceptual design report: Nuclear materials storage facility renovation. Part 7, Estimate data

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-07-14

    The Nuclear Materials Storage Facility (NMSF) at the Los Alamos National Laboratory (LANL) was a Fiscal Year (FY) 1984 line-item project completed in 1987 that has never been operated because of major design and construction deficiencies. This renovation project, which will correct those deficiencies and allow operation of the facility, is proposed as an FY 97 line item. The mission of the project is to provide centralized intermediate and long-term storage of special nuclear materials (SNM) associated with defined LANL programmatic missions and to establish a centralized SNM shipping and receiving location for Technical Area (TA)-55 at LANL. Based on current projections, existing storage space for SNM at other locations at LANL will be loaded to capacity by approximately 2002. This will adversely affect LANL's ability to meet its mission requirements in the future. The affected missions include LANL's weapons research, development, and testing (WRD&T) program; special materials recovery; stockpile surveillance/evaluation; advanced fuels and heat sources development and production; and safe, secure storage of existing nuclear materials inventories. The problem is further exacerbated by LANL's inability to ship any materials offsite because of the lack of receiver sites for material and regulatory issues. Correction of the current deficiencies and enhancement of the facility will provide centralized storage close to a nuclear materials processing facility. The project will enable long-term, cost-effective storage in a secure environment with reduced radiation exposure to workers, and eliminate potential exposures to the public. This report is organized according to the sections and subsections outlined by Attachment III-2 of DOE Document AL 4700.1, Project Management System. It is organized into seven parts. This document, Part VII - Estimate Data, contains the project cost estimate information.

  4. Meta-Key: A Secure Data-Sharing Protocol under Blockchain-Based Decentralised Storage Architecture

    OpenAIRE

    Fu, Yue

    2017-01-01

    In this paper a secure data-sharing protocol under a blockchain-based decentralised storage architecture is proposed, which fulfils the needs of users who want to share their encrypted data on-cloud. It implements a remote data-sharing mechanism that enables data owners to share their encrypted data with other users without revealing the original key, and without having to download the on-cloud data for re-encryption and re-uploading. Data security as well as efficiency are ensured by symmetric encryption, whose k...

  5. Pulse-modulated multilevel data storage in an organic ferroelectric resistive memory diode

    NARCIS (Netherlands)

    Lee, J.; Breemen, A.J.J.M. van; Khikhlovskyi, V.; Kemerink, M.; Janssen, R.A.J.; Gelinck, G.H.

    2016-01-01

    We demonstrate multilevel data storage in organic ferroelectric resistive memory diodes consisting of a phase-separated blend of P(VDF-TrFE) and a semiconducting polymer. The dynamic behaviour of the organic ferroelectric memory diode can be described in terms of the inhomogeneous field mechanism

  6. Development of EDFSRS: evaluated data files storage and retrieval system

    International Nuclear Information System (INIS)

    Hasegawa, Akira

    1985-07-01

    EDFSRS: Evaluated Data Files Storage and Retrieval System has been developed as a complete service system for the evaluated nuclear data files compiled in the three major formats: ENDF/B, UKNDL and KEDAK. The system is intended to give data base administrators efficient loading and maintenance of evaluated nuclear data files, and to give their users efficient retrievals, with both ease of use and high confidence. It can give users all of the information available in these three major formats. The system consists of more than fifteen independent programs and some 150 megabytes of data files and index files (the data base) of the loaded data. In addition it is designed to be operated in the on-line TSS (Time Sharing System) mode, so that users can get any information from their desk-top terminals. This report is prepared as a reference manual of the EDFSRS. (author)

  7. Report from SG 1.2: use of 3-D seismic data in exploration, production and underground storage

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2000-07-01

    The objective of this study was to investigate the experience gained from using 3D and 4D techniques in exploration, production and underground storage. The use of 3D seismic data is increasing and considerable progress in the application of such data has been achieved in recent years. 3D is now in extensive use in exploration, field and storage development planning and reservoir management. By using 4D (or time-lapse) seismic data from a given producing area, it is also possible to monitor gas movement as a function of time in a gas field or storage. This emerging technique is therefore very useful in reservoir management, in order to obtain increased recovery, higher production, and to reduce the risk of infill wells. These techniques can also be used for monitoring underground gas storage. The study gives recommendations on the use of 3D and 4D seismic in the gas industry. For this purpose, three specific questionnaires were proposed: the first one dedicated to exploration, development and production of gas fields (Production questionnaire), the second one dedicated to gas storages (Storage questionnaire) and the third one dedicated to the servicing companies. The main results are: - The benefit from 3D is clear for both producing and storage operators in improving structural shape, fault pattern and reservoir knowledge. The method usually saves wells and improve gas volume management. - 4D seismic is an emerging technique with high potential benefits for producers. Research in 4D must focus on the integration of seismic methodology and interpretation of results with production measurements in reservoir models. (author)

  8. Data Blocks: hybrid OLTP and OLAP on compressed storage using both vectorization and compilation

    NARCIS (Netherlands)

    H. Lang (Harald); T. Mühlbauer; F. Funke; P.A. Boncz (Peter); T. Neumann (Thomas); A. Kemper (Alfons)

    2016-01-01

    This work aims at reducing the main-memory footprint in high performance hybrid OLTP & OLAP databases, while retaining high query performance and transactional throughput. For this purpose, an innovative compressed columnar storage format for cold data, called Data Blocks is introduced.

  9. Parallel file system performances in fusion data storage

    International Nuclear Information System (INIS)

    Iannone, F.; Podda, S.; Bracco, G.; Manduchi, G.; Maslennikov, A.; Migliori, S.; Wolkersdorfer, K.

    2012-01-01

    High I/O flow rates, up to 10 GB/s, are required in large fusion Tokamak experiments like ITER where hundreds of nodes store simultaneously large amounts of data acquired during the plasma discharges. Typical network topologies such as linear arrays (systolic), rings, meshes (2-D arrays), tori (3-D arrays), trees, butterfly, hypercube in combination with high speed data transports like Infiniband or 10G-Ethernet, are the main areas in which the effort to overcome the so-called parallel I/O bottlenecks is most focused. The high I/O flow rates were modelled in an emulated testbed based on the parallel file systems such as Lustre and GPFS, commonly used in High Performance Computing. The test runs on High Performance Computing–For Fusion (8640 cores) and ENEA CRESCO (3392 cores) supercomputers. Message Passing Interface based applications were developed to emulate parallel I/O on Lustre and GPFS using data archival and access solutions like MDSPLUS and Universal Access Layer. These methods of data storage organization are widely diffused in nuclear fusion experiments and are being developed within the EFDA Integrated Tokamak Modelling – Task Force; the authors tried to evaluate their behaviour in a realistic emulation setup.
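
    A minimal mpi4py sketch of the kind of emulated parallel I/O described here, assuming an MPI installation and a POSIX path (which could sit on a Lustre or GPFS mount); it is not the authors' benchmark code. Each rank writes its own disjoint block of a shared file through MPI-IO.

        # Minimal mpi4py sketch of N ranks writing disjoint blocks of one shared file (run with mpiexec).
        from mpi4py import MPI
        import numpy as np

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()

        block = np.full(1024, rank, dtype=np.float64)          # 8 KiB of data per rank
        fh = MPI.File.Open(comm, "emulated_shot.dat",
                           MPI.MODE_WRONLY | MPI.MODE_CREATE)
        fh.Write_at(rank * block.nbytes, block)                # each rank writes at its own offset
        fh.Close()

        if rank == 0:
            print("wrote", comm.Get_size() * block.nbytes, "bytes")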

  10. Parallel file system performances in fusion data storage

    Energy Technology Data Exchange (ETDEWEB)

    Iannone, F., E-mail: francesco.iannone@enea.it [Associazione EURATOM-ENEA sulla Fusione, C.R.ENEA Frascati, via E.Fermi, 45 - 00044 Frascati, Rome (Italy); Podda, S.; Bracco, G. [ENEA Information Communication Tecnologies, Lungotevere Thaon di Revel, 76 - 00196 Rome (Italy); Manduchi, G. [Associazione EURATOM-ENEA sulla Fusione, Consorzio RFX, Corso Stati Uniti, 4 - 35127 Padua (Italy); Maslennikov, A. [CASPUR Inter-University Consortium for the Application of Super-Computing for Research, via dei Tizii, 6b - 00185 Rome (Italy); Migliori, S. [ENEA Information Communication Tecnologies, Lungotevere Thaon di Revel, 76 - 00196 Rome (Italy); Wolkersdorfer, K. [Juelich Supercomputing Centre-FZJ, D-52425 Juelich (Germany)

    2012-12-15

    High I/O flow rates, up to 10 GB/s, are required in large fusion Tokamak experiments like ITER where hundreds of nodes store simultaneously large amounts of data acquired during the plasma discharges. Typical network topologies such as linear arrays (systolic), rings, meshes (2-D arrays), tori (3-D arrays), trees, butterfly, hypercube in combination with high speed data transports like Infiniband or 10G-Ethernet, are the main areas in which the effort to overcome the so-called parallel I/O bottlenecks is most focused. The high I/O flow rates were modelled in an emulated testbed based on the parallel file systems such as Lustre and GPFS, commonly used in High Performance Computing. The test runs on High Performance Computing-For Fusion (8640 cores) and ENEA CRESCO (3392 cores) supercomputers. Message Passing Interface based applications were developed to emulate parallel I/O on Lustre and GPFS using data archival and access solutions like MDSPLUS and Universal Access Layer. These methods of data storage organization are widely diffused in nuclear fusion experiments and are being developed within the EFDA Integrated Tokamak Modelling - Task Force; the authors tried to evaluate their behaviour in a realistic emulation setup.

  11. Evaluating water storage variations in the MENA region using GRACE satellite data

    KAUST Repository

    Lopez, Oliver; Houborg, Rasmus; McCabe, Matthew

    2013-01-01

    estimates of water storage and fluxes over areas covering a minimum of 150,000 km2 (length scales of a few hundred kilometers) and thus prove to be a valuable tool for regional water resources management, particularly for areas with a lack of in-situ data

  12. The realization of the storage of XML and middleware-based data of electronic medical records

    International Nuclear Information System (INIS)

    Liu Shuzhen; Gu Peidi; Luo Yanlin

    2007-01-01

    In this paper, the technology of XML and middleware is used to design and implement a unified electronic medical records storage and archive management system, and a common storage management model is given. XML is used to describe the structure of electronic medical records and to transform the medical data from traditional 'business-centered' medical information into a unified 'patient-centered' XML document, while middleware technology is used to shield the types of the databases at the different departments of the hospital and to complete the integration of the medical data scattered in different databases, which is conducive to information sharing between different hospitals. (authors)
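
    A minimal sketch of the 'patient-centered' XML idea, with hypothetical department rows and element names standing in for whatever the middleware layer actually returns; it is not the system described in the record.

        # Sketch: merge rows from department-specific sources into one patient-centered XML document.
        import xml.etree.ElementTree as ET

        # Hypothetical per-department rows, as a middleware layer might return them.
        lab_rows = [{"test": "WBC", "value": "6.2", "date": "2007-03-01"}]
        radiology_rows = [{"study": "Chest X-ray", "finding": "normal", "date": "2007-03-02"}]

        patient = ET.Element("patient", {"id": "P000123"})
        for row in lab_rows:
            ET.SubElement(patient, "labResult", row)
        for row in radiology_rows:
            ET.SubElement(patient, "imagingReport", row)

        ET.ElementTree(patient).write("P000123_emr.xml")
        print(ET.tostring(patient, encoding="unicode"))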

  13. Holographic storage of three-dimensional image and data using photopolymer and polymer dispersed liquid crystal films

    International Nuclear Information System (INIS)

    Gao Hong-Yue; Liu Pan; Zeng Chao; Yao Qiu-Xiang; Zheng Zhiqiang; Liu Jicheng; Zheng Huadong; Yu Ying-Jie; Zeng Zhen-Xiang; Sun Tao

    2016-01-01

    We present holographic storage of three-dimensional (3D) images and data in a photopolymer film without any applied electric field. Its absorption and diffraction efficiency are measured, and reflective analog holograms of real objects and images of digital information are recorded in the films. The photopolymer is compared with polymer dispersed liquid crystals as a holographic material. Although the holographic diffraction efficiency of the former is a little lower than that of the latter, this work demonstrates that the photopolymer is more suitable for analog holograms and permanent big-data storage because of its high definition and because it needs no high-voltage electric field. Therefore, our study proposes a potential holographic storage material for application in large-size static 3D holographic displays, including analog hologram displays, digital hologram prints, and holographic disks. (special topic)

  14. Storage and analysis of radioisotope scan data using a microcomputer

    Energy Technology Data Exchange (ETDEWEB)

    Crawshaw, I P; Diffey, B L [Dryburn Hospital, Durham (UK)

    1981-08-01

    A data storage system has been created for recording clinical radioisotope scan data on a microcomputer system, located and readily available for use in an imaging department. The input of patient data from the request cards and the results sheets is straightforward as menus and code numbers are used throughout a logical sequence of steps in the program. The questions fall into four categories; patient information, referring centre information, diagnosis and symptoms and results of the investigation. The main advantage of the analysis program is its flexibility in that it follows the same format as the input program and any combination of criteria required for analysis may be selected. The menus may readily be altered and the programs adapted for use in other hospital departments.

  15. Storage and analysis of radioisotope scan data using a microcomputer

    International Nuclear Information System (INIS)

    Crawshaw, I.P.; Diffey, B.L.

    1981-01-01

    A data storage system has been created for recording clinical radioisotope scan data on a microcomputer system, located and readily available for use in an imaging department. The input of patient data from the request cards and the results sheets is straightforward as menus and code numbers are used throughout a logical sequence of steps in the program. The questions fall into four categories; patient information, referring centre information, diagnosis and symptoms and results of the investigation. The main advantage of the analysis program is its flexibility in that it follows the same format as the input program and any combination of criteria required for analysis may be selected. The menus may readily be altered and the programs adapted for use in other hospital departments. (U.K.)

  16. "Recent experiences and future expectations in data storage technology"

    Science.gov (United States)

    Pfister, Jack

    1990-08-01

    For more than 10 years the conventional medium for High Energy Physics has been 9-track magnetic tape in various densities. More recently, especially in Europe, the IBM 3480 technology has been adopted, while in the United States, especially at Fermilab, 8 mm is being used by the largest experiments as a primary recording medium and, where possible, for the production, analysis and distribution of data summary tapes. VHS and Digital Audio tape have recurrently appeared but seem to serve primarily as back-up storage media. The reasons for what appears to be a radical departure are many. Economics (media and controllers are inexpensive), form factor (two gigabytes per shirt pocket), and convenience (fewer mounts/dismounts per minute) are dominant among them. The traditional data media suppliers seem to have been content to evolve the traditional media at their own pace, with only modest enhancements primarily in "value engineering" of extant products. Meanwhile, start-up companies providing small systems and workstations sought other media, both to reduce the price of their offerings and to respond to the real need for lower-cost back-up for lower-cost systems. This is happening in a market context where traditional computer systems vendors were leaving the tape market altogether or shifting to "3480" technology, which has certainly created a climate for reconsideration and change. The newest data storage products, in most cases, are coming not from technologies developed by the computing industry but from the audio and video industry. Just where these flopticals, opticals, 19 mm tape and the new underlying technologies, such as "digital paper", may fit in the HEP computing requirement picture will be reviewed. What these technologies do for and to HEP will be discussed, along with some suggestions for a methodology for tracking and evaluating extant and emerging technologies.

  17. Technology for organization of the onboard system for processing and storage of ERS data for ultrasmall spacecraft

    Science.gov (United States)

    Strotov, Valery V.; Taganov, Alexander I.; Konkin, Yuriy V.; Kolesenkov, Aleksandr N.

    2017-10-01

    The task of processing and analysing Earth remote sensing data on board an ultra-small spacecraft is topical, considering the significant energy expenditure of data transfer and the low performance of onboard computers. There is therefore an issue of effective and reliable storage of the general information flow obtained from onboard data collection systems, including Earth remote sensing data, in a specialized database. The paper considers the peculiarities of database management system operation with a multilevel memory structure. For data storage, a format has been developed that describes the physical structure of the database and contains the parameters required for loading the information. This structure reduces the memory size occupied by the database because it is not necessary to store the values of keys separately. The paper shows the architecture of a relational database management system designed to be embedded in the onboard software of an ultra-small spacecraft. A database for storing different kinds of information, including Earth remote sensing data, can be developed with this database management system for subsequent processing. The suggested database management system architecture places low demands on the computing power and memory resources available on board an ultra-small spacecraft. Data integrity is ensured during input and modification of the structured information.
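
    The claim that key values need not be stored separately can be illustrated with fixed-size records addressed by record number, so the key is implied by position in the file. The field layout below is hypothetical and is not the format developed in the paper.

        # Sketch: fixed-size records addressed by index, so no per-record key needs to be stored.
        import struct

        REC = struct.Struct("<I f f")        # sample id, latitude, longitude (hypothetical layout)

        with open("onboard.db", "wb") as f:
            for i, (lat, lon) in enumerate([(55.1, 37.6), (48.9, 2.3), (35.7, 139.7)]):
                f.write(REC.pack(i, lat, lon))

        def read_record(n):
            # The record number doubles as the key: offset = n * record_size.
            with open("onboard.db", "rb") as f:
                f.seek(n * REC.size)
                return REC.unpack(f.read(REC.size))

        print(read_record(2))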

  18. Handling the data management needs of high-throughput sequencing data: SpeedGene, a compression algorithm for the efficient storage of genetic data

    Science.gov (United States)

    2012-01-01

    Background As Next-Generation Sequencing data becomes available, existing hardware environments do not provide sufficient storage space and computational power to store and process the data due to their enormous size. This is and will be a frequent problem encountered every day by researchers who are working on genetic data. There are some options available for compressing and storing such data, such as general-purpose compression software, PBAT/PLINK binary format, etc. However, these currently available methods either do not offer sufficient compression rates, or require a great amount of CPU time for decompression and loading every time the data is accessed. Results Here, we propose a novel and simple algorithm for storing such sequencing data. We show that the compression factor of the algorithm ranges from 16 to several hundreds, which potentially allows SNP data of hundreds of Gigabytes to be stored in hundreds of Megabytes. We provide a C++ implementation of the algorithm, which supports direct loading and parallel loading of the compressed format without requiring extra time for decompression. By applying the algorithm to simulated and real datasets, we show that the algorithm gives a greater compression rate than the commonly used compression methods, and the data-loading process takes less time. Also, the C++ library provides direct-data-retrieving functions, which allow the compressed information to be easily accessed by other C++ programs. Conclusions The SpeedGene algorithm enables the storage and the analysis of next generation sequencing data in current hardware environments, making system upgrades unnecessary. PMID:22591016
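
    The SpeedGene algorithm itself is not reproduced in the record. The generic 2-bit genotype packing below only illustrates why compression factors of this order are plausible for SNP data and why the packed form can be read back directly without a separate decompression pass; it is not the paper's method.

        # Generic 2-bit packing of SNP genotypes (0, 1, 2, missing=3); not the SpeedGene algorithm itself.
        import numpy as np

        genotypes = np.array([0, 1, 2, 0, 3, 1, 2, 2], dtype=np.uint8)   # one marker, eight samples

        def pack(g):
            pad = (-len(g)) % 4
            g = np.concatenate([g, np.full(pad, 3, dtype=np.uint8)])
            g = g.reshape(-1, 4)
            return (g[:, 0] | (g[:, 1] << 2) | (g[:, 2] << 4) | (g[:, 3] << 6)).astype(np.uint8)

        def unpack(packed, n):
            out = np.empty((len(packed), 4), dtype=np.uint8)
            for k in range(4):
                out[:, k] = (packed >> (2 * k)) & 0b11
            return out.reshape(-1)[:n]

        packed = pack(genotypes)
        print(len(genotypes), "genotypes ->", packed.nbytes, "bytes")
        assert np.array_equal(unpack(packed, len(genotypes)), genotypes)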

  19. Estimating continental water storage variations in Central Asia area using GRACE data

    International Nuclear Information System (INIS)

    Dapeng, Mu; Zhongchang, Sun; Jinyun, Guo

    2014-01-01

    The goal of the GRACE satellite mission is to determine time variations of the Earth's gravity field, and particularly the effects of fluid mass redistributions at the surface of the Earth. This paper uses GRACE Level-2 RL05 data provided by CSR to estimate water storage variations of four river basins in the Asia area for the period from 2003 to 2011. We apply a two-step filtering method to reduce the errors in the GRACE data, which combines a Gaussian averaging function and an empirical de-correlation method. We use the GLDAS hydrology model to validate the results from GRACE. A special averaging approach is performed to reduce the errors in GLDAS. The results for the first three basins from GRACE are consistent with the GLDAS hydrology model. In the Tarim River basin, there is more discrepancy between GRACE and GLDAS. Precipitation data from weather stations prove that the results of GRACE are more plausible. We use spectral analysis to obtain the main periods of the GRACE and GLDAS time series and then use least squares adjustment to determine the amplitude and phase. The results show that water storage in Central Asia is decreasing
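
    A minimal sketch of the least-squares step described here, fitting a trend plus an annual sinusoid (amplitude and phase) to a synthetic monthly storage series; the numbers are invented for illustration and are not the paper's data.

        # Fit trend + annual sinusoid (amplitude and phase) to a monthly storage series by least squares.
        import numpy as np

        t = np.arange(108) / 12.0                                  # 2003-2011, monthly, in years
        rng = np.random.default_rng(0)
        y = -0.5 * t + 3.0 * np.cos(2 * np.pi * t - 1.0) + rng.normal(0, 0.3, t.size)  # synthetic EWH (cm)

        # Design matrix: offset, trend, cosine and sine of the annual period.
        A = np.column_stack([np.ones_like(t), t, np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)])
        c = np.linalg.lstsq(A, y, rcond=None)[0]

        amplitude = np.hypot(c[2], c[3])
        phase = np.arctan2(c[3], c[2])
        print(f"trend = {c[1]:.2f} cm/yr, annual amplitude = {amplitude:.2f} cm, phase = {phase:.2f} rad")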

  20. Sharing Privacy Protected and Statistically Sound Clinical Research Data Using Outsourced Data Storage

    Directory of Open Access Journals (Sweden)

    Geontae Noh

    2014-01-01

    Full Text Available It is critical to scientific progress to share clinical research data stored in outsourced generally available cloud computing services. Researchers are able to obtain valuable information that they would not otherwise be able to access; however, privacy concerns arise when sharing clinical data in these outsourced publicly available data storage services. HIPAA requires researchers to deidentify private information when disclosing clinical data for research purposes and describes two available methods for doing so. Unfortunately, both techniques degrade statistical accuracy. Therefore, the need to protect privacy presents a significant problem for data sharing between hospitals and researchers. In this paper, we propose a controlled secure aggregation protocol to secure both privacy and accuracy when researchers outsource their clinical research data for sharing. Since clinical data must remain private beyond a patient’s lifetime, we take advantage of lattice-based homomorphic encryption to guarantee long-term security against quantum computing attacks. Using lattice-based homomorphic encryption, we design an aggregation protocol that aggregates outsourced ciphertexts under distinct public keys. It enables researchers to get aggregated results from outsourced ciphertexts of distinct researchers. To the best of our knowledge, our protocol is the first aggregation protocol which can aggregate ciphertexts which are encrypted with distinct public keys.
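
    The sketch below is not the lattice-based homomorphic protocol of the paper; it is only a toy additive-masking aggregation, included to illustrate the general idea of an aggregator learning a sum without seeing any individual contribution. The hospital names and counts are invented.

        # Toy additive-masking aggregation (NOT lattice-based homomorphic encryption): the aggregator
        # sees only masked values, yet the pairwise masks cancel in the sum.
        import secrets

        Q = 2**61 - 1                       # arithmetic modulo a large fixed modulus
        values = {"hospitalA": 17, "hospitalB": 5, "hospitalC": 30}   # private per-site counts

        parties = list(values)
        masks = {p: 0 for p in parties}
        for i, p in enumerate(parties):
            for q in parties[i + 1:]:
                r = secrets.randbelow(Q)
                masks[p] = (masks[p] + r) % Q        # p adds the shared mask
                masks[q] = (masks[q] - r) % Q        # q subtracts it, so the pair cancels

        masked = {p: (values[p] + masks[p]) % Q for p in parties}     # what the aggregator receives
        aggregate = sum(masked.values()) % Q
        print("aggregated count:", aggregate)        # equals 52 without revealing any single value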

  1. Robust Secure Authentication and Data Storage with Perfect Secrecy

    Directory of Open Access Journals (Sweden)

    Sebastian Baur

    2018-04-01

    Full Text Available We consider an authentication process that makes use of biometric data or the output of a physical unclonable function (PUF, respectively, from an information theoretical point of view. We analyse different definitions of achievability for the authentication model. For the secrecy of the key generated for authentication, these definitions differ in their requirements. In the first work on PUF based authentication, weak secrecy has been used and the corresponding capacity regions have been characterized. The disadvantages of weak secrecy are well known. The ultimate performance criteria for the key are perfect secrecy together with uniform distribution of the key. We derive the corresponding capacity region. We show that, for perfect secrecy and uniform distribution of the key, we can achieve the same rates as for weak secrecy together with a weaker requirement on the distribution of the key. In the classical works on PUF based authentication, it is assumed that the source statistics are known perfectly. This requirement is rarely met in applications. That is why the model is generalized to a compound model, taking into account source uncertainty. We also derive the capacity region for the compound model requiring perfect secrecy. Additionally, we consider results for secure storage using a biometric or PUF source that follow directly from the results for authentication. We also generalize known results for this problem by weakening the assumption concerning the distribution of the data that shall be stored. This allows us to combine source compression and secure storage.

  2. Bookshelf: a simple curation system for the storage of biomolecular simulation data.

    Science.gov (United States)

    Vohra, Shabana; Hall, Benjamin A; Holdbrook, Daniel A; Khalid, Syma; Biggin, Philip C

    2010-01-01

    Molecular dynamics simulations can now routinely generate data sets of several hundreds of gigabytes in size. The ability to generate this data has become easier over recent years and the rate of data production is likely to increase rapidly in the near future. One major problem associated with this vast amount of data is how to store it in a way that it can be easily retrieved at a later date. The obvious answer to this problem is a database. However, a key issue in the development and maintenance of such a database is its sustainability, which in turn depends on the ease of the deposition and retrieval process. Encouraging users to care about meta-data is difficult and thus the success of any storage system will ultimately depend on how well used by end-users the system is. In this respect we suggest that even a minimal amount of metadata if stored in a sensible fashion is useful, if only at the level of individual research groups. We discuss here, a simple database system which we call 'Bookshelf', that uses python in conjunction with a mysql database to provide an extremely simple system for curating and keeping track of molecular simulation data. It provides a user-friendly, scriptable solution to the common problem amongst biomolecular simulation laboratories; the storage, logging and subsequent retrieval of large numbers of simulations. Download URL: http://sbcb.bioch.ox.ac.uk/bookshelf/
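
    A minimal sketch of this kind of curation table, using sqlite3 as a stand-in for the MySQL back end described in the record; the column set, system names and paths are hypothetical and not Bookshelf's actual schema.

        # Minimal stand-in for a simulation-curation table (sqlite3 here instead of MySQL).
        import sqlite3
        import time

        db = sqlite3.connect("bookshelf.db")
        db.execute("""CREATE TABLE IF NOT EXISTS simulations (
                        id INTEGER PRIMARY KEY,
                        system TEXT, force_field TEXT, length_ns REAL,
                        trajectory_path TEXT, deposited REAL)""")

        def deposit(system, force_field, length_ns, trajectory_path):
            db.execute("INSERT INTO simulations (system, force_field, length_ns, trajectory_path, deposited) "
                       "VALUES (?,?,?,?,?)", (system, force_field, length_ns, trajectory_path, time.time()))
            db.commit()

        deposit("OmpA in DPPC bilayer", "GROMOS 53a6", 100.0, "/data/sims/ompa_run1.xtc")

        # Retrieval: find all runs of a given length stored so far.
        for row in db.execute("SELECT id, system, length_ns FROM simulations WHERE length_ns > 50"):
            print(row)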

  3. System for secure storage

    NARCIS (Netherlands)

    2005-01-01

    A system (100) comprising read means (112) for reading content data and control logic data from a storage medium (101), the control logic data being uniquely linked to the storage medium (101), processing means (113-117), for processing the content data and feeding the processed content data to an

  4. A price and performance comparison of three different storage architectures for data in cloud-based systems

    Science.gov (United States)

    Gallagher, J. H. R.; Jelenak, A.; Potter, N.; Fulker, D. W.; Habermann, T.

    2017-12-01

    Providing data services based on cloud computing technology that are equivalent to those developed for traditional computing and storage systems is critical for successful migration to cloud-based architectures for data production, scientific analysis and storage. OPeNDAP Web-service capabilities (comprising the Data Access Protocol (DAP) specification plus open-source software for realizing DAP in servers and clients) are among the most widely deployed means for achieving data-as-service functionality in the Earth sciences. OPeNDAP services are especially common in traditional data center environments where servers offer access to datasets stored in (very large) file systems, and a preponderance of the source data for these services is being stored in the Hierarchical Data Format Version 5 (HDF5). Three candidate architectures for serving NASA satellite Earth Science HDF5 data via Hyrax running on Amazon Web Services (AWS) were developed and their performance examined for a set of representative use cases. The performance was assessed in terms of both runtime and incurred cost. The three architectures differ in how HDF5 files are stored in the Amazon Simple Storage Service (S3) and how the Hyrax server (as an EC2 instance) retrieves their data. Results for both serial and parallel access to HDF5 data in S3 will be presented. While the study focused on HDF5 data, OPeNDAP and the Hyrax data server, the architectures are generic and the analysis can be extrapolated to many different data formats, web APIs, and data servers.
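
    A hedged sketch of the kind of access-pattern comparison described here, contrasting a whole-object download with a ranged GET through boto3; the bucket, key and byte range are placeholders, not actual NASA holdings, and this is not the study's benchmark code.

        # Hedged sketch: whole-object GET vs ranged GET from S3 (bucket/key/offsets are placeholders).
        import time
        import boto3

        s3 = boto3.client("s3")
        BUCKET, KEY = "example-earthdata-bucket", "granules/sample_granule.h5"

        t0 = time.time()
        whole = s3.get_object(Bucket=BUCKET, Key=KEY)["Body"].read()          # entire HDF5 file
        t1 = time.time()
        part = s3.get_object(Bucket=BUCKET, Key=KEY,
                             Range="bytes=4096-1052671")["Body"].read()       # only one chunk of a dataset
        t2 = time.time()

        print(f"whole object: {len(whole)} bytes in {t1 - t0:.2f}s")
        print(f"ranged read : {len(part)} bytes in {t2 - t1:.2f}s")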

  5. Cloud object store for archive storage of high performance computing data using decoupling middleware

    Science.gov (United States)

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2015-06-30

    Cloud object storage is enabled for archived data, such as checkpoints and results, of high performance computing applications using a middleware process. A plurality of archived files, such as checkpoint files and results, generated by a plurality of processes in a parallel computing system are stored by obtaining the plurality of archived files from the parallel computing system; converting the plurality of archived files to objects using a log structured file system middleware process; and providing the objects for storage in a cloud object storage system. The plurality of processes may run, for example, on a plurality of compute nodes. The log structured file system middleware process may be embodied, for example, as a Parallel Log-Structured File System (PLFS). The log structured file system middleware process optionally executes on a burst buffer node.
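
    A generic sketch of the files-to-objects conversion idea, with a Python dict standing in for the cloud object store and an invented object-naming scheme; it is not PLFS or the patented middleware described here.

        # Generic sketch: turn checkpoint files into named objects (a dict stands in for the object store).
        import pathlib

        object_store = {}                      # stand-in for a cloud object storage API

        def archive_checkpoints(directory):
            for path in sorted(pathlib.Path(directory).glob("ckpt_*.bin")):
                key = f"run42/{path.name}"     # object name derived from the file name (hypothetical scheme)
                object_store[key] = path.read_bytes()

        # Create a couple of fake checkpoint files, then archive them.
        d = pathlib.Path("checkpoints"); d.mkdir(exist_ok=True)
        (d / "ckpt_0001.bin").write_bytes(b"\x00" * 64)
        (d / "ckpt_0002.bin").write_bytes(b"\x01" * 64)
        archive_checkpoints(d)
        print(sorted(object_store))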

  6. Threshold response using modulated continuous wave illumination for multilayer 3D optical data storage

    Science.gov (United States)

    Saini, A.; Christenson, C. W.; Khattab, T. A.; Wang, R.; Twieg, R. J.; Singer, K. D.

    2017-01-01

    In order to achieve a high capacity 3D optical data storage medium, a nonlinear or threshold writing process is necessary to localize data in the axial dimension. To this end, commercial multilayer discs use thermal ablation of metal films or phase change materials to realize such a threshold process. This paper addresses a threshold writing mechanism relevant to recently reported fluorescence-based data storage in dye-doped co-extruded multilayer films. To gain understanding of the essential physics, single layer spun coat films were used so that the data is easily accessible by analytical techniques. Data were written by attenuating the fluorescence using nanosecond-range exposure times from a 488 nm continuous wave laser overlapping with the single photon absorption spectrum. The threshold writing process was studied over a range of exposure times and intensities, and with different fluorescent dyes. It was found that all of the dyes have a common temperature threshold where fluorescence begins to attenuate, and the physical nature of the thermal process was investigated.

  7. Model-independent and fast determination of optical functions in storage rings via multiturn and closed-orbit data

    Directory of Open Access Journals (Sweden)

    Bernard Riemann

    2011-06-01

    Full Text Available Multiturn (or turn-by-turn data acquisition has proven to be a new source of direct measurements for Twiss parameters in storage rings. On the other hand, closed-orbit measurements are a long-known tool for analyzing closed-orbit perturbations with conventional beam position monitor (BPM systems and are necessarily available at every storage ring. This paper aims at combining the advantages of multiturn measurements and closed-orbit data. We show that only two multiturn BPMs and four correctors in one localized drift space in the storage ring (diagnostic drift are sufficient for model-independent and absolute measuring of β and φ functions at all BPMs, including the conventional ones, instead of requiring all BPMs being equipped with multiturn electronics.

  8. Model-independent and fast determination of optical functions in storage rings via multiturn and closed-orbit data

    Science.gov (United States)

    Riemann, Bernard; Grete, Patrick; Weis, Thomas

    2011-06-01

    Multiturn (or turn-by-turn) data acquisition has proven to be a new source of direct measurements for Twiss parameters in storage rings. On the other hand, closed-orbit measurements are a long-known tool for analyzing closed-orbit perturbations with conventional beam position monitor (BPM) systems and are necessarily available at every storage ring. This paper aims at combining the advantages of multiturn measurements and closed-orbit data. We show that only two multiturn BPMs and four correctors in one localized drift space in the storage ring (diagnostic drift) are sufficient for model-independent and absolute measuring of β and φ functions at all BPMs, including the conventional ones, instead of requiring all BPMs being equipped with multiturn electronics.

  9. dCache data storage system implementations at a Tier-2 centre

    Energy Technology Data Exchange (ETDEWEB)

    Tsigenov, Oleg; Nowack, Andreas; Kress, Thomas [III. Physikalisches Institut B, RWTH Aachen (Germany)

    2009-07-01

    The experimental high energy physics groups of the RWTH Aachen University operate one of the largest Grid Tier-2 sites in the world and offer more than 2000 modern CPU cores and about 550 TB of disk space mainly to the CMS experiment and to a lesser extent to the Auger and IceCube collaborations. Running such a large data cluster requires a flexible storage system with high performance. We use dCache for this purpose and are integrated into the dCache support team to the benefit of the German Grid sites. Recently, a storage pre-production cluster has been built to study the setup and the behavior of novel dCache features within Chimera without interfering with the production system. This talk gives an overview of the practical experience gained with dCache on both the production and the testbed cluster and discusses future plans.

  10. An Empirical Study on Android for Saving Non-shared Data on Public Storage

    OpenAIRE

    Liu, Xiangyu; Zhou, Zhe; Diao, Wenrui; Li, Zhou; Zhang, Kehuan

    2014-01-01

    With millions of apps that can be downloaded from official or third-party market, Android has become one of the most popular mobile platforms today. These apps help people in all kinds of ways and thus have access to lots of user's data that in general fall into three categories: sensitive data, data to be shared with other apps, and non-sensitive data not to be shared with others. For the first and second type of data, Android has provided very good storage models: an app's private sensitive...

  11. Ensuring Data Storage Security in Tree cast Routing Architecture for Sensor Networks

    Science.gov (United States)

    Kumar, K. E. Naresh; Sagar, U. Vidya; Waheed, Mohd. Abdul

    2010-10-01

    This paper notes that recent advances in technology have made possible low-cost, low-power wireless sensors with efficient energy consumption. A network of such nodes can coordinate among themselves for distributed sensing and processing of certain data. We propose an architecture that provides a stateless solution for efficient routing in wireless sensor networks. This type of architecture is known as Tree Cast. We propose a unique method of address allocation, building up multiple disjoint trees which are geographically inter-twined and rooted at the data sink. Using these trees, routing messages to and from the sink node is possible without maintaining any routing state in the sensor nodes. In contrast to traditional solutions, where the IT services are under proper physical, logical and personnel controls, this routing architecture moves the application software and databases to the large data centers, where the management of the data and services may not be fully trustworthy. This unique attribute, however, poses many new security challenges which have not been well understood. In this paper, we focus on data storage security, which has always been an important aspect of quality of service. To ensure the correctness of users' data in this architecture, we propose an effective and flexible distributed scheme with two salient features, as opposed to its predecessors. By utilizing the homomorphic token with distributed verification of erasure-coded data, our scheme achieves the integration of storage correctness insurance and data error localization, i.e., the identification of misbehaving server(s). Unlike most prior works, the new scheme further supports secure and efficient dynamic operations on data blocks, including: data update, delete and append. Extensive security and performance analysis shows that the proposed scheme is highly efficient and resilient against Byzantine failure, malicious data modification attack, and even server

  12. Data storage for managing the health enterprise and achieving business continuity.

    Science.gov (United States)

    Hinegardner, Sam

    2003-01-01

    As organizations move away from a silo mentality to a vision of enterprise-level information, more healthcare IT departments are rejecting the idea of information storage as an isolated, system-by-system solution. IT executives want storage solutions that act as a strategic element of an IT infrastructure, centralizing storage management activities to effectively reduce operational overhead and costs. This article focuses on three areas of enterprise storage: tape, disk, and disaster avoidance.

  13. Hydrological storage variations in a lake water balance, observed from multi-sensor satellite data and hydrological models.

    Science.gov (United States)

    Singh, Alka; Seitz, Florian; Schwatke, Christian; Guentner, Andreas

    2013-04-01

    Freshwater lakes and reservoirs account for 74.5% of continental water storage in surface water bodies, and only 1.8% resides in rivers. Lakes and reservoirs are a key component of the continental hydrological cycle, but in-situ monitoring networks are very limited either because of the sparse spatial distribution of gauges or because of national data policy. Monitoring and predicting extreme events is very challenging in that case. In this study we demonstrate the use of optical remote sensing, satellite altimetry and the GRACE gravity field mission to monitor lake water storage variations in the Aral Sea. The Aral Sea is one of the most unfortunate examples of a large anthropogenic catastrophe. The fourth-largest lake of the 1960s has been desertified over more than 75% of its area due to the diversion of its primary rivers for irrigation purposes. Our study is focused on the time frame of the GRACE mission; therefore we consider changes from 2002 onwards. Continuous monthly time series of water masks from Landsat satellite data and of water levels from altimetry missions were derived. Monthly volumetric variations of the lake water storage were computed by intersecting a digital elevation model of the lake with the respective water mask and altimetry water level. With this approach we obtained volumes from two independent remote sensing methods, which were combined in a least squares adjustment to reduce the error in the estimated volume. The resulting variations were then compared with mass variability observed by GRACE. In addition, GRACE estimates of water storage variations were compared with simulation results of the WaterGAP Global Hydrology Model (WGHM). The different observations from all missions agree that the lake reached an absolute minimum in autumn 2009. A marked reversal of the negative trend occurred in 2010 but water storage in the lake decreased again afterwards. The results reveal that water storage variations in the Aral Sea are indeed the principal, but not the only contributor to the GRACE signal of
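
    A toy numerical example of the volume computation described here (intersecting a bathymetry DEM with a water mask and an altimetry level); the grid, water level and cell size are invented for illustration.

        # Toy example: lake volume from a bathymetry DEM, a water mask and an altimetry water level.
        import numpy as np

        dem = np.array([[30.0, 28.0, 27.5],
                        [29.0, 25.0, 26.0],
                        [31.0, 27.0, 29.5]])          # bed elevation per cell (m)
        water_mask = dem < 30.0                       # cells flagged as water in the optical image
        level = 29.0                                  # altimetry water level (m)
        cell_area = 250.0 * 250.0                     # cell size (m^2), hypothetical resolution

        depth = np.clip(level - dem, 0.0, None) * water_mask
        volume = (depth * cell_area).sum()
        print(f"stored volume: {volume / 1e6:.3f} million m^3")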

  14. Data acquisition, storage and control architecture for the SuperNova Acceleration Probe

    International Nuclear Information System (INIS)

    Prosser, Alan; Fermilab; Cardoso, Guilherme; Chramowicz, John; Marriner, John; Rivera, Ryan; Turqueti, Marcos; Fermilab

    2007-01-01

    The SuperNova Acceleration Probe (SNAP) instrument is being designed to collect image and spectroscopic data for the study of dark energy in the universe. In this paper, we describe a distributed architecture for the data acquisition system which interfaces to visible light and infrared imaging detectors. The architecture includes the use of NAND flash memory for the storage of exposures in a file system. Also described is an FPGA-based lossless data compression algorithm with a configurable pre-scaler based on a novel square root data compression method to improve compression performance. The required interactions of the distributed elements with an instrument control unit will be described as well
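
    The abstract does not spell out the square-root pre-scaler, but the general idea behind square-root prescaling of detector counts is easy to illustrate. The Python below is an assumption-laden sketch, not the SNAP implementation: for Poisson-noise-dominated pixel counts, the integer square root has a much smaller dynamic range than the raw value, so a downstream entropy coder sees cheaper symbols, while keeping the residual makes the transform exactly invertible and therefore lossless.

        import numpy as np

        def sqrt_prescale(pixels):
            """Split each count into a low-entropy integer square root plus a small residual."""
            pixels = np.asarray(pixels, dtype=np.uint32)
            root = np.floor(np.sqrt(pixels)).astype(np.uint32)   # coarse, low-dynamic-range part
            residual = pixels - root * root                      # exact correction, always < 2*root + 1
            return root, residual

        def sqrt_descale(root, residual):
            """Exact inverse of the pre-scaler (lossless round trip)."""
            return root * root + residual

        # round-trip check on simulated Poisson counts
        data = np.random.poisson(lam=500, size=10)
        r, res = sqrt_prescale(data)
        assert np.array_equal(sqrt_descale(r, res), data)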

  15. Data compilation report: Gas and liquid samples from K West Basin fuel storage canisters

    International Nuclear Information System (INIS)

    Trimble, D.J.

    1995-01-01

    Forty-one gas and liquid samples were taken from spent fuel storage canisters in the K West Basin during a March 1995 sampling campaign. (Spent fuel from the N Reactor is stored in sealed canisters at the bottom of the K West Basin.) A description of the sampling process, gamma energy analysis data, and quantitative gas mass spectroscopy data are documented. This documentation does not include data analysis

  16. All-optical signal processing data communication and storage applications

    CERN Document Server

    Eggleton, Benjamin

    2015-01-01

    This book provides a comprehensive review of the state of the art of optical signal processing technologies and devices. It presents breakthrough solutions for enabling a pervasive use of optics in data communication and signal storage applications. It presents optical signal processing as a solution to overcome the capacity crunch in communication networks. The book content ranges from the development of innovative materials and devices, such as graphene and slow light structures, to the use of nonlinear optics for secure quantum information processing and overcoming the classical Shannon limit on channel capacity and microwave signal processing. Although it holds the promise for a substantial speed improvement, in today’s communication infrastructure optics remains largely confined to the signal transport layer, as it lags behind electronics as far as signal processing is concerned. This situation will change in the near future as the tremendous growth of data traffic requires energy efficient and ful...

  17. Adaptation of PyFlag to Efficient Analysis of Overtaken Computer Data Storage

    Directory of Open Access Journals (Sweden)

    Aleksander Byrski

    2010-03-01

    Full Text Available Based on existing software aimed at investigation support in the analysis of computer data storage overtaken during investigation (PyFlag), an extension is proposed involving the introduction of dedicated components for data identification and filtering. Hash codes for popular software contained in the NIST/NSRL database are considered in order to avoid unwanted files while searching and to classify them into several categories. The extension allows for further analysis, e.g. using artificial intelligence methods. The considerations are illustrated by an overview of the system's design.
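
    The filtering step described above, discarding files whose hashes appear in the NIST/NSRL known-software list so that only potentially interesting files reach later (e.g. AI-based) analysis, can be sketched as follows. This is an illustrative Python sketch, not PyFlag code; the one-digest-per-line hash list format is an assumption (the real NSRL reference set ships as CSV).

        import hashlib
        from pathlib import Path

        def load_known_hashes(path):
            """Load a plain-text list of known-file SHA-1 digests, one hex digest per line."""
            with open(path) as fh:
                return {line.strip().lower() for line in fh if line.strip()}

        def sha1_of(file_path, chunk=1 << 20):
            """Stream a file through SHA-1 so large evidence files are not read into memory at once."""
            h = hashlib.sha1()
            with open(file_path, "rb") as fh:
                for block in iter(lambda: fh.read(chunk), b""):
                    h.update(block)
            return h.hexdigest()

        def filter_unknown_files(root, known_hashes):
            """Yield only files whose digest is NOT in the known-software set,
            so later analysis skips uninteresting operating-system or application files."""
            for p in Path(root).rglob("*"):
                if p.is_file() and sha1_of(p) not in known_hashes:
                    yield p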

  18. Discrete event simulation and the resultant data storage system response in the operational mission environment of Jupiter-Saturn /Voyager/ spacecraft

    Science.gov (United States)

    Mukhopadhyay, A. K.

    1978-01-01

    The Data Storage Subsystem Simulator (DSSSIM), which simulates (in ground software) the occurrence of discrete events in the Voyager mission, is described. Functional requirements for Data Storage Subsystem (DSS) simulation are discussed, and discrete event simulation/DSSSIM processing is covered. Four types of outputs associated with a typical DSSSIM run are presented, and DSSSIM limitations and constraints are outlined.

  19. Developing semi-analytical solution for multiple-zone transient storage model with spatially non-uniform storage

    Science.gov (United States)

    Deng, Baoqing; Si, Yinbing; Wang, Jia

    2017-12-01

    Transient storage may vary along a stream due to stream hydraulic conditions and the characteristics of the storage zones. Analytical solutions of transient storage models in the literature do not cover spatially non-uniform storage. A novel integral transform strategy is presented that simultaneously performs integral transforms on the concentrations in the stream and in the storage zones by using a single set of eigenfunctions derived from the advection-diffusion equation of the stream. The semi-analytical solution of the multiple-zone transient storage model with spatially non-uniform storage is obtained by applying the generalized integral transform technique to all partial differential equations in the model. The derived semi-analytical solution is validated against field data from the literature, and good agreement between the computed data and the field data is obtained. Some illustrative examples are formulated to demonstrate the applications of the present solution. It is shown that solute transport can be greatly affected by the variation of the mass exchange coefficient and the ratio of cross-sectional areas. When the ratio of cross-sectional areas is large or the mass exchange coefficient is small, more reaches are recommended for calibrating the parameters.
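
    For orientation, the coupled equations that such semi-analytical solutions address are commonly written as below. This is the standard multiple-zone transient storage formulation from the stream-solute literature, not an excerpt from the paper, and the symbols are the usual assumptions: C is the in-stream concentration, C_{s,j} the concentration in storage zone j, Q the discharge, A and A_{s,j} the stream and storage cross-sectional areas, D the dispersion coefficient, and alpha_j(x) the spatially varying exchange coefficients.

        % standard multiple-zone transient storage model (illustrative form)
        \begin{align}
        \frac{\partial C}{\partial t} &= -\frac{Q}{A}\frac{\partial C}{\partial x}
          + \frac{1}{A}\frac{\partial}{\partial x}\!\left(A D \frac{\partial C}{\partial x}\right)
          + \sum_{j=1}^{N}\alpha_j(x)\,\bigl(C_{s,j}-C\bigr), \\
        \frac{\partial C_{s,j}}{\partial t} &= \alpha_j(x)\,\frac{A}{A_{s,j}(x)}\,\bigl(C-C_{s,j}\bigr),
        \qquad j=1,\dots,N.
        \end{align}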

  20. Research on high-performance mass storage system

    International Nuclear Information System (INIS)

    Cheng Yaodong; Wang Lu; Huang Qiulan; Zheng Wei

    2010-01-01

    With the enlargement of scientific experiments, more and more data will be produced, which brings great challenges to storage systems. Large storage capacity and high data access performance are both important to a mass storage system. This paper firstly reviews some kinds of popular storage systems including network storage systems, SAN-based sharing systems, WAN file systems, object-based parallel file systems, hierarchical storage systems and cloud storage systems. Then some key technologies are presented. Finally, this paper takes the BES storage system as an example and introduces its requirements, architecture and operation results. (authors)

  1. Extracting Biological Meaning From Global Proteomic Data on Circulating-Blood Platelets: Effects of Diabetes and Storage Time

    Energy Technology Data Exchange (ETDEWEB)

    Miller, John H.; Suleiman, Atef; Daly, Don S.; Springer, David L.; Spinelli, Sherry L.; Blumberg, Neil; Phipps, Richard P.

    2008-11-25

    Transfusion of platelets into patients suffering from trauma and a variety of diseases is a common medical practice that involves millions of units per year. Partial activation of platelets can result in the release of bioactive proteins and lipid mediators that increase the risk of adverse post-transfusion effects. Type-2 diabetes and storage are two factors known to cause partial activation of platelets. A global proteomic study was undertaken to investigate these effects. In this paper we discuss the methods used to interpret these data in terms of biological processes affected by diabetes and storage. The main emphasis is on the processing of proteomic data for gene ontology enrichment analysis by techniques originally designed for microarray data.

  2. Public storage for the Open Science Grid

    International Nuclear Information System (INIS)

    Levshina, T; Guru, A

    2014-01-01

    The Open Science Grid infrastructure doesn't provide efficient means to manage public storage offered by participating sites. A Virtual Organization that relies on opportunistic storage has difficulties finding appropriate storage, verifying its availability, and monitoring its utilization. The involvement of the production manager, site administrators and VO support personnel is required to allocate or rescind storage space. One of the main requirements for Public Storage implementation is that it should use SRM or GridFTP protocols to access the Storage Elements provided by the OSG Sites and not put any additional burden on sites. By policy, no new services related to Public Storage can be installed and run on OSG sites. Opportunistic users also have difficulties in accessing the OSG Storage Elements during the execution of jobs. A typical user's data management workflow includes pre-staging common data on sites before a job's execution, then storing the output data produced by a job on a worker node for subsequent download to a local institution. When the amount of data is significant, the only means to temporarily store the data is to upload it to one of the Storage Elements. In order to do that, a user's job should be aware of the storage location, availability, and free space. After a successful data upload, users must somehow keep track of the data's location for future access. In this presentation we propose solutions for storage management and data handling issues in the OSG. We are investigating the feasibility of using the integrated Rule-Oriented Data System developed at RENCI as a front-end service to the OSG SEs. The current architecture, state of deployment and performance test results will be discussed. We will also provide examples of current usage of the system by beta-users.

  3. Public storage for the Open Science Grid

    Science.gov (United States)

    Levshina, T.; Guru, A.

    2014-06-01

    The Open Science Grid infrastructure doesn't provide efficient means to manage public storage offered by participating sites. A Virtual Organization that relies on opportunistic storage has difficulties finding appropriate storage, verifying its availability, and monitoring its utilization. The involvement of the production manager, site administrators and VO support personnel is required to allocate or rescind storage space. One of the main requirements for Public Storage implementation is that it should use SRM or GridFTP protocols to access the Storage Elements provided by the OSG Sites and not put any additional burden on sites. By policy, no new services related to Public Storage can be installed and run on OSG sites. Opportunistic users also have difficulties in accessing the OSG Storage Elements during the execution of jobs. A typical user's data management workflow includes pre-staging common data on sites before a job's execution, then storing the output data produced by a job on a worker node for subsequent download to a local institution. When the amount of data is significant, the only means to temporarily store the data is to upload it to one of the Storage Elements. In order to do that, a user's job should be aware of the storage location, availability, and free space. After a successful data upload, users must somehow keep track of the data's location for future access. In this presentation we propose solutions for storage management and data handling issues in the OSG. We are investigating the feasibility of using the integrated Rule-Oriented Data System developed at RENCI as a front-end service to the OSG SEs. The current architecture, state of deployment and performance test results will be discussed. We will also provide examples of current usage of the system by beta-users.

  4. Aflatoxins & Safe Storage

    Directory of Open Access Journals (Sweden)

    Philippe Villers

    2014-04-01

    Full Text Available The paper examines both field experience and research on the prevention of the exponential growth of aflatoxins during multi-month post-harvest storage in hot, humid countries. The approach described is the application of modern safe storage methods using flexible, Ultra Hermetic™ structures that create an unbreathable atmosphere through insect and microorganism respiration alone, without use of chemicals, fumigants, or pumps. Laboratory and field data are cited and specific examples are given describing the uses of Ultra Hermetic storage to prevent the growth of aflatoxins with their significant public health consequences. Also discussed is the presently limited quantitative information on the relative occurrence of excessive levels of aflatoxin (>20 ppb) before versus after multi-month storage of such crops as maize, rice and peanuts when under high humidity, high temperature conditions and, consequently, the need for further research to determine the frequency at which excessive aflatoxin levels are reached in the field versus after months of post-harvest storage. The significant work being done to reduce aflatoxin levels in the field is mentioned, as well as its probable implications for post-harvest storage. Also described is why, with some crops such as peanuts, using Ultra Hermetic storage may require injection of carbon dioxide or use of an oxygen absorber as an accelerant. The case of peanuts is discussed and experimental data are described.

  5. Protecting location privacy for outsourced spatial data in cloud storage.

    Science.gov (United States)

    Tian, Feng; Gui, Xiaolin; An, Jian; Yang, Pan; Zhao, Jianqiang; Zhang, Xuejun

    2014-01-01

    As cloud computing services and location-aware devices become fully developed, large amounts of spatial data need to be outsourced to cloud storage providers, so research on privacy protection for outsourced spatial data is receiving increasing attention from academia and industry. As a spatial transformation method, the Hilbert curve is widely used to protect the location privacy of spatial data, but a sufficient security analysis of the standard Hilbert curve (SHC) has seldom been performed. In this paper, we propose an index modification method for SHC (SHC(∗)) and a density-based space filling curve (DSC) to improve the security of SHC; they can partially violate the distance-preserving property of SHC, so as to achieve better security. We formally define the indistinguishability and attack model for measuring the privacy disclosure risk of spatial transformation methods. The evaluation results indicate that SHC(∗) and DSC are more secure than SHC, and DSC achieves the best index generation performance.
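
    For readers unfamiliar with the baseline scheme, the sketch below shows how a standard Hilbert curve (SHC) index is computed for a 2-D cell; spatial-transformation privacy schemes of this kind store and query that 1-D index instead of the raw coordinates. This is only the textbook SHC mapping, written as an illustrative Python sketch; the paper's SHC(∗) index modification and the density-based DSC are not reproduced here.

        def xy_to_hilbert(order, x, y):
            """Map grid cell (x, y) on a 2**order by 2**order grid to its distance
            along the standard Hilbert curve."""
            n = 2 ** order
            d = 0
            s = n // 2
            while s > 0:
                rx = 1 if x & s else 0
                ry = 1 if y & s else 0
                d += s * s * ((3 * rx) ^ ry)      # which quadrant, scaled by its cell count
                if ry == 0:                        # rotate/reflect so the curve stays continuous
                    if rx == 1:
                        x, y = n - 1 - x, n - 1 - y
                    x, y = y, x
                s //= 2
            return d

        # order-3 curve over an 8 x 8 grid: nearby cells tend to receive nearby indices
        print(xy_to_hilbert(3, 5, 2))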

  6. Sustainable storage of data. Energy conservation by sustainable storage in colleges; Duurzame opslag van data. Energiebesparing door duurzame opslag binnen het hoger onderwijs

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2012-11-15

    SURFnet, the Dutch cooperative organization for colleges and universities in the field of ICT, issued another innovation scheme in the field of sustainability and ICT for 2012. The aim of the innovation scheme is to encourage institutions to start projects that contribute structurally to sustainability by means of ICT. In this context the College of Arnhem and Nijmegen (HAN) executed a project investigating the possibilities to save energy through sustainable storage of data in its educational facilities. [Dutch] SURFnet, the cooperative organization of universities of applied sciences and universities in the field of ICT, has again issued an innovation scheme in the field of sustainability and ICT for 2012. The aim of the innovation scheme is to encourage institutions to start projects that contribute structurally to sustainability by means of, or with, ICT. In this context the Hogeschool van Arnhem en Nijmegen (HAN) carried out a project investigating the possibilities of saving energy by means of sustainable data storage within its educational institution.

  7. Parametric Grid Information in the DOE Knowledge Base: Data Preparation, Storage, and Access

    International Nuclear Information System (INIS)

    Hipp, James R.; Moore, Susan G.; Myers, Stephen C.; Schultz, Craig A.; Shepherd, Ellen; Young, Christopher J.

    1999-01-01

    The parametric grid capability of the Knowledge Base provides an efficient, robust way to store and access interpolatable information which is needed to monitor the Comprehensive Nuclear Test Ban Treaty. To meet both the accuracy and performance requirements of operational monitoring systems, we use a new approach which combines the error estimation of kriging with the speed and robustness of Natural Neighbor Interpolation (NNI). The method involves three basic steps: data preparation (DP), data storage (DS), and data access (DA). The goal of data preparation is to process a set of raw data points to produce a sufficient basis for accurate NNI of value and error estimates in the Data Access step. This basis includes a set of nodes and their connectedness, collectively known as a tessellation, and the corresponding values and errors that map to each node, which we call surfaces. In many cases, the raw data point distribution is not sufficiently dense to guarantee accurate error estimates from the NNI, so the original data set must be densified using a newly developed interpolation technique known as Modified Bayesian Kriging. Once appropriate kriging parameters have been determined by variogram analysis, the optimum basis for NNI is determined in a process we call mesh refinement, which involves iterative kriging, new node insertion, and Delaunay triangle smoothing. The process terminates when an NNI basis has been calculated which will fit the kriged values within a specified tolerance. In the data storage step, the tessellations and surfaces are stored in the Knowledge Base, currently in a binary flatfile format but perhaps in the future in a spatially-indexed database. Finally, in the data access step, a client application makes a request for an interpolated value, which triggers a data fetch from the Knowledge Base through the libKBI interface, a walking triangle search for the containing triangle, and finally the NNI interpolation

  8. Three-dimensional integration of nanotechnologies for computing and data storage on a single chip

    Science.gov (United States)

    Shulaker, Max M.; Hills, Gage; Park, Rebecca S.; Howe, Roger T.; Saraswat, Krishna; Wong, H.-S. Philip; Mitra, Subhasish

    2017-07-01

    The computing demands of future data-intensive applications will greatly exceed the capabilities of current electronics, and are unlikely to be met by isolated improvements in transistors, data storage technologies or integrated circuit architectures alone. Instead, transformative nanosystems, which use new nanotechnologies to simultaneously realize improved devices and new integrated circuit architectures, are required. Here we present a prototype of such a transformative nanosystem. It consists of more than one million resistive random-access memory cells and more than two million carbon-nanotube field-effect transistors—promising new nanotechnologies for use in energy-efficient digital logic circuits and for dense data storage—fabricated on vertically stacked layers in a single chip. Unlike conventional integrated circuit architectures, the layered fabrication realizes a three-dimensional integrated circuit architecture with fine-grained and dense vertical connectivity between layers of computing, data storage, and input and output (in this instance, sensing). As a result, our nanosystem can capture massive amounts of data every second, store it directly on-chip, perform in situ processing of the captured data, and produce ‘highly processed’ information. As a working prototype, our nanosystem senses and classifies ambient gases. Furthermore, because the layers are fabricated on top of silicon logic circuitry, our nanosystem is compatible with existing infrastructure for silicon-based technologies. Such complex nano-electronic systems will be essential for future high-performance and highly energy-efficient electronic systems.

  9. Long-time data storage: relevant time scales

    NARCIS (Netherlands)

    Elwenspoek, Michael Curt

    2011-01-01

    Dynamic processes relevant for long-time storage of information about human kind are discussed, ranging from biological and geological processes to the lifecycle of stars and the expansion of the universe. Major results are that life will end ultimately and the remaining time that the earth is

  10. Online mass storage system detailed requirements document

    Science.gov (United States)

    1976-01-01

    The requirements for an online high density magnetic tape data storage system that can be implemented in a multipurpose, multihost environment are set forth. The objective of the mass storage system is to provide a facility for the compact storage of large quantities of data and to make this data accessible to computer systems with minimum operator handling. The results of a market survey and analysis of candidate vendors who presently market high density tape data storage systems are included.

  11. Comparative analysis on operation strategies of CCHP system with cool thermal storage for a data center

    International Nuclear Information System (INIS)

    Song, Xu; Liu, Liuchen; Zhu, Tong; Zhang, Tao; Wu, Zhu

    2016-01-01

    Highlights: • Load characteristics of the data center make a good match with CCHP systems. • TRNSYS models were used to simulate the discussed CCHP system in a data center. • Comprehensive system performance under two operation strategies was evaluated. • Cool thermal storage was introduced to reuse the energy surplus of the FEL system. • Suitable principles of equipment selection for a FEL system were proposed. - Abstract: Combined Cooling, Heating, and Power (CCHP) systems with cool thermal storage can provide an appropriate energy supply for data centers. In this work, we evaluate the CCHP system performance under two different operation strategies, i.e., following thermal load (FTL) and following electric load (FEL). The evaluation is performed through a case study by using TRNSYS software. In the FEL system, the amount of cool thermal energy generated by the absorption chillers is larger than the cooling load and it can therefore be stored and reused at off-peak times. Results indicate that systems under both operation strategies have advantages in the fields of energy saving and environmental protection. The largest percentages of reduction of primary energy consumption, CO2 emissions, and operation cost for the FEL system are 18.5%, 37.4% and 46.5%, respectively. Besides, the system performance is closely dependent on the equipment selection. The relation between the amount of energy recovered through cool thermal storage and the primary energy consumption has also been taken into account. Moreover, the introduction of cool thermal storage can adjust the heat-to-power ratio on the energy supply side close to that on the consumer side and consequently promote system flexibility and energy efficiency.

  12. Eternal 5D optical data storage in glass (Conference Presentation)

    Science.gov (United States)

    Kazansky, Peter G.; Cerkauskaite, Ausra; Drevinskas, Rokas; Zhang, Jingyu

    2016-09-01

    A decade ago it was discovered that femtosecond laser writing can create self-organized subwavelength structures with record small features of 20 nm in the volume of silica glass. On the macroscopic scale the self-assembled nanostructure behaves as a uniaxial optical crystal with negative birefringence. The optical anisotropy, which results from the alignment of nano-platelets, referred to as form birefringence, is of the same order of magnitude as the positive birefringence of crystalline quartz. The two independent parameters describing birefringence, the slow axis orientation (4th dimension) and the strength of retardance (5th dimension), are explored for the optical encoding of information in addition to the three spatial coordinates. The slow axis orientation and the retardance are independently manipulated by the polarization and intensity of the femtosecond laser beam. The data optically encoded into five dimensions is successfully retrieved by quantitative birefringence measurements. The storage allows unprecedented parameters including hundreds of terabytes per disc data capacity and thermal stability up to 1000 °C. Even at elevated temperatures of 160 °C, the extrapolated decay time of nanogratings is comparable with the age of the Universe - 13.8 billion years. The recording of digital documents which will survive the human race, including eternal copies of the Universal Declaration of Human Rights, Newton's Opticks, the King James Bible and Magna Carta, is a vital step towards an eternal archive. Additionally, a number of projects (such as Time Capsule to Mars, MoonMail, and the Google Lunar XPRIZE) could benefit from the technique's extreme durability, which fulfills a crucial requirement for storage on the Moon or Mars.

  13. Geochemical modelling of CO2-water-rock interactions for carbon storage : data requirements and outputs

    International Nuclear Information System (INIS)

    Kirste, D.

    2008-01-01

    A geochemical model was used to predict the short-term and long-term behaviour of carbon dioxide (CO2), formation water, and reservoir mineralogy at a carbon sequestration site. Data requirements for the geochemical model included detailed mineral petrography; formation water chemistry; thermodynamic and kinetic data for mineral phases; and rock and reservoir physical characteristics. The model was used to determine the types of outputs expected for potential CO2 storage sites and natural analogues. Reaction path modelling was conducted to determine the total reactivity or CO2 storage capability of the rock by applying static equilibrium and kinetic simulations. Potential product phases were identified using the modelling technique, which also enabled the identification of the chemical evolution of the system. Results of the modelling study demonstrated that changes in porosity and permeability over time should be considered during the site selection process.

  14. Tiered Storage For LHC

    CERN Multimedia

    CERN. Geneva; Hanushevsky, Andrew

    2012-01-01

    For more than a year, the ATLAS Western Tier 2 (WT2) at the SLAC National Accelerator Laboratory has been successfully operating a two-tiered storage system based on Xrootd's flexible cross-cluster data placement framework, the File Residency Manager. The architecture allows WT2 to provide both high-performance storage at the higher tier for ATLAS analysis jobs and large, low-cost disk capacity at the lower tier. Data automatically moves between the two storage tiers based on the needs of analysis jobs and is completely transparent to the jobs.

  15. Long-term data storage in diamond

    OpenAIRE

    Dhomkar, Siddharth; Henshaw, Jacob; Jayakumar, Harishankar; Meriles, Carlos A.

    2016-01-01

    The negatively charged nitrogen vacancy (NV−) center in diamond is the focus of widespread attention for applications ranging from quantum information processing to nanoscale metrology. Although most work so far has focused on the NV− optical and spin properties, control of the charge state promises complementary opportunities. One intriguing possibility is the long-term storage of information, a notion we hereby introduce using NV-rich, type 1b diamond. As a proof of principle, we use multic...

  16. Rack Aware Data Placement for Network Consumption in Erasure-Coded Clustered Storage Systems

    Directory of Open Access Journals (Sweden)

    Bilin Shao

    2018-06-01

    Full Text Available The amount of encoded data replication in an erasure-coded clustered storage system has a great impact on the bandwidth consumption and network latency, mostly during data reconstruction. To address the causes of excess data transmission between racks, a rack-aware data block placement method is proposed. In order to ensure rack-level fault tolerance and reduce the frequency and amount of cross-rack data transmission during data reconstruction, the method deploys partial data block concentration to store the data blocks of a file in fewer racks. Theoretical analysis and simulation results show that our proposed strategy greatly reduces the frequency and data volume of cross-rack transmission during data reconstruction. At the same time, it has better performance than the typical random distribution method in terms of network usage and data reconstruction efficiency.
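
    The "partial data block concentration" idea can be sketched with a toy placement routine: cap the number of blocks any one rack may hold (so that losing a whole rack never exceeds what the erasure code can repair) and otherwise fill racks up to that cap, which keeps a file in as few racks as possible and so limits cross-rack repair traffic. The Python below is an illustrative greedy sketch under those assumptions, not the placement algorithm from the paper; the RS(6, 3) example parameters are likewise made up.

        def place_blocks(block_ids, racks, max_per_rack):
            """Assign erasure-coded blocks to racks: fill each rack up to the fault-tolerance
            cap before moving on, concentrating the file into as few racks as possible."""
            placement = {}                     # block id -> rack name
            load = {r: 0 for r in racks}
            rack_iter = iter(racks)
            current = next(rack_iter)
            for b in block_ids:
                if load[current] >= max_per_rack:
                    current = next(rack_iter)  # open the next rack only when the cap is reached
                placement[b] = current
                load[current] += 1
            return placement

        # example: an RS(6, 3) coded file; with at most 3 blocks per rack, the loss of any
        # single rack removes no more blocks than the 3 parities can reconstruct
        blocks = [f"blk{i}" for i in range(9)]
        print(place_blocks(blocks, ["rackA", "rackB", "rackC", "rackD"], max_per_rack=3))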

  17. Eosin blue dye based poly(methacrylate) films for data storage

    Science.gov (United States)

    Sankar, Deepa; Palanisamy, P. K.; Manickasundaram, S.; Kannan, P.

    2006-06-01

    Eosin dye based poly(methacrylates) with variation in the number of methylene spacers have been prepared by a free radical polymerization process. The utility of the polymers for high-density optical data storage using holography has been studied by grating formation with the 514.5 nm line of an argon ion laser as the source. The influence of various parameters on the diffraction efficiency of the polymers has been studied. The effect of increasing the number of methylene spacers attached to the eosin blue dye on the diffraction efficiency of the grating formed has also been discussed. Optical microscopic observations showing grating formation in the polymers have also been presented.

  18. Efficient storage, retrieval and analysis of poker hands: An adaptive data framework

    Directory of Open Access Journals (Sweden)

    Gorawski Marcin

    2017-12-01

    Full Text Available In online gambling, poker hands are one of the most popular and fundamental units of the game state and can be considered objects comprising all the events that pertain to the single hand played. In a situation where tens of millions of poker hands are produced daily and need to be stored and analysed quickly, the use of relational databases no longer provides high scalability and performance stability. The purpose of this paper is to present an efficient way of storing and retrieving poker hands in a big data environment. We propose a new, read-optimised storage model that offers significant data access improvements over traditional database systems as well as the existing Hadoop file formats such as ORC, RCFile or SequenceFile. Through index-oriented partition elimination, our file format allows reducing the number of file splits that need to be accessed, and improves query response time by up to three orders of magnitude in comparison with other approaches. In addition, our file format supports a range of new indexing structures to facilitate fast row retrieval at a split level. Both index types operate independently of the Hive execution context and allow other big data computational frameworks such as MapReduce or Spark to benefit from the optimized data access path to the hand information. Moreover, we present a detailed analysis of our storage model and its supporting index structures, and how they are organised in the overall data framework. We also describe in detail how predicate-based expression trees are used to build effective file-level execution plans. Our experimental tests conducted on a production cluster, holding nearly 40 billion hands which span over 4000 partitions, show that multi-way partition pruning outperforms other existing file formats, resulting in faster query execution times and better cluster utilisation.
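
    Index-oriented partition elimination of the kind described above is easy to illustrate: if each partition (file split) records the minimum and maximum value of a filtered column, any partition whose range cannot satisfy the predicate can be skipped without being opened. The Python below is a minimal, hedged sketch of that idea with made-up file names and timestamps; it is not the paper's file format or its expression-tree planner.

        from dataclasses import dataclass

        @dataclass
        class PartitionStats:
            path: str
            min_ts: int     # per-partition minimum of the predicate column (e.g. hand timestamp)
            max_ts: int     # per-partition maximum of the same column

        def prune_partitions(partitions, ts_from, ts_to):
            """Keep only partitions whose [min_ts, max_ts] range overlaps the queried interval;
            everything else can be eliminated without reading a single row."""
            return [p for p in partitions if p.max_ts >= ts_from and p.min_ts <= ts_to]

        parts = [PartitionStats("hands_2016_01.dat", 1451606400, 1454284799),
                 PartitionStats("hands_2016_02.dat", 1454284800, 1456790399)]
        print([p.path for p in prune_partitions(parts, 1454300000, 1454400000)])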

  19. Phase modulated high density collinear holographic data storage system with phase-retrieval reference beam locking and orthogonal reference encoding.

    Science.gov (United States)

    Liu, Jinpeng; Horimai, Hideyoshi; Lin, Xiao; Huang, Yong; Tan, Xiaodi

    2018-02-19

    A novel phase modulation method for holographic data storage with phase-retrieval reference beam locking is proposed and incorporated into an amplitude-encoding collinear holographic storage system. Unlike the conventional phase retrieval method, the proposed method locks the data page and the corresponding phase-retrieval interference beam together at the same location with a sequential recording process, which eliminates piezoelectric elements, phase shift arrays and extra interference beams, making the system more compact and phase retrieval easier. To evaluate our proposed phase modulation method, we recorded and then recovered data pages with multilevel phase modulation using two spatial light modulators experimentally. For 4-level, 8-level, and 16-level phase modulation, we achieved the bit error rate (BER) of 0.3%, 1.5% and 6.6% respectively. To further improve data storage density, an orthogonal reference encoding multiplexing method at the same position of medium is also proposed and validated experimentally. We increased the code rate of pure 3/16 amplitude encoding method from 0.5 up to 1.0 and 1.5 using 4-level and 8-level phase modulation respectively.

  20. Spent fuel storage requirements: the need for away-from-reactor storage

    International Nuclear Information System (INIS)

    1980-01-01

    The analyses of on-site storage capabilities of domestic utilities and estimates of timing and magnitude of away-from-reactor (AFR) storage requirements were presented in the report DOE/ET-0075 entitled Spent Fuel Storage Requirements: The Need For Away-From-Reactor Storage published in February 1979 by the US Department of Energy. Since utility plans and requirements continue to change with time, a need exists to update the AFR requirements estimates as appropriate. This short report updates the results presented in DOE/ET-0075 to reflect recent data on reactor operations and spent fuel storage. In addition to the updates of cases representing the range of AFR requirements in DOE/ET-0075, new cases of interest reflecting utility and regulatory trends are presented

  1. AnalyzeThis: An Analysis Workflow-Aware Storage System

    Energy Technology Data Exchange (ETDEWEB)

    Sim, Hyogi [ORNL; Kim, Youngjae [ORNL; Vazhkudai, Sudharshan S [ORNL; Tiwari, Devesh [ORNL; Anwar, Ali [Virginia Tech, Blacksburg, VA; Butt, Ali R [Virginia Tech, Blacksburg, VA; Ramakrishnan, Lavanya [Lawrence Berkeley National Laboratory (LBNL)

    2015-01-01

    The need for novel data analysis is urgent in the face of a data deluge from modern applications. Traditional approaches to data analysis incur significant data movement costs, moving data back and forth between the storage system and the processor. Emerging Active Flash devices enable processing on the flash, where the data already resides. An array of such Active Flash devices allows us to revisit how analysis workflows interact with storage systems. By seamlessly blending together the flash storage and data analysis, we create an analysis workflow-aware storage system, AnalyzeThis. Our guiding principle is that analysis-awareness be deeply ingrained in each and every layer of the storage, elevating data analyses as first-class citizens, and transforming AnalyzeThis into a potent analytics-aware appliance. We implement the AnalyzeThis storage system atop an emulation platform of the Active Flash array. Our results indicate that AnalyzeThis is viable, expediting workflow execution and minimizing data movement.

  2. The Experiment Data Depot: A Web-Based Software Tool for Biological Experimental Data Storage, Sharing, and Visualization

    DEFF Research Database (Denmark)

    Morell, William C.; Birkel, Garrett W.; Forrer, Mark

    2017-01-01

    Although recent advances in synthetic biology allow us to produce biological designs more efficiently than ever, our ability to predict the end result of these designs is still nascent. Predictive models require large amounts of high quality data to be parametrized and tested, which are not gener... algorithms. In this paper, we describe EDD and showcase its utility for three different use cases: storage of characterized synthetic biology parts, leveraging proteomics data to improve biofuel yield, and the use of extracellular metabolite concentrations to predict intracellular metabolic fluxes.

  3. Determining water storage depletion within Iran by assimilating GRACE data into the W3RA hydrological model

    Science.gov (United States)

    Khaki, M.; Forootan, E.; Kuhn, M.; Awange, J.; van Dijk, A. I. J. M.; Schumacher, M.; Sharifi, M. A.

    2018-04-01

    Groundwater depletion, due to both unsustainable water use and a decrease in precipitation, has been reported in many parts of Iran. In order to analyze these changes during the recent decade, in this study we assimilate Terrestrial Water Storage (TWS) data from the Gravity Recovery And Climate Experiment (GRACE) into the World-Wide Water Resources Assessment (W3RA) model. This assimilation improves model-derived water storage simulations by introducing missing trends and correcting the amplitude and phase of seasonal water storage variations. The Ensemble Square-Root Filter (EnSRF) technique is applied, which showed stable performance in propagating errors during the assimilation period (2002-2012). Our focus is on sub-surface water storage changes, including groundwater and soil moisture variations, within six major drainage divisions covering the whole of Iran: its eastern part (East), Caspian Sea, Centre, Sarakhs, Persian Gulf and Oman Sea, and Lake Urmia. Results indicate an average of -8.9 mm/year groundwater reduction within Iran during the period 2002 to 2012. A similar decrease is also observed in soil moisture storage, especially after 2005. We further apply the canonical correlation analysis (CCA) technique to relate sub-surface water storage changes to climate (e.g., precipitation) and anthropogenic (e.g., farming) impacts. Results indicate an average correlation of 0.81 between rainfall and groundwater variations and also a large impact of anthropogenic activities (mainly irrigation) on Iran's water storage depletion.
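
    The ensemble data assimilation step can be made concrete with a toy update. The paper uses the deterministic Ensemble Square-Root Filter (EnSRF); the Python sketch below instead shows the closely related stochastic (perturbed-observation) ensemble Kalman update, purely to illustrate how a basin-mean GRACE TWS observation would nudge an ensemble of modelled storage states. Shapes, the observation operator and all numbers are assumptions, not values from the study.

        import numpy as np

        def enkf_update(X, y, H, R, seed=0):
            """One stochastic ensemble Kalman update (simplified stand-in for EnSRF).
            X: (n_state, n_ens) forecast ensemble of model storages
            y: (n_obs,)         observation vector (e.g. basin-mean GRACE TWS)
            H: (n_obs, n_state) linear observation operator
            R: (n_obs, n_obs)   observation error covariance"""
            n_ens = X.shape[1]
            A = X - X.mean(axis=1, keepdims=True)          # ensemble anomalies
            P = A @ A.T / (n_ens - 1)                      # sample forecast covariance
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
            rng = np.random.default_rng(seed)
            Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_ens).T  # perturbed obs
            return X + K @ (Y - H @ X)                     # analysis ensemble

        # toy usage: 3 storage states, 20 ensemble members, 1 TWS observation
        X = np.random.default_rng(1).normal(size=(3, 20))
        H = np.ones((1, 3))                                # TWS observed as the sum of the storages
        Xa = enkf_update(X, np.array([0.5]), H, R=np.eye(1) * 0.1)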

  4. Intersecting-storage-rings inclusive data and the charge ratio of cosmic-ray muons

    CERN Document Server

    Yen, E

    1973-01-01

    The (μ+/μ−) ratio at sea level has been calculated by Frazer et al (1972) using the hypothesis of limiting fragmentation together with the inclusive data below 30 GeV/c. They obtained a value of μ+/μ− ≈ 1.56, to be compared with the experimental value of 1.2 to 1.4. The ratio has been calculated using the recent ISR (CERN Intersecting Storage Rings) data, and a value of μ+/μ− ≈ 1.40 was obtained, in good agreement with the experimental result. (8 refs).

  5. A Survey on Data Storage and Information Discovery in the WSANs-Based Edge Computing Systems.

    Science.gov (United States)

    Ma, Xingpo; Liang, Junbin; Liu, Renping; Ni, Wei; Li, Yin; Li, Ran; Ma, Wenpeng; Qi, Chuanda

    2018-02-10

    In the post-Cloud era, the proliferation of Internet of Things (IoT) has pushed the horizon of Edge computing, which is a new computing paradigm with data processed at the edge of the network. As the important systems of Edge computing, wireless sensor and actuator networks (WSANs) play an important role in collecting and processing the sensing data from the surrounding environment as well as taking actions on the events happening in the environment. In WSANs, in-network data storage and information discovery schemes with high energy efficiency, high load balance and low latency are needed because of the limited resources of the sensor nodes and the real-time requirement of some specific applications, such as putting out a big fire in a forest. In this article, the existing schemes of WSANs on data storage and information discovery are surveyed with detailed analysis on their advancements and shortcomings, and possible solutions are proposed on how to achieve high efficiency, good load balance, and perfect real-time performances at the same time, hoping that it can provide a good reference for the future research of the WSANs-based Edge computing systems.

  6. A Survey on Data Storage and Information Discovery in the WSANs-Based Edge Computing Systems

    Science.gov (United States)

    Liang, Junbin; Liu, Renping; Ni, Wei; Li, Yin; Li, Ran; Ma, Wenpeng; Qi, Chuanda

    2018-01-01

    In the post-Cloud era, the proliferation of Internet of Things (IoT) has pushed the horizon of Edge computing, which is a new computing paradigm with data processed at the edge of the network. As the important systems of Edge computing, wireless sensor and actuator networks (WSANs) play an important role in collecting and processing the sensing data from the surrounding environment as well as taking actions on the events happening in the environment. In WSANs, in-network data storage and information discovery schemes with high energy efficiency, high load balance and low latency are needed because of the limited resources of the sensor nodes and the real-time requirement of some specific applications, such as putting out a big fire in a forest. In this article, the existing schemes of WSANs on data storage and information discovery are surveyed with detailed analysis on their advancements and shortcomings, and possible solutions are proposed on how to achieve high efficiency, good load balance, and perfect real-time performances at the same time, hoping that it can provide a good reference for the future research of the WSANs-based Edge computing systems. PMID:29439442

  7. A Survey on Data Storage and Information Discovery in the WSANs-Based Edge Computing Systems

    Directory of Open Access Journals (Sweden)

    Xingpo Ma

    2018-02-01

    Full Text Available In the post-Cloud era, the proliferation of Internet of Things (IoT) has pushed the horizon of Edge computing, which is a new computing paradigm with data processed at the edge of the network. As the important systems of Edge computing, wireless sensor and actuator networks (WSANs) play an important role in collecting and processing the sensing data from the surrounding environment as well as taking actions on the events happening in the environment. In WSANs, in-network data storage and information discovery schemes with high energy efficiency, high load balance and low latency are needed because of the limited resources of the sensor nodes and the real-time requirement of some specific applications, such as putting out a big fire in a forest. In this article, the existing schemes of WSANs on data storage and information discovery are surveyed with detailed analysis on their advancements and shortcomings, and possible solutions are proposed on how to achieve high efficiency, good load balance, and perfect real-time performances at the same time, hoping that it can provide a good reference for the future research of the WSANs-based Edge computing systems.

  8. Building a mass storage system for physics applications

    International Nuclear Information System (INIS)

    Holmes, H.; Loken, S.

    1991-03-01

    The IEEE Mass Storage Reference Model and forthcoming standards based on it provide a standardized architecture to facilitate designing and building mass storage systems, and standard interfaces so that hardware and software from different vendors can interoperate in providing mass storage capabilities. A key concept of this architecture is the separation of control and data flows. This separation allows a smaller machine to provide control functions, while the data can flow directly between high-performance channels. Another key concept is the layering of the file system and the storage functions. This layering allows the designers of the mass storage system to focus on storage functions, which can support a variety of file systems, such as the Network File System, the Andrew File System, and others. The mass storage system provides location-independent file naming, essential if files are to be migrated to different storage devices without requiring changes in application programs. Physics data analysis applications are particularly challenging for mass storage systems because they stream vast amounts of data through analysis applications. Special mechanisms are required, to handle the high data rates and to avoid upsetting the caching mechanisms commonly used for smaller, repetitive-use files. High data rates are facilitated by direct channel connections, where, for example, a dual-ported drive will be positioned by the mass storage controller on one channel, then the data will flow on a second channel directly into the user machine, or directly to a high capacity network, greatly reducing the I/O capacity required in the mass storage control computer. Intelligent storage allocation can be used to bypass the cache devices entirely when large files are being moved

  9. Data collection and storage in long-term ecological and evolutionary studies: The Mongoose 2000 system.

    Science.gov (United States)

    Marshall, Harry H; Griffiths, David J; Mwanguhya, Francis; Businge, Robert; Griffiths, Amber G F; Kyabulima, Solomon; Mwesige, Kenneth; Sanderson, Jennifer L; Thompson, Faye J; Vitikainen, Emma I K; Cant, Michael A

    2018-01-01

    Studying ecological and evolutionary processes in the natural world often requires research projects to follow multiple individuals in the wild over many years. These projects have provided significant advances but may also be hampered by needing to accurately and efficiently collect and store multiple streams of the data from multiple individuals concurrently. The increase in the availability and sophistication of portable computers (smartphones and tablets) and the applications that run on them has the potential to address many of these data collection and storage issues. In this paper we describe the challenges faced by one such long-term, individual-based research project: the Banded Mongoose Research Project in Uganda. We describe a system we have developed called Mongoose 2000 that utilises the potential of apps and portable computers to meet these challenges. We discuss the benefits and limitations of employing such a system in a long-term research project. The app and source code for the Mongoose 2000 system are freely available and we detail how it might be used to aid data collection and storage in other long-term individual-based projects.

  10. Anamorphic and Local Characterization of a Holographic Data Storage System with a Liquid-Crystal on Silicon Microdisplay as Data Pager

    Directory of Open Access Journals (Sweden)

    Fco. Javier Martínez-Guardiola

    2018-06-01

    Full Text Available In this paper, we present a method to characterize a complete optical Holographic Data Storage System (HDSS), where we identify the elements that limit the capacity to register and restore the information introduced by means of a Liquid Crystal on Silicon (LCoS) microdisplay as the data pager. In the literature, it has been shown that LCoS exhibits an anamorphic and frequency dependent effect when periodic optical elements are addressed to LCoS microdisplays in diffractive optics applications. We tested whether this effect is still relevant in the application to HDSS, where non-periodic binary elements are applied, as is the case in binary data pages codified by Binary Intensity Modulation (BIM). To test the limits in storage data density and in spatial bandwidth of the HDSS, we used anamorphic patterns with different resolutions. We analyzed the performance of the microdisplay in situ using figures of merit adapted to HDSS. A local characterization across the aperture of the system was also demonstrated with our proposed methodology, which results in an estimation of the illumination uniformity and the contrast generated by the LCoS. We show the extent of the increase in the Bit Error Rate (BER) when introducing a photopolymer as the recording material, thus all the important elements in a HDSS are considered in the characterization methodology demonstrated in this paper.

  11. Oxidation of graphene 'bow tie' nanofuses for permanent, write-once-read-many data storage devices.

    Science.gov (United States)

    Pearson, A C; Jamieson, S; Linford, M R; Lunt, B M; Davis, R C

    2013-04-05

    We have fabricated nanoscale fuses from CVD graphene sheets with a 'bow tie' geometry for write-once-read-many data storage applications. The fuses are programmed using thermal oxidation driven by Joule heating. Fuses that were 250 nm wide with 2.5 μm between contact pads were programmed with average voltages and powers of 4.9 V and 2.1 mW, respectively. The required voltages and powers decrease with decreasing fuse sizes. Graphene shows extreme chemical and electronic stability; fuses require temperatures of about 400 °C for oxidation, indicating that they are excellent candidates for permanent data storage. To further demonstrate this stability, fuses were subjected to applied biases in excess of typical read voltages; stable currents were observed when a voltage of 10 V was applied to the devices in the off state and 1 V in the on state for 90 h each.

  12. Data security in genomics: A review of Australian privacy requirements and their relation to cryptography in data storage.

    Science.gov (United States)

    Schlosberg, Arran

    2016-01-01

    The advent of next-generation sequencing (NGS) brings with it a need to manage large volumes of patient data in a manner that is compliant with both privacy laws and long-term archival needs. Outside of the realm of genomics there is a need in the broader medical community to store data, and although, radiology aside, the volume may be less than that of NGS, the concepts discussed herein are similarly relevant. The relation of so-called "privacy principles" to data protection and cryptographic techniques is explored with regard to the archival and backup storage of health data in Australia, and an example implementation of secure management of genomic archives is proposed with regard to this relation. Readers are presented with sufficient detail to have informed discussions - when implementing laboratory data protocols - with experts in the field.

  13. Architecture and Implementation of a Scalable Sensor Data Storage and Analysis System Using Cloud Computing and Big Data Technologies

    Directory of Open Access Journals (Sweden)

    Galip Aydin

    2015-01-01

    Full Text Available Sensors are becoming ubiquitous. From almost any type of industrial application to intelligent vehicles, smart city applications, and healthcare applications, we see a steady growth in the usage of various types of sensors. The rate of increase in the amount of data produced by these sensors is much more dramatic, since sensors usually continuously produce data. It becomes crucial for these data to be stored for future reference and to be analyzed for finding valuable information, such as fault diagnosis information. In this paper we describe a scalable and distributed architecture for sensor data collection, storage, and analysis. The system uses several open source technologies and runs on a cluster of virtual servers. We use GPS sensors as the data source and run machine-learning algorithms for data analysis.

  14. The Storage of Thermal Reactor Safety Analysis data (STRESA)

    International Nuclear Information System (INIS)

    Tanarro Colodron, J.

    2016-01-01

    Full text: Storage of Thermal Reactor Safety Analysis data (STRESA) is an online information system that contains three technical databases: 1) European Nuclear Research Facilities, open to all online visitors; 2) Nuclear Experiments, available only to registered users; 3) Results Data, the core content of the information system, whose availability depends on the role and organisation of each user. Its main purpose is to facilitate the exchange of experimental data produced by large Euratom-funded scientific projects addressing severe accidents, providing at the same time a secure repository for this information. Due to its purpose and architecture, it has become an important asset for networks of excellence such as SARNET or NUGENIA. The Severe Accident Research Network of Excellence (SARNET) was set up in 2004 under the aegis of the Euratom research Framework Programmes to study severe accidents in water-cooled nuclear power plants. Coordinated by the IRSN, SARNET unites 43 organizations involved in research on nuclear reactor safety in 18 European countries plus the USA, Canada, South Korea and India. In 2013, SARNET became fully integrated in the Technical Area N2 (TA2), named “Severe accidents”, of the NUGENIA association, devoted to R&D on fission technology of Generation II and III. (author)

  15. ROOT - A C++ Framework for Petabyte Data Storage, Statistical Analysis and Visualization

    CERN Document Server

    Naumann, Axel; Ballintijn, Maarten; Bellenot, Bertrand; Biskup, Marek; Brun, Rene; Buncic, Nenad; Canal, Philippe; Casadei, Diego; Couet, Olivier; Fine, Valery; Franco, Leandro; Ganis, Gerardo; Gheata, Andrei; Gonzalez Maline, David; Goto, Masaharu; Iwaszkiewicz, Jan; Kreshuk, Anna; Marcos Segura, Diego; Maunder, Richard; Moneta, Lorenzo; Offermann, Eddy; Onuchin, Valeriy; Panacek, Suzanne; Rademakers, Fons; Russo, Paul; Tadel, Matevz

    2009-01-01

    ROOT is an object-oriented C++ framework conceived in the high-energy physics (HEP) community, designed for storing and analyzing petabytes of data in an efficient way. Any instance of a C++ class can be stored into a ROOT file in a machine-independent compressed binary format. In ROOT the TTree object container is optimized for statistical data analysis over very large data sets by using vertical data storage techniques. These containers can span a large number of files on local disks, the web, or a number of different shared file systems. In order to analyze this data, the user can choose from a wide set of mathematical and statistical functions, including linear algebra classes, numerical algorithms such as integration and minimization, and various methods for performing regression analysis (fitting). In particular, the RooFit package allows the user to perform complex data modeling and fitting while the RooStats library provides abstractions and implementations for advance...
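
    As a concrete illustration of the TTree-based, column-wise ("vertical") storage described above, the short PyROOT sketch below writes a toy tree of two double-precision branches to a compressed ROOT file. It assumes a working ROOT installation with its Python bindings; the file name, branch names and toy random data are placeholders.

        import ROOT
        from array import array

        f = ROOT.TFile("example.root", "RECREATE")        # compressed, machine-independent output file
        t = ROOT.TTree("events", "toy event data")

        px = array("d", [0.0])                            # one-element buffers backing the branches
        py = array("d", [0.0])
        t.Branch("px", px, "px/D")                        # each branch becomes a separate column on disk
        t.Branch("py", py, "py/D")

        rnd = ROOT.TRandom3(42)
        for _ in range(10000):
            px[0] = rnd.Gaus(0.0, 1.0)
            py[0] = rnd.Gaus(0.0, 1.0)
            t.Fill()

        t.Write()
        f.Close()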

  16. BE (fuel element)/ZL (interim storage facility) module. Constituents of the fuel BE data base for BE documentation with respect to the disposal planning and the support of the BE container storage administration

    International Nuclear Information System (INIS)

    Hoffmann, V.; Deutsch, S.; Busch, V.; Braun, A.

    2012-01-01

    The securing of spent fuel element disposal from German nuclear power plants is the main task of GNS. This includes the container supply and the disposal analysis and planning. Therefore GNS operates a data base comprising all fuel elements deployed in Germany and all fuel element containers in interim storage facilities. With specific program modules the data base supports optimized repository planning for all spent fuel elements from German NPPs and the supply of required data for future final disposal. The data base has two functional modules: the BE (fuel element) module and the ZL (interim storage) module. The contribution presents the data structure of the modules and details of the data base operation.

  17. High-performance mass storage system for workstations

    Science.gov (United States)

    Chiang, T.; Tang, Y.; Gupta, L.; Cooperman, S.

    1993-01-01

    Reduced Instruction Set Computer (RISC) workstations and Personal Computers (PCs) are very popular tools for office automation, command and control, scientific analysis, database management, and many other applications. However, when using Input/Output (I/O) intensive applications, the RISC workstations and PCs are often overburdened with the tasks of collecting, staging, storing, and distributing data. Even when using standard high-performance peripherals and storage devices, the I/O function can still be a common bottleneck. Therefore, the high-performance mass storage system, developed by Loral AeroSys' Independent Research and Development (IR&D) engineers, can offload a RISC workstation of I/O related functions and provide high-performance I/O functions and external interfaces. The high-performance mass storage system has the capabilities to ingest high-speed real-time data, perform signal or image processing, and stage, archive, and distribute the data. This mass storage system uses a hierarchical storage structure, thus reducing the total data storage cost, while maintaining high-I/O performance. The high-performance mass storage system is a network of low-cost parallel processors and storage devices. The nodes in the network have special I/O functions such as: SCSI controller, Ethernet controller, gateway controller, RS232 controller, IEEE488 controller, and digital/analog converter. The nodes are interconnected through high-speed direct memory access links to form a network. The topology of the network is easily reconfigurable to maximize system throughput for various applications. This high-performance mass storage system takes advantage of a 'busless' architecture for maximum expandability. The mass storage system consists of magnetic disks, a WORM optical disk jukebox, and an 8mm helical scan tape to form a hierarchical storage structure. Commonly used files are kept in the magnetic disk for fast retrieval. The optical disks are used as archive

  18. EXP-PAC: providing comparative analysis and storage of next generation gene expression data.

    Science.gov (United States)

    Church, Philip C; Goscinski, Andrzej; Lefèvre, Christophe

    2012-07-01

    Microarrays and, more recently, RNA sequencing have led to an increase in available gene expression data. How to manage and store this data is becoming a key issue. In response we have developed EXP-PAC, a web-based software package for storage, management and analysis of gene expression and sequence data. Unique to this package is SQL based querying of gene expression data sets, distributed normalization of raw gene expression data and analysis of gene expression data across experiments and species. This package has been populated with lactation data in the international milk genomic consortium web portal (http://milkgenomics.org/). Source code is also available which can be hosted on a Windows, Linux or Mac APACHE server connected to a private or public network (http://mamsap.it.deakin.edu.au/~pcc/Release/EXP_PAC.html). Copyright © 2012 Elsevier Inc. All rights reserved.
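
    As a purely hypothetical illustration of what SQL-based querying of gene expression data can look like (the schema, gene names and values below are invented and are not EXP-PAC's actual schema), the following sketch loads a small expression table into SQLite and retrieves genes whose mean expression across samples exceeds a threshold:

        # Hypothetical sketch of SQL-based querying of gene expression data.
        # Schema and values are invented for illustration; EXP-PAC's real schema may differ.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE expression (gene TEXT, sample TEXT, level REAL)")
        rows = [
            ("CSN2", "milk_d10", 812.0), ("CSN2", "milk_d90", 655.0),
            ("LALBA", "milk_d10", 420.0), ("LALBA", "milk_d90", 390.0),
            ("ACTB", "milk_d10", 95.0), ("ACTB", "milk_d90", 101.0),
        ]
        conn.executemany("INSERT INTO expression VALUES (?, ?, ?)", rows)

        # Genes with mean expression above a chosen threshold, highest first.
        query = """
            SELECT gene, AVG(level) AS mean_level
            FROM expression
            GROUP BY gene
            HAVING AVG(level) > ?
            ORDER BY mean_level DESC
        """
        for gene, mean_level in conn.execute(query, (200.0,)):
            print(gene, round(mean_level, 1))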

  19. Storm: A Manager for Storage Resource in Grid

    International Nuclear Information System (INIS)

    Ghiselli, A.; Magnoni, L.; Zappi, R.

    2009-01-01

    Nowadays, data intensive applications demand high-performance and large-storage systems capable of serving up to several petabytes of storage space. Therefore, common solutions adopted in data centres include Storage Area Networks (SAN) and cluster parallel file systems, such as GPFS from IBM and Lustre from Sun Microsystems. In order to make these storage system solutions available in modern Data Grid architectures, standard interfaces are needed. The Grid Storage Resource Manager (SRM) interface is one of these standard interfaces. Grid storage services implementing the SRM standard provide common capabilities and advanced functionality such as dynamic space allocation and file management on shared storage systems. In this paper, we describe Storm (Storage Resource Manager). Storm is a flexible and high-performing implementation of the standard SRM interface version 2.2. The software architecture of Storm allows for easy integration with different underlying storage systems via a plug-in mechanism. In particular, Storm takes advantage of storage systems based on cluster file systems. Currently, Storm is installed and used in production in various data centres, including the WLCG Italian Tier-1. In addition, Economics and Financial communities, as represented by the EGRID Project, adopt Storm in production as well.

  20. Carbon storage estimation of main forestry ecosystems in Northwest Yunnan Province using remote sensing data

    Science.gov (United States)

    Wang, Jinliang; Wang, Xiaohua; Yue, Cairong; Xu, Tian-shu; Cheng, Pengfei

    2014-05-01

    Estimating regional forest organic carbon pools has become a hot issue in the study of the forest ecosystem carbon cycle. The forest ecosystem in Shangri-La County, Northwest Yunnan Province, is well preserved, and the area of Picea Likiangensis, Quercus Aquifolioides, Pinus Densata and Pinus Yunnanensis amounts to 80% of the total arboreal forest area in Shangri-La County. Based on the field measurements, remote sensing data and GIS analysis, three models were established for carbon storage estimation. The remote sensing information model with the highest accuracy was used to calculate the carbon storages of the four main forest ecosystems. The results showed: (1) the total carbon storage of the four forest ecosystems in Shangri-La is 302.984 TgC, in which the tree layer, shrub layer, herb layer, litter layer and soil layer contribute 60.196 TgC, 5.433 TgC, 1.080 TgC, 3.582 TgC and 232.692 TgC, accounting for 19.87%, 1.79%, 0.36%, 1.18%, 76.80% of the total carbon storage respectively. (2) The order of the carbon storage from high to low is soil layer, tree layer, shrub layer, litter layer and herb layer for the four main forest ecosystems. (3) The total average carbon density of the four main forest ecosystems is 403.480 t/hm2, and the carbon densities of the Picea Likiangensis, Quercus Aquifolioides, Pinus Densata and Pinus Yunnanensis are 576.889 t/hm2, 326.947 t/hm2, 279.993 t/hm2 and 255.792 t/hm2 respectively.

  1. First experiences with large SAN storage and Linux

    International Nuclear Information System (INIS)

    Wezel, Jos van; Marten, Holger; Verstege, Bernhard; Jaeger, Axel

    2004-01-01

    The use of a storage area network (SAN) with Linux opens possibilities for scalable and affordable large data storage and poses a new challenge for cluster computing. The GridKa center uses a commercial parallel file system to create a highly available high-speed data storage using a combination of Fibre Channel (SAN) and Ethernet (LAN) to optimize between data throughput and costs. This article describes the design, implementation and optimizations of the GridKa storage solution which will offer over 400 TB online storage for 600 nodes. Presented are some throughput measurements of one of the largest Linux-based parallel storage systems in the world

  2. Analysis and comparison of NoSQL databases with an introduction to consistent references in big data storage systems

    Science.gov (United States)

    Dziedzic, Adam; Mulawka, Jan

    2014-11-01

    NoSQL is a new approach to data storage and manipulation. The aim of this paper is to gain more insight into NoSQL databases, as we are still in the early stages of understanding when to use them and how to use them in an appropriate way. In this submission descriptions of selected NoSQL databases are presented. Each of the databases is analysed with primary focus on its data model, data access, architecture and practical usage in real applications. Furthermore, the NoSQL databases are compared with respect to data references. The relational databases offer foreign keys, whereas NoSQL databases provide us with limited references. An intermediate model between graph theory and relational algebra which can address the problem should be created. Finally, the proposal of a new approach to the problem of inconsistent references in Big Data storage systems is introduced.
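
    As an illustration of the reference problem discussed above (the collections and identifiers below are invented and not taken from the paper), the following sketch shows how a "reference" in a document store is just a stored identifier, so that a dangling reference must be detected by the application itself rather than by a foreign-key constraint:

        # Illustrative sketch (not from the paper): document-store "references" are
        # plain identifiers, so referential integrity is the application's problem.
        users = {"u1": {"name": "Ada"}}
        orders = {"o1": {"user_id": "u1", "total": 42.0},
                  "o2": {"user_id": "u9", "total": 13.5}}   # "u9" does not exist

        def dangling_references(orders, users):
            """Return order ids whose user_id does not resolve to an existing user."""
            return [oid for oid, doc in orders.items() if doc["user_id"] not in users]

        print(dangling_references(orders, users))   # ['o2'] -- no foreign key prevents this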

  3. Requirements of data acquisition and analysis for condensed matter studies at the weapons neutron research/proton storage ring facility

    International Nuclear Information System (INIS)

    Johnson, M.W.; Goldstone, J.A.; Taylor, A.D.

    1982-11-01

    With the completion of the proton storage ring (PSR) in 1985, the subsequent increase in neutron flux, and the continuing improvement in neutron scattering instruments, a significant improvement in data acquisition and data analysis capabilities will be required. A brief account of the neutron source is given together with the associated neutron scattering instruments. Based on current technology and operating instruments, a projection for 1985 to 1990 of the neutron scattering instruments and their main parameters is given. From the expected data rates and the projected instruments, the size of data storage is estimated and the user requirements are developed. General requirements are outlined with specific requirements in user hardware and software stated. A project time scale to complete the data acquisition and analysis system by 1985 is given

  4. The use of historical data storage and retrieval systems at nuclear power plants

    International Nuclear Information System (INIS)

    Langen, P.A.

    1984-01-01

    In order to assist the nuclear plant operator in the assessment of useful historical plant information, C-E has developed the Historical Data Storage and Retrieval (HDSR) system, which will record, store, recall, and display historical information as it is needed by plant personnel. The system has been designed to respond to the user's needs under a variety of situations. The user is offered the choice of viewing historical data on color video displays as groups or on computer printouts as logs. The graphical representation is based upon a sectoring concept that provides a zoom-in enlargement of sections of the HDSR graphs

  5. Optical storage networking

    Science.gov (United States)

    Mohr, Ulrich

    2001-11-01

    For efficient business continuance and backup of mission-critical data an inter-site storage network is required. Where traditional telecommunications costs are prohibitive for all but the largest organizations, there is an opportunity for regional carriers to deliver an innovative storage service. This session reveals how a combination of optical networking and protocol-aware SAN gateways can provide an extended storage networking platform with the lowest cost of ownership and the highest possible degree of reliability, security and availability. Companies of every size, with mainframe and open-systems environments, can afford to use this integrated service. Three major applications are explained: channel extension, Network Attached Storage (NAS), and Storage Area Networks (SAN), together with how optical networks address their specific requirements. One advantage of DWDM is the ability for protocols such as ESCON, Fibre Channel, ATM and Gigabit Ethernet to be transported natively and simultaneously across a single fiber pair, and the ability to multiplex many individual fiber pairs over a single pair, thereby reducing fiber cost and recovering fiber pairs already in use. An optical storage network enables a new class of service providers, Storage Service Providers (SSPs), aiming to deliver value to the enterprise by managing storage, backup, replication and restoration as an outsourced service.

  6. myPhyloDB: a local web server for the storage and analysis of metagenomics data

    Science.gov (United States)

    myPhyloDB is a user-friendly personal database with a browser-interface designed to facilitate the storage, processing, analysis, and distribution of metagenomics data. MyPhyloDB archives raw sequencing files, and allows for easy selection of project(s)/sample(s) of any combination from all availab...

  7. Optimal micro-mirror tilt angle and sync mark design for digital micro-mirror device based collinear holographic data storage system.

    Science.gov (United States)

    Liu, Jinpeng; Horimai, Hideyoshi; Lin, Xiao; Liu, Jinyan; Huang, Yong; Tan, Xiaodi

    2017-06-01

    The collinear holographic data storage system (CHDSS) is a very promising storage system due to its large storage capacities and high transfer rates in the era of big data. The digital micro-mirror device (DMD) as a spatial light modulator is the key device of the CHDSS due to its high speed, high precision, and broadband working range. To improve the system stability and performance, an optimal micro-mirror tilt angle was theoretically calculated and experimentally confirmed by analyzing the relationship between the tilt angle of the micro-mirror on the DMD and the power profiles of diffraction patterns of the DMD at the Fourier plane. In addition, we proposed a novel chessboard sync mark design in the data page to reduce the system bit error rate under the reduced aperture required to decrease noise and under median exposure conditions. It will provide practical guidance for future DMD-based CHDSS development.

  8. Projection of US LWR spent fuel storage requirements

    International Nuclear Information System (INIS)

    Fletcher, J.F.; Cole, B.M.; Purcell, W.L.; Rau, R.G.

    1982-11-01

    The spent fuel storage requirements projection is based on data supplied for each operating or planned nuclear power plant by the operating utilities. The data supplied by the utilities encompassed details of plant operating history, past records of fuel discharges, current inventories in reactor spent fuel storage pools, and projections of future discharge patterns. Data on storage capacity of storage pools and on characterization of the discharged fuel are also included. The data supplied by the utilities, plus additional data from other appropriate sources, are maintained on a computerized data base by Pacific Northwest Laboratory. The spent fuel requirements projection was based on utility data updated and verified as of December 31, 1981

  9. Continuous inventory in SNM storage facilities

    International Nuclear Information System (INIS)

    Chambers, W.H.

    1975-01-01

    Instrumentation and data processing techniques that provide inexpensive verification of material in storage were investigated. Transfers of special nuclear materials (SNM) into the storage area are accompanied by an automated verification of the container identity, weight, and the radiation signature of the contents. This information is computer-processed and stored for comparison at subsequent transfers and also provides the data base for record purposes. Physical movement of containers across the boundary of the storage area is presently accomplished by operating personnel in order to minimize expensive modifications to existing storage facilities. Personnel entering and leaving the storage area are uniquely identified and also pass through portal monitors capable of detecting small quantities of SNM. Once material is placed on the storage shelves, simple, low-cost container tagging and radiation sensors are activated. A portion of the prescribed gamma signature, obtained by duplicate shelf monitors during the transfer verification, is thus continuously checked against the stored identification data. Radiation detector design is severely constrained by the need to discriminate individual signatures in a high background area and the need for low unit costs. In operation any unauthorized change in signal is analyzed along with auxiliary data from surveillance sensors to activate the appropriate alarms. (auth)
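
    As a purely illustrative sketch of the verification step described above (identifiers, readings and tolerances are invented, not taken from the report), a container's measured weight and gamma signature can be compared against the values stored at the last transfer, raising an alarm when either drifts beyond a tolerance:

        # Hypothetical sketch of comparing current readings against the stored baseline.
        # Values and tolerances are invented for illustration only.
        stored = {"container": "C-017", "weight_g": 1250.0, "gamma_cps": 3400.0}

        def verify(container_id, weight_g, gamma_cps,
                   weight_tol=0.5, gamma_tol=0.05):
            """Return (ok, messages); ok is False if any reading is out of tolerance."""
            msgs = []
            if container_id != stored["container"]:
                msgs.append("identity mismatch")
            if abs(weight_g - stored["weight_g"]) > weight_tol:
                msgs.append("weight outside tolerance")
            if abs(gamma_cps - stored["gamma_cps"]) / stored["gamma_cps"] > gamma_tol:
                msgs.append("gamma signature outside tolerance")
            return (not msgs, msgs)

        print(verify("C-017", 1250.3, 3390.0))   # (True, [])
        print(verify("C-017", 1263.0, 2100.0))   # (False, [...]) -> raise the alarm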

  10. POSIX and Object Distributed Storage Systems Performance Comparison Studies With Real-Life Scenarios in an Experimental Data Taking Context Leveraging OpenStack Swift & Ceph

    Science.gov (United States)

    Poat, M. D.; Lauret, J.; Betts, W.

    2015-12-01

    The STAR online computing infrastructure has become an intensive dynamic system used for first-hand data collection and analysis, resulting in a dense collection of data output. As we have transitioned to our current state, inefficient, limited storage systems have become an impediment to fast feedback to online shift crews. A centrally accessible, scalable and redundant distributed storage system had become a necessity in this environment. OpenStack Swift Object Storage and Ceph Object Storage are two eye-opening technologies, as community use and development have led to success elsewhere. In this contribution, OpenStack Swift and Ceph have been put to the test with single and parallel I/O tests, emulating real world scenarios for data processing and workflows. The Ceph file system storage, offering a POSIX compliant file system mounted similarly to an NFS share, was of particular interest as it aligned with our requirements and was retained as our solution. I/O performance tests were run against the Ceph POSIX file system and have presented surprising results indicating true potential for fast I/O and reliability. STAR's online compute farm has historically been used for job submission and first-hand data analysis. Reusing the online compute farm both to maintain a storage cluster and for job submission will be an efficient use of the current infrastructure.

  11. Development of noSQL data storage for the ATLAS PanDA Monitoring System

    International Nuclear Information System (INIS)

    Potekhin, M

    2012-01-01

    For several years the PanDA Workload Management System has been the basis for distributed production and analysis for the ATLAS experiment at the LHC. Since the start of data taking PanDA usage has ramped up steadily, typically exceeding 500k completed jobs/day by June 2011. The associated monitoring data volume has been rising as well, to levels that present a new set of challenges in the areas of database scalability and monitoring system performance and efficiency. These challenges are being met with an R and D effort aimed at implementing a scalable and efficient monitoring data storage based on a noSQL solution (Cassandra). We present our motivations for using this technology, as well as data design and the techniques used for efficient indexing of the data. We also discuss the hardware requirements as they were determined by testing with actual data and realistic loads.

  12. Specific storage and hydraulic conductivity tomography through the joint inversion of hydraulic heads and self-potential data

    Science.gov (United States)

    Ahmed, A. Soueid; Jardani, A.; Revil, A.; Dupont, J. P.

    2016-03-01

    Transient hydraulic tomography is used to image the heterogeneous hydraulic conductivity and specific storage fields of shallow aquifers using time series of hydraulic head data. Such an ill-posed and non-unique inverse problem can be regularized using some spatial geostatistical characteristic of the two fields. In addition to hydraulic head changes, the flow of water during pumping tests generates an electrical field of electrokinetic nature. These electrical field fluctuations can be passively recorded at the ground surface using a network of non-polarizing electrodes connected to a high impedance (> 10 MOhm) and sensitive (0.1 mV) voltmeter, a method known in geophysics as the self-potential method. We perform a joint inversion of the self-potential and hydraulic head data to image the hydraulic conductivity and specific storage fields. We work on a 3D synthetic confined aquifer and we use the adjoint state method to compute the sensitivities of the hydraulic parameters to the hydraulic head and self-potential data in both steady-state and transient conditions. The inverse problem is solved using the geostatistical quasi-linear algorithm framework of Kitanidis. When the number of piezometers is small, the record of the transient self-potential signals provides useful information to characterize the hydraulic conductivity and specific storage fields. These results show that the self-potential method reveals the heterogeneities of some areas of the aquifer, which could not be captured by the tomography based on the hydraulic heads alone. In our analysis, the improvement in the hydraulic conductivity and specific storage estimates was based on perfect knowledge of the electrical resistivity field. This implies that electrical resistivity will need to be jointly inverted with the hydraulic parameters in future studies and the impact of its uncertainty assessed with respect to the final tomograms of the hydraulic parameters.
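
    For context, the governing equation usually assumed for transient flow in a confined aquifer, which links the hydraulic head data to the two estimated fields, is quoted below as standard background (the abstract itself does not state it):

        S_s \frac{\partial h}{\partial t} = \nabla \cdot \left( K \, \nabla h \right) + Q,

    where h is the hydraulic head, K the hydraulic conductivity field, S_s the specific storage field and Q a source/sink term such as the pumping rate; the self-potential signal adds an independent, electrokinetically generated observable of the same flow field.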

  13. Optimising LAN access to grid enabled storage elements

    International Nuclear Information System (INIS)

    Stewart, G A; Dunne, B; Elwell, A; Millar, A P; Cowan, G A

    2008-01-01

    When operational, the Large Hadron Collider experiments at CERN will collect tens of petabytes of physics data per year. The worldwide LHC computing grid (WLCG) will distribute this data to over two hundred Tier-1 and Tier-2 computing centres, enabling particle physicists around the globe to access the data for analysis. Although different middleware solutions exist for effective management of storage systems at collaborating institutes, the patterns of access envisaged for Tier-2s fall into two distinct categories. The first involves bulk transfer of data between different Grid storage elements using protocols such as GridFTP. This data movement will principally involve writing ESD and AOD files into Tier-2 storage. Secondly, once datasets are stored at a Tier-2, physics analysis jobs will read the data from the local SE. Such jobs require a POSIX-like interface to the storage so that individual physics events can be extracted. In this paper we consider the performance of POSIX-like access to files held in Disk Pool Manager (DPM) storage elements, a popular lightweight SRM storage manager from EGEE

  14. A Note on Interfacing Object Warehouses and Mass Storage Systems for Data Mining Applications

    Science.gov (United States)

    Grossman, Robert L.; Northcutt, Dave

    1996-01-01

    Data mining is the automatic discovery of patterns, associations, and anomalies in data sets. Data mining requires numerically and statistically intensive queries. Our assumption is that data mining requires a specialized data management infrastructure to support the aforementioned intensive queries, but because of the sizes of data involved, this infrastructure is layered over a hierarchical storage system. In this paper, we discuss the architecture of a system which is layered for modularity, but exploits specialized lightweight services to maintain efficiency. Rather than use a full-featured database, for example, we use lightweight object services specialized for data mining. We propose using information repositories between layers so that components on either side of the layer can access information in the repositories to assist in making decisions about data layout, the caching and migration of data, the scheduling of queries, and related matters.

  15. Oxidation of graphene ‘bow tie’ nanofuses for permanent, write-once-read-many data storage devices

    International Nuclear Information System (INIS)

    Pearson, A C; Jamieson, S; Davis, R C; Linford, M R; Lunt, B M

    2013-01-01

    We have fabricated nanoscale fuses from CVD graphene sheets with a ‘bow tie’ geometry for write-once-read-many data storage applications. The fuses are programmed using thermal oxidation driven by Joule heating. Fuses that were 250 nm wide with 2.5 μm between contact pads were programmed with average voltages and powers of 4.9 V and 2.1 mW, respectively. The required voltages and powers decrease with decreasing fuse sizes. Graphene shows extreme chemical and electronic stability; fuses require temperatures of about 400 °C for oxidation, indicating that they are excellent candidates for permanent data storage. To further demonstrate this stability, fuses were subjected to applied biases in excess of typical read voltages; stable currents were observed when a voltage of 10 V was applied to the devices in the off state and 1 V in the on state for 90 h each. (paper)

  16. Oxidation of graphene ‘bow tie’ nanofuses for permanent, write-once-read-many data storage devices

    Science.gov (United States)

    Pearson, A. C.; Jamieson, S.; Linford, M. R.; Lunt, B. M.; Davis, R. C.

    2013-04-01

    We have fabricated nanoscale fuses from CVD graphene sheets with a ‘bow tie’ geometry for write-once-read-many data storage applications. The fuses are programmed using thermal oxidation driven by Joule heating. Fuses that were 250 nm wide with 2.5 μm between contact pads were programmed with average voltages and powers of 4.9 V and 2.1 mW, respectively. The required voltages and powers decrease with decreasing fuse sizes. Graphene shows extreme chemical and electronic stability; fuses require temperatures of about 400 °C for oxidation, indicating that they are excellent candidates for permanent data storage. To further demonstrate this stability, fuses were subjected to applied biases in excess of typical read voltages; stable currents were observed when a voltage of 10 V was applied to the devices in the off state and 1 V in the on state for 90 h each.

  17. Design of a Mission Data Storage and Retrieval System for NASA Dryden Flight Research Center

    Science.gov (United States)

    Lux, Jessica; Downing, Bob; Sheldon, Jack

    2007-01-01

    The Western Aeronautical Test Range (WATR) at the NASA Dryden Flight Research Center (DFRC) employs the WATR Integrated Next Generation System (WINGS) for the processing and display of aeronautical flight data. This report discusses the post-mission segment of the WINGS architecture. A team designed and implemented a system for the near- and long-term storage and distribution of mission data for flight projects at DFRC, providing the user with intelligent access to data. Discussed are the legacy system, an industry survey, system operational concept, high-level system features, and initial design efforts.

  18. Reliable IoT Storage: Minimizing Bandwidth Use in Storage Without Newcomer Nodes

    DEFF Research Database (Denmark)

    Zhao, Xiaobo; Lucani Rötter, Daniel Enrique; Shen, Xiaohong

    2018-01-01

    This letter characterizes the optimal policies for bandwidth use and storage for the problem of distributed storage in Internet of Things (IoT) scenarios, where lost nodes cannot be replaced by new nodes as is typically assumed in Data Center and Cloud scenarios. We develop an information flow model that captures the overall process of data transmission between IoT devices, from the initial preparation stage (generating redundancy from the original data) to the different repair stages with fewer and fewer devices. Our numerical results show that in a system with 10 nodes, the proposed optimal...

  19. Local structure of liquid Ge₁Sb₂Te₄ for rewritable data storage use

    Energy Technology Data Exchange (ETDEWEB)

    Sun Zhimei; Zhou Jian [Department of Materials Science and Engineering, College of Materials, Xiamen University, 361005 (China); Blomqvist, Andreas; Ahuja, Rajeev [Division for Materials Theory, Department of Physics and Materials Science, Uppsala University, Box 530, SE-751 21, Uppsala (Sweden); Xu Lihua [Department of Inorganic Non-metallic Materials Science, School of Materials and Engineering, University of Science and Technology Beijing, 100083 (China)], E-mail: zhmsun2@yahoo.com, E-mail: zmsun@xmu.edu.cn

    2008-05-21

    Phase-change materials based on chalcogenide alloys have been widely used for optical data storage and are promising materials for nonvolatile electrical memory use. However, the mechanism behind this utilization is as yet unclear. Since rewritable data storage involves an extremely fast laser melt-quench process for the chalcogenide alloys, their liquid structure is one key to investigating the mechanism of the fast reversible phase transition and hence of rewritable data storage. Here, by means of ab initio molecular dynamics, we have studied the local structure of liquid Ge₁Sb₂Te₄. The results show that in the liquid most Sb atoms are octahedrally coordinated, with tetrahedral and fivefold coordination at octahedral sites coexisting for Ge atoms, while Te atoms are essentially fourfold and threefold coordinated at octahedral sites, as characterized by partial pair correlation functions and bond angle distributions. The local structure of liquid Ge₁Sb₂Te₄ generally resembles that of the crystalline form, except for the much lower coordination number. It may be this unique liquid structure that results in the fast and reversible phase transition between crystalline and amorphous states.

  20. A Network-Attached Storage System Supporting Guaranteed QoS

    Institute of Scientific and Technical Information of China (English)

    KONG Hua-feng; YU Sheng-sheng; LU Hong-wei

    2005-01-01

    We propose a network-attached storage system that can support guaranteed Quality of Service (QoS), called POPNet Storage. A special policy for data access and disk scheduling enables users to access files quickly and directly with guaranteed QoS in the POPNet Storage. The POPNet Storage implements a measurement-based admission control algorithm (PSMBAC) to determine whether to admit a new data access request stream, admitting as many requests as possible while meeting the QoS guarantees to its clients. The data reconstruction algorithms in the POPNet Storage also put more emphasis on data availability and guaranteed QoS; they are designed to complete the data recovery as soon as possible while at the same time providing guaranteed QoS for high-priority data access. The experiment results show that the POPNet Storage can provide better performance, reliability, and QoS guarantees than conventional storage systems.
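
    A minimal sketch of a measurement-based admission test in the spirit of PSMBAC follows (the capacity figure, the load estimator and the function names are assumptions for illustration, not the paper's actual algorithm): a new request stream is admitted only if the measured load of already-admitted streams plus the newcomer's declared rate stays within the bandwidth reserved for guaranteed QoS.

        # Illustrative measurement-based admission control (not the paper's PSMBAC):
        # admit a new stream only if measured load + requested rate fits the QoS budget.
        class AdmissionController:
            def __init__(self, capacity_mbps, headroom=0.9):
                self.capacity = capacity_mbps * headroom   # keep a safety margin
                self.measured = []                         # recent aggregate load samples

            def record_load(self, mbps):
                self.measured = (self.measured + [mbps])[-100:]   # sliding window

            def admit(self, requested_mbps):
                current = max(self.measured) if self.measured else 0.0  # conservative estimate
                return current + requested_mbps <= self.capacity

        ac = AdmissionController(capacity_mbps=800)
        for sample in (310, 340, 325):
            ac.record_load(sample)
        print(ac.admit(200))   # True: 340 + 200 <= 720
        print(ac.admit(500))   # False: would overload the QoS budget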

  1. Storage system architectures and their characteristics

    Science.gov (United States)

    Sarandrea, Bryan M.

    1993-01-01

    Not all users' storage requirements call for 20 MB/s data transfer rates, multi-tier file or data migration schemes, or even automated retrieval of data. The number of available storage solutions reflects the broad range of user requirements. It is foolish to think that any one solution can address the complete range of requirements. For users with simple off-line storage requirements, the cost and complexity of high-end solutions would provide no advantage over a simpler solution. The correct answer is to match the requirements of a particular storage need to the various attributes of the available solutions. The goal of this paper is to introduce basic concepts of archiving and storage management in combination with the most common architectures and to provide some insight into how these concepts and architectures address various storage problems. The intent is to provide potential consumers of storage technology with a framework within which to begin the hunt for a solution which meets their particular needs. This paper is not intended to be an exhaustive study or to address all possible solutions or new technologies, but is intended to be a more practical treatment of today's storage system alternatives. Since most commercial storage systems today are built on Open Systems concepts, the majority of these solutions are hosted on the UNIX operating system. For this reason, some of the architectural issues discussed focus around specific UNIX architectural concepts. However, most of the architectures are operating system independent and the conclusions are applicable to such architectures on any operating system.

  2. DPM: Future Proof Storage

    CERN Document Server

    Alvarez, Alejandro; Furano, Fabrizio; Hellmich, Martin; Keeble, Oliver; Rocha, Ricardo; CERN. Geneva. IT Department

    2012-01-01

    The Disk Pool Manager (DPM) is a lightweight solution for grid-enabled disk storage management. Operated at more than 240 sites, it has the widest distribution of all grid storage solutions in the WLCG infrastructure. It provides an easy way to manage and configure disk pools, and exposes multiple interfaces for data access (rfio, xroot, nfs, gridftp and http/dav) and control (srm). During the last year we have been working on providing stable, highly performant data access to our storage system using standard protocols, while extending the storage management functionality and adapting both configuration and deployment procedures to reuse commonly used building blocks. In this contribution we cover in detail the extensive evaluation we have performed of our new HTTP/WebDAV and NFS 4.1 frontends, in terms of functionality and performance. We summarize the issues we faced and the solutions we developed to turn them into valid alternatives to the existing grid protocols - namely the additional work required to prov...

  3. Assessment of shielding analysis methods, codes, and data for spent fuel transport/storage applications

    International Nuclear Information System (INIS)

    Parks, C.V.; Broadhead, B.L.; Hermann, O.W.; Tang, J.S.; Cramer, S.N.; Gauthey, J.C.; Kirk, B.L.; Roussin, R.W.

    1988-07-01

    This report provides a preliminary assessment of the computational tools and existing methods used to obtain radiation dose rates from shielded spent nuclear fuel and high-level radioactive waste (HLW). Particular emphasis is placed on analysis tools and techniques applicable to facilities/equipment designed for the transport or storage of spent nuclear fuel or HLW. Applications to cask transport, storage, and facility handling are considered. The report reviews the analytic techniques for generating appropriate radiation sources, evaluating the radiation transport through the shield, and calculating the dose at a desired point or surface exterior to the shield. Discrete ordinates, Monte Carlo, and point kernel methods for evaluating radiation transport are reviewed, along with existing codes and data that utilize these methods. A literature survey was employed to select a cadre of codes and data libraries to be reviewed. The selection process was based on specific criteria presented in the report. Separate summaries were written for several codes (or family of codes) that provided information on the method of solution, limitations and advantages, availability, data access, ease of use, and known accuracy. For each data library, the summary covers the source of the data, applicability of these data, and known verification efforts. Finally, the report discusses the overall status of spent fuel shielding analysis techniques and attempts to illustrate areas where inaccuracy and/or uncertainty exist. The report notes the advantages and limitations of several analysis procedures and illustrates the importance of using adequate cross-section data sets. Additional work is recommended to enable final selection/validation of analysis tools that will best meet the US Department of Energy's requirements for use in developing a viable HLW management system. 188 refs., 16 figs., 27 tabs

  4. KEYNOTE ADDRESS: The role of standards in the emerging optical digital data disk storage systems market

    Science.gov (United States)

    Bainbridge, Ross C.

    1984-09-01

    The Institute for Computer Sciences and Technology at the National Bureau of Standards is pleased to cooperate with the International Society for Optical Engineering and to join with the other distinguished organizations in cosponsoring this conference on applications of optical digital data disk storage systems.

  5. Technical study gas storage. Final report

    International Nuclear Information System (INIS)

    Borowka, J.; Moeller, A.; Zander, W.; Koischwitz, M.A.

    2001-01-01

    This study will answer the following questions: (a) For what uses was the storage facility designed and for what use is it currently applied? Provide an overview of the technical data per gas storage facility: for instance, what is its capacity, volume, start-up time, etc.; (b) How often has this facility been used during the past 10 years? With what purpose was the facility brought into operation at the time? How much gas was supplied at the time from the storage facility?; (c) Given the characteristics and the use of the storage facility during the past 10 years and projected gas consumption in the future, how will the storage facility be used in the future?; (d) Are there other uses for which the gas storage facility can be deployed, or can a single facility be deployed for numerous uses? What are the technical possibilities in such cases? Questions (a) and (b) are answered separately for every storage facility. Questions (c) and (d) in a single chapter each (Chapter 2 and 3). An overview of the relevant storage data relating to current use, use in the last 10 years and use in future is given in the Annex

  6. ROOT — A C++ framework for petabyte data storage, statistical analysis and visualization

    CERN Document Server

    Antcheva, I; Bellenot, B; Biskup,1, M; Brun, R; Buncic, N; Canal, Ph; Casadei, D; Couet, O; Fine, V; Franco,1, L; Ganis, G; Gheata, A; Gonzalez Maline, D; Goto, M; Iwaszkiewicz, J; Kreshuk, A; Marcos Segura, D; Maunder, R; Moneta, L; Naumann, A; Offermann, E; Onuchin, V; Panacek, S; Rademakers, F; Russo, P; Tadel, M

    2009-01-01

    ROOT is an object-oriented C++ framework conceived in the high-energy physics (HEP) community, designed for storing and analyzing petabytes of data in an efficient way. Any instance of a C++ class can be stored into a ROOT file in a machine-independent compressed binary format. In ROOT the TTree object container is optimized for statistical data analysis over very large data sets by using vertical data storage techniques. These containers can span a large number of files on local disks, the web, or a number of different shared file systems. In order to analyze this data, the user can choose from a wide set of mathematical and statistical functions, including linear algebra classes, numerical algorithms such as integration and minimization, and various methods for performing regression analysis (fitting). In particular, the RooFit package allows the user to perform complex data modeling and fitting while the RooStats library provides abstractions and implementations for advanced statistical tools. Multivariat...
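
    The TTree workflow sketched in the abstract can be illustrated with a few lines of PyROOT (a generic usage example requiring a ROOT installation with Python bindings; the file, tree and branch names are invented for illustration):

        # Minimal PyROOT sketch of writing a TTree (names and values are illustrative).
        # Requires a ROOT installation with its Python bindings.
        from array import array
        import ROOT

        f = ROOT.TFile("toy.root", "RECREATE")        # machine-independent compressed file
        tree = ROOT.TTree("events", "toy event tree")
        x = array("d", [0.0])                          # one double-precision branch buffer
        tree.Branch("x", x, "x/D")

        rng = ROOT.TRandom3(0)
        for _ in range(1000):
            x[0] = rng.Gaus(0.0, 1.0)                  # fill with toy Gaussian data
            tree.Fill()

        tree.Write()
        f.Close()

        # Reading back and fitting would use, e.g., tree.Draw("x") or RDataFrame.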

  7. Lossless compression of waveform data for efficient storage and transmission

    International Nuclear Information System (INIS)

    Stearns, S.D.; Tan, Li Zhe; Magotra, Neeraj

    1993-01-01

    Compression of waveform data is significant in many engineering and research areas since it can be used to reduce data storage and transmission bandwidth requirements. For example, seismic data are widely recorded and transmitted so that analysis can be performed on large amounts of data for numerous applications such as petroleum exploration, determination of the earth's core structure, seismic event detection and discrimination of underground nuclear explosions, etc. This paper describes a technique for lossless waveform data compression. The technique consists of two stages. The first stage is a modified form of linear prediction with discrete coefficients and the second stage is bi-level sequence coding. The linear predictor generates an error or residue sequence in a way such that exact reconstruction of the original data sequence can be accomplished with a simple algorithm. The residue sequence is essentially white Gaussian with seismic or other similar waveform data. Bi-level sequence coding, in which two sample sizes are chosen and the residue sequence is encoded into subsequences that alternate from one level to the other, further compresses the residue sequence. The principal feature of the two-stage data compression algorithm is that it is lossless, that is, it allows exact, bit-for-bit recovery of the original data sequence. The performance of the lossless compression algorithm at each stage is analyzed. The advantages of using bi-level sequence coding in the second stage are its simplicity of implementation, its effectiveness on data with large amplitude variations, and its near-optimal performance in encoding Gaussian sequences. Applications of the two-stage technique to typical seismic data indicate that an average number of compressed bits per sample close to the lower bound is achievable in practical situations
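
    A minimal sketch of the first stage is given below, using a simple second-order integer predictor as an example (the predictor order and coefficients are illustrative choices, not the optimized discrete coefficients of the paper); because both the predictor and the residues are integer-valued, the original samples can be reconstructed bit for bit, which is the defining property of the scheme. The bi-level sequence coding of the residues is omitted here.

        # Illustrative lossless first stage: integer linear prediction with exact reconstruction.
        # The predictor p[n] = 2*x[n-1] - x[n-2] is a simple second-order example; it yields a
        # residue sequence whose small values are then easier to encode compactly.
        def encode(samples):
            residues = list(samples[:2])                 # first two samples stored verbatim
            for n in range(2, len(samples)):
                prediction = 2 * samples[n - 1] - samples[n - 2]
                residues.append(samples[n] - prediction)
            return residues

        def decode(residues):
            samples = list(residues[:2])
            for n in range(2, len(residues)):
                prediction = 2 * samples[n - 1] - samples[n - 2]
                samples.append(residues[n] + prediction)
            return samples

        waveform = [0, 3, 8, 15, 23, 30, 34, 33, 27, 18]   # toy integer waveform
        res = encode(waveform)
        assert decode(res) == waveform                      # exact, bit-for-bit recovery
        print(res)                                          # residues cluster near zero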

  8. Keyword-based Ciphertext Search Algorithm under Cloud Storage

    Directory of Open Access Journals (Sweden)

    Ren Xunyi

    2016-01-01

    Full Text Available With the development of network storage services, cloud storage offers high scalability, low cost, unrestricted access and easy management. These advantages lead more and more small and medium enterprises to outsource large quantities of data to a third party, freeing them from the costs of building and maintaining their own infrastructure, so the market prospects are broad. However, many cloud storage service providers cannot adequately protect data security. This results in leakage of user data and forces many users back to traditional storage methods, which has become one of the important factors hindering the development of cloud storage. In this article, a keyword index is established by extracting keywords from the ciphertext data; the encrypted data and the encrypted index are then uploaded to the cloud server together. Users retrieve the relevant ciphertext by searching the encrypted index, which addresses the data leakage problem.
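
    A minimal sketch of the keyword-index idea follows (an assumption-laden illustration, not the article's algorithm): keywords are turned into deterministic tokens with a keyed hash so that the server can match a search trapdoor against the index without learning the keywords, while the documents themselves are stored as opaque, separately encrypted blobs.

        # Illustrative encrypted keyword index (not the article's scheme).
        # Keyword tokens are HMAC values under a secret key held by the data owner;
        # the documents are assumed to be encrypted separately before upload.
        import hmac, hashlib

        KEY = b"data-owner-secret-key"          # kept by the data owner, never uploaded

        def token(keyword):
            return hmac.new(KEY, keyword.lower().encode(), hashlib.sha256).hexdigest()

        def build_index(docs):
            """docs: {doc_id: [keywords]} -> {token: set(doc_ids)} uploaded with the ciphertexts."""
            index = {}
            for doc_id, keywords in docs.items():
                for kw in keywords:
                    index.setdefault(token(kw), set()).add(doc_id)
            return index

        # Owner side: extract keywords, build the index, upload index + encrypted documents.
        index = build_index({"doc1": ["cloud", "storage"], "doc2": ["storage", "security"]})

        # User side: a search request is just the token (trapdoor) of the queried keyword.
        trapdoor = token("storage")

        # Server side: match the trapdoor against the index without seeing any plaintext.
        print(index.get(trapdoor, set()))        # {'doc1', 'doc2'}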

  9. Rewritable three-dimensional holographic data storage via optical forces

    Energy Technology Data Exchange (ETDEWEB)

    Yetisen, Ali K., E-mail: ayetisen@mgh.harvard.edu [Harvard Medical School and Wellman Center for Photomedicine, Massachusetts General Hospital, 65 Landsdowne Street, Cambridge, Massachusetts 02139 (United States); Harvard-MIT Division of Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139 (United States); Montelongo, Yunuen [Department of Chemistry, Imperial College London, South Kensington Campus, London SW7 2AZ (United Kingdom); Butt, Haider [Nanotechnology Laboratory, School of Engineering Sciences, University of Birmingham, Birmingham B15 2TT (United Kingdom)

    2016-08-08

    The development of nanostructures that can be reversibly arranged and assembled into 3D patterns may enable optical tunability. However, current dynamic recording materials such as photorefractive polymers cannot be used to store information permanently while also retaining configurability. Here, we describe the synthesis and optimization of a silver nanoparticle doped poly(2-hydroxyethyl methacrylate-co-methacrylic acid) recording medium for reversibly recording 3D holograms. We theoretically and experimentally demonstrate organizing nanoparticles into 3D assemblies in the recording medium using optical forces produced by the gradients of standing waves. The nanoparticles in the recording medium are organized by multiple nanosecond laser pulses to produce reconfigurable slanted multilayer structures. We demonstrate the capability of producing rewritable optical elements such as multilayer Bragg diffraction gratings, 1D photonic crystals, and 3D multiplexed optical gratings. We also show that 3D virtual holograms can be reversibly recorded. This recording strategy may have applications in reconfigurable optical elements, data storage devices, and dynamic holographic displays.

  10. Privacy Preserving Similarity Based Text Retrieval through Blind Storage

    Directory of Open Access Journals (Sweden)

    Pinki Kumari

    2016-09-01

    Full Text Available Cloud computing is growing rapidly because of its advantages, and more data owners are interested in outsourcing their data to cloud storage in order to centralize it. With huge files stored in the cloud, a keyword-based search process must be offered to data users. At the same time, to protect data privacy, encryption techniques are applied to sensitive data before it is outsourced to the cloud server, which makes searching over the encrypted data difficult. In this system we propose similarity-based text retrieval from blind storage blocks holding data in encrypted form. The blind storage system provides additional security because data are stored at random locations in the cloud storage. In the existing approach the data owner cannot encrypt the document data, as encryption is performed only at the server end, and everyone can access the data because no private-key concept is applied to maintain its privacy. In our proposed system, the data owner encrypts the data using the RSA algorithm, a public-key cryptosystem widely used for sensitive data storage over the Internet. We use a text mining process to build the index files of user documents, and before encryption we also use an NLP (Natural Language Processing) technique to identify synonyms of the keywords in the data owner's documents. The text mining process examines the text word by word and collects the literal meaning beyond the group of words that composes each sentence; these words are looked up through the WordNet API so that only equivalent words are identified for use in the index file. Our proposed system provides a more secure and authorized way of retrieving text from cloud storage with access control. Finally, our experimental results show that our system outperforms the existing one.
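
    The blind-storage idea of storing data at random locations can be sketched as follows (the block size, key names and placement rule are illustrative assumptions, not the construction described in the article): the positions of a file's blocks in a large server-side block array are derived pseudorandomly from a secret key, so the server sees only which blocks are touched, not which file they belong to.

        # Illustrative blind-storage placement (not the article's construction):
        # block positions are derived from a keyed hash, so only the key holder can
        # tell which blocks of the big array belong to a given file.
        import hashlib

        NUM_BLOCKS = 1 << 16          # size of the server-side block array (assumption)
        BLOCK_SIZE = 32               # bytes per block (assumption)

        def positions(key, file_id, n_blocks):
            """Derive n pseudorandom, distinct block indices for a file."""
            idx, counter, seen = [], 0, set()
            while len(idx) < n_blocks:
                digest = hashlib.sha256(key + file_id.encode() + counter.to_bytes(4, "big")).digest()
                pos = int.from_bytes(digest[:4], "big") % NUM_BLOCKS
                if pos not in seen:
                    seen.add(pos)
                    idx.append(pos)
                counter += 1
            return idx

        key = b"owner-secret"
        data = b"pretend-this-is-ciphertext-" * 5          # assumed already encrypted
        blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
        print(positions(key, "report.txt", len(blocks)))   # scattered indices in the array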

  11. Tests of Cloud Computing and Storage System features for use in H1 Collaboration Data Preservation model

    International Nuclear Information System (INIS)

    Łobodziński, Bogdan

    2011-01-01

    Based on the currently developing strategy for data preservation and long-term analysis in HEP, tests of a possible future Cloud Computing setup based on the Eucalyptus Private Cloud platform and the petabyte-scale open source storage system CEPH were performed for the H1 Collaboration. Improvements in computing power and the strong development of storage systems suggest that a single Cloud Computing resource supported on a given site will be sufficient for analysis requirements beyond the end-date of experiments. This work describes our test-bed architecture, which could be applied to fulfill the requirements of the physics program of H1 after the end date of the Collaboration. We discuss the reasons why we chose the Eucalyptus platform and CEPH storage infrastructure, as well as our experience with installations and support of these infrastructures. Using our first test results we will examine performance characteristics, noticed failure states, deficiencies, bottlenecks and scaling boundaries.

  12. CMS users data management service integration and first experiences with its NoSQL data storage

    CERN Document Server

    Riahi, H; Cinquilli, M; Hernandez, J M; Konstantinov, P; Mascheroni, M; Santocchia, A

    2014-01-01

    The distributed data analysis workflow in CMS assumes that jobs run in a different location to where their results are finally stored. Typically the user outputs must be transferred from one site to another by a dedicated CMS service, AsyncStageOut. This new service was originally developed to address the inefficiency in using CMS computing resources when transferring analysis job outputs synchronously from the job execution node to the remote site as soon as they are produced. The AsyncStageOut is designed as a thin application relying only on the NoSQL database (CouchDB) for input and data storage. It has progressed from a limited prototype to a highly adaptable service which manages and monitors all steps in handling user files, namely file transfer and publication. The AsyncStageOut is integrated with the Common CMS/Atlas Analysis Framework. It foresees the management of nearly 200k user files per day for close to 1000 individual users per month with minimal delays, providing real-time monitoring and repor...

  13. Use of information-retrieval languages in automated retrieval of experimental data from long-term storage

    Science.gov (United States)

    Khovanskiy, Y. D.; Kremneva, N. I.

    1975-01-01

    Problems and methods are discussed of automating information retrieval operations in a data bank used for long term storage and retrieval of data from scientific experiments. Existing information retrieval languages are analyzed along with those being developed. The results of studies discussing the application of the descriptive 'Kristall' language used in the 'ASIOR' automated information retrieval system are presented. The development and use of a specialized language of the classification-descriptive type, using universal decimal classification indices as the main descriptors, is described.

  14. Evaluation of Big Data Containers for Popular Storage, Retrieval, and Computation Primitives in Earth Science Analysis

    Science.gov (United States)

    Das, K.; Clune, T.; Kuo, K. S.; Mattmann, C. A.; Huang, T.; Duffy, D.; Yang, C. P.; Habermann, T.

    2015-12-01

    Data containers are infrastructures that facilitate storage, retrieval, and analysis of data sets. Big data applications in Earth Science require a mix of processing techniques, data sources and storage formats that are supported by different data containers. Some of the most popular data containers used in Earth Science studies are Hadoop, Spark, SciDB, AsterixDB, and RasDaMan. These containers optimize different aspects of the data processing pipeline and are, therefore, suitable for different types of applications. These containers are expected to undergo rapid evolution and the ability to re-test, as they evolve, is very important to ensure the containers are up to date and ready to be deployed to handle large volumes of observational data and model output. Our goal is to develop an evaluation plan for these containers to assess their suitability for Earth Science data processing needs. We have identified a selection of test cases that are relevant to most data processing exercises in Earth Science applications and we aim to evaluate these systems for optimal performance against each of these test cases. The use cases identified as part of this study are (i) data fetching, (ii) data preparation for multivariate analysis, (iii) data normalization, (iv) distance (kernel) computation, and (v) optimization. In this study we develop a set of metrics for performance evaluation, define the specifics of governance, and test the plan on current versions of the data containers. The test plan and the design mechanism are expandable to allow repeated testing with both new containers and upgraded versions of the ones mentioned above, so that we can gauge their utility as they evolve.

  15. Random Linear Network Coding is Key to Data Survival in Highly Dynamic Distributed Storage

    DEFF Research Database (Denmark)

    Sipos, Marton A.; Fitzek, Frank; Roetter, Daniel Enrique Lucani

    2015-01-01

    Distributed storage solutions have become widespread due to their ability to store large amounts of data reliably across a network of unreliable nodes, by employing repair mechanisms to prevent data loss. Conventional systems rely on static designs with a central control entity to oversee and control the repair process. Given the large costs for maintaining and cooling large data centers, our work proposes and studies the feasibility of a fully decentralized system that can store data even on unreliable and, sometimes, unavailable mobile devices. This imposes new challenges on the design, as the number of available nodes varies greatly over time and keeping track of the system's state becomes unfeasible. As a consequence, conventional erasure correction approaches are ill-suited for maintaining data integrity. In this highly dynamic context, random linear network coding (RLNC) provides...
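
    A toy sketch of RLNC encoding follows (the field choice, packet sizes and generation size are assumptions for illustration, not taken from the letter): each coded packet is a random linear combination of the original packets over GF(2^8), and any set of coded packets whose coefficient vectors are linearly independent suffices to recover the originals by Gaussian elimination (decoding is omitted here for brevity).

        # Toy RLNC encoder over GF(2^8) (illustrative; parameters are not from the letter).
        import random

        def gf_mul(a, b):
            """Multiply two bytes in GF(2^8) with the reducing polynomial 0x11B."""
            result = 0
            while b:
                if b & 1:
                    result ^= a
                a <<= 1
                if a & 0x100:
                    a ^= 0x11B
                b >>= 1
            return result

        def encode(packets, rng):
            """Return (coefficients, coded_packet): a random linear combination of packets."""
            coeffs = [rng.randrange(256) for _ in packets]
            coded = bytearray(len(packets[0]))
            for c, pkt in zip(coeffs, packets):
                for i, byte in enumerate(pkt):
                    coded[i] ^= gf_mul(c, byte)            # addition in GF(2^8) is XOR
            return coeffs, bytes(coded)

        original = [b"node-A-data!", b"node-B-data!", b"node-C-data!"]   # equal-length packets
        rng = random.Random(1)
        for _ in range(4):                                  # more coded packets than originals
            coeffs, coded = encode(original, rng)
            print(coeffs, coded.hex())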

  16. myPhyloDB: a local web server for the storage and analysis of metagenomic data.

    Science.gov (United States)

    Manter, Daniel K; Korsa, Matthew; Tebbe, Caleb; Delgado, Jorge A

    2016-01-01

    myPhyloDB v.1.1.2 is a user-friendly personal database with a browser-interface designed to facilitate the storage, processing, analysis, and distribution of microbial community populations (e.g. 16S metagenomics data). MyPhyloDB archives raw sequencing files, and allows for easy selection of project(s)/sample(s) of any combination from all available data in the database. The data processing capabilities of myPhyloDB are also flexible enough to allow the upload and storage of pre-processed data, or use the built-in Mothur pipeline to automate the processing of raw sequencing data. myPhyloDB provides several analytical (e.g. analysis of covariance, t-tests, linear regression, differential abundance (DESeq2), and principal coordinates analysis (PCoA)) and normalization (rarefaction, DESeq2, and proportion) tools for the comparative analysis of taxonomic abundance, species richness and species diversity for projects of various types (e.g. human-associated, human gut microbiome, air, soil, and water) for any taxonomic level(s) desired. Finally, since myPhyloDB is a local web-server, users can quickly distribute data between colleagues and end-users by simply granting others access to their personal myPhyloDB database. myPhyloDB is available at http://www.ars.usda.gov/services/software/download.htm?softwareid=472 and more information along with tutorials can be found on our website http://www.myphylodb.org. Database URL: http://www.myphylodb.org. Published by Oxford University Press 2016. This work is written by US Government employees and is in the public domain in the United States.

  17. Exascale Storage Systems the SIRIUS Way

    Science.gov (United States)

    Klasky, S. A.; Abbasi, H.; Ainsworth, M.; Choi, J.; Curry, M.; Kurc, T.; Liu, Q.; Lofstead, J.; Maltzahn, C.; Parashar, M.; Podhorszki, N.; Suchyta, E.; Wang, F.; Wolf, M.; Chang, C. S.; Churchill, M.; Ethier, S.

    2016-10-01

    As the exascale computing age emerges, data-related issues are becoming critical factors that determine how and where we do computing. Popular approaches used by traditional I/O solutions and storage libraries become increasingly bottlenecked due to their assumptions about data movement, re-organization, and storage. While new technologies, such as “burst buffers”, can help address some of the short-term performance issues, it is essential that we reexamine the underlying storage and I/O infrastructure to effectively support requirements and challenges at exascale and beyond. In this paper we present a new approach to the exascale Storage System and I/O (SSIO), which is based on allowing users to inject application knowledge into the system and leverage this knowledge to better manage, store, and access large data volumes so as to minimize the time to scientific insights. Central to our approach is the distinction between the data, metadata, and the knowledge contained therein, transferred from the user to the system by describing the “utility” of data as it ages.

  18. Next generation storage facility

    International Nuclear Information System (INIS)

    Schlesser, J.A.

    1994-01-01

    With diminishing requirements for plutonium, a substantial quantity of this material requires special handling and ultimately, long-term storage. To meet this objective, we at Los Alamos, have been involved in the design of a storage facility with the goal of providing storage capabilities for this and other nuclear materials. This paper presents preliminary basic design data, not for the structure and physical plant, but for the container and arrays which might be configured within the facility, with strong emphasis on criticality safety features

  19. Activation of hydrogen storage materials in the Li-Mg-N-H system: Effect on storage properties

    International Nuclear Information System (INIS)

    Yang, Jun; Sudik, Andrea; Wolverton, C.

    2007-01-01

    We investigate the thermodynamics, kinetics, and capacity of the hydrogen storage reaction: Li₂Mg(NH)₂ + 2H₂ ↔ Mg(NH₂)₂ + 2LiH. Starting with LiNH₂ and MgH₂, two distinct procedures have been previously proposed for activating samples to induce the reversible storage reaction. We clarify here the impact of these two activation procedures on the resulting capacity for the Li-Mg-N-H reaction. Additionally, we measure the temperature-dependent kinetic absorption data for this hydrogen storage system. Finally, our experiments confirm the previously reported formation enthalpy (ΔH), hydrogen capacity, and pressure-composition-isotherm (PCI) data, and suggest that this system represents a kinetically (but not thermodynamically) limited system for vehicular on-board storage applications
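
    For reference, the plateau pressures of the pressure-composition isotherms of such a hydride system are commonly related to the reaction enthalpy and entropy through the van 't Hoff relation (quoted here as standard background, not as a result of the paper):

        \ln \frac{p_\mathrm{eq}}{p^{0}} = -\frac{\Delta H}{R T} + \frac{\Delta S}{R},

    so the measured formation enthalpy ΔH sets the temperature needed to reach a given equilibrium hydrogen pressure.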

  20. A method for simultaneous linear optics and coupling correction for storage rings with turn-by-turn beam position monitor data

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Xi [Brookhaven National Laboratory, Upton, Long Island, NY 11973 (United States); Huang, Xiaobiao, E-mail: xiahuang@slac.stanford.edu [SLAC National Accelerator Laboratory, 2575 Sand Hill Road, Menlo Park, CA 94025 (United States)

    2016-08-21

    We propose a method to simultaneously correct linear optics errors and linear coupling for storage rings using turn-by-turn (TbT) beam position monitor (BPM) data. The independent component analysis (ICA) method is used to isolate the betatron normal modes from the measured TbT BPM data. The betatron amplitudes and phase advances of the projections of the normal modes on the horizontal and vertical planes are then extracted, which, combined with dispersion measurement, are used to fit the lattice model. The fitting results are used for lattice correction. The method has been successfully demonstrated on the NSLS-II storage ring.
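
    A toy illustration of the mode-isolation step follows (it uses FastICA from scikit-learn on synthetic data; the tunes, BPM count and noise level are invented, and the actual analysis in the paper works on measured NSLS-II data): turn-by-turn readings from many BPMs are modeled as mixtures of two betatron-like oscillations, and ICA recovers the underlying source signals whose amplitudes and phases at each BPM can then be fitted to a lattice model.

        # Toy illustration of isolating betatron-like modes from synthetic TbT BPM data
        # with FastICA; tunes, BPM count and noise level are invented for the example.
        import numpy as np
        from sklearn.decomposition import FastICA

        n_turns, n_bpms = 2048, 30
        turns = np.arange(n_turns)
        nu_x, nu_y = 0.22, 0.31                       # fractional tunes (assumed)
        rng = np.random.default_rng(0)

        # Source signals: two betatron oscillations sampled turn by turn.
        sources = np.stack([np.cos(2 * np.pi * nu_x * turns),
                            np.cos(2 * np.pi * nu_y * turns)], axis=1)

        # Each BPM sees a different linear mixture of the two modes, plus noise.
        mixing = rng.normal(size=(2, n_bpms))
        readings = sources @ mixing + 0.05 * rng.normal(size=(n_turns, n_bpms))

        ica = FastICA(n_components=2, random_state=0)
        recovered = ica.fit_transform(readings)       # shape (n_turns, 2): the normal modes

        # The recovered components oscillate at the two betatron tunes (up to sign/scale).
        for k in range(2):
            spectrum = np.abs(np.fft.rfft(recovered[:, k]))
            print("component", k, "peak tune ~", (np.argmax(spectrum[1:]) + 1) / n_turns)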

  1. A method for simultaneous linear optics and coupling correction for storage rings with turn-by-turn beam position monitor data

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Xi [Brookhaven National Lab. (BNL), Upton, NY (United States); Huang, Xiaobiao [SLAC National Accelerator Lab., Menlo Park, CA (United States)

    2016-08-01

    We propose a method to simultaneously correct linear optics errors and linear coupling for storage rings using turn-by-turn (TbT) beam position monitor (BPM) data. The independent component analysis (ICA) method is used to isolate the betatron normal modes from the measured TbT BPM data. The betatron amplitudes and phase advances of the projections of the normal modes on the horizontal and vertical planes are then extracted, which, combined with dispersion measurement, are used to fit the lattice model. Furthermore, the fitting results are used for lattice correction. Our method has been successfully demonstrated on the NSLS-II storage ring.

  2. The NAP-M proton storage

    International Nuclear Information System (INIS)

    Bolvanov, Yu.A.; Kononov, V.I.; Kuper, Eh.A.

    1976-01-01

    A system controlling the NAP-M proton storage unit is considered. The control system operates on line with an ODRA-1325 computer, which makes it possible to process the data directly in the course of the experiment and to control the operating regime of the storage unit. The authors give a detailed description of the principal units of the control system: digital-to-analog converters, data-transfer equipment, and analog-to-digital converters. They also describe the control program, which coordinates the interaction of the computer with the control system and provides for editing the working programs that realize the elementary operations in the storage unit control cycle.

  3. OpenStack Swift as Multi-Region Eventual Consistency Storage for ownCloud Primary Storage

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    As more users adopt AARNet’s CloudStor Plus offering within Australia, interim solutions deployed to overcome failures of various distributed replicated storage technologies haven’t kept pace with the growth in data volume. AARNet’s original design goal of user proximal data storage, combined with national and even international data replication for redundancy reasons continues to be a key driver for design choices. AARNet’s national network is over 90ms from end to end, and accommodating this has been a key issue with numerous software solutions, hindering attempts to provide both original design goals in a reliable real-time manner. With the addition of features to the ownCloud software allowing primary data storage on OpenStack Swift, AARNet has chosen to deploy Swift in a nation spanning multi-region ring to take advantage of Swift’s eventual consistency capabilities and the local region quorum functionality for fast writes. The scaling capability of Swift resolves the twin problems of geogr...

  4. Mass storage technology in networks

    Science.gov (United States)

    Ishii, Katsunori; Takeda, Toru; Itao, Kiyoshi; Kaneko, Reizo

    1990-08-01

    Trends and features of mass storage subsystems in networks are surveyed and their key technologies spotlighted. Storage subsystems are becoming increasingly important in new network systems in which communications and data processing are systematically combined. These systems require a new class of high-performance mass-information storage in order to effectively utilize their processing power. The requirements of high transfer rates, high transaction rates and large storage capacities, coupled with high functionality, fault tolerance and flexibility in configuration, are major challenges in storage subsystems. Recent progress in optical disk technology has improved the performance of optical disk drives as on-line external memories, which now compete with mid-range magnetic disks. Optical disks are more effective than magnetic disks for low-traffic, random-access storage of multimedia data that requires large capacity, such as archival use and information distribution on ROM disks. Finally, image-coded document file servers for local area network use that employ 130 mm rewritable magneto-optical disk subsystems are demonstrated.

  5. Development of noSQL data storage for the ATLAS PanDA Monitoring System

    International Nuclear Information System (INIS)

    Ito, H; Potekhin, M; Wenaus, T

    2012-01-01

    For several years the PanDA Workload Management System has been the basis for distributed production and analysis for the ATLAS experiment at the LHC. Since the start of data taking PanDA usage has ramped up steadily, typically exceeding 500k completed jobs/day by June 2011. The associated monitoring data volume has been rising as well, to levels that present a new set of challenges in the areas of database scalability and monitoring system performance and efficiency. These challenges are being met with an R and D effort aimed at implementing a scalable and efficient monitoring data storage based on a noSQL solution (Cassandra). We present our motivations for using this technology, as well as data design and the techniques used for efficient indexing of the data. We also discuss the hardware requirements as they were determined by testing with actual data and realistic rate of queries. In conclusion, we present our experience with operating a Cassandra cluster over an extended period of time and with data load adequate for planned application.
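
    As a loose illustration of the kind of noSQL data design such a monitoring store needs (this is not the PanDA/Cassandra schema), the sketch below shows the common time-bucketed partition-key pattern that keeps write-heavy Cassandra partitions bounded; the table name, columns and bucket size are assumptions.

```python
# Minimal sketch (illustrative, not the PanDA schema): time-bucketed partition
# keys, a common Cassandra pattern for write-heavy monitoring data. Table name,
# bucket size and columns are assumptions for demonstration only.
from datetime import datetime, timezone

CREATE_TABLE = """
CREATE TABLE IF NOT EXISTS job_monitoring (
    site       text,
    day_bucket text,        -- partition key component: keeps partitions bounded
    ts         timestamp,
    job_id     bigint,
    status     text,
    PRIMARY KEY ((site, day_bucket), ts, job_id)
) WITH CLUSTERING ORDER BY (ts DESC);
"""

def day_bucket(ts: datetime) -> str:
    """Bucket rows by UTC day so a single site/day partition never grows unbounded."""
    return ts.astimezone(timezone.utc).strftime("%Y-%m-%d")

def insert_statement(site: str, ts: datetime, job_id: int, status: str) -> tuple:
    """Return a parameterised CQL insert and its bound values."""
    cql = ("INSERT INTO job_monitoring (site, day_bucket, ts, job_id, status) "
           "VALUES (%s, %s, %s, %s, %s)")
    return cql, (site, day_bucket(ts), ts, job_id, status)

cql, values = insert_statement("BNL", datetime.now(timezone.utc), 42, "finished")
print(CREATE_TABLE)
print(cql, values)
```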

  6. Development of noSQL data storage for the ATLAS PanDA Monitoring System

    Science.gov (United States)

    Ito, H.; Potekhin, M.; Wenaus, T.

    2012-12-01

    For several years the PanDA Workload Management System has been the basis for distributed production and analysis for the ATLAS experiment at the LHC. Since the start of data taking PanDA usage has ramped up steadily, typically exceeding 500k completed jobs/day by June 2011. The associated monitoring data volume has been rising as well, to levels that present a new set of challenges in the areas of database scalability and monitoring system performance and efficiency. These challenges are being met with an R&D effort aimed at implementing a scalable and efficient monitoring data storage based on a noSQL solution (Cassandra). We present our motivations for using this technology, as well as data design and the techniques used for efficient indexing of the data. We also discuss the hardware requirements as they were determined by testing with actual data and realistic rate of queries. In conclusion, we present our experience with operating a Cassandra cluster over an extended period of time and with data load adequate for planned application.

  7. MeV ion-beam analysis of optical data storage films

    Science.gov (United States)

    Leavitt, J. A.; Mcintyre, L. C., Jr.; Lin, Z.

    1993-01-01

    Our objectives are threefold: (1) to accurately characterize optical data storage films by MeV ion-beam analysis (IBA) for ODSC collaborators; (2) to develop new and/or improved analysis techniques; and (3) to expand the capabilities of the IBA facility itself. Using H-1(+), He-4(+), and N-15(++) ion beams in the 1.5 MeV to 10 MeV energy range from a 5.5 MV Van de Graaff accelerator, film thickness (in atoms/sq cm), stoichiometry, impurity concentration profiles, and crystalline structure were determined by Rutherford backscattering (RBS), high-energy backscattering, channeling, nuclear reaction analysis (NRA) and proton induced X-ray emission (PIXE). Most of these techniques are discussed in detail in the ODSC Annual Report (February 17, 1987), p. 74. The PIXE technique is briefly discussed in the ODSC Annual Report (March 15, 1991), p. 23.

  8. Using Emergent and Internal Catchment Data to Elucidate the Influence of Landscape Structure and Storage State on Hydrologic Response in a Piedmont Watershed

    Science.gov (United States)

    Putnam, S. M.; Harman, C. J.

    2017-12-01

    Many studies have sought to unravel the influence of landscape structure and catchment state on the quantity and composition of water at the catchment outlet. These studies run into issues of equifinality, where multiple conceptualizations of flow pathways or storage states cannot be distinguished on the basis of the quantity and composition of water alone. Here we aim to parse out the influence of landscape structure, flow pathways, and storage on both the observed catchment hydrograph and chemograph, using hydrometric and water isotope data collected from multiple locations within Pond Branch, a 37-hectare Piedmont catchment of the eastern US. These data are used to infer the quantity and age distribution of water stored and released by individual hydrogeomorphic units, and the catchment as a whole, in order to test hypotheses relating landscape structure, flow pathways, and catchment storage to the hydrograph and chemograph. Initial hypotheses relating internal catchment properties or processes to the hydrograph or chemograph are formed at the catchment scale. Data from Pond Branch include spring and catchment discharge measurements, well water levels, and soil moisture, as well as three years of high frequency precipitation and surface water stable water isotope data. The catchment hydrograph is deconstructed using hydrograph separation and the quantity of water associated with each time-scale of response is compared to the quantity of discharge that could be produced from hillslope and riparian hydrogeomorphic units. Storage is estimated for each hydrogeomorphic unit as well as the vadose zone, in order to construct a continuous time series of total storage, broken down by landscape unit. Rank StorAge Selection (rSAS) functions are parameterized for each hydrogeomorphic unit as well as the catchment as a whole, and the relative importance of changing proportions of discharge from each unit as well as storage in controlling the variability in the catchment

  9. Comprehensive Monitoring for Heterogeneous Geographically Distributed Storage

    Energy Technology Data Exchange (ETDEWEB)

    Ratnikova, N. [Fermilab; Karavakis, E. [CERN; Lammel, S. [Fermilab; Wildish, T. [Princeton U.

    2015-12-23

    Storage capacity at CMS Tier-1 and Tier-2 sites reached over 100 Petabytes in 2014, and will be substantially increased during Run 2 data taking. The allocation of storage for individual users' analysis data, which is not accounted for as centrally managed storage space, will be increased to up to 40%. For comprehensive tracking and monitoring of the storage utilization across all participating sites, CMS developed a space monitoring system, which provides a central view of the geographically dispersed heterogeneous storage systems. The first prototype was deployed at pilot sites in summer 2014, and has been substantially reworked since then. In this paper we discuss the functionality and our experience of system deployment and operation on the full CMS scale.

  10. Carbon storage in forests and peatlands of Russia

    Science.gov (United States)

    V.A. Alexeyev; R.A. Birdsey; [Editors

    1998-01-01

    Contains information about carbon storage in the vegetation, soils, and peatlands of Russia. Estimates of carbon storage in forests are derived from statistical data from the 1988 national forest inventory of Russia and from other sources. Methods are presented for converting data on timber stock into phytomass of tree stands, and for estimating carbon storage in...

  11. Hydrate Control for Gas Storage Operations

    Energy Technology Data Exchange (ETDEWEB)

    Jeffrey Savidge

    2008-10-31

    The overall objective of this project was to identify low cost hydrate control options to help mitigate and solve hydrate problems that occur in moderate and high pressure natural gas storage field operations. The study includes data on a number of flow configurations, fluids and control options that are common in natural gas storage field flow lines. The final phase of this work brings together data and experience from the hydrate flow test facility and multiple field and operator sources. It includes a compilation of basic information on operating conditions as well as candidate field separation options. Lastly, this work is integrated with the initial work to provide a comprehensive view of gas storage field hydrate control for field operations and storage field personnel.

  12. Computerization of reporting and data storage using automatic coding method in the department of radiology

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Byung Hee; Lee, Kyung Sang; Kim, Woo Ho; Han, Joon Koo; Choi, Byung Ihn; Han, Man Chung [College of Medicine, Seoul National Univ., Seoul (Korea, Republic of)

    1990-10-15

    The authors developed a computer program for printing reports as well as for data storage and retrieval in the radiology department. The program ran on an IBM PC AT and was written in the dBASE III Plus language. The automatic coding method for the ACR code, developed by Kim et al., was applied in this program, and its framework is the same as that developed for the surgical pathology department. The working sheet, which contained the name card for X-ray film identification and the results of previous radiologic studies, was printed during registration. The word processing function was used for issuing the formal report of a radiologic study, and data storage was carried out while the report was being typed. Two kinds of data files were stored on the hard disk: the temporary file contained the full information, and the permanent file contained the patient's identification data and ACR code. Searching for a specific case by chart number, patient's name, date of study, or ACR code was performed within a second. All cases were arranged by ACR procedure code, anatomy code, and pathology code. New data were automatically copied to a diskette after daily work, from which the data could be restored in case of hard disk failure. The main advantage of this program in comparison with a larger computer system is its low price. Based on the experience at the Seoul District Armed Forces General Hospital, we believe that this program provides a solution to various problems in radiology departments where a large computer system with well-designed software is not available.
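
    The fast keyed lookup described above (search by chart number, patient's name, study date or ACR code) can be pictured with a small inverted-index sketch; this is only an illustration in Python, not the original dBASE program, and the field names and records are made up.

```python
# Minimal sketch (illustrative only): keyed lookup of radiology cases by chart
# number, patient name, study date or ACR code. Field names and records are
# hypothetical; this is not the original dBASE III Plus program.
from collections import defaultdict

records = [
    {"chart_no": "123456", "name": "HONG GILDONG", "date": "1990-08-01",
     "acr_code": "51.214"},   # hypothetical procedure/anatomy/pathology code
    {"chart_no": "234567", "name": "KIM CHOLSU", "date": "1990-08-02",
     "acr_code": "44.110"},
]

# Build one inverted index per searchable field; lookups are then O(1).
indexes = {field: defaultdict(list) for field in ("chart_no", "name", "date", "acr_code")}
for rec in records:
    for field, index in indexes.items():
        index[rec[field]].append(rec)

print(indexes["acr_code"]["51.214"])   # all cases filed under that ACR code
```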

  13. Erasure Coded Storage on a Changing Network

    DEFF Research Database (Denmark)

    Sipos, Marton A.; Venkat, Narayan; Oran, David

    2016-01-01

    As faster storage devices become commercially viable alternatives to disk drives, the network is increasingly becoming the bottleneck in achieving good performance in distributed storage systems. This is especially true for erasure coded storage, where the reconstruction of lost data can signific...
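
    For readers unfamiliar with erasure coding, the sketch below shows the simplest possible case, a single XOR parity block protecting a stripe of data blocks, and how a lost block is rebuilt from the survivors; production systems use stronger codes such as Reed-Solomon, and nothing here is taken from the paper.

```python
# Minimal sketch (illustrative only): a single-parity erasure code over k data
# blocks, showing how one lost block is reconstructed from the survivors.
# Real systems (e.g. Reed-Solomon) tolerate more losses; this is the simplest case.

def encode(blocks):
    """Return the XOR parity of equally sized data blocks."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

def reconstruct(surviving, parity):
    """Recover the single missing block from the surviving blocks and the parity."""
    return encode(surviving + [parity])

data = [b"stripe-0", b"stripe-1", b"stripe-2"]
parity = encode(data)
lost = data.pop(1)                       # simulate losing one block
assert reconstruct(data, parity) == lost
print("recovered:", reconstruct(data, parity))
```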

  14. Inspection of commercial optical devices for data storage using a three Gaussian beam microscope interferometer

    International Nuclear Information System (INIS)

    Flores, J. Mauricio; Cywiak, Moises; Servin, Manuel; Juarez P, Lorenzo

    2008-01-01

    Recently, an interferometric profilometer based on the heterodyning of three Gaussian beams has been reported. This microscope interferometer, called a three Gaussian beam interferometer, has been used to profile high quality optical surfaces that exhibit constant reflectivity with high vertical resolution and lateral resolution near λ. We report the use of this interferometer to measure the profiles of two commercially available optical surfaces for data storage, namely, the compact disk (CD-R) and the digital versatile disk (DVD-R). We include experimental results from a one-dimensional radial scan of these devices without data marks. The measurements are taken by placing the devices with the polycarbonate surface facing the probe beam of the interferometer. This microscope interferometer is unique when compared with other optical measuring instruments because it uses narrowband detection, filters out undesirable noisy signals, and because the amplitude of the output voltage signal is basically proportional to the local vertical height of the surface under test, thus detecting with high sensitivity. We show that the resulting profiles, measured with this interferometer across the polycarbonate layer, provide valuable information about the track profiles, making this interferometer a suitable tool for quality control of surface storage devices

  15. Decentralized data storage and processing in the context of the LHC experiments at CERN

    International Nuclear Information System (INIS)

    Blomer, Jakob Johannes

    2012-01-01

    The computing facilities used to process data for the experiments at the Large Hadron Collider (LHC) at CERN are scattered around the world. The embarrassingly parallel workload allows for use of various computing resources, such as computer centers comprising the Worldwide LHC Computing Grid, commercial and institutional cloud resources, as well as individual home PCs in "volunteer clouds". Unlike data, the experiment software and its operating system dependencies cannot be easily split into small chunks. Deployment of experiment software on distributed grid sites is challenging since it consists of millions of small files and changes frequently. This thesis develops a systematic approach to distribute a homogeneous runtime environment to a heterogeneous and geographically distributed computing infrastructure. A uniform bootstrap environment is provided by a minimal virtual machine tailored to LHC applications. Based on a study of the characteristics of LHC experiment software, the thesis argues for the use of content-addressable storage and decentralized caching in order to distribute the experiment software. In order to utilize the technology at the required scale, new methods of pre-processing data into content-addressable storage are developed. A co-operative, decentralized memory cache is designed that is optimized for the high peer churn expected in future virtualized computing clusters. This is achieved using a combination of consistent hashing with global knowledge about the worker nodes' state. The methods have been implemented in the form of a file system for software and Conditions Data delivery. The file system has been widely adopted by the LHC community and the benefits of the presented methods have been demonstrated in practice.
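
    One of the building blocks mentioned above, consistent hashing, can be sketched briefly; the ring below maps content-addressed chunks to cache nodes so that adding or removing a node remaps only a small fraction of keys. This is an illustrative sketch, not the thesis implementation, and the node names are invented.

```python
# Minimal sketch (not the thesis implementation): a consistent-hash ring that
# maps content-addressed chunks to cache nodes, so that adding or removing a
# node only remaps a small fraction of keys. Node names are illustrative.
import bisect
import hashlib

class HashRing:
    def __init__(self, nodes, replicas=64):
        self.replicas = replicas          # virtual nodes per physical node
        self.ring = {}                    # point on ring -> node
        self.sorted_keys = []
        for node in nodes:
            self.add(node)

    def _hash(self, key):
        return int(hashlib.sha1(key.encode()).hexdigest(), 16)

    def add(self, node):
        for i in range(self.replicas):
            point = self._hash(f"{node}#{i}")
            self.ring[point] = node
            bisect.insort(self.sorted_keys, point)

    def node_for(self, key):
        point = self._hash(key)
        idx = bisect.bisect(self.sorted_keys, point) % len(self.sorted_keys)
        return self.ring[self.sorted_keys[idx]]

ring = HashRing(["cache-01", "cache-02", "cache-03"])
# Content-addressable storage: the key is the hash of the file content itself.
chunk_id = hashlib.sha1(b"some file content").hexdigest()
print(chunk_id, "->", ring.node_for(chunk_id))
```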

  16. Decentralized data storage and processing in the context of the LHC experiments at CERN

    Energy Technology Data Exchange (ETDEWEB)

    Blomer, Jakob Johannes

    2012-06-01

    The computing facilities used to process data for the experiments at the Large Hadron Collider (LHC) at CERN are scattered around the world. The embarrassingly parallel workload allows for use of various computing resources, such as computer centers comprising the Worldwide LHC Computing Grid, commercial and institutional cloud resources, as well as individual home PCs in "volunteer clouds". Unlike data, the experiment software and its operating system dependencies cannot be easily split into small chunks. Deployment of experiment software on distributed grid sites is challenging since it consists of millions of small files and changes frequently. This thesis develops a systematic approach to distribute a homogeneous runtime environment to a heterogeneous and geographically distributed computing infrastructure. A uniform bootstrap environment is provided by a minimal virtual machine tailored to LHC applications. Based on a study of the characteristics of LHC experiment software, the thesis argues for the use of content-addressable storage and decentralized caching in order to distribute the experiment software. In order to utilize the technology at the required scale, new methods of pre-processing data into content-addressable storage are developed. A co-operative, decentralized memory cache is designed that is optimized for the high peer churn expected in future virtualized computing clusters. This is achieved using a combination of consistent hashing with global knowledge about the worker nodes' state. The methods have been implemented in the form of a file system for software and Conditions Data delivery. The file system has been widely adopted by the LHC community and the benefits of the presented methods have been demonstrated in practice.

  17. Tribology of magnetic storage systems

    Science.gov (United States)

    Bhushan, Bharat

    1992-01-01

    The construction and the materials used in different magnetic storage devices are defined. The theories of friction and adhesion, interface temperatures, wear, and solid-liquid lubrication relevant to magnetic storage systems are presented. Experimental data are presented wherever possible to support the relevant theories advanced.

  18. Prototype plutonium-storage monitor

    International Nuclear Information System (INIS)

    Bliss, M.; Craig, R.A.; Sunberg, D.S.; Warner, R.A.

    1996-01-01

    Pacific Northwest National Laboratory (PNNL) has fabricated cerium-activated lithium silicate scintillating fibers via a hot-downdraw process. These fibers typically have an operational transmission length (e⁻¹ length) of greater than 2 meters. This permits the fabrication of devices that, hitherto, were not possible to consider. A prototype neutron monitor for scrap Pu-storage containers was fabricated and tested for 70 days, taking data with a variety of sources in a high-background environment. These data and their implications in the context of a storage-monitor situation are discussed.

  19. The next generation mass storage devices - Physical principles and current status

    Science.gov (United States)

    Wang, L.; Gai, S.

    2014-04-01

    The amount of digital data has been increasing at a phenomenal rate due to widespread digitalisation in almost every industry. The need to store such ever-increasing data drives the requirement to augment the capacity of conventional storage technologies. Unfortunately, the physical limitations that conventional technologies face have severely handicapped their potential to meet storage needs from both consumer and industry points of view. The focus has therefore switched to the development of innovative data storage technologies such as scanning probe memory, nanocrystal memory, carbon nanotube memory, DNA memory, and organic memory. In this paper, we review the physical principles of these emerging storage technologies and their advantages as next-generation data storage devices, as well as their respective technical challenges in further enhancing storage capacity. We also compare these novel technologies with mainstream data storage technologies according to the technology roadmap on areal density.

  20. Modular routing interface for simultaneous list mode and histogramming mode storage of coincident data

    International Nuclear Information System (INIS)

    D'Achard van Eschut, J.F.M.; Nationaal Inst. voor Kernfysica en Hoge-Energiefysica

    1985-01-01

    A routing interface has been developed and built for successive storage of the digital output of four 13-bit ADCs, within 6 μs, into selected parts of two 16K CAMAC histogramming modules and, if an event trigger is applied, simultaneously into four 64-words deep (16-bit) first-in first-out (FIFO) CAMAC modules. In this way it is possible to accumulate on-line single spectra and, at the same time, write coincident data in list mode to magnetic tape under control of a computer. Additional routing interfaces can be used in parallel so that extensive data-collecting systems can be set up to store multi-parameter events. (orig.)

  1. A novel data storage logic in the cloud [version 3; referees: 2 approved, 1 not approved]

    Directory of Open Access Journals (Sweden)

    Bence Mátyás

    2017-08-01

    Full Text Available Databases which store and manage long-term scientific information related to life science are used to store huge amounts of quantitative attributes. Introducing a new entity attribute requires modification of the existing data tables and the programs that use them. A feasible solution is to increase the number of virtual data tables while the number of screens remains the same. The main objective of the present study was to introduce a logic called Joker Tao (JT), which provides universal data storage for cloud-based databases. This means all types of input data can be interpreted as an entity and an attribute at the same time, in the same data table.
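
    The "one table for every entity and attribute" idea is reminiscent of generic entity-attribute-value storage; the sketch below shows that pattern with SQLite purely as an illustration. It is not the Joker Tao implementation, and the table and column names are assumptions.

```python
# Minimal sketch (not the Joker Tao implementation): a single generic table in
# which every row is an entity/attribute/value triple, so new attributes never
# require schema changes. Table and column names are assumptions.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE observations (
        entity    TEXT NOT NULL,   -- e.g. a sample, an experiment, a person
        attribute TEXT NOT NULL,   -- e.g. 'species', 'ph', 'temperature_c'
        value     TEXT NOT NULL    -- stored as text; typed on the way out
    )
""")

rows = [
    ("sample-001", "instrument", "NMR-600"),
    ("sample-001", "temperature_c", "4"),
    ("sample-002", "temperature_c", "0"),
]
conn.executemany("INSERT INTO observations VALUES (?, ?, ?)", rows)

# Adding a brand-new attribute is just another row, not an ALTER TABLE.
conn.execute("INSERT INTO observations VALUES (?, ?, ?)", ("sample-002", "ph", "7.8"))

for row in conn.execute(
        "SELECT entity, value FROM observations WHERE attribute = ?", ("temperature_c",)):
    print(row)
```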

  2. Energy storage

    International Nuclear Information System (INIS)

    Anon.

    1992-01-01

    This chapter discusses the role that energy storage may have on the energy future of the US. The topics discussed in the chapter include historical aspects of energy storage, thermal energy storage including sensible heat storage, latent heat storage, thermochemical heat storage, and seasonal heat storage, electricity storage including batteries, pumped hydroelectric storage, compressed air energy storage, and superconducting magnetic energy storage, and production and combustion of hydrogen as an energy storage option

  3. The Use of Grid Storage Protocols for Healthcare Applications

    CERN Document Server

    Donno, F; CERN. Geneva. IT Department

    2008-01-01

    Grid computing has attracted worldwide attention in a variety of domains. While healthcare projects focus on data mining and standardization techniques, the issue of data accessibility and transparency over the storage systems on the Grid has seldom been tackled. In this position paper, we identify the key issues and requirements imposed by healthcare applications and point out how Grid storage technology can be used to satisfy those requirements. The main contribution of this work is the identification of the characteristics and protocols that make Grid storage technology attractive for building a healthcare data storage infrastructure.

  4. Intrusion Detection, Diagnosis, and Recovery with Self-Securing Storage

    National Research Council Canada - National Science Library

    Strunk, John D; Goodson, Garth R; Pennington, Adam G; Soules, Craig A; Ganger, Gregory R

    2002-01-01

    .... From behind a thin storage interface (e.g., SCSI or CIFS), a self-securing storage server can watch storage requests, keep a record of all storage activity, and prevent compromised clients from destroying stored data...

  5. Spent fuel storage requirements, 1991--2040

    International Nuclear Information System (INIS)

    1991-12-01

    Historical inventories of spent fuel are combined with US Department of Energy (DOE) projections of future discharges from commercial nuclear reactors in the United States to provide estimates of spent fuel storage requirements over the next 50 years, through the year 2040. The needs for storage capacity beyond that presently available in the pools are estimated. These estimates incorporate the maximum capacities within current and planned in-pool storage facilities and any planned transshipments of fuel to other reactors or facilities. Existing and future dry storage facilities are also discussed. Historical data through December 1990 are derived from the 1991 Form RW-859 data survey of nuclear utilities. Projected discharges through the end of reactor life are based on DOE estimates of future nuclear capacity, generation, and spent fuel discharges

  6. Solar energy storage via liquid filled cans - Test data and analysis

    Science.gov (United States)

    Saha, H.

    1978-01-01

    This paper describes the design of a solar thermal storage test facility with water-filled metal cans as the heat storage medium and also presents some preliminary test results and analysis. This combination of solid and liquid media shows unique heat transfer and heat content characteristics and is well suited for use with solar air systems for space and hot-water heating. The trends of the test results acquired thus far are representative of the test bed characteristics while operating in the various modes.

  7. Behavior of spent nuclear fuel and storage system components in dry interim storage.

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, A.B. Jr.; Gilbert, E.R.; Guenther, R.J.

    1982-08-01

    Irradiated nuclear fuel has been handled under dry conditions since the early days of nuclear reactor operation, and use of dry storage facilities for extended management of irradiated fuel began in 1964. Irradiated fuel is currently being stored dry in four types of facilities: dry wells, vaults, silos, and metal casks. Essentially all types of irradiated nuclear fuel are currently stored under dry conditions. Gas-cooled reactor (GCR) and liquid metal fast breeder reactor (LMFBR) fuels are stored in vaults and dry wells. Certain types of fuel are being stored in licensed dry storage facilities: Magnox fuel in vaults in the United Kingdom and organic-cooled reactor (OCR) fuel in silos in Canada. Dry storage demonstrations are under way for Zircaloy-clad fuel from boiling water reactors (BWRs), pressurized heavy-water reactors (PHWRs), and pressurized water reactors (PWRs) in all four types of dry storage facilities. The demonstrations and related hot cell and laboratory tests are directed toward expanding the data base and establishing a licensing basis for dry storage of water reactor fuel. This report reviews the scope of dry interim storage technology, the performance of fuel and facility materials, the status of programs in several countries to license dry storage of water reactor fuel, and the characteristics of water reactor fuel that relate to dry storage conditions.

  8. Behavior of spent nuclear fuel and storage-system components in dry interim storage

    International Nuclear Information System (INIS)

    Johnson, A.B. Jr.; Gilbert, E.R.; Guenther, R.J.

    1982-08-01

    Irradiated nuclear fuel has been handled under dry conditions since the early days of nuclear reactor operation, and use of dry storage facilities for extended management of irradiated fuel began in 1964. Irradiated fuel is currently being stored dry in four types of facilities: dry wells, vaults, silos, and metal casks. Essentially all types of irradiated nuclear fuel are currently stored under dry conditions. Gas-cooled reactor (GCR) and liquid metal fast breeder reactor (LMFBR) fuels are stored in vaults and dry wells. Certain types of fuel are being stored in licensed dry storage facilities: Magnox fuel in vaults in the United Kingdom and organic-cooled reactor (OCR) fuel in silos in Canada. Dry storage demonstrations are under way for Zircaloy-clad fuel from boiling water reactors (BWRs), pressurized heavy-water reactors (PHWRs), and pressurized water reactors (PWRs) in all four types of dry storage facilities. The demonstrations and related hot cell and laboratory tests are directed toward expanding the data base and establishing a licensing basis for dry storage of water reactor fuel. This report reviews the scope of dry interim storage technology, the performance of fuel and facility materials, the status of programs in several countries to license dry storage of water reactor fuel, and the characteristics of water reactor fuel that relate to dry storage conditions.

  9. Analysis of the influence of input data uncertainties on determining the reliability of reservoir storage capacity

    Directory of Open Access Journals (Sweden)

    Marton Daniel

    2015-12-01

    Full Text Available The paper contains a sensitivity analysis of the influence of uncertainties in the input hydrological, morphological and operating data required for the design of active reservoir conservation storage capacity and of the values achieved. By introducing uncertainties into the considered inputs of the water management analysis of a reservoir, the subsequently analysed reservoir storage capacity is also affected by uncertainties. The values of water outflows from the reservoir and the hydrological reliabilities are affected by uncertainties as well. A simulation model of reservoir behaviour incorporating this kind of calculation has been compiled, as described below. The model allows the solution results to be evaluated with uncertainties taken into consideration, contributing to a reduction in the occurrence of failure or shortage of water during reservoir operation in low-water and dry periods.
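
    The kind of uncertainty propagation the paper describes can be pictured with a toy Monte Carlo mass-balance simulation; the sketch below is illustrative only, with invented capacity, demand and inflow statistics, and is not the authors' simulation model.

```python
# Minimal sketch (illustrative only): propagating uncertainty in inflows and
# demand through a simple reservoir mass balance and reading off an
# occurrence-based reliability of supply. All numbers are assumed for demonstration.
import numpy as np

rng = np.random.default_rng(1)
n_runs, n_months = 2000, 120
capacity = 50.0                      # active storage capacity (hm^3), assumed
demand_nominal = 4.0                 # monthly demand (hm^3), assumed

failures = 0
for _ in range(n_runs):
    # Inflow series and demand perturbed to represent input-data uncertainty.
    inflow = rng.lognormal(mean=1.4, sigma=0.5, size=n_months)
    demand = demand_nominal * (1.0 + rng.normal(0.0, 0.05))
    storage, failed = capacity, False
    for q in inflow:
        storage = min(capacity, storage + q) - demand
        if storage < 0.0:            # demand cannot be met this month
            failed, storage = True, 0.0
    failures += failed

print("occurrence-based reliability: %.3f" % (1.0 - failures / n_runs))
```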

  10. A study of data representation in Hadoop to optimize data storage and search performance for the ATLAS EventIndex

    Science.gov (United States)

    Baranowski, Z.; Canali, L.; Toebbicke, R.; Hrivnac, J.; Barberis, D.

    2017-10-01

    This paper reports on the activities aimed at improving the architecture and performance of the ATLAS EventIndex implementation in Hadoop. The EventIndex contains tens of billions of event records, each of which consists of ∼100 bytes, all having the same probability of being searched or counted. Data formats represent one important area for optimizing the performance and storage footprint of applications based on Hadoop. This work reports on the production usage and on tests using several data formats including Map Files, Apache Parquet, Avro, and various compression algorithms. The query engine also plays a critical role in the architecture. We also report on the use of HBase for the EventIndex, focussing on the optimizations performed in production and on the scalability tests. Additional engines that have been tested include Cloudera Impala, in particular for its SQL interface, and the optimizations for data warehouse workloads and reports.
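
    As a hands-on illustration of how data formats and compression codecs affect the storage footprint (not the EventIndex code or schema), the sketch below writes the same synthetic table with different Parquet codecs via pyarrow, assuming pyarrow is installed; the column names and sizes are invented.

```python
# Minimal sketch (illustrative only): comparing the on-disk footprint of the
# same event-record-like table written with different Parquet compression
# codecs. Column names and sizes are assumptions, not the EventIndex schema.
import os
import numpy as np
import pyarrow as pa
import pyarrow.parquet as pq

n = 1_000_000
table = pa.table({
    "run_number":   np.random.randint(200_000, 400_000, n),
    "event_number": np.arange(n, dtype=np.int64),
    "lumi_block":   np.random.randint(0, 2000, n),
})

for codec in ("NONE", "SNAPPY", "ZSTD"):
    path = f"eventindex_{codec.lower()}.parquet"
    pq.write_table(table, path, compression=codec)
    print(codec, os.path.getsize(path) // 1024, "KiB")
```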

  11. A study of data representation in Hadoop to optimise data storage and search performance for the ATLAS EventIndex

    CERN Document Server

    AUTHOR|(CDS)2078799; The ATLAS collaboration; Canali, Luca; Toebbicke, Rainer; Hrivnac, Julius; Barberis, Dario

    2017-01-01

    This paper reports on the activities aimed at improving the architecture and performance of the ATLAS EventIndex implementation in Hadoop. The EventIndex contains tens of billions of event records, each of which consists of ∼100 bytes, all having the same probability of being searched or counted. Data formats represent one important area for optimizing the performance and storage footprint of applications based on Hadoop. This work reports on the production usage and on tests using several data formats including Map Files, Apache Parquet, Avro, and various compression algorithms. The query engine also plays a critical role in the architecture. We also report on the use of HBase for the EventIndex, focussing on the optimizations performed in production and on the scalability tests. Additional engines that have been tested include Cloudera Impala, in particular for its SQL interface, and the optimizations for data warehouse workloads and reports.

  12. A study of data representations in Hadoop to optimize data storage and search performance of the ATLAS EventIndex

    CERN Document Server

    Baranowski, Zbigniew; The ATLAS collaboration

    2016-01-01

    This paper reports on the activities aimed at improving the architecture and performance of the ATLAS EventIndex implementation in Hadoop. The EventIndex contains tens of billions of event records, each consisting of ~100 bytes, all having the same probability of being searched or counted. Data formats represent one important area for optimizing the performance and storage footprint of applications based on Hadoop. This work reports on the production usage and on tests using several data formats including Map Files, Apache Parquet, Avro, and various compression algorithms. The query engine also plays a critical role in the architecture. This paper also reports on the use of HBase for the EventIndex, focussing on the optimizations performed in production and on the scalability tests. Additional engines that have been tested include Cloudera Impala, in particular for its SQL interface, and the optimizations for data warehouse workloads and reports.

  13. Status of US storage efforts

    International Nuclear Information System (INIS)

    Leasburg, R.H.

    1984-01-01

    Tasks involved in the implementation of the Nuclear Waste Policy Act are discussed. The need for speedy action on applications to deal with spent fuel storage problems is stressed. The problems faced by the Virginia Electric and Power Company, where full core discharge capability at the 1600-megawatt Surry power station is expected to be reached in early 1986, are reviewed. It is pointed out that although the Nuclear Waste Policy Act does not apply in this case, the problems illustrate the situation that may be faced after the Act is implemented. Problems involved in intra-utility transshipments and dry cask storage of spent fuel from Surry, including transportation ordinances at state and local levels and approval for the use of dry casks for storage, are reported. The suggestion that dry casks be used for interim storage and eventual transport to monitored retrievable storage facilities or permanent storage sites is considered. It is pointed out that data from a proposed 3-utility demonstration program of dry cask storage of consolidated fuels and the storage of fuels in air should give information applicable to the timely implementation of the Nuclear Waste Policy Act.

  14. Annual Report: Carbon Storage

    Energy Technology Data Exchange (ETDEWEB)

    Strazisar, Brian [National Energy Technology Lab. (NETL), Pittsburgh, PA, (United States); Guthrie, George [National Energy Technology Lab. (NETL), Pittsburgh, PA, (United States)

    2012-09-30

    Activities include laboratory experimentation, field work, and numerical modeling. The work is divided into five theme areas (or first level tasks) that each address a key research need: Flow Properties of Reservoirs and Seals, Fundamental Processes and Properties, Estimates of Storage Potential, Verifying Storage Performance, and Geospatial Data Resources. The project also includes a project management effort which coordinates the activities of all the research teams.

  15. Intelligent Management System of Power Network Information Collection Under Big Data Storage

    Directory of Open Access Journals (Sweden)

    Qin Yingying

    2017-01-01

    Full Text Available With the development of the economy and society, big data storage in enterprise management has become a problem that cannot be ignored. How to better manage and optimize the allocation of tasks is an important factor in the sustainable development of an enterprise. Intelligent management of enterprise information has become a hot topic in management practice in the information age, presenting information to business managers in a more efficient, lower-cost, and global form. The system uses the SG-UAP development tools, which are based on the Eclipse development environment and suited to the Windows operating system, with Oracle as the database platform and Tomcat as the application server. The system uses a service-oriented architecture (SOA), provides RESTful services, and uses HTTP(S) as the communication protocol and JSON as the data format. The system is divided into two parts, the front-end and the back-end, providing functions such as user login, registration, password retrieval, internal personnel information management, and internal data display.

  16. A DTM MULTI-RESOLUTION COMPRESSED MODEL FOR EFFICIENT DATA STORAGE AND NETWORK TRANSFER

    Directory of Open Access Journals (Sweden)

    L. Biagi

    2012-08-01

    Full Text Available In recent years the technological evolution of terrestrial, aerial and satellite surveying has considerably increased the measurement accuracy and, consequently, the quality of the derived information. At the same time, the smaller and smaller limitations on data storage devices, in terms of capacity and cost, have allowed the storage and the elaboration of a bigger number of instrumental observations. A significant example is the terrain height surveyed by LIDAR (LIght Detection And Ranging) technology, where several height measurements for each square meter of land can be obtained. The availability of such a large quantity of observations is an essential requisite for an in-depth knowledge of the phenomena under study. But, at the same time, the most common Geographical Information Systems (GISs) show latency in visualizing and analyzing these kinds of data. This problem becomes more evident in the case of Internet GIS. These systems are based on the very frequent flow of geographical information over the internet and, for this reason, the bandwidth of the network and the size of the data to be transmitted are two fundamental factors to be considered in order to guarantee the actual usability of these technologies. In this paper we focus our attention on digital terrain models (DTMs) and we briefly analyse the problems concerning the definition of the minimal necessary information to store and transmit DTMs over a network, with a fixed tolerance, starting from a huge number of observations. Then we propose an innovative compression approach for sparse observations by means of multi-resolution spline function approximation. The method is able to provide metrical accuracy at least comparable to that provided by the most common deterministic interpolation algorithms (inverse distance weighting, local polynomial, radial basis functions). At the same time it dramatically reduces the amount of information required for storing or for transmitting and rebuilding a
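
    The idea of storing only the information needed to rebuild a DTM within a fixed tolerance can be illustrated with a much cruder scheme than the paper's multi-resolution splines: a coarse block-mean surface plus only those residuals that exceed the tolerance. The sketch below is that simplification, with an invented synthetic terrain and tolerance.

```python
# Minimal sketch (not the authors' spline method): a two-level multi-resolution
# approximation of a DTM grid -- a coarse block-mean surface plus only those
# residuals that exceed a fixed tolerance. Sizes and tolerance are assumptions.
import numpy as np

rng = np.random.default_rng(2)
n, block, tol = 512, 4, 1.0             # DTM size, coarse block size, tolerance (m)

# Synthetic terrain: smooth large-scale relief plus small-scale roughness.
x, y = np.meshgrid(np.linspace(0, 2 * np.pi, n), np.linspace(0, 2 * np.pi, n))
dtm = 30 * np.sin(x) * np.cos(y) + 0.3 * rng.standard_normal((n, n))

# Level 0: coarse surface = mean height of each block (lossy, very compact).
coarse = dtm.reshape(n // block, block, n // block, block).mean(axis=(1, 3))
approx = np.kron(coarse, np.ones((block, block)))

# Level 1: keep only residuals larger than the tolerance (sparse corrections).
residual = dtm - approx
mask = np.abs(residual) > tol
stored_values = coarse.size + mask.sum()

print("original samples :", dtm.size)
print("stored samples   :", stored_values)
print("max error (m)    : %.3f" % np.abs(np.where(mask, 0.0, residual)).max())
```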

  17. Spent fuel behaviour during dry storage - a review

    International Nuclear Information System (INIS)

    Shivakumar, V.; Anantharaman, K.

    1997-09-01

    One of the strategies employed for the management of spent fuel prior to its final disposal/reprocessing is dry storage in casks, after the fuel has been sufficiently cooled in spent fuel pools. In this interim storage, one of the main considerations is that the fuel should retain its integrity to ensure that (a) the radiological health hazard remains minimal and (b) the fuel is retrievable for downstream fuel management processes such as geological disposal or reprocessing. For dry storage of spent fuel in air, oxidation of the exposed UO₂ is the most severe of the phenomena affecting fuel integrity. This is kept within acceptable limits for the desired storage time by limiting the fuel temperature in the storage cask. The limit on the fuel temperature is met by placing suitable limits on the maximum burn-up of the fuel and the minimum cooling period in the storage pool, and by an optimum arrangement of fuel bundles in the storage cask from heat removal considerations. The oxidation of UO₂ by moist air has more deleterious effects on the integrity of fuel than that by dry air. The removal of moisture from the storage cask is therefore a very important aspect of dry storage practice. The kinetics of the oxidation phenomena at temperatures expected during dry storage in air are very slow, and therefore the majority of the existing data are based on extrapolation of data obtained at higher fuel temperatures. This, together with the complex effects of factors like fission products in the fuel and radiolysis of the storage medium, has necessitated a conservative limiting criterion. The data generated by various experimental programmes and results from the ongoing programmes have shown that dry storage is a safe and economical practice. (author)

  18. Using Hadoop as a grid storage element

    International Nuclear Information System (INIS)

    Bockelman, Brian

    2009-01-01

    Hadoop is an open-source data processing framework that includes a scalable, fault-tolerant distributed file system, HDFS. Although HDFS was designed to work in conjunction with Hadoop's job scheduler, we have re-purposed it to serve as a grid storage element by adding GridFTP and SRM servers. We have tested the system thoroughly in order to understand its scalability and fault tolerance. The turn-on of the Large Hadron Collider (LHC) in 2009 poses a significant data management and storage challenge; we have been working to introduce HDFS as a solution for data storage for one LHC experiment, the Compact Muon Solenoid (CMS).

  19. Spent fuel storage requirements, 1990--2040

    International Nuclear Information System (INIS)

    Walling, R.; Bierschbach, M.

    1990-11-01

    Historical inventories of spent fuel are combined with US Department of Energy (DOE) projections of future discharges from commercial nuclear reactors in the United States to provide estimates of spent fuel storage requirements over the next 51 years, through the year 2040. The needs for storage capacity beyond that presently available in the pools are estimated. These estimates incorporate the maximum capacities within current and planned in-pool storage facilities and any planned transshipments of fuel to other reactors or facilities. Existing and future dry storage facilities are also discussed. Historical data through December 1989 are derived from the 1990 Form RW-859 data survey of nuclear utilities. Projected discharges through the end of reactor life are based on DOE estimates of future nuclear capacity, generation, and spent fuel discharges. 15 refs., 3 figs., 11 tabs

  20. Programs for data accumulation and storage from the multicrate CAMAC systems based on the M-6000 computer

    International Nuclear Information System (INIS)

    Antonichev, G.M.; Shilkin, I.P.; Bespalova, T.V.; Golutvin, I.A.; Maslov, V.V.; Nevskaya, N.A.

    1978-01-01

    Programs for data accumulation and storage from multicrate CAMAC systems organized in parallel into a branch and connected with the M-6000 computer via the branch interface are described. Program operation in different modes of CAMAC apparatus is described. All the programs operate within the real time disk operation system

  1. CMS users data management service integration and first experiences with its NoSQL data storage

    International Nuclear Information System (INIS)

    Riahi, H; Spiga, D; Cinquilli, M; Boccali, T; Ciangottini, D; Santocchia, A; Hernàndez, J M; Konstantinov, P; Mascheroni, M

    2014-01-01

    The distributed data analysis workflow in CMS assumes that jobs run in a different location to where their results are finally stored. Typically the user outputs must be transferred from one site to another by a dedicated CMS service, AsyncStageOut. This new service was originally developed to address the inefficiency in using CMS computing resources when transferring analysis job outputs synchronously from the job execution node to the remote site, once they are produced. The AsyncStageOut is designed as a thin application relying only on the NoSQL database (CouchDB) as input and data storage. It has progressed from a limited prototype to a highly adaptable service which manages and monitors all the user file steps, namely file transfer and publication. The AsyncStageOut is integrated with the Common CMS/Atlas Analysis Framework. It foresees the management of nearly 200k user files per day from close to 1000 individual users per month with minimal delays, providing real-time monitoring and reports to users and service operators, while being highly available. The associated data volume represents a new set of challenges in the areas of database scalability and service performance and efficiency. In this paper, we present an overview of the AsyncStageOut model and the integration strategy with the Common Analysis Framework. The motivations for using the NoSQL technology are also presented, as well as data design and the techniques used for efficient indexing and monitoring of the data. We describe the deployment model for the high availability and scalability of the service. We also discuss the hardware requirements and the results achieved as they were determined by testing with actual data and realistic loads during the commissioning and the initial production phase with the Common Analysis Framework.

  2. CMS users data management service integration and first experiences with its NoSQL data storage

    Science.gov (United States)

    Riahi, H.; Spiga, D.; Boccali, T.; Ciangottini, D.; Cinquilli, M.; Hernàndez, J. M.; Konstantinov, P.; Mascheroni, M.; Santocchia, A.

    2014-06-01

    The distributed data analysis workflow in CMS assumes that jobs run in a different location to where their results are finally stored. Typically the user outputs must be transferred from one site to another by a dedicated CMS service, AsyncStageOut. This new service was originally developed to address the inefficiency in using CMS computing resources when transferring analysis job outputs synchronously from the job execution node to the remote site, once they are produced. The AsyncStageOut is designed as a thin application relying only on the NoSQL database (CouchDB) as input and data storage. It has progressed from a limited prototype to a highly adaptable service which manages and monitors all the user file steps, namely file transfer and publication. The AsyncStageOut is integrated with the Common CMS/Atlas Analysis Framework. It foresees the management of nearly 200k user files per day from close to 1000 individual users per month with minimal delays, providing real-time monitoring and reports to users and service operators, while being highly available. The associated data volume represents a new set of challenges in the areas of database scalability and service performance and efficiency. In this paper, we present an overview of the AsyncStageOut model and the integration strategy with the Common Analysis Framework. The motivations for using the NoSQL technology are also presented, as well as data design and the techniques used for efficient indexing and monitoring of the data. We describe the deployment model for the high availability and scalability of the service. We also discuss the hardware requirements and the results achieved as they were determined by testing with actual data and realistic loads during the commissioning and the initial production phase with the Common Analysis Framework.

  3. SSCL DD2 mass storage

    International Nuclear Information System (INIS)

    Mestad, S.L.

    1992-09-01

    The SSCL detector collaboration has determined that the lab will need data storage devices capable of handling data rates of 100 megabytes/second and storing several petabytes per year. These needs would be difficult to meet with the typical devices currently available. A new high-speed, high-density tape drive has been integrated with an SGI system at the SSCL which is capable of meeting the detector data storage requirements. This paper describes the goals and stages of the integration project, the lessons learned, and the additional work planned to make effective use of the DD2 tape drive.

  4. Data on the changes of the mussels' metabolic profile under different cold storage conditions

    Directory of Open Access Journals (Sweden)

    Violetta Aru

    2016-06-01

    Full Text Available One of the main problems of seafood marketing is the ease with which fish and shellfish deteriorate after death. 1H NMR spectroscopy and microbiological analysis were applied to gain in-depth insight into the effects of cold storage (4 °C and 0 °C) on the spoilage of the mussel Mytilus galloprovincialis. This data article provides information on the average distribution of the microbial loads in mussel specimens and on the acquisition, processing, and multivariate analysis of the 1H NMR spectra from the hydrosoluble phase of stored mussels. This data article relates to the research article entitled “Metabolomics analysis of shucked mussels’ freshness” (Aru et al., 2016) [1].

  5. Grand Challenges facing Storage Systems

    CERN Multimedia

    CERN. Geneva

    2004-01-01

    In this talk, we will discuss the future of storage systems. In particular, we will focus on several big challenges which we are facing in storage, such as being able to build, manage and back up really massive storage systems, being able to find information of interest, being able to do long-term archival of data, and so on. We also present ideas and research being done to address these challenges, and provide a perspective on how we expect these challenges to be resolved as we go forward.

  6. Improving the analysis, storage and sharing of neuroimaging data using relational databases and distributed computing.

    Science.gov (United States)

    Hasson, Uri; Skipper, Jeremy I; Wilde, Michael J; Nusbaum, Howard C; Small, Steven L

    2008-01-15

    The increasingly complex research questions addressed by neuroimaging research impose substantial demands on computational infrastructures. These infrastructures need to support management of massive amounts of data in a way that affords rapid and precise data analysis, to allow collaborative research, and to achieve these aims securely and with minimum management overhead. Here we present an approach that overcomes many current limitations in data analysis and data sharing. This approach is based on open source database management systems that support complex data queries as an integral part of data analysis, flexible data sharing, and parallel and distributed data processing using cluster computing and Grid computing resources. We assess the strengths of these approaches as compared to current frameworks based on storage of binary or text files. We then describe in detail the implementation of such a system and provide a concrete description of how it was used to enable a complex analysis of fMRI time series data.

  7. Assimilating GRACE terrestrial water storage data into a conceptual hydrology model for the River Rhine

    Science.gov (United States)

    Widiastuti, E.; Steele-Dunne, S. C.; Gunter, B.; Weerts, A.; van de Giesen, N.

    2009-12-01

    Terrestrial water storage (TWS) is a key component of the terrestrial and global hydrological cycles, and plays a major role in the Earth’s climate. The Gravity Recovery and Climate Experiment (GRACE) twin satellite mission provided the first space-based dataset of TWS variations, albeit with coarse resolution and limited accuracy. Here, we examine the value of assimilating GRACE observations into a well-calibrated conceptual hydrology model of the Rhine river basin. In this study, the ensemble Kalman filter (EnKF) and smoother (EnKS) were applied to assimilate the GRACE TWS variation data into the HBV-96 rainfall run-off model, from February 2003 to December 2006. Two GRACE datasets were used, the DMT-1 models produced at TU Delft, and the CSR-RL04 models produced by UT-Austin. Each center uses its own data processing and filtering methods, yielding two different estimates of TWS variations and therefore two sets of assimilated TWS estimates. To validate the results, the model estimated discharge after the data assimilation was compared with measured discharge at several stations. As expected, the updated TWS was generally somewhere between the modeled and observed TWS in both experiments and the variance was also lower than both the prior error covariance and the assumed GRACE observation error. However, the impact on the discharge was found to depend heavily on the assimilation strategy used, in particular on how the TWS increments were applied to the individual storage terms of the hydrology model.
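
    The assimilation step itself can be illustrated with a toy perturbed-observation ensemble Kalman filter update for a single, directly observed TWS value; the sketch below is not the study's HBV-96 setup, and the ensemble size, storage values and error levels are assumed.

```python
# Minimal sketch (illustrative only): one ensemble Kalman filter analysis step
# that nudges modelled total water storage (TWS) toward a GRACE-like
# observation. Ensemble size, error levels and storage values are assumed.
import numpy as np

rng = np.random.default_rng(3)
n_ens = 32
state = rng.normal(loc=120.0, scale=15.0, size=n_ens)    # modelled TWS anomaly (mm)

obs = 95.0                  # GRACE TWS anomaly observation (mm), assumed
obs_err = 20.0              # assumed GRACE observation error (mm, 1 sigma)

# Perturbed-observation EnKF update for a scalar state observed directly (H = 1).
obs_perturbed = obs + rng.normal(0.0, obs_err, size=n_ens)
p_forecast = np.var(state, ddof=1)                       # forecast error variance
gain = p_forecast / (p_forecast + obs_err ** 2)          # Kalman gain
analysis = state + gain * (obs_perturbed - state)

print("forecast mean %.1f -> analysis mean %.1f (gain %.2f)"
      % (state.mean(), analysis.mean(), gain))
```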

  8. Energy Storage.

    Science.gov (United States)

    Eaton, William W.

    Described are technological considerations affecting storage of energy, particularly electrical energy. The background and present status of energy storage by batteries, water storage, compressed air storage, flywheels, magnetic storage, hydrogen storage, and thermal storage are discussed followed by a review of development trends. Included are…

  9. Design of a large remote seismic exploration data acquisition system, with the architecture of a distributed storage area network

    International Nuclear Information System (INIS)

    Cao, Ping; Song, Ke-zhu; Yang, Jun-feng; Ruan, Fu-ming

    2011-01-01

    Nowadays, seismic exploration data acquisition (DAQ) systems have developed into remote forms covering large areas. In this kind of application, several features must be considered. Firstly, there are many sensors which are placed remotely. Secondly, the total data throughput is high. Thirdly, optical fibres are not suitable everywhere because of cost, harsh operating environments, etc. Fourthly, expansibility and upgradability are a must for this kind of application. It is a challenge to design this kind of remote DAQ (rDAQ): data transmission, clock synchronization, data storage, etc. must be considered carefully. A four-level hierarchical model of the rDAQ is proposed, in which the rDAQ is divided into four different function levels. From this model, a simple and clear architecture based on a distributed storage area network is proposed. rDAQs with this architecture have the advantages of flexible configuration, expansibility and stability. The architecture can be applied to the design and realization of systems ranging from simple single-cable systems to large-scale exploration DAQs.

  10. Archival storage solutions for PACS

    Science.gov (United States)

    Chunn, Timothy

    1997-05-01

    While there are many, one of the inhibitors to the widespread diffusion of PACS systems has been the lack of robust, cost-effective digital archive storage solutions. Moreover, an automated Nearline solution is key to a central, sharable data repository, enabling many applications such as PACS, telemedicine and teleradiology, and information warehousing and data mining for research such as patient outcome analysis. Selecting the right solution depends on a number of factors: capacity requirements, write and retrieval performance requirements, scalability in capacity and performance, configuration architecture and flexibility, subsystem availability and reliability, security requirements, system cost, achievable benefits and cost savings, investment protection, strategic fit and more. This paper addresses many of these issues. It compares and positions optical disk and magnetic tape technologies, which are the predominant archive media today. Price and performance comparisons will be made at different archive capacities, and the effect of file size on storage system throughput will be analyzed. The concept of automated migration of images from high-performance, high-cost storage devices to high-capacity, low-cost storage devices will be introduced as a viable way to minimize overall storage costs for an archive. The concept of access density will also be introduced and applied to the selection of the most cost-effective archive solution.
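
    The automated migration idea described above can be pictured as a simple policy that demotes studies not accessed within some threshold from fast disk to a cheaper Nearline tier; the sketch below is purely illustrative, with invented thresholds and records.

```python
# Minimal sketch (illustrative only): an age/access-based migration policy that
# moves studies from fast, expensive storage to a cheap Nearline tier, the idea
# behind automated archive migration. Thresholds and records are assumptions.
from datetime import date, timedelta

studies = [
    {"id": "CT-0001", "last_access": date.today() - timedelta(days=3),   "tier": "disk"},
    {"id": "MR-0417", "last_access": date.today() - timedelta(days=240), "tier": "disk"},
    {"id": "CR-0933", "last_access": date.today() - timedelta(days=900), "tier": "disk"},
]

MIGRATE_AFTER = timedelta(days=180)      # assumed policy: cold after 6 months

def migrate(studies):
    """Move studies not accessed within the threshold to the Nearline tier."""
    for s in studies:
        if s["tier"] == "disk" and date.today() - s["last_access"] > MIGRATE_AFTER:
            s["tier"] = "nearline"
    return studies

for s in migrate(studies):
    print(s["id"], "->", s["tier"])
```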

  11. Spatially pooled depth-dependent reservoir storage, elevation, and water-quality data for selected reservoirs in Texas, January 1965-January 2010

    Science.gov (United States)

    Burley, Thomas E.; Asquith, William H.; Brooks, Donald L.

    2011-01-01

    The U.S. Geological Survey (USGS), in cooperation with Texas Tech University, constructed a dataset of selected reservoir storage (daily and instantaneous values), reservoir elevation (daily and instantaneous values), and water-quality data from 59 reservoirs throughout Texas. The period of record for the data extends, at most, from January 1965 to January 2010. Data were acquired from existing databases, spreadsheets, delimited text files, and hard-copy reports. The goal was to obtain as much data as possible; therefore, no data acquisition restrictions specifying a particular time window were used. Primary data sources include the USGS National Water Information System, the Texas Commission on Environmental Quality Surface Water-Quality Management Information System, and the Texas Water Development Board monthly Texas Water Condition Reports. Additional water-quality data for six reservoirs were obtained from USGS Texas Annual Water Data Reports. Data were combined from the multiple sources to create as complete a set of properties and constituents as the disparate databases allowed. By devising a unique per-reservoir short name to represent all sites on a reservoir regardless of their source, all sampling sites at a reservoir were spatially pooled by reservoir and temporally combined by date. Reservoir selection was based on various criteria including the availability of water-quality properties and constituents that might affect the trophic status of the reservoir and could also be important for understanding possible effects of climate change in the future. Other considerations in the selection of reservoirs included the general reservoir-specific period of record, the availability of concurrent reservoir storage or elevation data to match with water-quality data, and the availability of sample depth measurements. Additional separate selection criteria included historic information pertaining to blooms of golden algae. Physical properties and constituents were water

  12. Costing of spent nuclear fuel storage

    International Nuclear Information System (INIS)

    2009-01-01

    This report deals with economic analysis and cost estimation, based on exploration of relevant issues, including a survey of analytical tools for assessment and updated information on the market and financial issues associated with spent fuel storage. The development of new storage technologies and changes in some of the circumstances affecting the costs of spent fuel storage are also incorporated. This report aims to provide comprehensive information on spent fuel storage costs to engineers and nuclear professionals as well as other stakeholders in the nuclear industry. This report is meant to provide informative guidance on economic aspects involved in selecting a spent fuel storage system, including basic methods of analysis and cost data for project evaluation and comparison of storage options, together with financial and business aspects associated with spent fuel storage. After the review of technical options for spent fuel storage in Section 2, cost categories and components involved in the lifecycle of a storage facility are identified in Section 3 and factors affecting costs of spent fuel storage are then reviewed in Section 4. Methods for cost estimation and analysis are introduced in Section 5, and other financial and business aspects associated with spent fuel storage are discussed in Section 6.

  13. Underground Storage Tanks - Storage Tank Locations

    Data.gov (United States)

    NSGIC Education | GIS Inventory — A Storage Tank Location is a DEP primary facility type, and its sole sub-facility is the storage tank itself. Storage tanks are aboveground or underground, and are...

  14. Analysis on applicable error-correcting code strength of storage class memory and NAND flash in hybrid storage

    Science.gov (United States)

    Matsui, Chihiro; Kinoshita, Reika; Takeuchi, Ken

    2018-04-01

    A hybrid of storage class memory (SCM) and NAND flash is a promising technology for high performance storage. Error correction is inevitable on SCM and NAND flash because their bit error rate (BER) increases with write/erase (W/E) cycles, data retention, and program/read disturb. In addition, scaling and multi-level cell technologies increase BER. However, error-correcting code (ECC) degrades storage performance because of extra memory reading and encoding/decoding time. Therefore, the applicable ECC strength of SCM and NAND flash is evaluated independently, by fixing the ECC strength of one memory in the hybrid storage while varying the other. As a result, a weak BCH ECC with a small number of correctable bits is recommended for the hybrid storage with large SCM capacity because the SCM is accessed frequently. In contrast, a strong but long-latency LDPC ECC can be applied to NAND flash in the hybrid storage with large SCM capacity because large-capacity SCM improves the storage performance.
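
    The trade-off described above can be sketched with a toy latency model: when most reads hit the SCM tier, the SCM ECC decode time dominates the average latency, so a short (weak) code pays off there, while the less frequently read NAND can tolerate a long LDPC decode. All numbers below are illustrative assumptions, not values from the paper.

      # Hypothetical latency model for an SCM/NAND hybrid; the timings are
      # illustrative placeholders, not measurements from the cited work.
      def avg_read_latency_us(scm_hit_ratio, scm_ecc_us, nand_ecc_us,
                              scm_read_us=1.0, nand_read_us=50.0):
          """Average read latency when a fraction of reads hit the SCM tier."""
          scm = scm_read_us + scm_ecc_us        # short ECC keeps SCM reads fast
          nand = nand_read_us + nand_ecc_us     # NAND can absorb a longer LDPC decode
          return scm_hit_ratio * scm + (1 - scm_hit_ratio) * nand

      # Large SCM capacity -> most reads hit SCM -> SCM ECC latency dominates.
      for hit in (0.5, 0.9):
          weak = avg_read_latency_us(hit, scm_ecc_us=0.5, nand_ecc_us=20.0)
          strong = avg_read_latency_us(hit, scm_ecc_us=5.0, nand_ecc_us=20.0)
          print(f"SCM hit ratio {hit:.0%}: weak SCM ECC {weak:.1f} us, "
                f"strong SCM ECC {strong:.1f} us")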

  15. Natural Gas Storage Facilities, US, 2010, Platts

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Platts Natural Gas Storage Facilities geospatial data layer contains points that represent locations of facilities used for natural gas storage in the United...

  16. FPGA based data-flow injection module at 10 Gbit/s reading data from network exported storage and using standard protocols

    International Nuclear Information System (INIS)

    Lemouzy, B; Garnier, J-C; Neufeld, N

    2011-01-01

    The goal of the LHCb readout upgrade is to accelerate the DAQ to 40 MHz. Such a DAQ system will certainly employ 10 Gigabit or similar technologies and might also need new networking protocols such as a customized, light-weight TCP or more specialized protocols. A test module is being implemented to be integrated in the existing LHCb infrastructure. It is a multiple 10-Gigabit traffic generator, driven by a Stratix IV FPGA, and flexible enough to generate LHCb's raw data packets. Traffic data are either internally generated or read from external storage via the network. We have implemented the light-weight industry-standard protocol ATA over Ethernet (AoE), and we present an outlook on using a file system on these network-exported disk drives.

  17. Shredder: GPU-Accelerated Incremental Storage and Computation

    OpenAIRE

    Bhatotia, Pramod; Rodrigues, Rodrigo; Verma, Akshat

    2012-01-01

    Redundancy elimination using data deduplication and incremental data processing has emerged as an important technique to minimize storage and computation requirements in data center computing. In this paper, we present the design, implementation and evaluation of Shredder, a high performance content-based chunking framework for supporting incremental storage and computation systems. Shredder exploits the massively parallel processing power of GPUs to overcome the CPU bottlenecks of content-ba...
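
    Shredder's GPU implementation is not reproduced here, but the underlying idea of content-based chunking can be sketched on the CPU with a rolling hash: chunk boundaries are chosen from the data itself, so unchanged content produces identical chunks (and therefore deduplicates) even when it shifts position in the stream. The window, mask and minimum-size parameters below are arbitrary.

      def chunk_boundaries(data: bytes, window: int = 16,
                           mask: int = 0x1FFF, min_size: int = 512) -> list[int]:
          """Return end offsets of content-defined chunks.

          A boundary is declared where a rolling hash of the last `window` bytes
          matches a bit mask, so identical content tends to produce identical
          chunks regardless of where it sits in the stream.
          """
          BASE, MOD = 257, 1 << 32
          top = pow(BASE, window - 1, MOD)      # weight of the byte leaving the window
          boundaries, h, start = [], 0, 0
          for i, b in enumerate(data):
              h = (h * BASE + b) % MOD          # slide the window one byte to the right
              if i >= window:
                  h = (h - data[i - window] * top * BASE) % MOD
              if i + 1 - start >= min_size and (h & mask) == 0:
                  boundaries.append(i + 1)
                  start = i + 1
          if start < len(data):
              boundaries.append(len(data))
          return boundaries

      # A deduplicating store would then fingerprint each chunk (e.g. SHA-256)
      # and keep only one copy; Shredder accelerates this chunking step on GPUs.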

  18. The collection, storage and use of equipment performance data for the safety and reliability assessment of nuclear power plants

    International Nuclear Information System (INIS)

    Fothergill, C.D.H.

    1975-01-01

    It has been characteristic of the Nuclear Industry that it should grow up in an atmosphere where reliability and operational safety considerations have been of vital importance. Consequently all aspects of Nuclear Power Reactor design, construction and operation (in the U.K.A.E.A.) are subjected to rigorous reliability assessments, beginning with the automatic protective devices and the safety shut-down systems. This has resulted in the setting up of large and small private data stores to support this upsurgence of Safety and Reliability assessment work. Unfortunately, much of the information being stored and published falls short of the minimum requirements of Safety Assessors and Reliability Analysts who need to make use of it. That there is still an urgent need for more work to be done in the Reliability Data field is universally acknowledged. The characteristics which make up good quality reliability data must be defined and achievable minimum standards must be set for its identification, collection, storage and retrieval. To this end the United Kingdom Atomic Energy Authority have set up the Systems Reliability Service Data Bank. This includes a computerized storage facility comprised of two principal data stores: (i) Reliability Data Store, (ii) Event Data Store. The figures available in the Reliability Data Store range from those relating to the lifetimes of minute components to those obtained from the assessment of whole plants and complete assemblies. These data have been accumulated from many reliable sources both inside and outside the Nuclear Industry, including the transfer of 'live' data generated from the results of reliability surveillance exercises associated with Event Data collection. Computer techniques developed specifically for the Reliability Data Store enable further 'processing' of these data to be carried out. The Event Data Store consists of three discrete computerized data stores, each one providing the necessary storage, retrieval and

  19. Integrating new Storage Technologies into EOS

    CERN Document Server

    Peters, Andreas J; Rocha, Joaquim; Lensing, Paul

    2015-01-01

    The EOS[1] storage software was designed to cover CERN disk-only storage use cases in the medium-term trading scalability against latency. To cover and prepare for long-term requirements the CERN IT data and storage services group (DSS) is actively conducting R&D and open source contributions to experiment with a next generation storage software based on CEPH[3] and ethernet enabled disk drives. CEPH provides a scale-out object storage system RADOS and additionally various optional high-level services like S3 gateway, RADOS block devices and a POSIX compliant file system CephFS. The acquisition of CEPH by Redhat underlines the promising role of CEPH as the open source storage platform of the future. CERN IT is running a CEPH service in the context of OpenStack on a moderate scale of 1 PB replicated storage. Building a 100+PB storage system based on CEPH will require software and hardware tuning. It is of capital importance to demonstrate the feasibility and possibly iron out bottlenecks and blocking issu...

  20. Automated load balancing in the ATLAS high-performance storage software

    CERN Document Server

    Le Goff, Fabrice; The ATLAS collaboration

    2017-01-01

    The ATLAS experiment collects proton-proton collision events delivered by the LHC accelerator at CERN. The ATLAS Trigger and Data Acquisition (TDAQ) system selects, transports and eventually records event data from the detector at several gigabytes per second. The data are recorded on transient storage before being delivered to permanent storage. The transient storage consists of high-performance direct-attached storage servers accounting for about 500 hard drives. The transient storage operates dedicated software in the form of a distributed multi-threaded application. The workload includes both CPU-demanding and IO-oriented tasks. This paper presents the original application threading model for this particular workload, discussing the load-sharing strategy among the available CPU cores. The limitations of this strategy were reached in 2016 due to changes in the trigger configuration involving a new data distribution pattern. We then describe a novel data-driven load-sharing strategy, designed to automatical...

  1. An investigation of used electronics return flows: A data-driven approach to capture and predict consumers storage and utilization behavior

    Energy Technology Data Exchange (ETDEWEB)

    Sabbaghi, Mostafa, E-mail: mostafas@buffalo.edu [Industrial and Systems Engineering Department, State University of New York, University at Buffalo, 437 Bell Hall, Buffalo, NY (United States); Esmaeilian, Behzad, E-mail: b.esmaeilian@neu.edu [Healthcare Systems Engineering Institute, Northeastern University, Boston, MA 02115 (United States); Raihanian Mashhadi, Ardeshir, E-mail: ardeshir@buffalo.edu [Mechanical and Aerospace Engineering, State University of New York, University at Buffalo, 437 Bell Hall, Buffalo, NY (United States); Behdad, Sara, E-mail: sarabehd@buffalo.edu [Industrial and Systems Engineering Department, State University of New York, University at Buffalo, 437 Bell Hall, Buffalo, NY (United States); Mechanical and Aerospace Engineering, State University of New York, University at Buffalo, 437 Bell Hall, Buffalo, NY (United States); Cade, Willie, E-mail: willie@pcrr.com [PC Rebuilder and Recyclers, 4734 W Chicago Ave, Chicago, IL 60651-3322 (United States)

    2015-02-15

    Highlights: • We analyzed a data set of HDDs returned to an e-waste collection site. • We studied factors that affect the storage behavior. • Consumer type, brand and size are among factors which affect the storage behavior. • Commercial consumers have stored computers more than household consumers. • Machine learning models were used to predict the storage behavior. - Abstract: Consumers often have a tendency to store their used, old or non-functional electronics for a period of time before they discard them and return them to the waste stream. This behavior increases the obsolescence rate of used but still-functional products, reducing the profitability that could result from End-of-Use (EOU) treatments such as reuse, upgrade, and refurbishment. These types of behaviors are influenced by several product and consumer-related factors such as consumers’ traits and lifestyles, technology evolution, product design features, product market value, and pro-environmental stimuli. A better understanding of different groups of consumers, their utilization and storage behavior, and the connection of these behaviors with product design features helps Original Equipment Manufacturers (OEMs) and the recycling and recovery industry better overcome the challenges resulting from the undesirable storage of used products. This paper aims at providing an insightful statistical analysis of the dynamic nature of Electronic Waste (e-waste) by studying the effects of design characteristics, brand and consumer type on the electronics usage time and end-of-use time-in-storage. A database consisting of 10,063 Hard Disk Drives (HDD) of used personal computers returned to a remanufacturing facility located in Chicago, IL, USA during 2011–2013 has been selected as the basis for this study. The results show that commercial consumers have stored computers more than household consumers regardless of brand and capacity factors. Moreover, a heterogeneous storage behavior is

  2. An investigation of used electronics return flows: A data-driven approach to capture and predict consumers storage and utilization behavior

    International Nuclear Information System (INIS)

    Sabbaghi, Mostafa; Esmaeilian, Behzad; Raihanian Mashhadi, Ardeshir; Behdad, Sara; Cade, Willie

    2015-01-01

    Highlights: • We analyzed a data set of HDDs returned to an e-waste collection site. • We studied factors that affect the storage behavior. • Consumer type, brand and size are among factors which affect the storage behavior. • Commercial consumers have stored computers more than household consumers. • Machine learning models were used to predict the storage behavior. - Abstract: Consumers often have a tendency to store their used, old or non-functional electronics for a period of time before they discard them and return them to the waste stream. This behavior increases the obsolescence rate of used but still-functional products, reducing the profitability that could result from End-of-Use (EOU) treatments such as reuse, upgrade, and refurbishment. These types of behaviors are influenced by several product and consumer-related factors such as consumers’ traits and lifestyles, technology evolution, product design features, product market value, and pro-environmental stimuli. A better understanding of different groups of consumers, their utilization and storage behavior, and the connection of these behaviors with product design features helps Original Equipment Manufacturers (OEMs) and the recycling and recovery industry better overcome the challenges resulting from the undesirable storage of used products. This paper aims at providing an insightful statistical analysis of the dynamic nature of Electronic Waste (e-waste) by studying the effects of design characteristics, brand and consumer type on the electronics usage time and end-of-use time-in-storage. A database consisting of 10,063 Hard Disk Drives (HDD) of used personal computers returned to a remanufacturing facility located in Chicago, IL, USA during 2011–2013 has been selected as the basis for this study. The results show that commercial consumers have stored computers more than household consumers regardless of brand and capacity factors. Moreover, a heterogeneous storage behavior is

  3. Cathodic Protection for Above Ground Storage Tank Bottom Using Data Acquisition

    Directory of Open Access Journals (Sweden)

    Naseer Abbood Issa Al Haboubi

    2015-07-01

    Impressed-current cathodic protection controlled by computer is an ideal solution to changes in environmental factors and long-term coating degradation. The protection potential distribution achieved and the current demand on the anode can be regulated to the protection criteria, to achieve effective protection of the system. In this paper, the cathodic protection problem of an above-ground steel storage tank was investigated using impressed-current cathodic protection with a controlled-potential electrical system to manage variations in soil resistivity. A corrosion controller was implemented for the above-ground tank in LabVIEW, where the tank's bottom-to-soil potential was manipulated to the desired set point (protection criterion 850 mV). National Instruments Data Acquisition (NI-DAQ) and PC controllers for the tank corrosion control system provide a quick response to reach steady-state conditions under any kind of disturbance.
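
    The control idea, driving the measured tank-to-soil potential toward the protection criterion by adjusting the impressed current, can be sketched as a simple proportional loop. The plant model, gain and the sign convention (treating the criterion as −850 mV versus a Cu/CuSO4 reference) are assumptions made for illustration and are unrelated to the actual NI-DAQ/LabVIEW implementation.

      SETPOINT_MV = -850.0   # protection criterion, potential vs. Cu/CuSO4 reference (assumed sign)
      KP = 0.002             # proportional gain, amperes per mV of error (illustrative)

      def simulated_potential_mv(current_amps: float) -> float:
          """Crude stand-in for the tank-to-soil potential response: the unprotected
          potential sits near -600 mV and shifts more negative as current increases."""
          return -600.0 - 300.0 * current_amps

      def control_loop(steps: int = 20) -> float:
          current = 0.0
          for _ in range(steps):
              error = simulated_potential_mv(current) - SETPOINT_MV  # > 0: too positive
              current = max(0.0, current + KP * error)               # apply more current
          return simulated_potential_mv(current)

      print(f"settled potential: {control_loop():.1f} mV")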

  4. Towards Regional, Error-Bounded Landscape Carbon Storage Estimates for Data-Deficient Areas of the World

    DEFF Research Database (Denmark)

    Willcock, Simon; Phillips, Oliver L.; Platts, Philip J.

    2012-01-01

    estimates for the same study area extracted from five published Africa-wide or global studies show a mean carbon storage value of ∼50% of that reported using our regional values, with four of the five studies reporting lower carbon storage values. This suggests that carbon storage may have been...

  5. Metabolomic analysis of platelets during storage

    DEFF Research Database (Denmark)

    Paglia, Giuseppe; Sigurjónsson, Ólafur E; Rolfsson, Óttar

    2015-01-01

    BACKGROUND: Platelet concentrates (PCs) can be prepared using three methods: platelet (PLT)-rich plasma, apheresis, and buffy coat. The aim of this study was to obtain a comprehensive data set that describes metabolism of buffy coat-derived PLTs during storage and to compare it with a previously...... published parallel data set obtained for apheresis-derived PLTs. STUDY DESIGN AND METHODS: During storage we measured more than 150 variables in 8 PLT units, prepared by the buffy coat method. Samples were collected at seven different time points resulting in a data set containing more than 8000...... after their collection. The transition was evident in PLT produced by both production methods. Apheresis-derived PLTs showed a clearer phenotype of PLT activation during early days of storage. The activated phenotype of apheresis PLTs was accompanied by a higher metabolic activity, especially related...

  6. Evolution of spent fuel dry storage

    Energy Technology Data Exchange (ETDEWEB)

    Standring, Paul Nicholas [International Atomic Energy Agency, Vienna (Austria). Div. of Nuclear Fuel Cycle and Waste Technology; Takats, Ferenc [TS ENERCON KFT, Budapest (Hungary)

    2016-11-15

    Around 10,000 tHM of spent fuel is discharged per year from the nuclear power plants in operation. Whilst the bulk of spent fuel is still held in at-reactor pools, 24 countries have developed storage facilities, either on the reactor site or away from it. Of the 146 operational AFR storage facilities, about 80% employ dry storage, the majority deployed over the last 20 years. This reflects both the development of dry storage technology and changes in politics and trading relationships that have affected spent fuel management policies. The paper describes the various approaches to the back-end of the nuclear fuel cycle for power reactor fuels and provides data on deployed storage technologies.

  7. Spent fuel storage requirements 1993--2040

    International Nuclear Information System (INIS)

    1994-09-01

    Historical inventories of spent fuel are combined with U.S. Department of Energy (DOE) projections of future discharges from commercial nuclear reactors in the United States to provide estimates of spent fuel storage requirements through the year 2040. The needs are estimated for storage capacity beyond that presently available in the reactor storage pools. These estimates incorporate the maximum capacities within current and planned in-pool storage facilities and any planned transshipments of spent fuel to other reactors or facilities. Existing and future dry storage facilities are also discussed. The nuclear utilities provided historical data through December 1992; projected discharges through the end of reactor life are based on the DOE/Energy Information Administration (EIA) estimates of future nuclear capacity, generation, and spent fuel discharges

  8. 303-K Storage Facility: Report on FY98 closure activities

    International Nuclear Information System (INIS)

    Adler, J.G.

    1998-01-01

    This report summarizes and evaluates the decontamination activities, sampling activities, and sample analysis performed in support of the closure of the 303-K Storage Facility. The evaluation is based on the validated data included in the data validation package (98-EAP-346) for the 303-K Storage Facility. The results of this evaluation will be used for assessing contamination for the purpose of closing the 303-K Storage Facility as described in the 303-K Storage Facility Closure Plan, DOE/RL-90-04. The closure strategy for the 303-K Storage Facility is to decontaminate the interior of the north half of the 303-K Building to remove known or suspected dangerous waste contamination, to sample the interior concrete and exterior soils for the constituents of concern, and then to perform data analysis, with an evaluation to determine if the closure activities and data meet the closure criteria. The closure criterion for the 303-K Storage Facility is that the constituents of concern are not present at concentrations above the cleanup levels. Based on the evaluation of the decontamination activities, sampling activities, and sample data, a determination has been made that the soils at the 303-K Storage Facility meet the cleanup performance standards (WMH 1997) and can be clean closed. The evaluation determined that the 303-K Building cannot be clean closed without additional closure activities. An additional evaluation will be needed to determine the specific activities required to clean close the 303-K Storage Facility. The radiological contamination at the 303-K Storage Facility is not addressed by the closure strategy

  9. Distributed Cloud Storage Using Network Coding

    OpenAIRE

    Sipos, Marton A.; Fitzek, Frank; Roetter, Daniel Enrique Lucani; Pedersen, Morten Videbæk

    2014-01-01

    Distributed storage is usually considered within a cloud provider to ensure availability and reliability of the data. However, the user is still directly dependent on the quality of a single system. It is also entrusting the service provider with large amounts of private data, which may be accessed by a successful attack to that cloud system or even be inspected by government agencies in some countries. This paper advocates a general framework for network coding enabled distributed storage over multi...

  10. A Method of Signal Scrambling to Secure Data Storage for Healthcare Applications.

    Science.gov (United States)

    Bao, Shu-Di; Chen, Meng; Yang, Guang-Zhong

    2017-11-01

    A body sensor network that consists of wearable and/or implantable biosensors has been an important front-end for collecting personal health records. It is expected that the full integration of outside-hospital personal health information and hospital electronic health records will further promote preventative health services as well as global health. However, the integration and sharing of health information is bound to bring with it security and privacy issues. With extensive development of healthcare applications, security and privacy issues are becoming increasingly important. This paper addresses the potential security risks of healthcare data in Internet-based applications and proposes a method of signal scrambling as an add-on security mechanism in the application layer for a variety of healthcare information, where a piece of tiny data is used to scramble healthcare records. The former is kept locally and the latter, along with security protection, is sent for cloud storage. The tiny data can be derived from a random number generator or even a piece of healthcare data, which makes the method more flexible. The computational complexity and security performance in terms of theoretical and experimental analysis has been investigated to demonstrate the efficiency and effectiveness of the proposed method. The proposed method is applicable to all kinds of data that require extra security protection within complex networks.
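
    A minimal sketch of the general idea, not the paper's algorithm: a small piece of "tiny data" kept locally seeds a pseudo-random permutation of the signal samples, and only the scrambled record (plus whatever additional protection is applied) is sent for cloud storage. A real deployment would use a cryptographically strong permutation; the key and signal values below are invented.

      import random

      def scramble(record: list[float], tiny_key: int) -> list[float]:
          """Permute the samples of a signal with a keyed pseudo-random shuffle."""
          idx = list(range(len(record)))
          random.Random(tiny_key).shuffle(idx)
          return [record[i] for i in idx]

      def unscramble(scrambled: list[float], tiny_key: int) -> list[float]:
          """Invert the keyed shuffle; only a holder of the tiny key can do this."""
          idx = list(range(len(scrambled)))
          random.Random(tiny_key).shuffle(idx)
          restored = [0.0] * len(scrambled)
          for out_pos, src_pos in enumerate(idx):
              restored[src_pos] = scrambled[out_pos]
          return restored

      ecg = [0.1, 0.4, 1.2, 0.3, -0.2, 0.0]   # toy physiological signal
      key = 424242                            # the locally kept "tiny data"
      cloud_copy = scramble(ecg, key)         # what would go to cloud storage
      assert unscramble(cloud_copy, key) == ecg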

  11. Definition of a Storage Accounting Record

    CERN Document Server

    Jensen, H. T.; Müller-Pfefferkorn, R.; Nilsen, J. K.; Zsolt, M.; Zappi, R.

    2011-01-01

    In this document a storage accounting record, StAR, is defined, reflecting practical, financial and legal requirements on storage location, usage, space and data flow. The definition might be the basis for a standardized schema or an extension of an existing record like the OGF UR.
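
    For illustration only, an accounting record of this kind might carry fields along the following lines. The field names and values below are guesses made for the example and are not the normative StAR schema defined in the document.

      import json
      from datetime import datetime, timezone

      # Hypothetical storage accounting record; field names are illustrative only.
      star_record = {
          "record_id": "star-2011-000123",
          "storage_system": "se01.example.org",
          "storage_share": "/examplevo/scratch",
          "user_identity": "CN=Jane Doe,O=ExampleVO",
          "group": "examplevo",
          "resource_capacity_used_bytes": 5 * 1024**4,   # space occupied on disk
          "logical_capacity_used_bytes": 4 * 1024**4,    # before replication overhead
          "start_time": datetime(2011, 3, 1, tzinfo=timezone.utc).isoformat(),
          "end_time": datetime(2011, 3, 2, tzinfo=timezone.utc).isoformat(),
      }
      print(json.dumps(star_record, indent=2))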

  12. Neutrino Signals in Electron-Capture Storage-Ring Experiments

    Directory of Open Access Journals (Sweden)

    Avraham Gal

    2016-06-01

    Neutrino signals in electron-capture decays of hydrogen-like parent ions P in storage-ring experiments at GSI are reconsidered, with special emphasis placed on the storage-ring quasi-circular motion of the daughter ions D in two-body decays P → D + νe. It is argued that, to the extent that daughter ions are detected, these detection rates might exhibit modulations with periods of order seconds, similar to those reported in the GSI storage-ring experiments for two-body decay rates. New dedicated experiments in storage rings, or using traps, could explore these modulations.

  13. Gas storage materials, including hydrogen storage materials

    Science.gov (United States)

    Mohtadi, Rana F; Wicks, George G; Heung, Leung K; Nakamura, Kenji

    2013-02-19

    A material for the storage and release of gases comprises a plurality of hollow elements, each hollow element comprising a porous wall enclosing an interior cavity, the interior cavity including structures of a solid-state storage material. In particular examples, the storage material is a hydrogen storage material such as a solid state hydride. An improved method for forming such materials includes the solution diffusion of a storage material solution through a porous wall of a hollow element into an interior cavity.

  14. Analog storage integrated circuit

    Science.gov (United States)

    Walker, J.T.; Larsen, R.S.; Shapiro, S.L.

    1989-03-07

    A high speed data storage array is defined utilizing a unique cell design for high speed sampling of a rapidly changing signal. Each cell of the array includes two input gates between the signal input and a storage capacitor. The gates are controlled by a high speed row clock and low speed column clock so that the instantaneous analog value of the signal is only sampled and stored by each cell on coincidence of the two clocks. 6 figs.

  15. Robust holographic storage system design.

    Science.gov (United States)

    Watanabe, Takahiro; Watanabe, Minoru

    2011-11-21

    Demand is increasing daily for large data storage systems that are useful for applications in spacecraft, space satellites, and space robots, which are all exposed to the radiation-rich space environment. As candidates for use in space embedded systems, holographic storage systems are promising because they can easily provide the demanded large storage capability. In particular, holographic storage systems with no rotation mechanism are in demand because they are virtually maintenance-free. Although a holographic memory itself is an extremely robust device even in a space radiation environment, its associated lasers and drive circuit devices are vulnerable. Such vulnerabilities can engender severe problems that prevent reading of all contents of the holographic memory, which is a turn-off failure mode of a laser array. This paper therefore presents a proposal for a recovery method for the turn-off failure mode of a laser array on a holographic storage system, and describes results of an experimental demonstration. © 2011 Optical Society of America

  16. iSDS: a self-configurable software-defined storage system for enterprise

    Science.gov (United States)

    Chen, Wen-Shyen Eric; Huang, Chun-Fang; Huang, Ming-Jen

    2018-01-01

    Storage is one of the most important aspects of IT infrastructure for various enterprises. However, enterprises are interested in more than just data storage; they are interested in such things as more reliable data protection, higher performance and reduced resource consumption. Traditional enterprise-grade storage satisfies these requirements at high cost. This is because traditional enterprise-grade storage is usually designed and constructed with customised field-programmable gate arrays to achieve high-end functionality. However, in this ever-changing environment, enterprises demand storage with more flexible deployment and at lower cost. Moreover, the rise of new application fields, such as social media, big data and video streaming services, makes operational tasks for administrators more complex. In this article, a new storage system called intelligent software-defined storage (iSDS), based on software-defined storage, is described. More specifically, this approach advocates using software to replace features provided by traditional customised chips. To alleviate the management burden, it also advocates applying machine learning to automatically configure storage to meet dynamic requirements of workloads running on storage. This article focuses on the analysis feature of the iSDS cluster by detailing its architecture and design.

  17. Logic operations and data storage using vortex magnetization states in mesoscopic permalloy rings, and optical readout

    Energy Technology Data Exchange (ETDEWEB)

    Bowden, S R; Gibson, U J, E-mail: u.gibson@dartmouth.ed [Thayer School of Engineering, Dartmouth College, Hanover, NH 03755-8000 (United States)

    2010-01-01

    Optical coatings applied to one-half of thin film magnetic rings allow real-time readout of the chirality of the vortex state of micro- and nanomagnetic structures by breaking the symmetry of the optical signal. We use this technique to demonstrate data storage, operation of a NOT gate that uses exchange interactions between slightly overlapping rings, and to investigate the use of chains of rings as connecting wires for linking gates.

  18. Monitoring Groundwater Storage Changes in the Loess Plateau Using GRACE Satellite Gravity Data, Hydrological Models and Coal Mining Data

    Directory of Open Access Journals (Sweden)

    Xiaowei Xie

    2018-04-01

    Monitoring groundwater storage (GWS) changes is crucial to the rational utilization of groundwater and to ecological restoration in the Loess Plateau of China, which is one of the regions with the most extreme ecological environmental damage in the world. In this region, the mass loss caused by coal mining can reach the level of billions of tons per year. For this reason, in this work, in addition to Gravity Recovery and Climate Experiment (GRACE) satellite gravity data and hydrological models, coal mining data were also used to monitor GWS variation in the Loess Plateau during the period 2005–2014. The GWS changes derived from different GRACE solutions, that is, the spherical harmonics (SH) solutions, mascon solutions, and Slepian solutions (which are the Slepian localization of the SH solutions), were compared with in situ GWS changes obtained from 136 groundwater observation wells, with the aim of acquiring the most robust GWS changes. The results showed that the GWS changes from the mascon solutions (mascon-GWS) match best with the in situ GWS changes, showing the highest correlation coefficient, the lowest root mean square error (RMSE) values and the nearest annual trend. Therefore, the mascon-GWS changes are used for the spatial-temporal analysis of GWS changes. Based on these, the groundwater depletion rate of the Loess Plateau was −0.65 ± 0.07 cm/year from 2005–2014, with a more severe consumption rate occurring in its eastern region, reaching about −1.5 cm/year, which is several times greater than those of the other regions. Furthermore, the precipitation and coal mining data were used for analyzing the causes of the groundwater depletion: the results showed that seasonal changes in groundwater storage are closely related to rainfall, but the groundwater consumption is mainly due to human activities; coal mining in particular plays a major role in the serious groundwater consumption in the eastern region of the study area. Our results will help in
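
    The kind of comparison described above, an annual trend plus correlation and RMSE against in situ GWS, reduces to a few lines of standard estimation. The monthly series below are synthetic stand-ins generated for the example, not the study's data.

      import numpy as np

      def trend_cm_per_year(t_years, series_cm):
          """Least-squares linear trend of a storage-anomaly time series."""
          slope, _ = np.polyfit(t_years, series_cm, 1)
          return slope

      def compare(grace_gws, insitu_gws):
          """Correlation coefficient and RMSE between two GWS anomaly series."""
          r = np.corrcoef(grace_gws, insitu_gws)[0, 1]
          rmse = np.sqrt(np.mean((grace_gws - insitu_gws) ** 2))
          return r, rmse

      # Synthetic monthly series over 2005-2014 (illustrative values only)
      t = np.arange(120) / 12.0
      rng = np.random.default_rng(0)
      seasonal = 2.0 * np.sin(2 * np.pi * t)                 # rainfall-driven cycle
      insitu = -0.65 * t + seasonal + rng.normal(0, 0.3, t.size)
      mascon = insitu + rng.normal(0, 0.4, t.size)           # mascon-GWS estimate

      print(f"trend: {trend_cm_per_year(t, mascon):+.2f} cm/yr")
      print("corr, RMSE:", compare(mascon, insitu))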

  19. What's Up with the Storage Hierarchy?

    DEFF Research Database (Denmark)

    Bonnet, Philippe

    2017-01-01

    Ten years ago, Jim Gray observed that flash was about to replace magnetic disks. He also predicted that the need for low latency would make main memory databases commonplace. Most of his predictions have proven accurate. Today, who can make predictions about the future of the storage hierarchy......? Both main memory and storage systems are undergoing profound transformations. First, their design goals are increasingly complex (reconfigurable infrastructure at low latency, high resource utilization and stable energy footprint). Second, the status quo is not an option due to the shortcomings...... of existing solutions (memory bandwidth gap, inefficiency of generic memory/storage controllers). Third, new technologies are emerging (hybrid memories, non-volatile memories still under non-disclosure agreements, near-data processing in memory and storage). The impact of these transformations on the storage...

  20. Spent fuel storage requirements 1989--2020

    International Nuclear Information System (INIS)

    1989-10-01

    Historical inventories of spent fuel are combined with Department of Energy (DOE) projections of future discharges from commercial nuclear reactors in the US to provide estimates of spent fuel storage requirements over the next 32 years, through the year 2020. The needs for storage capacity beyond that presently available in the pools are estimated. These estimates incorporate the maximum capacities within current and planned in-pool storage facilities and any planned transshipments of fuel to other reactors or facilities. Historical data through December 1988 are derived from the 1989 Form RW-859 data survey of nuclear utilities. Projected discharges through the end of reactor life are based on DOE estimates of future nuclear capacity, generation, and spent fuel discharges. 14 refs., 3 figs., 28 tabs

  1. Multiobjective Reliable Cloud Storage with Its Particle Swarm Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Xiyang Liu

    2016-01-01

    Information abounds in all fields of real life and is often recorded as digital data in computer systems, where it is treated as an increasingly important resource. Its rapidly growing volume causes great difficulties in both storage and analysis. Massive data storage in cloud environments has significant impacts on the quality of service (QoS) of the systems, which is becoming an increasingly challenging problem. In this paper, we propose a multiobjective optimization model for reliable data storage in clouds that considers both the cost and the reliability of the storage service simultaneously. In the proposed model, the total cost is composed of storage space occupation cost, data migration cost, and communication cost. Based on an analysis of the storage process, transmission reliability, equipment stability, and software reliability are taken into account in the storage reliability evaluation. To solve the proposed multiobjective model, a Constrained Multiobjective Particle Swarm Optimization (CMPSO) algorithm is designed. Finally, experiments are designed to validate the proposed model and its PSO-based solution algorithm. In the experiments, the proposed model is tested in combination with three storage strategies. Experimental results show that the proposed model is effective, and that it performs much better when combined with proper file splitting methods.
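
    The modeling idea, a storage decision evaluated jointly on cost and reliability and searched with particle swarm optimization, can be sketched as below. This is a toy weighted-sum formulation over a single replica-count variable, not the paper's CMPSO algorithm; all constants are invented.

      import random

      def objective(replicas: float, size_gb=100.0, cost_per_gb=0.02,
                    node_fail_p=0.05, unavailability_penalty=500.0):
          """Weighted sum of storage cost and an expected unavailability penalty."""
          r = max(1, round(replicas))
          storage_cost = r * size_gb * cost_per_gb
          p_all_lost = node_fail_p ** r          # all replicas on failed nodes
          return storage_cost + unavailability_penalty * p_all_lost

      def pso(n_particles=20, iters=100, lo=1.0, hi=10.0, w=0.7, c1=1.5, c2=1.5):
          rng = random.Random(0)
          x = [rng.uniform(lo, hi) for _ in range(n_particles)]
          v = [0.0] * n_particles
          pbest, pbest_f = x[:], [objective(p) for p in x]
          g = min(range(n_particles), key=lambda i: pbest_f[i])
          gbest, gbest_f = pbest[g], pbest_f[g]
          for _ in range(iters):
              for i in range(n_particles):
                  v[i] = (w * v[i]
                          + c1 * rng.random() * (pbest[i] - x[i])
                          + c2 * rng.random() * (gbest - x[i]))
                  x[i] = min(hi, max(lo, x[i] + v[i]))    # keep within the constraint
                  f = objective(x[i])
                  if f < pbest_f[i]:
                      pbest[i], pbest_f[i] = x[i], f
                      if f < gbest_f:
                          gbest, gbest_f = x[i], f
          return round(gbest), gbest_f

      print(pso())   # a small replica count balancing cost against reliability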

  2. Operation of a Data Acquisition, Transfer, and Storage System for the Global Space-Weather Observation Network

    Directory of Open Access Journals (Sweden)

    T Nagatsuma

    2014-10-01

    A system to optimize the management of global space-weather observation networks has been developed by the National Institute of Information and Communications Technology (NICT). Named the WONM (Wide-area Observation Network Monitoring) system, it enables data acquisition, transfer, and storage through connection to the NICT Science Cloud, and has been supplied to observatories to support space-weather forecasting and research. This system provides easier management of data collection than our previously employed systems by means of autonomous system recovery, periodic state monitoring, and dynamic warning procedures. Operation of the WONM system is introduced in this report.

  3. Integrating new Storage Technologies into EOS

    Science.gov (United States)

    Peters, Andreas J.; van der Ster, Dan C.; Rocha, Joaquim; Lensing, Paul

    2015-12-01

    The EOS[1] storage software was designed to cover CERN disk-only storage use cases in the medium-term trading scalability against latency. To cover and prepare for long-term requirements the CERN IT data and storage services group (DSS) is actively conducting R&D and open source contributions to experiment with a next generation storage software based on CEPH[3] and ethernet enabled disk drives. CEPH provides a scale-out object storage system RADOS and additionally various optional high-level services like S3 gateway, RADOS block devices and a POSIX compliant file system CephFS. The acquisition of CEPH by Redhat underlines the promising role of CEPH as the open source storage platform of the future. CERN IT is running a CEPH service in the context of OpenStack on a moderate scale of 1 PB replicated storage. Building a 100+PB storage system based on CEPH will require software and hardware tuning. It is of capital importance to demonstrate the feasibility and possibly iron out bottlenecks and blocking issues beforehand. The main idea behind this R&D is to leverage and contribute to existing building blocks in the CEPH storage stack and implement a few CERN specific requirements in a thin, customisable storage layer. A second research topic is the integration of ethernet enabled disks. This paper introduces various ongoing open source developments, their status and applicability.

  4. Thermoelectric PbTe thin film for superresolution optical data storage

    International Nuclear Information System (INIS)

    Lee, Hyun Seok; Cheong, Byung-ki; Lee, Taek Sung; Lee, Kyeong Seok; Kim, Won Mok; Lee, Jae Won; Cho, Sung Ho; Youl Huh, Joo

    2004-01-01

    To find practical use in ultrahigh-density optical data storage, the superresolution (SR) technique needs a material that can render a high SR capability at no cost to durability against repeated readout and writing. Thermoelectric materials appear to be promising candidates due to their capability of yielding phase-change-free thermo-optic changes. A feasibility study was carried out with PbTe for its large thermoelectric coefficient and high stability as a crystalline single phase over a wide temperature range. Under exposure to pulsed red light, the material was found to display positive, yet completely reversible, changes of optical transmittance regardless of laser power, fulfilling the basic requirements for SR readout and writing. The material was also shown to have a high endurance against repeated static laser heating, up to the 10^6-10^7 cycles tested. A read-only memory disk with a PbTe SR layer yielded a carrier-to-noise ratio of 47 dB at 3.5 mW for a 0.25 μm pit, below the optical resolution limit (∼0.27 μm) of the tester

  5. Rewritable azobenzene polyester for polarization holographic data storage

    DEFF Research Database (Denmark)

    Kerekes, A; Sajti, Sz.; Loerincz, Emoeke

    2000-01-01

    Optical storage properties of thin azobenzene side-chain polyester films were examined by polarization holographic measurements. The new amorphous polyester film is a candidate material for a rewritable holographic memory system. The temporal formation of anisotropic and topographic...... gratings was studied for films with and without a hard protective layer. We showed that the dominant contribution to the diffraction efficiency comes from the anisotropy for exposures below 1 s, even at high incident intensity. The usage of the same wavelength for writing, reading...

  6. Huygens file service and storage architecture

    NARCIS (Netherlands)

    Bosch, H.G.P.; Mullender, Sape J.; Stabell-Kulo, Tage; Stabell-Kulo, Tage

    1993-01-01

    The Huygens file server is a high-performance file server which is able to deliver multi-media data in a timely manner while also providing clients with ordinary “Unix” like file I/O. The file server integrates client machines, file servers and tertiary storage servers in the same storage

  7. Huygens File Service and Storage Architecture

    NARCIS (Netherlands)

    Bosch, H.G.P.; Mullender, Sape J.; Stabell-Kulo, Tage; Stabell-Kulo, Tage

    1993-01-01

    The Huygens file server is a high-performance file server which is able to deliver multi-media data in a timely manner while also providing clients with ordinary “Unix” like file I/O. The file server integrates client machines, file servers and tertiary storage servers in the same storage

  8. PC-Cluster based Storage System Architecture for Cloud Storage

    OpenAIRE

    Yee, Tin Tin; Naing, Thinn Thu

    2011-01-01

    The design and architecture of a cloud storage system play a vital role in cloud computing infrastructure, improving both storage capacity and cost effectiveness. Usually a cloud storage system provides users with efficient, elastic storage space. One of the challenges of a cloud storage system is balancing the provision of huge elastic storage capacity against the expensive investment required for it. In order to solve this issue in the cloud storage infrastructure, low ...

  9. Management issues for high performance storage systems

    Energy Technology Data Exchange (ETDEWEB)

    Louis, S. [Lawrence Livermore National Lab., CA (United States); Burris, R. [Oak Ridge National Lab., TN (United States)

    1995-03-01

    Managing distributed high-performance storage systems is complex and, although sharing common ground with traditional network and systems management, presents unique storage-related issues. Integration technologies and frameworks exist to help manage distributed network and system environments. Industry-driven consortia provide open forums where vendors and users cooperate to leverage solutions. But these new approaches to open management fall short of addressing the needs of scalable, distributed storage. We discuss the motivation and requirements for storage system management (SSM) capabilities and describe how SSM manages distributed servers and storage resource objects in the High Performance Storage System (HPSS), a new storage facility for data-intensive applications and large-scale computing. Modern storage systems, such as HPSS, require many SSM capabilities, including server and resource configuration control, performance monitoring, quality of service, flexible policies, file migration, file repacking, accounting, and quotas. We present results of initial HPSS SSM development including design decisions and implementation trade-offs. We conclude with plans for follow-on work and provide storage-related recommendations for vendors and standards groups seeking enterprise-wide management solutions.

  10. Considerations applicable to the transportability of a transportable storage cask at the end of the storage period

    International Nuclear Information System (INIS)

    Sanders, T.L.; Ottinger, C.A.; Brimhall, J.L.; Creer, J.M.; Gilbert, E.R.; Jones, R.H.; McConnell, P.E.

    1991-11-01

    Additional spent fuel storage capacity is needed at many nuclear power plant sites where spent fuel storage pools have either reached or are expected to reach maximum capacities before spent fuel can be removed. This analysis examines certain aspects of Transportable Storage Casks (TSC) to assist in the determination of their feasibility as an option for at-reactor dry storage. Factors that can affect in-transport reliability include: the quality of design, development, and fabrication activities; the possibilities of damage or error during loading and closure; in-storage deterioration or unanticipated storage conditions; and the potential for loss of storage-period monitoring/measurement data necessary for verifying the TSC fitness-for-transport. The reported effort utilizes a relative reliability comparison of TSCs to Transport-Only Casks (TOC) to identify and prioritize those issues and activities that are unique to TSCs. TSC system recommendations combine certain design and operational features, such as in-service monitoring, pretransport assessments, and conservative design assumptions, which, when implemented and verified, should sufficiently ensure that the system will perform as intended in a later transport environment

  11. LHCb: FPGA based data-flow injection module at 10 Gbit/s reading data from network exported storage and using standard protocols

    CERN Multimedia

    Lemouzy, B; Garnier, J-C

    2010-01-01

    The goal of the LHCb readout upgrade is to speed up the DAQ to 40 MHz. Such a DAQ system will certainly employ 10 Gigabit or similar technologies and might also need new networking protocols such as a customized, light-weight TCP or more specialised protocols. A test module is being implemented, which integrates into the existing LHCb infrastructure. It is a multiple 10-Gigabit traffic generator, driven by a Stratix IV FPGA, which is flexible enough to either generate LHCb's raw data packets internally or read them from external storage via the network. For reading the data we have implemented the light-weight industry-standard protocol ATA over Ethernet (AoE), and we present an outlook on using a filesystem on these network-exported disk drives.

  12. N-1-Alkylated Pyrimidine Films as a New Potential Optical Data Storage Medium

    DEFF Research Database (Denmark)

    Lohse, Brian; Hvilsted, Søren; Berg, Rolf Henrik

    2006-01-01

    storage. Their dimerization efficiency was compared, in solution, with uracil as a reference, and as films, to investigate the correlation between solution and film. Films of good quality displaying excellent thermal and optical stability can be fabricated. A significant optical contrast between...... grating storage are also demonstrated in the films. Writing and reading of the gray scale can be performed at the same wavelength....

  13. On Network Coded Distributed Storage

    DEFF Research Database (Denmark)

    Cabrera Guerrero, Juan Alberto; Roetter, Daniel Enrique Lucani; Fitzek, Frank Hanns Paul

    2016-01-01

    This paper focuses on distributed fog storage solutions, where a number of unreliable devices organize themselves in Peer-to-Peer (P2P) networks with the purpose to store reliably their data and that of other devices and/or local users and provide lower delay and higher throughput. Cloud storage systems typically rely on expensive infrastructure with centralized control to store, repair and access the data. This approach introduces a large delay for accessing and storing the data driven in part by a high RTT between users and the cloud. These characteristics are at odds with the massive increase of devices and generated data in coming years as well as the requirements of low latency in many applications. We focus on characterizing optimal solutions for maintaining data availability when nodes in the fog continuously leave the network. In contrast with state-of-the-art data repair formulations, which
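
    To make the storage-coding idea concrete, the sketch below encodes data blocks as random XOR (GF(2)) combinations spread across fog nodes, and recovers the originals from whichever coded blocks can still be collected by Gaussian elimination. It is a bare-bones illustration under invented data, not the repair formulation studied in the paper.

      import random

      def encode(blocks, n_coded, rng):
          """Each coded block is a random XOR (GF(2)) combination of the data blocks."""
          k = len(blocks)
          coded = []
          for _ in range(n_coded):
              coeffs = [rng.randint(0, 1) for _ in range(k)]
              if not any(coeffs):
                  coeffs[rng.randrange(k)] = 1          # avoid the useless all-zero code
              payload = 0
              for c, block in zip(coeffs, blocks):
                  if c:
                      payload ^= block
              coded.append((coeffs, payload))
          return coded

      def decode(coded, k):
          """Recover the k data blocks by Gaussian elimination over GF(2)."""
          rows = [(c[:], p) for c, p in coded]
          for col in range(k):
              pivot = next((r for r in range(col, len(rows)) if rows[r][0][col]), None)
              if pivot is None:
                  raise ValueError("collected blocks do not have full rank yet")
              rows[col], rows[pivot] = rows[pivot], rows[col]
              for r in range(len(rows)):
                  if r != col and rows[r][0][col]:
                      rows[r] = ([a ^ b for a, b in zip(rows[r][0], rows[col][0])],
                                 rows[r][1] ^ rows[col][1])
          return [rows[i][1] for i in range(k)]

      data = [0xDEADBEEF, 0xCAFEBABE, 0x0BADF00D]            # k = 3 original blocks
      coded = encode(data, n_coded=6, rng=random.Random(7))  # spread over 6 fog nodes
      recovered = None
      for n_collected in range(len(data), len(coded) + 1):   # keep collecting blocks
          try:                                               # until decoding succeeds
              recovered = decode(coded[:n_collected], len(data))
              break
          except ValueError:
              continue
      print(recovered == data, [hex(b) for b in recovered] if recovered else "undecodable")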

  14. SOLID-STATE STORAGE DEVICE WITH PROGRAMMABLE PHYSICAL STORAGE ACCESS

    DEFF Research Database (Denmark)

    2017-01-01

    Embodiments of the present invention include a method of operating a solid-state storage device, comprising a storage device controller in the storage device receiving a set of one or more rules, each rule comprising (i) one or more request conditions to be evaluated for a storage device action request received from a host computer, and (ii) one or more request actions to be performed on a physical address space of a non-volatile storage unit in the solid-state storage device in case the one or more request conditions are fulfilled; the method further comprises: the storage device receiving a storage device action request, and the storage device evaluating a first rule of the one or more rules by determining if the received request fulfills request conditions comprised in the first rule, and in the affirmative the storage device performing request actions comprised in the first rule
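
    A compact way to picture the claimed mechanism: each rule bundles request conditions checked against an incoming host request with request actions applied to a physical address range when they all hold. This is an illustrative sketch with invented condition and action examples, not the patented controller logic.

      from dataclasses import dataclass
      from typing import Callable

      @dataclass
      class StorageRequest:
          op: str        # e.g. "write", "read", "trim"
          lba: int       # logical block address supplied by the host
          length: int    # number of blocks

      @dataclass
      class Rule:
          """All request conditions must hold for the request actions to be performed."""
          conditions: list[Callable[[StorageRequest], bool]]
          actions: list[Callable[[StorageRequest], None]]

          def evaluate(self, req: StorageRequest) -> bool:
              if not all(cond(req) for cond in self.conditions):
                  return False
              for action in self.actions:
                  action(req)
              return True

      def pin_to_fast_region(req: StorageRequest) -> None:
          # Stand-in for a physical-address action, e.g. mapping the range to a fast area.
          print(f"mapping LBA {req.lba}..{req.lba + req.length - 1} to the fast region")

      hot_write_rule = Rule(conditions=[lambda r: r.op == "write", lambda r: r.length <= 8],
                            actions=[pin_to_fast_region])

      for rule in [hot_write_rule]:                       # the controller's rule set
          if rule.evaluate(StorageRequest(op="write", lba=4096, length=4)):
              break                                       # first matching rule handles it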

  15. MiMiR: a comprehensive solution for storage, annotation and exchange of microarray data

    Directory of Open Access Journals (Sweden)

    Rahman Fatimah

    2005-11-01

    Background: The generation of large amounts of microarray data presents challenges for data collection, annotation, exchange and analysis. Although there are now widely accepted formats, minimum standards for data content and ontologies for microarray data, only a few groups are using them together to build and populate large-scale databases. Structured environments for data management are crucial for making full use of these data. Description: The MiMiR database provides a comprehensive infrastructure for microarray data annotation, storage and exchange and is based on the MAGE format. MiMiR is MIAME-supportive, customised for use with data generated on the Affymetrix platform and includes a tool for data annotation using ontologies. Detailed information on the experiment, methods, reagents and signal intensity data can be captured in a systematic format. Report screens permit the user to query the database, to view annotation on individual experiments and to provide summary statistics. MiMiR has tools for automatic upload of the data from the microarray scanner and export to databases using MAGE-ML. Conclusion: MiMiR facilitates microarray data management, annotation and exchange, in line with international guidelines. The database is valuable for underpinning research activities and promotes a systematic approach to data handling. Copies of MiMiR are freely available to academic groups under licence.

  16. Comparison of data file and storage configurations for efficient temporal access of satellite image data

    CSIR Research Space (South Africa)

    Bachoo, A

    2009-01-01

    Traditional storage formats store such a series of images as a sequence of individual files, with each file internally storing the pixels in their spatial order. Consequently, the construction of a time series profile of a single pixel requires reading from...
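
    The access-pattern difference can be illustrated directly: with scene-ordered storage, a single pixel's time series touches every scene, whereas reorganising the stack into a time-major layout turns the same profile into one contiguous read. The array shapes below are arbitrary and purely for illustration.

      import numpy as np

      # Synthetic stack: 365 daily scenes of a 100 x 100 pixel image
      t, rows, cols = 365, 100, 100
      scenes = np.arange(t * rows * cols, dtype=np.float32).reshape(t, rows, cols)

      # Scene-ordered storage (one array/file per date): a single pixel's profile
      # needs one small read from every one of the 365 scenes.
      profile_scene_order = scenes[:, 42, 17]

      # Time-major reorganisation: transpose once so each pixel's full time series
      # is contiguous, after which a profile is a single sequential read.
      time_major = np.ascontiguousarray(scenes.transpose(1, 2, 0))   # (rows, cols, t)
      profile_time_major = time_major[42, 17, :]

      assert np.array_equal(profile_scene_order, profile_time_major)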

  17. Secure Storage Architectures

    Energy Technology Data Exchange (ETDEWEB)

    Aderholdt, Ferrol [Tennessee Technological University; Caldwell, Blake A [ORNL; Hicks, Susan Elaine [ORNL; Koch, Scott M [ORNL; Naughton, III, Thomas J [ORNL; Pogge, James R [Tennessee Technological University; Scott, Stephen L [Tennessee Technological University; Shipman, Galen M [ORNL; Sorrillo, Lawrence [ORNL

    2015-01-01

    The purpose of this report is to clarify the challenges associated with storage for secure enclaves. The major focus areas for the report are: - review of relevant parallel filesystem technologies to identify assets and gaps; - review of filesystem isolation/protection mechanisms, to include native filesystem capabilities and auxiliary/layered techniques; - definition of storage architectures that can be used for customizable compute enclaves (i.e., clarification of use-cases that must be supported for shared storage scenarios); - investigation of vendor products related to secure storage. This study provides technical details on the storage and filesystem used for HPC with particular attention to elements that contribute to creating secure storage. We outline the pieces for a shared storage architecture that balances protection and performance by leveraging the isolation capabilities available in filesystems and virtualization technologies to maintain the integrity of the data. Key Points: There are a few existing and in-progress protection features in Lustre related to secure storage, which are discussed in Chapter 3.1. These include authentication capabilities like GSSAPI/Kerberos and the in-progress work for GSSAPI/Host-keys. The GPFS filesystem provides native support for encryption, which is not directly available in Lustre. Additionally, GPFS includes authentication/authorization mechanisms for inter-cluster sharing of filesystems (Chapter 3.2). The limitations of key importance for secure storage/filesystems are: (i) restricting sub-tree mounts for parallel filesystems (which is not directly supported in Lustre or GPFS), and (ii) segregation of hosts on the storage network and practical complications with dynamic additions to the storage network, e.g., LNET. A challenge for VM based use cases will be to provide efficient IO forwarding of the parallel filesystem from the host to the guest (VM). There are promising options like para-virtualized filesystems to

  18. Reorganizing Nigeria's Vaccine Supply Chain Reduces Need For Additional Storage Facilities, But More Storage Is Required.

    Science.gov (United States)

    Shittu, Ekundayo; Harnly, Melissa; Whitaker, Shanta; Miller, Roger

    2016-02-01

    One of the major problems facing Nigeria's vaccine supply chain is the lack of adequate vaccine storage facilities. Despite the introduction of solar-powered refrigerators and the use of new tools to monitor supply levels, this problem persists. Using data on vaccine supply for 2011-14 from Nigeria's National Primary Health Care Development Agency, we created a simulation model to explore the effects of variance in supply and demand on storage capacity requirements. We focused on the segment of the supply chain that moves vaccines inside Nigeria. Our findings suggest that 55 percent more vaccine storage capacity is needed than is currently available. We found that reorganizing the supply chain as proposed by the National Primary Health Care Development Agency could reduce that need to 30 percent more storage. Storage requirements varied by region of the country and vaccine type. The Nigerian government may want to consider the differences in storage requirements by region and vaccine type in its proposed reorganization efforts. Project HOPE—The People-to-People Health Foundation, Inc.
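
    As a rough illustration of why variance in supply and demand drives storage needs, the toy Monte Carlo below estimates the peak stock a store must be able to hold over a planning horizon. The volumes, variability and horizon are invented for the example and are unrelated to the agency data used in the study.

      import random

      def required_capacity(mean_supply, mean_demand, cv, months=36, quantile=0.95,
                            trials=2000, seed=0):
          """Peak on-hand stock (doses) needed to absorb supply/demand variability."""
          rng = random.Random(seed)
          peaks = []
          for _ in range(trials):
              stock, peak = 0.0, 0.0
              for _ in range(months):
                  stock += max(0.0, rng.gauss(mean_supply, cv * mean_supply))
                  stock -= min(stock, max(0.0, rng.gauss(mean_demand, cv * mean_demand)))
                  peak = max(peak, stock)
              peaks.append(peak)
          peaks.sort()
          return peaks[int(quantile * trials) - 1]

      # Illustrative numbers only: monthly shipments of 100k doses, matching demand.
      print(f"{required_capacity(100_000, 100_000, cv=0.3):,.0f} doses of cold storage")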

  19. Possible use of dual purpose dry storage casks for transportation and future storage of spent nuclear fuel from IRT-Sofia

    International Nuclear Information System (INIS)

    Manev, L.; Baltiyski, M.

    2003-01-01

    Objectives: The main objective of the present paper relates to one of the priority goals stipulated in Bulgarian Governmental Decision No.332 of May 17, 1999 - removal of SNF from the IRT-Sofia site and its export for reprocessing and/or temporary storage at the Kozloduy NPP site. The variant of using dual purpose dry storage casks for transportation and future temporary storage of SNF from IRT-Sofia aims to provide a reasonable alternative to the existing variant of temporary SNF storage under water in the Kozloduy NPP Spent Fuel Storage Facility until its export for reprocessing. Results: Based on the data for the condition of the 73 Spent Nuclear Fuel Assemblies (SNFA) stored in the storage pool, on technical data, and on data for the available equipment and the IRT-Sofia layout, the following are specified: draft technical features of dual purpose dry storage casks and their overall dimensions; the suitability of the available equipment for safe and reliable performance of transportation and handling operations of assemblies from the storage pool to dual purpose dry storage casks; the necessity of new equipment for performance of the above mentioned operations. Assemblies' transportation and handling operations are described; requirements and conditions for future safe and reliable storage of SNFA-loaded casks are determined. In selecting the technical solutions for safety assurance during on-site handling operations at IRT-Sofia and for the description of the exemplary casks, the effective Bulgarian regulations are considered. The experience of other countries in the transfer and transportation of SNFA from such types of research reactors is taken into account, as is Kozloduy NPP experience in SNF handling operations. Conclusions: The Decision of the Council of Ministers for refurbishment of the research reactor into a low power one and its future utilization for experimental and training

  20. Developing new transportable storage casks for interim dry storage

    International Nuclear Information System (INIS)

    Hayashi, K.; Iwasa, K.; Araki, K.; Asano, R.

    2004-01-01

    Transportable storage metal casks are to be used consistently during transport and storage for the AFR interim dry storage facilities planned in Japan. The casks are required to comply with the technical standards of regulations for both transport (hereinafter called ''transport regulation'') and storage (hereinafter called ''storage regulation'') in order to maintain their safety functions (heat transfer, containment, shielding and sub-criticality control). In addition to these requirements, it is not planned, in the normal state, to change the seal materials during storage at the storage facility; therefore, the same seal materials must be used when the casks are transported after the storage period. Dry transportable storage metal casks that satisfy these requirements have been developed to meet the needs of the dry storage facilities. The basic policy of this development is to utilize proven technology gained from our design and fabrication experience, to carry out the necessary verification for new designs, and to realize a safe and rational design with higher capacity and efficient fabrication.

  1. Systematic Parameterization, Storage, and Representation of Volumetric DICOM Data

    OpenAIRE

    Fischer, Felix; Selver, M. Alper; Gezer, Sinem; Dicle, Oğuz; Hillen, Walter

    2015-01-01

    Tomographic medical imaging systems produce hundreds to thousands of slices, enabling three-dimensional (3D) analysis. Radiologists process these images through various tools and techniques in order to generate 3D renderings for various applications, such as surgical planning, medical education, and volumetric measurements. To save and store these visualizations, current systems use snapshots or video exporting, which prevents further optimizations and requires the storage of significant addi...

  2. Research on an IP disaster recovery storage system

    Science.gov (United States)

    Zeng, Dong; Wang, Yusheng; Zhu, Jianfeng

    2008-12-01

    Based on both the Fibre Channel (FC) Storage Area Network (SAN) switch and the Fabric Application Interface Standard (FAIS) mechanism, an iSCSI storage controller is put forward; building upon it, an internet Small Computer System Interface (iSCSI) SAN construction strategy for disaster recovery (DR) is proposed, and several multiple-site replication models and a closed-queue performance analysis method are also discussed in this paper. The iSCSI storage controller lies at the fabric level of the networked storage infrastructure. It can be used to connect both hybrid storage applications and storage subsystems; in addition, it can provide a virtualized storage environment and support logical volume access control, and by cooperating with its remote peers, a disaster recovery storage system can be built on the basis of data replication, block-level snapshot and Internet Protocol (IP) take-over functions.
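
    The record mentions a closed-queue performance analysis method but does not reproduce it. Below is a minimal sketch of exact mean-value analysis (MVA) for a closed queueing network, a standard way such closed-queue models are evaluated; the station list and service demands are illustrative assumptions, not values from the paper.

        # Exact mean-value analysis (MVA) for a closed queueing network.
        # Stations and per-visit service demands (seconds) are illustrative only.
        def mva(service_demands, n_customers):
            """Return system throughput and per-station mean queue lengths."""
            q = [0.0] * len(service_demands)            # mean queue length per station
            for n in range(1, n_customers + 1):
                # Residence time seen by an arriving customer at each station
                r = [d * (1.0 + qk) for d, qk in zip(service_demands, q)]
                x = n / sum(r)                          # throughput (customers/s)
                q = [x * rk for rk in r]                # Little's law per station
            return x, q

        if __name__ == "__main__":
            # Hypothetical stations: initiator host, iSCSI controller, replication link
            demands = [0.002, 0.005, 0.012]
            throughput, queues = mva(demands, n_customers=16)
            print("throughput %.1f IO/s" % throughput)
            print("mean queue lengths", [round(v, 2) for v in queues])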

  3. Side-chain liquid crystalline polyesters for optical information storage

    DEFF Research Database (Denmark)

    Ramanujam, P.S.; Holme, Christian; Hvilsted, Søren

    1996-01-01

    Azobenzene side-chain liquid crystalline polyester structures suitable for permanent optical storage are described. The synthesis and characterization of the polyesters together with differential scanning calorimetry and X-ray investigations are discussed. Optical anisotropic investigations and holographic storage in one particular polyester are described in detail and polarized Fourier transform infrared spectroscopic data complementing the optical data are presented. Optical and atomic force microscope investigations point to a laser-induced aggregation as responsible for permanent optical storage.

  4. FPGA-based prototype storage system with phase change memory

    Science.gov (United States)

    Li, Gezi; Chen, Xiaogang; Chen, Bomy; Li, Shunfen; Zhou, Mi; Han, Wenbing; Song, Zhitang

    2016-10-01

    With the ever-increasing amount of data being stored via social media, mobile telephony base stations, network devices, etc., database systems face severe bandwidth bottlenecks when moving vast amounts of data from storage to the processing nodes. At the same time, Storage Class Memory (SCM) technologies such as Phase Change Memory (PCM), with unique features like fast read access, high density, non-volatility, byte-addressability, positive response to increasing temperature, superior scalability, and zero standby leakage, have changed the landscape of modern computing and storage systems. In such a scenario, we present a storage system called FLEET which can off-load partial or whole SQL queries from the CPU to the storage engine. FLEET uses an FPGA rather than conventional CPUs to implement the off-load engine due to its highly parallel nature. We have implemented an initial prototype of FLEET with PCM-based storage. The results demonstrate that significant performance and CPU utilization gains can be achieved by pushing selected query processing components inside PCM-based storage.

  5. Nuclear materials management storage study

    International Nuclear Information System (INIS)

    Becker, G.W. Jr.

    1994-02-01

    The Office of Weapons and Materials Planning (DP-27) requested the Planning Support Group (PSG) at the Savannah River Site to help coordinate a Departmental complex-wide nuclear materials storage study. This study will support the development of management strategies and plans until Defense Programs' Complex 21 is operational by DOE organizations that have direct interest/concerns about or responsibilities for nuclear material storage. They include the Materials Planning Division (DP-273) of DP-27, the Office of the Deputy Assistant Secretary for Facilities (DP-60), the Office of Weapons Complex Reconfiguration (DP-40), and other program areas, including Environmental Restoration and Waste Management (EM). To facilitate data collection, a questionnaire was developed and issued to nuclear materials custodian sites soliciting information on nuclear materials characteristics, storage plans, issues, etc. Sites were asked to functionally group materials identified in DOE Order 5660.1A (Management of Nuclear Materials) based on common physical and chemical characteristics and common material management strategies and to relate these groupings to Nuclear Materials Management Safeguards and Security (NMMSS) records. A database was constructed using 843 storage records from 70 responding sites. The database and an initial report summarizing storage issues were issued to participating Field Offices and DP-27 for comment. This report presents the background for the Storage Study and an initial, unclassified summary of storage issues and concerns identified by the sites

  6. Optical information storage

    International Nuclear Information System (INIS)

    Woike, T.

    1996-01-01

    In order to increase storage capacity and data transfer velocity by about three orders of magnitude compared to CD or magnetic disc it is necessary to work with optical techniques, especially with holography. About 100 TByte can be stored in a wafer of an area of 50 cm² via holograms, which corresponds to a density of 2×10⁹ Byte/mm². Every hologram contains data of 1 MByte, so that parallel processing is possible for read-out. Using high-speed CCD-arrays a read-out velocity of 1 MByte/μsec can be reached. Further, holographic techniques are very important in solid state physics. We will discuss the existence of a space charge field in Sr₁₋ₓBaₓNb₂O₆ doped with cerium and the physical properties of metastable states, which are suited for information storage. (author) 19 figs., 9 refs

  7. An experiment in big data: storage, querying and visualisation of data taken from the Liverpool Telescope's wide field cameras

    Science.gov (United States)

    Barnsley, R. M.; Steele, Iain A.; Smith, R. J.; Mawson, Neil R.

    2014-07-01

    The Small Telescopes Installed at the Liverpool Telescope (STILT) project has been in operation since March 2009, collecting data with three wide field unfiltered cameras: SkycamA, SkycamT and SkycamZ. To process the data, a pipeline was developed to automate source extraction, catalogue cross-matching, photometric calibration and database storage. In this paper, modifications and further developments to this pipeline will be discussed, including a complete refactor of the pipeline's codebase into Python, migration of the back-end database technology from MySQL to PostgreSQL, and changing the catalogue used for source cross-matching from USNO-B1 to APASS. In addition to this, details will be given relating to the development of a preliminary front-end to the source extracted database which will allow a user to perform common queries such as cone searches and light curve comparisons of catalogue and non-catalogue matched objects. Some next steps and future ideas for the project will also be presented.
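
    The record mentions cone searches against the PostgreSQL-backed source database but does not show how such a query might be issued. Below is a minimal sketch of one way to express a cone search in SQL from Python; the table name, column names (sources, ra, dec_deg) and the psycopg2 connection string are hypothetical, not taken from the STILT pipeline.

        # Minimal cone-search sketch against a hypothetical PostgreSQL source table.
        # Table/column names and connection parameters are illustrative assumptions.
        import psycopg2

        CONE_SQL = """
        SELECT id, ra, dec_deg
        FROM sources
        WHERE degrees(acos(LEAST(1.0,
                sin(radians(%(dec0)s)) * sin(radians(dec_deg)) +
                cos(radians(%(dec0)s)) * cos(radians(dec_deg)) *
                cos(radians(ra - %(ra0)s))
              ))) <= %(radius)s;
        """

        def cone_search(conn, ra0, dec0, radius_deg):
            """Return all sources within radius_deg of (ra0, dec0), in degrees."""
            with conn.cursor() as cur:
                cur.execute(CONE_SQL, {"ra0": ra0, "dec0": dec0, "radius": radius_deg})
                return cur.fetchall()

        if __name__ == "__main__":
            conn = psycopg2.connect("dbname=skycam user=skycam")  # hypothetical DSN
            for row in cone_search(conn, ra0=83.82, dec0=-5.39, radius_deg=0.5):
                print(row)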

  8. Integration of cloud-based storage in BES III computing environment

    International Nuclear Information System (INIS)

    Wang, L; Hernandez, F; Deng, Z

    2014-01-01

    We present on-going work that aims to evaluate the suitability of cloud-based storage as a supplement to the Lustre file system for storing experimental data for the BES III physics experiment and as a backend for storing files belonging to individual members of the collaboration. In particular, we discuss our findings regarding the support of cloud-based storage in the software stack of the experiment. We report on our development work that improves the support of CERN's ROOT data analysis framework and allows efficient remote access to data through several cloud storage protocols. We also present our efforts providing the experiment with efficient command line tools for navigating and interacting with cloud storage-based data repositories both from interactive sessions and grid jobs.
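
    As an illustration of the kind of remote access described above, the sketch below opens a file over HTTP from ROOT's Python bindings. It assumes a ROOT build with an HTTP remote-access plugin (e.g. davix); the URL is a placeholder, not an actual BES III endpoint, and real deployments add site-specific authentication.

        # Sketch: reading a remote ROOT file over HTTP. The URL is a placeholder;
        # real endpoints and credentials depend on the site configuration.
        import ROOT

        url = "https://cloud.example.org/besiii/user/alice/histos.root"  # placeholder
        f = ROOT.TFile.Open(url)          # plugin-based remote open (davix, xrootd, ...)
        if not f or f.IsZombie():
            raise RuntimeError("could not open remote file: %s" % url)

        for key in f.GetListOfKeys():     # list the objects stored in the file
            print(key.GetName(), key.GetClassName())
        f.Close()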

  9. Developing new transportable storage casks for interim dry storage

    Energy Technology Data Exchange (ETDEWEB)

    Hayashi, K.; Iwasa, K.; Araki, K.; Asano, R. [Hitachi Zosen Diesel and Engineering Co., Ltd., Tokyo (Japan)

    2004-07-01

    Transportable storage metal casks are to be consistently used during transport and storage for AFR interim dry storage facilities planning in Japan. The casks are required to comply with the technical standards of regulations for both transport (hereinafter called ''transport regulation'') and storage (hereafter called ''storage regulation'') to maintain safety functions (heat transfer, containment, shielding and sub-critical control). In addition to these requirements, it is not planned in normal state to change the seal materials during storage at the storage facility, therefore it is requested to use same seal materials when the casks are transported after storage period. The dry transportable storage metal casks that satisfy the requirements have been developed to meet the needs of the dry storage facilities. The basic policy of this development is to utilize proven technology achieved from our design and fabrication experience, to carry out necessary verification for new designs and to realize a safe and rational design with higher capacity and efficient fabrication.

  10. High Burnup Dry Storage Cask Research and Development Project, Final Test Plan

    Energy Technology Data Exchange (ETDEWEB)

    None

    2014-02-27

    EPRI is leading a project team to develop and implement the first five years of a Test Plan to collect data from a SNF dry storage system containing high burnup fuel. The Test Plan defined in this document outlines the data to be collected, and the storage system design, procedures, and licensing necessary to implement the Test Plan. The main goals of the proposed test are to provide confirmatory data for models, future SNF dry storage cask design, and to support license renewals and new licenses for ISFSIs. To provide data that is most relevant to high burnup fuel in dry storage, the design of the test storage system must mimic real conditions that high burnup SNF experiences during all stages of dry storage: loading, cask drying, inert gas backfilling, and transfer to the ISFSI for multi-year storage. Along with other optional modeling, SETs, and SSTs, the data collected in this Test Plan can be used to evaluate the integrity of dry storage systems and the high burnup fuel contained therein over many decades. It should be noted that the Test Plan described in this document discusses essential activities that go beyond the first five years of Test Plan implementation. The first five years of the Test Plan include activities up through loading the cask, initiating the data collection, and beginning the long-term storage period at the ISFSI. The Test Plan encompasses the overall project that includes activities that may not be completed until 15 or more years from now, including continued data collection, shipment of the Research Project Cask to a Fuel Examination Facility, opening the cask at the Fuel Examination Facility, and examining the high burnup fuel after the initial storage period.

  11. Phase change materials in non-volatile storage

    OpenAIRE

    Ielmini, Daniele; Lacaita, Andrea L.

    2011-01-01

    After revolutionizing the technology of optical data storage, phase change materials are being adopted in non-volatile semiconductor memories. Their success in electronic storage is mostly due to the unique properties of the amorphous state where carrier transport phenomena and thermally-induced phase change cooperate to enable high-speed, low-voltage operation and stable data retention possible within the same material. This paper reviews the key physical properties that make this phase so s...

  12. Temporary storage area characterization report

    International Nuclear Information System (INIS)

    1990-01-01

    The preferred alternative identified in the Remedial Investigation/Feasibility Study (RI/FS) for the Weldon Spring Quarry Bulk Wastes is to remove the wastes from the quarry and transport them by truck to temporary storage facility at the chemical plant site. To support the RI/FS, this report provides data to characterize the temporary storage area (TSA) site and to ensure the suitability of the proposed location. 31 refs., 14 figs., 7 tabs

  13. Energy storage

    Science.gov (United States)

    Kaier, U.

    1981-04-01

    Developments in the area of energy storage are characterized, in theory and in the laboratory, by the emergence of novel concepts and technologies for storing electric energy and heat. However, there are no new commercial devices on the market. New storage batteries that could serve as a basis for a wider introduction of electric cars, and latent heat storage devices that could aid solar technology applications, are not yet commercially available with satisfactory performance. Devices for the intermediate storage of electric energy for solar electric-energy systems, and for satisfying peak-load current demands in the case of public utility companies, are considered. In spite of many promising novel developments, there is as yet no practical alternative to the lead-acid storage battery. Attention is given to central heat storage for systems transporting heat energy, small-scale heat storage installations, and large-scale technical energy-storage systems.

  14. VME data acquisition system. Interactive software for the acquisition, display and storage of one or two dimensional spectra

    International Nuclear Information System (INIS)

    Petremann, E.

    1989-01-01

    The development and construction of a complete data acquisition system for nuclear physics applications are described. The system is based on the VME bus and a 16/32-bit microprocessor. The data acquisition system enables the acquisition of spectra involving one or two parameters and the simultaneous storage of events on magnetic tape. The data acquisition software, the display of experimental spectra, and their saving on magnetic media are analyzed and described. Pascal and Assembler are used. Cards for the standard VME and electronic equipment interfaces were also developed [fr]

  15. Improving groundwater storage and soil moisture estimates by assimilating GRACE, SMOS, and SMAP data into CABLE using ensemble Kalman batch smoother and particle batch smoother frameworks

    Science.gov (United States)

    Han, S. C.; Tangdamrongsub, N.; Yeo, I. Y.; Dong, J.

    2017-12-01

    Soil moisture and groundwater storage provide important information for a comprehensive understanding of the climate system and an accurate assessment of regional/global water resources. It is possible to derive water storage from land surface models, but the outputs are commonly biased by inaccurate forcing data, inefficacious model physics, and improper model parameter calibration. To mitigate the model uncertainty, observations (e.g., from remote sensing as well as ground in-situ data) are often integrated into the models via data assimilation (DA). This study aims to improve the estimation of soil moisture and groundwater storage by simultaneously assimilating satellite observations from the Gravity Recovery And Climate Experiment (GRACE), the Soil Moisture Ocean Salinity (SMOS), and the Soil Moisture Active Passive (SMAP) into the Community Atmosphere Biosphere Land Exchange (CABLE) land surface model using the ensemble Kalman batch smoother (EnBS) and particle batch smoother (PBS) frameworks. The uncertainty of the GRACE observation is obtained rigorously from the full error variance-covariance matrix of the GRACE data product. This demonstrates that the use of a realistic representation of GRACE uncertainty, which is spatially correlated in nature, leads to a higher accuracy of water storage computation. Additionally, the comparison between EnBS and PBS results is discussed to understand each filter's performance, limitations, and suitability. The joint DA is demonstrated in the Goulburn catchment, South-East Australia, where diverse ground observations (surface soil moisture, root-zone soil moisture, and groundwater level) are available for evaluation of our DA results. Preliminary results show that both smoothers provide significant improvement of surface soil moisture and groundwater storage estimates. Importantly, our developed DA scheme disaggregates the catchment-scale GRACE information into finer vertical and spatial scales (~25 km). We present an
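
    The ensemble Kalman update at the core of this kind of joint assimilation can be summarized compactly. The sketch below shows a generic ensemble Kalman update of a state batch given one batch of observations, with synthetic numbers standing in for CABLE states and GRACE/SMOS/SMAP observations; it illustrates the general technique only and is not the authors' code.

        # Generic ensemble Kalman (batch) update: x_a = x_f + K (y - H x_f),
        # with K built from ensemble covariances. All data here are synthetic.
        import numpy as np

        rng = np.random.default_rng(0)
        n_state, n_obs, n_ens = 6, 2, 64           # e.g. soil layers + groundwater; two obs

        X = rng.normal(size=(n_state, n_ens))       # forecast ensemble
        H = np.zeros((n_obs, n_state))
        H[0, :4] = 1.0                               # toy operator: obs 1 = sum of soil layers
        H[1, 4:] = 1.0                               # toy operator: obs 2 = sum of deeper stores
        R = np.diag([0.05, 0.10])                    # obs error covariance (could be full GRACE covariance)
        y = np.array([0.3, -0.1])                    # observed values

        A = X - X.mean(axis=1, keepdims=True)        # ensemble anomalies
        HX = H @ X
        HA = HX - HX.mean(axis=1, keepdims=True)

        Pxy = A @ HA.T / (n_ens - 1)                 # state/obs cross-covariance
        Pyy = HA @ HA.T / (n_ens - 1) + R            # innovation covariance
        K = Pxy @ np.linalg.solve(Pyy, np.eye(n_obs))  # Kalman gain

        Y = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, n_ens).T  # perturbed obs
        Xa = X + K @ (Y - HX)                        # analysis ensemble
        print("prior mean    ", X.mean(axis=1).round(2))
        print("posterior mean", Xa.mean(axis=1).round(2))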

  16. SEARCH FOR A RELIABLE STORAGE ARCHITECTURE FOR RHIC.

    Energy Technology Data Exchange (ETDEWEB)

    BINELLO,S.; KATZ, R.A.; MORRIS, J.T.

    2007-10-15

    Software used to operate the Relativistic Heavy Ion Collider (RHIC) resides on one operational RAID storage system. This storage system is also used to store data that reflects the status and recent history of accelerator operations. Failure of this system interrupts the operation of the accelerator as backup systems are brought online. In order to increase the reliability of this critical control system component, the storage system architecture has been upgraded to use Storage Area Network (SAN) technology and to introduce redundant components and redundant storage paths. This paper describes the evolution of the storage system, the contributions to reliability that each additional feature has provided, further improvements that are being considered, and real-life experience with the current system.

  17. SEARCH FOR A RELIABLE STORAGE ARCHITECTURE FOR RHIC

    International Nuclear Information System (INIS)

    BINELLO, S.; KATZ, R.A.; MORRIS, J.T.

    2007-01-01

    Software used to operate the Relativistic Heavy Ion Collider (RHIC) resides on one operational RAID storage system. This storage system is also used to store data that reflects the status and recent history of accelerator operations. Failure of this system interrupts the operation of the accelerator as backup systems are brought online. In order to increase the reliability of this critical control system component, the storage system architecture has been upgraded to use Storage Area Network (SAN) technology and to introduce redundant components and redundant storage paths. This paper describes the evolution of the storage system, the contributions to reliability that each additional feature has provided, further improvements that are being considered, and real-life experience with the current system

  18. Spacing Sensitivity Analysis of HLW Intermediate Storage Facility

    International Nuclear Information System (INIS)

    Youn, Bum Soo; Lee, Kwang Ho

    2010-01-01

    Currently, South Korea's spent fuel is stored in temporary storage within each plant, but this temporary storage is expected to reach saturation soon. For the effective management of spent fuel, an intermediate storage facility is urgently needed. However, research on intermediate storage facilities for such waste has not been very active so far. In addition, in foreign countries this subject is mostly treated confidentially and the information is not easy to collect. Therefore, the purpose of this study is to create basic thermal analysis data for the waste storage facility that will be valuable in the future.

  19. Long-Time Data Storage: Relevant Time Scales

    Directory of Open Access Journals (Sweden)

    Miko C. Elwenspoek

    2011-02-01

    Dynamic processes relevant for long-time storage of information about human kind are discussed, ranging from biological and geological processes to the lifecycle of stars and the expansion of the universe. Major results are that life will end ultimately and the remaining time that the earth is habitable for complex life is about half a billion years. A system retrieved within the next million years will be read by beings very closely related to Homo sapiens. During this time the surface of the earth will change making it risky to place a small number of large memory systems on earth; the option to place it on the moon might be more favorable. For much longer timescales both options do not seem feasible because of geological processes on the earth and the flux of small meteorites to the moon.

  20. A Rewritable, Random-Access DNA-Based Storage System.

    Science.gov (United States)

    Yazdi, S M Hossein Tabatabaei; Yuan, Yongbo; Ma, Jian; Zhao, Huimin; Milenkovic, Olgica

    2015-09-18

    We describe the first DNA-based storage architecture that enables random access to data blocks and rewriting of information stored at arbitrary locations within the blocks. The newly developed architecture overcomes drawbacks of existing read-only methods that require decoding the whole file in order to read one data fragment. Our system is based on new constrained coding techniques and accompanying DNA editing methods that ensure data reliability, specificity and sensitivity of access, and at the same time provide exceptionally high data storage capacity. As a proof of concept, we encoded parts of the Wikipedia pages of six universities in the USA, and selected and edited parts of the text written in DNA corresponding to three of these schools. The results suggest that DNA is a versatile media suitable for both ultrahigh density archival and rewritable storage applications.
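
    The constrained coding that makes DNA storage reliable typically includes, among other things, avoiding long runs of the same nucleotide. The sketch below shows a toy, decodable binary-to-nucleotide mapping in that spirit (base-3 digits choosing one of the three bases different from the previous one, so bases never repeat); it illustrates the general idea only and is not the coding scheme of the cited paper.

        # Toy constrained encoding in the spirit of DNA storage codes: data are
        # converted to base-3 digits, and each digit selects one of the three bases
        # that differ from the previously written base, so no base ever repeats
        # (no homopolymers). Decoding reverses the choice. Not the paper's scheme.
        BASES = "ACGT"

        def trits_from_bytes(data):
            n = int.from_bytes(data, "big")
            trits = []
            while n:
                n, r = divmod(n, 3)
                trits.append(r)
            return trits[::-1] or [0]

        def encode(data, start="A"):
            prev, seq = start, []
            for t in trits_from_bytes(data):
                choices = [b for b in BASES if b != prev]   # 3 bases != previous one
                base = choices[t]
                seq.append(base)
                prev = base
            return "".join(seq)

        def decode_trits(seq, start="A"):
            prev, trits = start, []
            for base in seq:
                choices = [b for b in BASES if b != prev]
                trits.append(choices.index(base))
                prev = base
            return trits

        payload = b"Hi"
        dna = encode(payload)
        print(dna, decode_trits(dna) == trits_from_bytes(payload))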

  1. Materials in the environment of the fuel in dry storage

    Energy Technology Data Exchange (ETDEWEB)

    Issard, H [TN International (Cogema Logistics) (France)

    2012-07-01

    Spent nuclear fuel has been stored safely in pools or dry systems in over 30 countries. The majority of IAEA Member States have not yet decided upon the ultimate disposition of their spent nuclear fuel: reprocessing or direct disposal. Interim storage is the current solution for these countries. To develop the technological knowledge base, the IAEA's spent fuel storage performance assessment was continued. The objectives are: investigate dry storage systems and gather basic fuel behaviour assessments; gather data on the dry storage environment and cask materials; evaluate the long term behaviour of cask materials.

  2. Storage and Management of Open-pit Transportation Path

    Directory of Open Access Journals (Sweden)

    Jiusheng Du

    2013-07-01

    This paper addresses the practical demands of daily production scheduling and positioning monitoring in open-pit mines. After extracting data from existing topographic maps and other sources, it discusses the feasibility of using these data to establish a thematic database. Considering the extensive application of GPS data, the new spatial data types of SQL Server 2008 are used for data storage and management. Algorithms for extracting node spatial data, regional boundaries and paths are implemented, and spatial data storage and management is thereby realized. This provides a basis for production decision-making and for savings in production cost.
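
    As a sketch of how GPS path points might be stored with the SQL Server 2008 spatial types mentioned above, the snippet below inserts positions as geography values via pyodbc. The table name, columns, coordinates and connection string are hypothetical illustrations, not taken from the paper.

        # Sketch: storing GPS positions from open-pit haul trucks using SQL Server
        # 2008 spatial types. Table/columns and the connection string are hypothetical.
        import pyodbc

        conn = pyodbc.connect(
            "DRIVER={ODBC Driver 17 for SQL Server};SERVER=minedb;DATABASE=openpit;"
            "UID=scheduler;PWD=secret"
        )
        cur = conn.cursor()

        cur.execute("""
            IF OBJECT_ID('truck_track') IS NULL
            CREATE TABLE truck_track (
                id INT IDENTITY PRIMARY KEY,
                truck_id INT NOT NULL,
                recorded_at DATETIME NOT NULL,
                position GEOGRAPHY NOT NULL      -- SQL Server 2008 spatial type
            )
        """)

        # Insert one GPS fix using the geography::Point(lat, lon, SRID) constructor.
        cur.execute(
            "INSERT INTO truck_track (truck_id, recorded_at, position) "
            "VALUES (?, ?, geography::Point(?, ?, 4326))",
            (17, "2013-05-02 08:30:00", 40.4460, 113.3055),
        )
        conn.commit()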

  3. Classification of CO2 Geologic Storage: Resource and Capacity

    Science.gov (United States)

    Frailey, S.M.; Finley, R.J.

    2009-01-01

    The use of the term capacity to describe possible geologic storage implies a realistic or likely volume of CO2 to be sequestered. Poor data quantity and quality may lead to very high uncertainty in the storage estimate. Use of the term "storage resource" alleviates the implied certainty of the term "storage capacity". This is especially important to non-scientists (e.g. policy makers) because "capacity" is commonly used to describe the very specific and more certain quantities such as volume of a gas tank or a hotel's overnight guest limit. Resource is a term used in the classification of oil and gas accumulations to infer lesser certainty in the commercial production of oil and gas. Likewise for CO2 sequestration, a suspected porous and permeable zone can be classified as a resource, but capacity can only be estimated after a well is drilled into the formation and a relatively higher degree of economic and regulatory certainty is established. Storage capacity estimates are lower risk or higher certainty compared to storage resource estimates. In the oil and gas industry, prospective resource and contingent resource are used for estimates with less data and certainty. Oil and gas reserves are classified as Proved and Unproved, and by analogy, capacity can be classified similarly. The highest degree of certainty for an oil or gas accumulation is Proved, Developed Producing (PDP) Reserves. For CO2 sequestration this could be Proved Developed Injecting (PDI) Capacity. A geologic sequestration storage classification system is developed by analogy to that used by the oil and gas industry. When a CO2 sequestration industry emerges, storage resource and capacity estimates will be considered a company asset and consequently regulated by the Securities and Exchange Commission. Additionally, storage accounting and auditing protocols will be required to confirm projected storage estimates and assignment of credits from actual injection. An example illustrates the use of

  4. Disk Storage Server

    CERN Multimedia

    This model was a disk storage server used in the Data Centre up until 2012. Each tray contains a hard disk drive (see the 5TB hard disk drive on the main disk display section - this actually fits into one of the trays). There are 16 trays in all per server. There are hundreds of these servers mounted on racks in the Data Centre, as can be seen.

  5. Optical information storage

    Energy Technology Data Exchange (ETDEWEB)

    Woike, T [Koeln Univ., Inst. fuer Kristallography, Koeln (Germany)

    1996-11-01

    In order to increase storage capacity and data transfer velocity by about three orders of magnitude compared to CD or magnetic disc it is necessary to work with optical techniques, especially with holography. About 100 TByte can be stored in a waver of an area of 50 cm{sup 2} via holograms which corresponds to a density of 2.10{sup 9} Byte/mm{sup 2}. Every hologram contains data of 1 MByte, so that parallel-processing is possible for read-out. Using high-speed CCD-arrays a read-out velocity of 1 MByte/{mu}sec can be reached. Further, holographic technics are very important in solid state physics. We will discuss the existence of a space charge field in Sr{sub 1-x}Ba{sub x}Nb{sub 2}O{sub 6} doped with cerium and the physical properties of metastable states, which are suited for information storage. (author) 19 figs., 9 refs.

  6. Summary of treatment, storage, and disposal facility usage data collected from U.S. Department of Energy sites

    International Nuclear Information System (INIS)

    Jacobs, A.; Oswald, K.; Trump, C.

    1995-04-01

    This report presents an analysis for the US Department of Energy (DOE) to determine the level and extent of treatment, storage, and disposal facility (TSDF) assessment duplication. Commercial TSDFs are used as an integral part of the hazardous waste management process for those DOE sites that generate hazardous waste. Data regarding the DOE sites' usage have been extracted from three sets of data and analyzed in this report. The data are presented both qualitatively and quantitatively, as appropriate. This information provides the basis for further analysis of assessment duplication to be documented in issue papers as appropriate. Once the issues have been identified and adequately defined, corrective measures will be proposed and subsequently implemented

  7. Storage and the electricity forward premium

    International Nuclear Information System (INIS)

    Douglas, Stratford; Popova, Julia

    2008-01-01

    We develop and test a model describing the influence of natural gas storage inventories on the electricity forward premium. The model is constructed by linking the effect of gas storage constraints on the higher moments of the distribution of electricity prices to an established model of the effect of those moments on the forward premium. The model predicts a sharply negative effect of gas storage inventories on the electricity forward premium when demand for electricity is high and space-heating demand for gas is low. Empirical results, based on PJM data, strongly support the model. (author)

  8. Recovery of Flash Memories for Reliable Mobile Storages

    Directory of Open Access Journals (Sweden)

    Daesung Moon

    2010-01-01

    As mobile appliances are applied to many ubiquitous services and the importance of the information stored in them increases, the security issue of protecting this information becomes one of the major concerns. However, most previous research focused only on communication security, not storage security. In particular, flash memory, whose operational characteristics are different from those of HDDs, is increasingly used as a storage device for mobile appliances because of its resistance to physical shock and lower power requirements. In this paper, we propose a flash memory management scheme aimed at guaranteeing the data integrity of mobile storage. By maintaining the old data specified during the recovery window, we can recover the old data when the mobile appliance is attacked. Also, to reduce the storage requirement for the recovery, we restrict the number of versions to be kept, called the Degree of Integrity (DoI). In particular, we consider both reclaim efficiency and wear leveling, which is a unique characteristic of flash memory. Based on the performance evaluation, we confirm that the proposed scheme is acceptable to many applications as a flash memory management scheme for improving data integrity.
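
    A minimal sketch of the version-retention idea described above (keeping up to a fixed number of old copies of each logical page, the Degree of Integrity, so earlier data can be restored after an attack) is given below. It is a generic, abstract illustration, not the authors' flash translation layer.

        # Generic illustration of bounded version retention (Degree of Integrity, DoI):
        # each logical page keeps up to DoI old copies that can be rolled back to.
        from collections import defaultdict, deque

        class VersionedStore:
            def __init__(self, doi=3):
                self.doi = doi
                self.current = {}                      # logical page -> current data
                self.history = defaultdict(deque)      # logical page -> old versions

            def write(self, page, data):
                if page in self.current:
                    self.history[page].append(self.current[page])
                    if len(self.history[page]) > self.doi:
                        self.history[page].popleft()   # reclaim the oldest copy
                self.current[page] = data

            def rollback(self, page, steps=1):
                """Restore an older version, e.g. after a detected attack."""
                for _ in range(min(steps, len(self.history[page]))):
                    self.current[page] = self.history[page].pop()
                return self.current[page]

        store = VersionedStore(doi=2)
        for v in (b"v1", b"v2", b"v3", b"tampered"):
            store.write(7, v)
        print(store.rollback(7))      # b'v3' -> last good copy within the DoI window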

  9. Technology for national asset storage systems

    Science.gov (United States)

    Coyne, Robert A.; Hulen, Harry; Watson, Richard

    1993-01-01

    An industry-led collaborative project, called the National Storage Laboratory, was organized to investigate technology for storage systems that will be the future repositories for our national information assets. Industry participants are IBM Federal Systems Company, Ampex Recording Systems Corporation, General Atomics DISCOS Division, IBM ADSTAR, Maximum Strategy Corporation, Network Systems Corporation, and Zitel Corporation. Industry members of the collaborative project are funding their own participation. Lawrence Livermore National Laboratory through its National Energy Research Supercomputer Center (NERSC) will participate in the project as the operational site and the provider of applications. The expected result is an evaluation of a high performance storage architecture assembled from commercially available hardware and software, with some software enhancements to meet the project's goals. It is anticipated that the integrated testbed system will represent a significant advance in the technology for distributed storage systems capable of handling gigabyte class files at gigabit-per-second data rates. The National Storage Laboratory was officially launched on 27 May 1992.

  10. Survey of experience with dry storage of spent nuclear fuel and update of wet storage experience

    International Nuclear Information System (INIS)

    1988-01-01

    Spent fuel storage is an important part of spent fuel management. At present about 45,000 t of spent water reactor fuel have been discharged worldwide. Only a small fraction of this fuel (approximately 7%) has been reprocessed. The amount of spent fuel arisings will increase significantly in the next 15 years. Estimates indicate that up to the year 2000 about 200,000 t HM of spent fuel could be accumulated. In view of the large quantities of spent fuel discharged from nuclear power plants and future expected discharges, many countries are involved in the construction of facilities for the storage of spent fuel and in the development of effective methods for spent fuel surveillance and monitoring to ensure that reliable and safe operation of storage facilities is achievable until the time when the final disposal of spent fuel or high level wastes is feasible. The first demonstrations of final disposal are not expected before the years 2000-2020. This is why the long term storage of spent fuel and HLW is a vital problem for all countries with nuclear power programmes. The present survey contains data on dry storage and recent information on wet storage, transportation, rod consolidation, etc. The main aim is to provide spent fuel management policy making organizations, designers, scientists and spent fuel storage facility operators with the latest information on spent fuel storage technology under dry and wet conditions and on innovations in this field. Refs, figs and tabs

  11. Economical evaluation on spent fuel storage technology away from reactor

    International Nuclear Information System (INIS)

    Itoh, Chihiro; Nagano, Koji; Saegusa, Toshiari

    2000-01-01

    Concerning spent fuel storage away from reactor, an economic comparison was carried out between metal cask and water pool storage technology. The economic index was defined by the levelized cost (unit storage cost), calculated on the assumption that the storage cost is paid at the receipt of the spent fuel at the storage facility. It is found that cask storage is economical for both small and large storage capacities. The unit storage cost of pool storage, however, approaches that of cask storage for a storage capacity of 10,000 tons. The unit storage cost is then converted to a power generation cost using data on the burnup of the fuel, etc. The cost is obtained as yen 0.09/kWh and yen 0.15/kWh for cask storage and pool storage, respectively, for a capacity of 5,000 tonU and a cooling time of 5 years. (author)
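
    The conversion from a unit storage cost (per tonne of heavy metal) to a contribution to generation cost (per kWh) follows from the electrical energy eventually produced per tonne of fuel. The sketch below shows this arithmetic with purely illustrative numbers; they are not the figures underlying the paper's yen 0.09/kWh and yen 0.15/kWh results.

        # Illustrative conversion of a spent-fuel storage cost per tonne of uranium
        # into a contribution to the electricity generation cost. Numbers are made up.
        burnup_mwd_per_tu = 45_000        # discharge burnup [MWd(th)/tU], illustrative
        thermal_efficiency = 0.33         # plant net efficiency, illustrative
        storage_cost_yen_per_tu = 3.0e7   # levelized storage cost [yen/tU], illustrative

        kwh_per_mwd = 24_000              # 1 MWd = 24,000 kWh (thermal)
        kwh_e_per_tu = burnup_mwd_per_tu * kwh_per_mwd * thermal_efficiency

        cost_yen_per_kwh = storage_cost_yen_per_tu / kwh_e_per_tu
        print(f"storage contribution: {cost_yen_per_kwh:.3f} yen/kWh")
        # ~0.084 yen/kWh with these assumed inputs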

  12. Storage quality-of-service in cloud-based scientific environments: a standardization approach

    Science.gov (United States)

    Millar, Paul; Fuhrmann, Patrick; Hardt, Marcus; Ertl, Benjamin; Brzezniak, Maciej

    2017-10-01

    When preparing the Data Management Plan for larger scientific endeavors, PIs have to balance the most appropriate qualities of storage space along the planned data life-cycle against its price and the available funding. Storage properties can be the media type, implicitly determining access latency and durability of stored data, the number and locality of replicas, as well as available access protocols or authentication mechanisms. Negotiations between the scientific community and the responsible infrastructures generally happen upfront, where the amount of storage space, the media types (such as disk, tape and SSD) and the foreseeable data life-cycles are agreed. With the introduction of cloud management platforms, both in computing and storage, resources can be brokered to achieve the best price per unit of a given quality. However, in order to allow the platform orchestrator to programmatically negotiate the most appropriate resources, a standard vocabulary for the different properties of resources and a commonly agreed protocol to communicate them have to be available. In order to agree on a basic vocabulary for storage space properties, the storage infrastructure group in INDIGO-DataCloud, together with INDIGO-associated and external scientific groups, created a working group under the umbrella of the Research Data Alliance (RDA). As the communication protocol to query and negotiate storage qualities, the Cloud Data Management Interface (CDMI) has been selected. Necessary extensions to CDMI are defined in regular meetings between INDIGO and the Storage Networking Industry Association (SNIA). Furthermore, INDIGO is contributing to the SNIA CDMI reference implementation as the basis for interfacing the various storage systems in INDIGO to the agreed protocol and to provide an official open-source skeleton for systems not maintained by INDIGO partners.
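
    To make the negotiation concrete, the sketch below queries a CDMI endpoint for its storage capabilities (the standard /cdmi_capabilities/ container) over HTTP. The endpoint URL and credentials are placeholders, and the exact quality-of-service vocabulary returned depends on the deployed extensions; this is a generic CDMI request, not INDIGO-DataCloud code.

        # Sketch: asking a CDMI-speaking storage endpoint which storage qualities
        # (capability classes) it offers. Endpoint and credentials are placeholders.
        import requests

        ENDPOINT = "https://storage.example.org/cdmi"        # placeholder
        HEADERS = {
            "X-CDMI-Specification-Version": "1.1.1",
            "Accept": "application/cdmi-capability",
        }

        resp = requests.get(f"{ENDPOINT}/cdmi_capabilities/",
                            headers=HEADERS,
                            auth=("user", "secret"))          # placeholder credentials
        resp.raise_for_status()
        caps = resp.json()

        # Children are capability classes (e.g. disk, tape, or QoS profiles);
        # each class advertises properties such as latency or number of copies.
        for child in caps.get("children", []):
            print("capability class:", child)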

  13. A concept of an electricity storage system with 50 MWh storage capacity

    Directory of Open Access Journals (Sweden)

    Józef Paska

    2012-06-01

    Electricity storage devices can be divided into indirect storage devices (involving conversion of electricity into another form of energy) and direct storage devices (storing energy in an electric or magnetic field). Electricity storage technologies include: pumped-storage power plants, battery energy storage (BES), compressed air energy storage (CAES), supercapacitors, flywheel energy storage (FES), superconducting magnetic energy storage (SMES), and fuel cells (FC), reversible or operated in systems with electrolysers and hydrogen storage. These technologies have different technical characteristics and economic parameters that determine their usability. This paper presents two concepts of an electricity storage system with a storage capacity of at least 50 MWh, using the BES battery energy storage and CAES compressed air energy storage technologies.

  14. Comparative assessment of software for non-targeted data analysis in the study of volatile fingerprint changes during storage of a strawberry beverage.

    Science.gov (United States)

    Morales, M L; Callejón, R M; Ordóñez, J L; Troncoso, A M; García-Parrilla, M C

    2017-11-03

    Five free software packages were compared to assess their utility for the non-targeted study of changes in the volatile profile during the storage of a novel strawberry beverage. AMDIS coupled to Gavin software turned out to be easy to use, required the minimum handling for subsequent data treatment and its results were the most similar to those obtained by manual integration. However, AMDIS coupled to SpectConnect software provided more information for the study of volatile profile changes during the storage of strawberry beverage. During storage, volatile profile changed producing the differentiation among the strawberry beverage stored at different temperatures, and this difference increases as time passes; these results were also supported by PCA. As expected, it seems that cold temperature is the best way of preservation for this product during long time storage. Variable Importance in the Projection (VIP) and correlation scores pointed out four volatile compounds as potential markers for shelf-life of our strawberry beverage: 2-phenylethyl acetate, decanoic acid, γ-decalactone and furfural. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Laboratory simulation of high-level liquid waste evaporation and storage

    International Nuclear Information System (INIS)

    Anderson, P.A.

    1978-01-01

    The reprocessing of nuclear fuel generates high-level liquid wastes (HLLW) which require interim storage pending solidification. Interim storage facilities are most efficient if the HLLW is evaporated prior to or during the storage period. Laboratory evaporation and storage studies with simulated waste slurries have yielded data which are applicable to the efficient design and economical operation of actual process equipment

  16. ROOT — A C++ framework for petabyte data storage, statistical analysis and visualization

    Science.gov (United States)

    Antcheva, I.; Ballintijn, M.; Bellenot, B.; Biskup, M.; Brun, R.; Buncic, N.; Canal, Ph.; Casadei, D.; Couet, O.; Fine, V.; Franco, L.; Ganis, G.; Gheata, A.; Maline, D. Gonzalez; Goto, M.; Iwaszkiewicz, J.; Kreshuk, A.; Segura, D. Marcos; Maunder, R.; Moneta, L.; Naumann, A.; Offermann, E.; Onuchin, V.; Panacek, S.; Rademakers, F.; Russo, P.; Tadel, M.

    2011-06-01

    A new stable version ("production version") v5.28.00 of ROOT [1] has been published [2]. It features several major improvements in many areas, most noteworthy data storage performance as well as statistics and graphics features. Some of these improvements have already been predicted in the original publication Antcheva et al. (2009) [3]. This version will be maintained for at least 6 months; new minor revisions ("patch releases") will be published [4] to solve problems reported with this version.
    New version program summary
    Program title: ROOT
    Catalogue identifier: AEFA_v2_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFA_v2_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: GNU Lesser Public License v.2.1
    No. of lines in distributed program, including test data, etc.: 2 934 693
    No. of bytes in distributed program, including test data, etc.: 1009
    Distribution format: tar.gz
    Programming language: C++
    Computer: Intel i386, Intel x86-64, Motorola PPC, Sun Sparc, HP PA-RISC
    Operating system: GNU/Linux, Windows XP/Vista/7, Mac OS X, FreeBSD, OpenBSD, Solaris, HP-UX, AIX
    Has the code been vectorized or parallelized?: Yes
    RAM: > 55 Mbytes
    Classification: 4, 9, 11.9, 14
    Catalogue identifier of previous version: AEFA_v1_0
    Journal reference of previous version: Comput. Phys. Commun. 180 (2009) 2499
    Does the new version supersede the previous version?: Yes
    Nature of problem: Storage, analysis and visualization of scientific data
    Solution method: Object store, wide range of analysis algorithms and visualization methods
    Reasons for new version: Added features and corrections of deficiencies
    Summary of revisions: The release notes at http://root.cern.ch/root/v528/Version528.news.html give a module-oriented overview of the changes in v5.28.00. Highlights include: File format: reading of TTrees has been improved dramatically with respect to CPU time (30%) and notably with respect to disk space. Histograms: A
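
    As a small illustration of the object storage the summary refers to, the sketch below writes a TTree to a ROOT file from Python and reads it back; the file, tree and branch names are arbitrary examples, not part of the ROOT release notes.

        # Sketch: storing and re-reading a TTree with ROOT's object store from Python.
        from array import array
        import ROOT

        f = ROOT.TFile("example.root", "RECREATE")
        tree = ROOT.TTree("events", "toy event data")
        x = array("f", [0.0])
        tree.Branch("x", x, "x/F")

        for _ in range(1000):          # fill a simple Gaussian variable
            x[0] = ROOT.gRandom.Gaus(0.0, 1.0)
            tree.Fill()

        tree.Write()
        f.Close()

        # Read it back and histogram the stored values.
        f = ROOT.TFile.Open("example.root")
        t = f.Get("events")
        h = ROOT.TH1F("hx", "x;value;entries", 50, -4, 4)
        t.Draw("x>>hx", "", "goff")    # fill the histogram without graphics
        print("entries:", int(h.GetEntries()), "mean: %.2f" % h.GetMean())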

  17. DPM: Future Proof Storage

    Science.gov (United States)

    Alvarez, Alejandro; Beche, Alexandre; Furano, Fabrizio; Hellmich, Martin; Keeble, Oliver; Rocha, Ricardo

    2012-12-01

    The Disk Pool Manager (DPM) is a lightweight solution for grid enabled disk storage management. Operated at more than 240 sites, it has the widest distribution of all grid storage solutions in the WLCG infrastructure. It provides an easy way to manage and configure disk pools, and exposes multiple interfaces for data access (rfio, xroot, nfs, gridftp and http/dav) and control (srm). During the last year we have been working on providing stable, high-performance data access to our storage system using standard protocols, while extending the storage management functionality and adapting both configuration and deployment procedures to reuse commonly used building blocks. In this contribution we cover in detail the extensive evaluation we have performed of our new HTTP/WebDAV and NFS 4.1 frontends, in terms of functionality and performance. We summarize the issues we faced and the solutions we developed to turn them into valid alternatives to the existing grid protocols - namely the additional work required to provide multi-stream transfers for high-performance wide-area access, support for third-party copies, credential delegation, and the required changes in the experiment and fabric management frameworks and tools. We describe new functionality that has been added to ease system administration, such as different filesystem weights and a faster disk drain, and new configuration and monitoring solutions based on the industry standards Puppet and Nagios. Finally, we explain some of the internal changes we had to make in the DPM architecture to better handle the additional load from the analysis use cases.
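
    The third-party copies mentioned above are expressed at the HTTP/WebDAV level as a COPY request carrying a Destination header. The sketch below shows that basic mechanism generically; the URLs and credential paths are placeholders, and a real grid deployment adds X.509 or token authentication and delegation details that are outside this illustration.

        # Sketch: a WebDAV COPY request, the basic HTTP mechanism behind third-party
        # copies between HTTP-enabled storage endpoints. URLs/paths are placeholders.
        import requests

        source = "https://dpm-a.example.org/dpm/example.org/home/vo/file.dat"
        destination = "https://dpm-b.example.org/dpm/example.org/home/vo/file.dat"

        resp = requests.request(
            "COPY",
            source,
            headers={"Destination": destination},
            cert=("/tmp/x509up_u1000", "/tmp/x509up_u1000"),   # placeholder proxy cert
            verify="/etc/grid-security/certificates",
        )
        print(resp.status_code, resp.reason)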

  18. Notes on a storage manager for the Clouds kernel

    Science.gov (United States)

    Pitts, David V.; Spafford, Eugene H.

    1986-01-01

    The Clouds project is research directed towards producing a reliable distributed computing system. The initial goal is to produce a kernel which provides a reliable environment with which a distributed operating system can be built. The Clouds kernel consists of a set of replicated subkernels, each of which runs on a machine in the Clouds system. Each subkernel is responsible for the management of resources on its machine; the subkernel components communicate to provide the cooperation necessary to meld the various machines into one kernel. The implementation of a kernel-level storage manager that supports reliability is documented. The storage manager is a part of each subkernel and maintains the secondary storage residing at each machine in the distributed system. In addition to providing the usual data transfer services, the storage manager ensures that data being stored survives machine and system crashes, and that the secondary storage of a failed machine is recovered (made consistent) automatically when the machine is restarted. Since the storage manager is part of the Clouds kernel, efficiency of operation is also a concern.

  19. Tritium storage

    International Nuclear Information System (INIS)

    Hircq, B.

    1990-01-01

    This document is a synthesis on tritium storage. After indicating the main particularities of storing tritium, storage in gaseous and solid form is examined, and choices are then established as a function of the main criteria. Finally, tritium storage is discussed with regard to the tritium devices associated with fusion reactors and with regard to smaller devices [fr]

  20. Mass storage system by using broadcast technology

    International Nuclear Information System (INIS)

    Fujii, Hirofumi; Itoh, Ryosuke; Manabe, Atsushi; Miyamoto, Akiya; Morita, Youhei; Nozaki, Tadao; Sasaki, Takashi; Watase, Yoshiyuko; Yamasaki, Tokuyuki

    1996-01-01

    There are many similarities between data recording systems for high energy physics and broadcast systems: the data flow is almost one-way, real-time recording is required, large-scale automated libraries are needed for 24-hour operation, etc. In addition to these functional similarities, the required data-transfer and data-recording speeds are also close to those of near-future experiments. For these reasons, we have collaborated with the SONY Broadcast Company to study the usability of broadcast devices for our data storage system. Our new data storage system consists of high-speed data recorders and tape robots which are originally based on the digital video-tape recorder and the tape robot for broadcast systems. We are also studying the possibility of using these technologies for the online data-recording system for the B-physics experiment at KEK. (author)