WorldWideScience

Sample records for data storage

  1. Compact Holographic Data Storage

    Science.gov (United States)

    Chao, T. H.; Reyes, G. F.; Zhou, H.

    2001-01-01

    NASA's future Space Science missions will require massive, high-speed onboard data storage capability. For Space Science missions such as the Europa Lander, the onboard data storage requirements focus on maximizing the spacecraft's ability to survive fault conditions (i.e., no loss of stored science data when the spacecraft enters 'safe mode') and to recover from them autonomously during NASA's long-life, deep-space missions. This requires the development of non-volatile memory. To survive the stringent environment of space exploration missions, onboard memory must also: (1) survive a high-radiation environment (1 Mrad), (2) operate effectively and efficiently for a very long time (10 years), and (3) sustain at least a billion write cycles. The memory technology requirements of NASA's Earth Science and Space Science missions are therefore large capacity, non-volatility, high transfer rate, high radiation resistance, high storage density, and high power efficiency. JPL, under current sponsorship from NASA Space Science and Earth Science Programs, is developing a high-density, non-volatile, rad-hard Compact Holographic Data Storage (CHDS) system to enable large-capacity, high-speed, low-power read/write of data in a space environment. The entire read/write operation will be controlled by electro-optic mechanisms without any moving parts. The CHDS will consist of laser diodes, a photorefractive crystal, a spatial light modulator, a photodetector array, and an I/O electronic interface. In operation, pages of information are recorded and retrieved with random access at high speed. The non-volatile, rad-hard characteristics of the holographic memory will provide a revolutionary memory technology that meets the high-radiation challenge facing the Europa Lander mission. Additional information is contained in the original extended abstract.

  2. Holographic Optical Data Storage

    Science.gov (United States)

    Timucin, Dogan A.; Downie, John D.; Norvig, Peter (Technical Monitor)

    2000-01-01

    Although the basic idea may be traced back to the earlier X-ray diffraction studies of Sir W. L. Bragg, the holographic method as we know it was invented by D. Gabor in 1948 as a two-step lensless imaging technique to enhance the resolution of electron microscopy, for which he received the 1971 Nobel Prize in physics. The distinctive feature of holography is the recording of the object phase variations that carry the depth information, which is lost in conventional photography where only the intensity (= squared amplitude) distribution of an object is captured. Since all photosensitive media necessarily respond to the intensity incident upon them, an ingenious way had to be found to convert object phase into intensity variations, and Gabor achieved this by introducing a coherent reference wave along with the object wave during exposure. Gabor's in-line recording scheme, however, required the object in question to be largely transmissive, and could provide only marginal image quality due to unwanted terms simultaneously reconstructed along with the desired wavefront. Further handicapped by the lack of a strong coherent light source, optical holography thus seemed fated to remain just another scientific curiosity, until the field was revolutionized in the early 1960s by some major breakthroughs: the proposition and demonstration of the laser principle, the introduction of off-axis holography, and the invention of volume holography. Consequently, the remainder of that decade saw an exponential growth in research on theory, practice, and applications of holography. Today, holography not only boasts a wide variety of scientific and technical applications (e.g., holographic interferometry for strain, vibration, and flow analysis, microscopy and high-resolution imagery, imaging through distorting media, optical interconnects, holographic optical elements, optical neural networks, three-dimensional displays, data storage, etc.), but has become a prominent ...

  3. The Fermilab data storage infrastructure

    International Nuclear Information System (INIS)

    Jon A Bakken et al.

    2003-01-01

    Fermilab, in collaboration with the DESY laboratory in Hamburg, Germany, has created a petabyte-scale data storage infrastructure to meet the requirements of experiments to store and access large data sets. The Fermilab data storage infrastructure consists of the following major storage and data transfer components: the Enstore mass storage system, the dCache distributed data cache, and FTP and GridFTP, primarily for external data transfers. This infrastructure provides a data throughput sufficient for transferring data from the experiments' data acquisition systems. It also allows access to data in the Grid framework.

  4. The Petascale Data Storage Institute

    Energy Technology Data Exchange (ETDEWEB)

    Gibson, Garth [Carnegie Mellon Univ., Pittsburgh, PA (United States); Long, Darrell [The Regents of the University of California, Santa Cruz, CA (United States); Honeyman, Peter [Univ. of Michigan, Ann Arbor, MI (United States); Grider, Gary [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Kramer, William [National Energy Research Scientific Computing Center, Berkeley, CA (United States); Shalf, John [National Energy Research Scientific Computing Center, Berkeley, CA (United States); Roth, Philip [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Felix, Evan [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Ward, Lee [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2013-07-01

    Petascale computing infrastructures for scientific discovery make petascale demands on information storage capacity, performance, concurrency, reliability, availability, and manageability. The Petascale Data Storage Institute focuses on the data storage problems found in petascale scientific computing environments, with special attention to community issues such as interoperability, community buy-in, and shared tools. The Petascale Data Storage Institute is a collaboration between researchers at Carnegie Mellon University, National Energy Research Scientific Computing Center, Pacific Northwest National Laboratory, Oak Ridge National Laboratory, Sandia National Laboratory, Los Alamos National Laboratory, University of Michigan, and the University of California at Santa Cruz.

  5. Data Centre Infrastructure & Data Storage @ Facebook

    CERN Multimedia

    CERN. Geneva; Garson, Matt; Kauffman, Mike

    2018-01-01

    Several speakers from Facebook will present their take on the infrastructure of their data center and storage facilities, as follows: 10:00 - Facebook Data Center Infrastructure, by Delfina Eberly, Mike Kauffman and Veerendra Mulay: insight into how Facebook thinks about data center design, including electrical and cooling systems, and the technology and tooling used to manage data centers. 11:00 - Storage at Facebook, by Matt Garson: an overview of Facebook infrastructure, focusing on different storage systems, in particular photo/video storage and storage for data analytics. About the speakers: Mike Kauffman, Director, Data Center Site Engineering; Delfina Eberly, Infrastructure, Site Services; Matt Garson, Storage at Facebook; Veerendra Mulay, Infrastructure.

  6. ALICE bags data storage accolades

    CERN Multimedia

    2007-01-01

    Computerworld has recognized CERN with an award for 'Best Practices in Storage' for ALICE's data acquisition system, in the category of 'Systems Implementation'. The award was presented to the ALICE DAQ team on 18 April at a ceremony in San Diego, CA. (Top) ALICE physicist Ulrich Fuchs. (Bottom) Three of the five storage racks for the ALICE Data Acquisition system (Photo: Antonio Saba). Between 16 and 19 April, one thousand people from data storage networks around the world gathered to attend the biannual Storage Networking World Conference. Twenty-five companies and organizations were celebrated as finalists, and five of those were given honorary awards, among them CERN, which tied for first place in the category of Systems Implementation for the success of the ALICE Data Acquisition System. CERN was one of five finalists in this category, which recognizes the winning facility for 'the successful design, implementation and management of an interoperable environment'. 'Successful' could include documentati...

  7. Unit 037 - Fundamentals of Data Storage

    OpenAIRE

    037, CC in GIScience; Jacobson, Carol R.

    2000-01-01

    This unit introduces the concepts and terms needed to understand storage of GIS data in a computer system, including the weaknesses of a discrete data model for representing the real world; an overview of data storage types and terminology; and a description of data storage issues.

  8. Encrypted Data Storage in EGEE

    CERN Document Server

    Frohner, Ákos

    2006-01-01

    The medical community is routinely using clinical images and associated medical data for diagnosis, intervention planning and therapy follow-up. Medical imaging is producing an increasing number of digital images for which computerized archiving, processing and analysis are needed. Grids are promising infrastructures for managing and analyzing the huge medical databases. Given the sensitive nature of medical images, however, practitioners are often reluctant to use distributed systems. Security is often implemented by isolating the imaging network from the outside world inside hospitals. Given the wide-scale distribution of grid infrastructures and their multiple administrative entities, the level of security for manipulating medical data should be particularly high. In this presentation we describe the architecture of a solution, the gLite Encrypted Data Storage (EDS), which was developed in the framework of Enabling Grids for E-sciencE (EGEE), a project of the European Commission (contract number INFSO--508...

  9. Recordable storage medium with protected data area

    NARCIS (Netherlands)

    2005-01-01

    The invention relates to a method of storing data on a rewritable data storage medium, to a corresponding storage medium, to a corresponding recording apparatus and to a corresponding playback apparatus. Copy-protective measures require that on rewritable storage media some data must be stored which

  10. Data Acquisition and Mass Storage

    Science.gov (United States)

    Vande Vyvre, P.

    2004-08-01

    The experiments performed at supercolliders will constitute a new challenge in several disciplines of High Energy Physics and Information Technology. This will definitely be the case for data acquisition and mass storage. The microelectronics, communication, and computing industries are maintaining an exponential increase of the performance of their products. The market of commodity products remains the largest and the most competitive market of technology products. This constitutes a strong incentive to use these commodity products extensively as components to build the data acquisition and computing infrastructures of the future generation of experiments. The present generation of experiments in Europe and in the US already constitutes an important step in this direction. The experience acquired in the design and the construction of the present experiments has to be complemented by a large R&D effort executed with good awareness of industry developments. The future experiments will also be expected to follow major trends of our present world: deliver physics results faster and become more and more visible and accessible. The present evolution of the technologies and the burgeoning of GRID projects indicate that these trends will be made possible. This paper includes a brief overview of the technologies currently used for the different tasks of the experimental data chain: data acquisition, selection, storage, processing, and analysis. The major trends of the computing and networking technologies are then indicated with particular attention paid to their influence on the future experiments. Finally, the vision of future data acquisition and processing systems and their promise for future supercolliders is presented.

  11. High Density Digital Data Storage System

    Science.gov (United States)

    Wright, Kenneth D., II; Gray, David L.; Rowland, Wayne D.

    1991-01-01

    The High Density Digital Data Storage System was designed to provide a cost-effective means for storing real-time data from the field-deployable digital acoustic measurement system. However, the high-density data storage system is a standalone system that could provide a storage solution for many other real-time data acquisition applications. The storage system has inputs for up to 20 channels of 16-bit digital data. The high-density tape recorders presently being used in the storage system are capable of storing over 5 gigabytes of data at overall transfer rates of 500 kilobytes per second. However, through the use of data compression techniques the system storage capacity and transfer rate can be doubled. Two tape recorders have been incorporated into the storage system to produce a backup tape of data in real time. An analog output is provided for each data channel as a means of monitoring the data as it is being recorded.
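    As a quick sanity check of the figures quoted above (a back-of-the-envelope sketch, not part of the original abstract), the stated capacity and transfer rate imply a recording window of a few hours per tape, and 2:1 compression doubles the amount of source data a tape holds without changing that window:

```python
# Back-of-the-envelope check of the figures quoted in the abstract.
# Decimal prefixes assumed: 1 GB = 1e9 bytes, 1 KB = 1e3 bytes.
capacity_bytes = 5e9      # over 5 gigabytes per tape
transfer_rate = 500e3     # 500 kilobytes per second overall

hours_to_fill = capacity_bytes / transfer_rate / 3600
print(f"One tape fills in about {hours_to_fill:.1f} hours")   # roughly 2.8 hours

# With 2:1 compression the effective capacity and the effective input rate both
# double, so the recording window stays the same but twice as much source data
# fits on each tape.
print(f"Compressed: {2 * capacity_bytes / 1e9:.0f} GB at {2 * transfer_rate / 1e3:.0f} KB/s effective")
```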

  12. Data storage as a service

    OpenAIRE

    Tomšič, Jan

    2016-01-01

    The purpose of the thesis was a comparison of interfaces to network-attached file systems and object storage. The thesis describes the network file system and the mounting procedure in the Linux operating system. Object storage and distributed storage systems are explained with examples of usage. Amazon S3 is an example of an object store with access through a REST interface. Ceph, a system for distributed object storage, is explained in detail, and a Ceph cluster was deployed for the purpose of this thesis. Cep...
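    As a minimal illustration of the object-store model the thesis compares against file-system mounts (this snippet is not from the thesis; the bucket and key names are placeholders), one round trip through the Amazon S3 REST API via boto3 looks like this:

```python
import boto3

# Minimal object-store round trip via the S3 API (bucket and key are placeholders).
s3 = boto3.client("s3")

# Objects are addressed by bucket + key rather than by a filesystem path.
s3.put_object(Bucket="example-bucket", Key="reports/run42.dat", Body=b"raw payload")

response = s3.get_object(Bucket="example-bucket", Key="reports/run42.dat")
payload = response["Body"].read()
print(len(payload), "bytes retrieved")
```

    A Ceph cluster exposing its S3-compatible gateway can be addressed with the same calls by passing an endpoint_url to boto3.client.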

  13. Liquid crystals for holographic optical data storage

    DEFF Research Database (Denmark)

    Matharu, Avtar; Jeeva, S.; Ramanujam, P.S.

    2007-01-01

    ...to the information storage demands of the 21st century is detailed. Holography is a small subset of the much larger field of optical data storage and similarly, the diversity of materials used for optical data storage is enormous. The theory of polarisation holography which produces holograms of constant intensity...

  14. ICI optical data storage tape: An archival mass storage media

    Science.gov (United States)

    Ruddick, Andrew J.

    1993-01-01

    At the 1991 Conference on Mass Storage Systems and Technologies, ICI Imagedata presented a paper which introduced ICI Optical Data Storage Tape. This paper placed specific emphasis on the media characteristics and initial data was presented which illustrated the archival stability of the media. More exhaustive analysis that was carried out on the chemical stability of the media is covered. Equally important, it also addresses archive management issues associated with, for example, the benefits of reduced rewind requirements to accommodate tape relaxation effects that result from careful tribology control in ICI Optical Tape media. ICI Optical Tape media was designed to meet the most demanding requirements of archival mass storage. It is envisaged that the volumetric data capacity, long term stability and low maintenance characteristics demonstrated will have major benefits in increasing reliability and reducing the costs associated with archival storage of large data volumes.

  15. ENERGY STAR Certified Data Center Storage

    Science.gov (United States)

    Certified models meet all ENERGY STAR requirements as listed in the Version 1.0 ENERGY STAR Program Requirements for Data Center Storage that are effective as of December 2, 2013. A detailed listing of key efficiency criteria is available at http://www.energystar.gov/certified-products/detail/data_center_storage

  16. The challenge of a data storage hierarchy

    Science.gov (United States)

    Ruderman, Michael

    1992-01-01

    A discussion of Mesa Archival Systems' data archiving system is presented. This data archiving system is strictly a software system that is implemented on a mainframe and manages the data into permanent file storage. Emphasis is placed on the fact that any kind of client system on the network can be connected through the Unix interface of the data archiving system.

  17. Cloud-Based Data Storage

    Science.gov (United States)

    Waters, John K.

    2011-01-01

    The vulnerability and inefficiency of backing up data on-site are prompting school districts to switch to more secure, less troublesome cloud-based options. District auditors are pushing for a better way to back up their data than the on-site, tape-based system that had been used for years. About three years ago, Hendrick School District in…

  18. Federated data storage and management infrastructure

    International Nuclear Information System (INIS)

    Zarochentsev, A; Kiryanov, A; Klimentov, A; Krasnopevtsev, D; Hristov, P

    2016-01-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. Computing models for the High Luminosity LHC era anticipate storage needs growing by orders of magnitude; this will require new approaches to data storage organization and data handling. In our project we address the fundamental problem of designing an architecture to integrate distributed heterogeneous disk resources for LHC experiments and other data-intensive science applications and to provide access to data from heterogeneous computing facilities. We have prototyped a federated storage for the Russian T1 and T2 centers located in Moscow, St. Petersburg and Gatchina, as well as a Russian/CERN federation. We have conducted extensive tests of the underlying network infrastructure and storage endpoints with synthetic performance measurement tools as well as with HENP-specific workloads, including ones running on supercomputing platforms, cloud computing and the Grid for the ALICE and ATLAS experiments. We will present our current accomplishments with running LHC data analysis remotely and locally to demonstrate our ability to efficiently use federated data storage experiment-wide within national academic facilities for High Energy and Nuclear Physics as well as for other data-intensive science applications, such as bio-informatics. (paper)

  19. Cloud Data Storage Federation for Scientific Applications

    NARCIS (Netherlands)

    Koulouzis, S.; Vasyunin, D.; Cushing, R.; Belloum, A.; Bubak, M.; an Mey, D.; Alexander, M.; Bientinesi, P.; Cannataro, M.; Clauss, C.; Costan, A.; Kecskemeti, G.; Morin, C.; Ricci, L.; Sahuquillo, J.; Schulz, M.; Scarano, V.; Scott, S.L.; Weidendorfer, J.

    2014-01-01

    Nowadays, data-intensive scientific research needs storage capabilities that enable efficient data sharing. This is of great importance for many scientific domains such as the Virtual Physiological Human. In this paper, we introduce a solution that federates a variety of systems ranging from file

  20. New data storage and retrieval systems for JET data

    International Nuclear Information System (INIS)

    Layne, Richard; Wheatley, Martin

    2002-01-01

    Since the start of the Joint European Torus (JET), an IBM mainframe has been the main platform for data analysis and storage (J. Comput. Phys. 73 (1987) 85). The mainframe was removed in June 2001 and Solaris and Linux are now the main data storage and analysis platforms. New data storage and retrieval systems have therefore been developed: the Data Warehouse, the JET pulse file server, and the processed pulse file system. In this paper, the new systems will be described, and the design decisions that led to the final systems will be outlined

  1. New data storage and retrieval systems for JET data

    Energy Technology Data Exchange (ETDEWEB)

    Layne, Richard E-mail: richard.layne@ukaea.org.uk; Wheatley, Martin E-mail: martin.wheatley@ukaea.org.uk

    2002-06-01

    Since the start of the Joint European Torus (JET), an IBM mainframe has been the main platform for data analysis and storage (J. Comput. Phys. 73 (1987) 85). The mainframe was removed in June 2001 and Solaris and Linux are now the main data storage and analysis platforms. New data storage and retrieval systems have therefore been developed: the Data Warehouse, the JET pulse file server, and the processed pulse file system. In this paper, the new systems will be described, and the design decisions that led to the final systems will be outlined.

  2. Peptide oligomers for holographic data storage

    DEFF Research Database (Denmark)

    Berg, Rolf Henrik; Hvilsted, Søren; Ramanujam, P.S.

    1996-01-01

    Several classes of organic materials (such as photoanisotropic liquid-crystalline polymers(1-4) and photorefractive polymers(5-7)) are being investigated for the development of media for optical data storage. Here we describe a new family of organic materials: peptide oligomers containing azobenzene...

  3. StorageTek T10000 Data Cartridge

    CERN Multimedia

    This data cartridge works on several StorageTek systems. The goal is to provide cartridge compatibility across several systems. It has been designed as a space-saving, ultra-high-capacity tape, suitable for high-volume backup, archiving, and disaster recovery.

  4. The NOAO Data Lab virtual storage system

    Science.gov (United States)

    Graham, Matthew J.; Fitzpatrick, Michael J.; Norris, Patrick; Mighell, Kenneth J.; Olsen, Knut; Stobie, Elizabeth B.; Ridgway, Stephen T.; Bolton, Adam S.; Saha, Abhijit; Huang, Lijuan W.

    2016-07-01

    Collaborative research/computing environments are essential for working with the next generations of large astronomical data sets. A key component of them is a distributed storage system to enable data hosting, sharing, and publication. VOSpace is a lightweight interface providing network access to arbitrary backend storage solutions and endorsed by the International Virtual Observatory Alliance (IVOA). Although similar APIs exist, such as Amazon S3, WebDAV, and Dropbox, VOSpace is designed to be protocol agnostic, focusing on data control operations, and supports asynchronous and third-party data transfers, thereby minimizing unnecessary data transfers. It also allows arbitrary computations to be triggered as a result of a transfer operation: for example, a file can be automatically ingested into a database when put into an active directory, or a data reduction task, such as SExtractor, can be run on it. In this paper, we describe the VOSpace implementations that we have developed for the NOAO Data Lab. These offer both dedicated remote storage, accessible as a local file system via FUSE, and a local VOSpace service to easily enable data synchronization.
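    Because the Data Lab implementation exposes remote VOSpace storage as a local file system via FUSE, client code can treat it as an ordinary directory. The sketch below assumes a hypothetical mount point and is not taken from the Data Lab software:

```python
from pathlib import Path

# Hypothetical FUSE mount point for a remote VOSpace (an assumption for this
# sketch, not the actual NOAO Data Lab path). Once mounted, remote storage
# behaves like a local directory, so standard file I/O is all that is needed.
vospace = Path("/mnt/vospace/mydata")
vospace.mkdir(parents=True, exist_ok=True)

# Writing a file here uploads it to the remote store; per the paper, dropping
# it into an "active" directory could trigger ingestion or a reduction task.
(vospace / "catalog.csv").write_text("ra,dec,mag\n10.68,41.27,3.4\n")

for entry in vospace.iterdir():
    print(entry.name)
```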

  5. Data backup security in cloud storage system

    OpenAIRE

    Atayan, Boris Gennadievich; National Polytechnic University of Armenia; Baghdasaryan, Tatevik Araevna; National Polytechnic University of Armenia

    2016-01-01

    A cloud backup system is proposed, which provides means for effective creation, secure storage and restore of backups in the Cloud. For data archiving, a new efficient SGBP file format is used in the system, which is based on the DEFLATE compression algorithm. The proposed format provides means for fast creation of archives that can contain significant amounts of data. Modern approaches to backup archive protection are described in the paper. Also, the SGBP format is compared to the heavily used ZIP format (both Z...
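    The abstract states that the SGBP archive format is based on the DEFLATE algorithm. The snippet below illustrates only DEFLATE compression itself, using Python's zlib module; the SGBP container layout is not described in this record, so nothing here is specific to it:

```python
import zlib

# DEFLATE compression/decompression as provided by zlib; the SGBP format
# described in the abstract layers its own archive structure on top of this.
original = b"sensor log line\n" * 10_000

compressed = zlib.compress(original, level=6)   # DEFLATE stream with zlib framing
restored = zlib.decompress(compressed)

assert restored == original
print(f"{len(original)} bytes -> {len(compressed)} bytes")
```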

  6. Cloud and virtual data storage networking

    CERN Document Server

    Schulz, Greg

    2011-01-01

    The amount of data being generated, processed, and stored has reached unprecedented levels. Even during the recent economic crisis, there has been no slow down or information recession. Instead, the need to process, move, and store data has only increased. Consequently, IT organizations are looking to do more with what they have while supporting growth along with new services without compromising on cost and service delivery. Cloud and Virtual Data Storage Networking, by savvy IT industry veteran Greg Schulz, looks at converging IT resources and management technologies for facilitating efficie

  7. Failure Analysis of Storage Data Magnetic Systems

    Directory of Open Access Journals (Sweden)

    Ortiz–Prado A.

    2010-10-01

    This paper presents conclusions about the corrosion mechanisms in magnetic data storage systems (hard disks), drawn from the inspection of 198 units that were in service in nine climatic regions characteristic of Mexico. The results make it possible to identify trends in the failure modes and the factors that affect them. The study also analyzed the causes of mechanical failure and of deterioration by atmospheric corrosion. The field-sampling results demonstrate that hard disk failure is fundamentally mechanical. Deterioration by environmental effects was found in read-write heads, integrated circuits, printed circuit boards and some electronic components of the drive's controller card, but not on the magnetic storage surfaces. Corrosion of the disk surface can therefore be ruled out as the main failure mode due to environmental deterioration. To avoid problems in a magnetic data storage system it is necessary to ensure that the system is properly sealed.

  8. PETASCALE DATA STORAGE INSTITUTE (PDSI) Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Gibson, Garth [Carnegie Mellon University

    2012-11-26

    Petascale computing infrastructures for scientific discovery make petascale demands on information storage capacity, performance, concurrency, reliability, availability, and manageability. The Petascale Data Storage Institute focuses on the data storage problems found in petascale scientific computing environments, with special attention to community issues such as interoperability, community buy-in, and shared tools. The Petascale Data Storage Institute is a collaboration between researchers at Carnegie Mellon University, National Energy Research Scientific Computing Center, Pacific Northwest National Laboratory, Oak Ridge National Laboratory, Sandia National Laboratory, Los Alamos National Laboratory, University of Michigan, and the University of California at Santa Cruz. Because the Institute focuses on low-level file systems and storage systems, its role in improving SciDAC systems was one of supporting application middleware such as data management and system-level performance tuning. In retrospect, the Petascale Data Storage Institute's most innovative and impactful contribution is the Parallel Log-structured File System (PLFS). Published at SC09, PLFS is middleware that operates in MPI-IO or embedded in FUSE for non-MPI applications. Its function is to decouple concurrently written files into a per-process log file, whose impact (the contents of the single file that the parallel application was concurrently writing) is determined on later reading rather than during writing. PLFS is transparent to the parallel application, offering a POSIX or MPI-IO interface, and it shows an order-of-magnitude speedup on the Chombo benchmark and two orders of magnitude on the FLASH benchmark. Moreover, LANL production applications see speedups of 5X to 28X, so PLFS has been put into production at LANL. Originally conceived and prototyped in a PDSI collaboration between LANL and CMU, it has grown to engage many other PDSI institutes, international partners like AWE

  9. Long-term data storage in diamond

    Science.gov (United States)

    Dhomkar, Siddharth; Henshaw, Jacob; Jayakumar, Harishankar; Meriles, Carlos A.

    2016-01-01

    The negatively charged nitrogen vacancy (NV−) center in diamond is the focus of widespread attention for applications ranging from quantum information processing to nanoscale metrology. Although most work so far has focused on the NV− optical and spin properties, control of the charge state promises complementary opportunities. One intriguing possibility is the long-term storage of information, a notion we hereby introduce using NV-rich, type 1b diamond. As a proof of principle, we use multicolor optical microscopy to read, write, and reset arbitrary data sets with two-dimensional (2D) binary bit density comparable to present digital-video-disk (DVD) technology. Leveraging on the singular dynamics of NV− ionization, we encode information on different planes of the diamond crystal with no cross-talk, hence extending the storage capacity to three dimensions. Furthermore, we correlate the center’s charge state and the nuclear spin polarization of the nitrogen host and show that the latter is robust to a cycle of NV− ionization and recharge. In combination with super-resolution microscopy techniques, these observations provide a route toward subdiffraction NV charge control, a regime where the storage capacity could exceed present technologies. PMID:27819045

  10. Long-term data storage in diamond.

    Science.gov (United States)

    Dhomkar, Siddharth; Henshaw, Jacob; Jayakumar, Harishankar; Meriles, Carlos A

    2016-10-01

    The negatively charged nitrogen vacancy (NV−) center in diamond is the focus of widespread attention for applications ranging from quantum information processing to nanoscale metrology. Although most work so far has focused on the NV− optical and spin properties, control of the charge state promises complementary opportunities. One intriguing possibility is the long-term storage of information, a notion we hereby introduce using NV-rich, type 1b diamond. As a proof of principle, we use multicolor optical microscopy to read, write, and reset arbitrary data sets with two-dimensional (2D) binary bit density comparable to present digital-video-disk (DVD) technology. Leveraging on the singular dynamics of NV− ionization, we encode information on different planes of the diamond crystal with no cross-talk, hence extending the storage capacity to three dimensions. Furthermore, we correlate the center's charge state and the nuclear spin polarization of the nitrogen host and show that the latter is robust to a cycle of NV− ionization and recharge. In combination with super-resolution microscopy techniques, these observations provide a route toward subdiffraction NV charge control, a regime where the storage capacity could exceed present technologies.

  11. Ultrasonic identity data storage and archival system

    International Nuclear Information System (INIS)

    Mc Kenzie, J.M.; Self, B.G.; Walker, J.E.

    1987-01-01

    Ultrasonic seals are being used to determine if an underwater stored spent fuel container has been compromised and can be used to determine if a nuclear material container has been compromised. The Seal Pattern Reader (SPAR) is a microprocessor controlled instrument which interrogates an ultrasonic seal to obtain its identity. The SPAR can compare the present identity with a previous identity, which it obtains from a magnetic bubble cassette memory. A system has been developed which allows an IAEA inspector to transfer seal information obtained at a facility by the SPAR to an IAEA-based data storage and retrieval system, using the bubble cassette memory. Likewise, magnetic bubbles can be loaded at the IAEA with seal signature data needed at a facility for comparison purposes. The archived signatures can be retrieved from the data base for relevant statistical manipulation and for plotting

  12. Heterogeneous Data Storage Management with Deduplication in Cloud Computing

    OpenAIRE

    Yan, Zheng; Zhang, Lifang; Ding, Wenxiu; Zheng, Qinghua

    2017-01-01

    Cloud storage as one of the most important services of cloud computing helps cloud users break the bottleneck of restricted resources and expand their storage without upgrading their devices. In order to guarantee the security and privacy of cloud users, data are always outsourced in an encrypted form. However, encrypted data could incur much waste of cloud storage and complicate data sharing among authorized users. We are still facing challenges on encrypted data storage and management with ...

  13. Developments in data storage materials perspective

    CERN Document Server

    Chong, Chong Tow

    2011-01-01

    "The book covers the recent developments in the field of materials for advancing recording technology by experts worldwide. Chapters that provide sufficient information on the fundamentals will be also included, so that the book can be followed by graduate students or a beginner in the field of magnetic recording. The book also would have a few chapters related to optical data storage. In addition to helping a graduate student to quickly grasp the subject, the book also will serve as a useful reference material for the advanced researcher. The field of materials science related to data storage applications (especially hard disk drives) is rapidly growing. Several innovations take place every year in order to keep the growth trend in the capacity of the hard disk drives. Moreover, magnetic recording is very complicated that it is quite difficult for new engineers and graduate students in the field of materials science or electrical engineering to grasp the subject with a good understanding. There are no compet...

  14. Data storage and retrieval system abstract

    Science.gov (United States)

    Matheson, Barbara

    1992-09-01

    The STX mass storage system design is intended for environments requiring high-speed access to large volumes of data (terabyte and greater). Prior to commitment to a product design plan, STX conducted an exhaustive study of the commercially available off-the-shelf hardware and software. STX also conducted research into the area of emerging technologies in networks and storage media so that the design could easily accommodate new interfaces and peripherals as they came on the market. All the selected system elements were brought together in a demo suite sponsored jointly by STX and ALLIANT where the system elements were evaluated based on actual operation using a client-server mirror image configuration. Testing was conducted to assess the various component overheads and results were compared against vendor data claims. The resultant system, while adequate to meet our capacity requirements, fell short of transfer speed expectations. A product team led by STX was assembled and chartered with solving the bottleneck issues. Optimization efforts yielded a 60 percent improvement in throughput performance. The ALLIANT computer platform provided the I/O flexibility needed to accommodate a multitude of peripheral interfaces including the following: up to twelve 25 MB/s VME I/O channels; up to five HiPPI I/O full duplex channels; IPI-s, SCSI, SMD, and RAID disk array support; standard networking software support for TCP/IP, NFS, and FTP; open architecture based on standard RISC processors; and V.4/POSIX-based operating system (Concentrix). All components including the software are modular in design and can be reconfigured as needs and system uses change. Users can begin with a small system and add modules as needed in the field. Most add-ons can be accomplished seamlessly without revision, recompilation or re-linking of software.

  15. Data storage and data access at H1

    International Nuclear Information System (INIS)

    Gerhards, R.; Kleinwort, C.; Kruener-Marquis, U.; Niebergall, F.

    1996-01-01

    The electron-proton collider HERA at the DESY laboratory in Hamburg and the H1 experiment have now been in successful operation for more than three years. The H1 experiment is logging data at an average rate of 500 KB/s, which results in a yearly raw data volume of several terabytes. The data are reconstructed with a delay of only a few hours, also yielding several terabytes of reconstructed data after physics-oriented event classification. Physics analysis is performed on an SGI Challenge computer, equipped with about 500 GB of disk and, since a few months ago, direct access to a StorageTek ACS 4400 silo. The disk space is mainly devoted to storing the reconstructed data in a very compressed format (typically 5 to 10 KB per event). This allows for very efficient and fast physics analysis. Monte Carlo data, on the other hand, are kept in the ACS silo and staged to disk on demand. (author)

  16. Utilizing cloud storage architecture for long-pulse fusion experiment data storage

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Ming; Liu, Qiang [State Key Laboratory of Advanced Electromagnetic Engineering and Technology, Wuhan, Hubei (China); School of Electrical and Electronic Engineering, Huazhong University of Science and Technology, Wuhan, Hubei (China); Zheng, Wei, E-mail: zhenghaku@gmail.com [State Key Laboratory of Advanced Electromagnetic Engineering and Technology, Wuhan, Hubei (China); School of Electrical and Electronic Engineering, Huazhong University of Science and Technology, Wuhan, Hubei (China); Wan, Kuanhong; Hu, Feiran; Yu, Kexun [State Key Laboratory of Advanced Electromagnetic Engineering and Technology, Wuhan, Hubei (China); School of Electrical and Electronic Engineering, Huazhong University of Science and Technology, Wuhan, Hubei (China)

    2016-11-15

    Scientific data storage plays a significant role in research facilities. The explosion of data in recent years makes data access, acquisition and management more difficult, especially in fusion research. For future long-pulse experiments like ITER, extremely large volumes of data will be generated continuously over long periods, putting much pressure on both write performance and scalability. Traditional databases also have defects such as inconvenient management and architectures that are hard to scale. Hence a new data storage system is essential. J-TEXTDB is a data storage and management system based on an application cluster and a storage cluster. J-TEXTDB is designed for big data storage and access, aiming to improve read-write speed and optimize the data system structure. The application cluster of J-TEXTDB provides data management functions and handles data read and write operations from users. The storage cluster provides the storage services. Both clusters are composed of general-purpose servers. Simply adding servers to a cluster improves the read-write performance, storage space and redundancy, making the whole data system highly scalable and available. In this paper, we propose a data system architecture and data model to manage data more efficiently. Benchmarks of J-TEXTDB performance, including read and write operations, are given.
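    The architecture described above routes client reads and writes through an application cluster onto a scalable storage cluster. The toy sketch below (not J-TEXTDB code; node names and the placement policy are assumptions) shows one common way such a front end spreads writes over storage nodes, by hashing each record key:

```python
import hashlib

# Toy illustration of an application-cluster front end dispatching writes
# across a storage cluster. Node names and the placement policy are
# assumptions for illustration, not details of J-TEXTDB.
storage_nodes = ["storage-01", "storage-02", "storage-03"]

def node_for(key: str) -> str:
    """Pick the storage node responsible for a given record key."""
    digest = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return storage_nodes[digest % len(storage_nodes)]

def write_sample(shot: int, channel: str, payload: bytes) -> None:
    key = f"{shot}/{channel}"
    # In a real system this would be a network call to the chosen storage server.
    print(f"routing {len(payload)} bytes for {key} to {node_for(key)}")

write_sample(10423, "ip_current", b"\x00" * 4096)

# Growing the cluster means extending storage_nodes; a production system would
# use consistent hashing so that adding a server moves only a small share of keys.
```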

  17. Utilizing cloud storage architecture for long-pulse fusion experiment data storage

    International Nuclear Information System (INIS)

    Zhang, Ming; Liu, Qiang; Zheng, Wei; Wan, Kuanhong; Hu, Feiran; Yu, Kexun

    2016-01-01

    Scientific data storage plays a significant role in research facilities. The explosion of data in recent years makes data access, acquisition and management more difficult, especially in fusion research. For future long-pulse experiments like ITER, extremely large volumes of data will be generated continuously over long periods, putting much pressure on both write performance and scalability. Traditional databases also have defects such as inconvenient management and architectures that are hard to scale. Hence a new data storage system is essential. J-TEXTDB is a data storage and management system based on an application cluster and a storage cluster. J-TEXTDB is designed for big data storage and access, aiming to improve read-write speed and optimize the data system structure. The application cluster of J-TEXTDB provides data management functions and handles data read and write operations from users. The storage cluster provides the storage services. Both clusters are composed of general-purpose servers. Simply adding servers to a cluster improves the read-write performance, storage space and redundancy, making the whole data system highly scalable and available. In this paper, we propose a data system architecture and data model to manage data more efficiently. Benchmarks of J-TEXTDB performance, including read and write operations, are given.

  18. Enhanced Obfuscation Technique for Data Confidentiality in Public Cloud Storage

    OpenAIRE

    Oli S. Arul; Arockiam L.

    2016-01-01

    With the advent of cloud computing, data storage has become a boon in information technology. At the same time, data storage in remote places has become an important issue. Many techniques are available to ensure protection of data confidentiality, but they do not completely serve the purpose of protecting data. Obfuscation techniques come to the rescue for protecting data from malicious attacks. This paper proposes an obfuscation technique to encrypt the desired data type on the clou...

  19. Long-term data storage in diamond

    OpenAIRE

    Dhomkar, Siddharth; Henshaw, Jacob; Jayakumar, Harishankar; Meriles, Carlos A.

    2016-01-01

    The negatively charged nitrogen vacancy (NV−) center in diamond is the focus of widespread attention for applications ranging from quantum information processing to nanoscale metrology. Although most work so far has focused on the NV− optical and spin properties, control of the charge state promises complementary opportunities. One intriguing possibility is the long-term storage of information, a notion we hereby introduce using NV-rich, type 1b diamond. As a proof of principle, we use multic...

  20. Damsel: A Data Model Storage Library for Exascale Science

    Energy Technology Data Exchange (ETDEWEB)

    Koziol, Quincey [The HDF Group, Champaign, IL (United States)

    2014-11-26

    The goal of this project is to enable exascale computational science applications to interact conveniently and efficiently with storage through abstractions that match their data models. We will accomplish this through three major activities: (1) identifying major data model motifs in computational science applications and developing representative benchmarks; (2) developing a data model storage library, called Damsel, that supports these motifs, provides efficient storage data layouts, incorporates optimizations to enable exascale operation, and is tolerant to failures; and (3) productizing Damsel and working with computational scientists to encourage adoption of this library by the scientific community.

  1. Damsel - A Data Model Storage Library for Exascale Science

    Energy Technology Data Exchange (ETDEWEB)

    Samatova, Nagiza F

    2014-07-18

    The goal of this project is to enable exascale computational science applications to interact conveniently and efficiently with storage through abstractions that match their data models. We will accomplish this through three major activities: (1) identifying major data model motifs in computational science applications and developing representative benchmarks; (2) developing a data model storage library, called Damsel, that supports these motifs, provides efficient storage data layouts, incorporates optimizations to enable exascale operation, and is tolerant to failures; and (3) productizing Damsel and working with computational scientists to encourage adoption of this library by the scientific community.

  2. Scaling up DNA data storage and random access retrieval

    OpenAIRE

    Gopalan, Parikshit; Ceze, Luis; Nguyen, Bichlien; Takahashi, Christopher; Newman, Sharon; Parker, Hsing-Yeh; Rashtchian, Cyrus; Seelig, Georg; Stewart, Kendall; Gupta, Gagan; Carlson, Robert; Mulligan, John; Carmean, Douglas; Yekhanin, Sergey; Makarychev, Konstantin

    2017-01-01

    Current storage technologies can no longer keep pace with exponentially growing amounts of data. Synthetic DNA offers an attractive alternative due to its potential information density of ~10^18 B/mm^3, 10^7 times denser than magnetic tape, and potential durability of thousands of years. Recent advances in DNA data storage have highlighted technical challenges, in particular, coding and random access, but have stored only modest amounts of data in synthetic DNA. This paper demonstrates an end-to...
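    The density figures quoted above can be sanity-checked with a line of arithmetic (decimal prefixes assumed; this is an illustration, not taken from the paper):

```python
# Rough consistency check of the densities quoted in the abstract
# (decimal prefixes assumed: 1 EB = 1e18 bytes).
dna_density = 1e18        # bytes per cubic millimetre, potential density per the abstract
tape_ratio = 1e7          # DNA stated to be ~10^7 times denser than magnetic tape

implied_tape_density = dna_density / tape_ratio
print(f"implied tape density: {implied_tape_density:.0e} B/mm^3")   # ~1e11 B/mm^3

# At 10^18 B/mm^3, a single cubic millimetre could in principle hold an exabyte.
print(f"1 mm^3 of DNA: {dna_density / 1e18:.0f} EB")
```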

  3. ID based cryptography for secure cloud data storage

    OpenAIRE

    Kaaniche , Nesrine; Boudguiga , Aymen; Laurent , Maryline

    2013-01-01

    This paper addresses the security issues of storing sensitive data in a cloud storage service and the need for users to trust the commercial cloud providers. It proposes a cryptographic scheme for cloud storage, based on an original usage of ID-Based Cryptography. Our solution has several advantages. First, it provides secrecy for encrypted data which are stored in public servers. Second, it offers controlled data access and sharing among users, so that unauthorized us...

  4. Security and efficiency data sharing scheme for cloud storage

    International Nuclear Information System (INIS)

    Han, Ke; Li, Qingbo; Deng, Zhongliang

    2016-01-01

    With the adoption and diffusion of the data sharing paradigm in cloud storage, there have been increasing demands and concerns for shared data security. Ciphertext-Policy Attribute-Based Encryption (CP-ABE) is becoming a promising cryptographic solution to the security problem of shared data in cloud storage. However, due to key escrow, backward security and inefficiency problems, existing CP-ABE schemes cannot be directly applied to cloud storage systems. In this paper, an effective and secure access control scheme for shared data is proposed to solve those problems. The proposed scheme refines the security of existing CP-ABE based schemes. Specifically, the key escrow and collusion problems are addressed by dividing the key generation center into several distributed semi-trusted parts. Moreover, a secrecy revocation algorithm is proposed to address not only backward secrecy but also the efficiency problem in the existing CP-ABE based scheme. Furthermore, security and performance analyses indicate that the proposed scheme is both secure and efficient for cloud storage.

  5. Disk storage at CERN: Handling LHC data and beyond

    International Nuclear Information System (INIS)

    Espinal, X; Adde, G; Chan, B; Iven, J; Presti, G Lo; Lamanna, M; Mascetti, L; Pace, A; Peters, A; Ponce, S; Sindrilaru, E

    2014-01-01

    The CERN-IT Data Storage and Services (DSS) group stores and provides access to data coming from the LHC and other physics experiments. We implement specialised storage services to provide tools for optimal data management, based on the evolution of data volumes, the available technologies and the observed experiment and user usage patterns. Our current solutions are CASTOR, for highly reliable tape-backed storage for heavy-duty Tier-0 workflows, and EOS, for disk-only storage for full-scale analysis activities. CASTOR is evolving towards a simplified disk layer in front of the tape robotics, focusing on recording the primary data from the detectors. EOS is now a well-established storage service used intensively by the four big LHC experiments. Its conceptual design, based on multiple replicas and an in-memory namespace, makes it the perfect system for data-intensive workflows. The LHC Long Shutdown 1 (LS1) presents a window of opportunity to consolidate both of our storage services and validate them against the ongoing analysis activity in order to successfully face the new LHC data-taking period in 2015. In this paper, the current state and foreseen evolutions of CASTOR and EOS are presented together with a study of the reliability of our systems.

  6. Enhanced Obfuscation Technique for Data Confidentiality in Public Cloud Storage

    Directory of Open Access Journals (Sweden)

    Oli S. Arul

    2016-01-01

    With the advent of cloud computing, data storage has become a boon in information technology. At the same time, data storage in remote places has become an important issue. Many techniques are available to ensure protection of data confidentiality, but they do not completely serve the purpose of protecting data. Obfuscation techniques come to the rescue for protecting data from malicious attacks. This paper proposes an obfuscation technique to encrypt the desired data type on the cloud, providing more protection from unknown hackers. The experimental results show that the time taken for obfuscation is low and the confidentiality percentage is high when compared with existing techniques.

  7. Damsel: A Data Model Storage Library for Exascale Science

    Energy Technology Data Exchange (ETDEWEB)

    Choudhary, Alok [Northwestern Univ., Evanston, IL (United States); Liao, Wei-keng [Northwestern Univ., Evanston, IL (United States)

    2014-07-11

    Computational science applications have been described as having one of seven motifs (the "seven dwarfs"), each having a particular pattern of computation and communication. From a storage and I/O perspective, these applications can also be grouped into a number of data model motifs describing the way data is organized and accessed during simulation, analysis, and visualization. Major storage data model efforts of the 1990s, such as the Network Common Data Format (netCDF) and Hierarchical Data Format (HDF) projects, created support for more complex data models. Development of both netCDF and HDF5 was influenced by multi-dimensional dataset storage requirements, but their access models and formats were designed with sequential storage in mind (e.g., a POSIX I/O model). Although these and other high-level I/O libraries have had a beneficial impact on large parallel applications, they do not always attain a high percentage of peak I/O performance due to fundamental design limitations, and they do not address the full range of current and future computational science data models. The goal of this project is to enable exascale computational science applications to interact conveniently and efficiently with storage through abstractions that match their data models. The project consists of three major activities: (1) identifying major data model motifs in computational science applications and developing representative benchmarks; (2) developing a data model storage library, called Damsel, that supports these motifs, provides efficient storage data layouts, incorporates optimizations to enable exascale operation, and is tolerant to failures; and (3) productizing Damsel and working with computational scientists to encourage adoption of this library by the scientific community. The product of this project, the Damsel library, is openly available for download from http://cucis.ece.northwestern.edu/projects/DAMSEL. Several case studies and application programming interface
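    For contrast with the data models Damsel targets, the snippet below shows the kind of multi-dimensional dataset storage that HDF5 already supports through h5py. It illustrates the existing libraries discussed above, not Damsel's own API; the file name and dataset layout are made up:

```python
import numpy as np
import h5py

# A multi-dimensional simulation field written through HDF5's data model;
# the file name, dataset name, and chunking are illustrative only.
field = np.random.rand(64, 64, 64).astype("float32")

with h5py.File("checkpoint.h5", "w") as f:
    dset = f.create_dataset("density", data=field, chunks=(16, 16, 16),
                            compression="gzip")
    dset.attrs["timestep"] = 1200        # metadata travels with the dataset

with h5py.File("checkpoint.h5", "r") as f:
    sub = f["density"][0:8, :, :]        # partial reads without loading everything
    print(sub.shape, f["density"].attrs["timestep"])
```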

  8. Effective Data Backup System Using Storage Area Network Solution ...

    African Journals Online (AJOL)

    The primary cause of data loss is lack or non- existent of data backup. Storage Area Network Solution (SANS) is internet-based software which will collect clients data and host them in several locations to forestall data loss in case of disaster in one location. The researcher used adobe Dreamweaver (CSC3) embedded with ...

  9. Searchable Data Vault: Encrypted Queries in Secure Distributed Cloud Storage

    Directory of Open Access Journals (Sweden)

    Geong Sen Poh

    2017-05-01

    Cloud storage services allow users to efficiently outsource their documents anytime and anywhere. Such convenience, however, leads to privacy concerns. While storage providers may not read users’ documents, attackers may possibly gain access by exploiting vulnerabilities in the storage system. Documents may also be leaked by curious administrators. A simple solution is for the user to encrypt all documents before submitting them. This method, however, makes it impossible to efficiently search for documents as they are all encrypted. To resolve this problem, we propose a multi-server searchable symmetric encryption (SSE) scheme and construct a system called the searchable data vault (SDV). A unique feature of the scheme is that it allows an encrypted document to be divided into blocks and distributed to different storage servers so that no single storage provider has a complete document. By incorporating the scheme, the SDV protects the privacy of documents while allowing for efficient private queries. It utilizes a web interface and a controller that manages user credentials, query indexes and submission of encrypted documents to cloud storage services. It is also the first system that enables a user to simultaneously outsource and privately query documents from a few cloud storage services. Our preliminary performance evaluation shows that this feature introduces acceptable computation overheads when compared to submitting documents directly to a cloud storage service.
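    The defining architectural feature described above is that an encrypted document is split into blocks and scattered over several storage providers, so that no single provider holds a complete copy. The sketch below illustrates just that splitting step with the Python cryptography library; it is not the SDV implementation, and the searchable (SSE) index is omitted:

```python
from cryptography.fernet import Fernet

# Encrypt a document, then scatter fixed-size blocks of the ciphertext across
# several providers so that no single provider holds the whole document.
# Provider names and the block size are illustrative assumptions.
providers = {"provider-a": [], "provider-b": [], "provider-c": []}
BLOCK_SIZE = 1024

key = Fernet.generate_key()
ciphertext = Fernet(key).encrypt(b"patient record ..." * 500)

blocks = [ciphertext[i:i + BLOCK_SIZE] for i in range(0, len(ciphertext), BLOCK_SIZE)]
for index, block in enumerate(blocks):
    target = list(providers)[index % len(providers)]
    providers[target].append((index, block))   # real system: upload via each provider's API

# Reassembly needs the indexed blocks from every provider plus the secret key.
reassembled = b"".join(b for _, b in sorted(sum(providers.values(), [])))
assert Fernet(key).decrypt(reassembled)
```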

  10. High density data storage principle, technology, and materials

    CERN Document Server

    Zhu, Daoben

    2009-01-01

    The explosive increase in information and the miniaturization of electronic devices demand new recording technologies and materials that combine high density, fast response, long retention time and rewriting capability. As predicted, current silicon-based computer circuits are reaching their physical limits. Further miniaturization of electronic components and increases in data storage density are vital for the next generation of IT equipment such as ultra-high-speed mobile computing, communication devices and sophisticated sensors. This original book presents a comprehensive introduction to the significant research achievements on high-density data storage from the aspects of recording mechanisms, materials and fabrication technologies, which are promising for overcoming the physical limits of current data storage systems. The book serves as a useful guide for the development of optimized materials, technologies and device structures for future information storage, and will lead readers to the fascin...

  11. Data storage accounting and verification in LHC experiments

    CERN Document Server

    Ratnikova, Natalia

    2012-01-01

    All major experiments at the Large Hadron Collider (LHC) need to measure real storage usage at the Grid sites. This information is equally important for resource management, planning, and operations. To verify the consistency of the central catalogs, experiments ask sites to provide a full list of the files they have on storage, including size, checksum, and other file attributes. Such storage dumps, provided at regular intervals, give a realistic view of the storage resource usage by the experiments. Regular monitoring of the space usage and data verification serve as additional internal checks of the system integrity and performance. Both the importance and the complexity of these tasks increase with the constant growth of the total data volumes during the active data-taking period at the LHC. Common solutions have been developed to reduce the maintenance costs both at the large Tier-1 facilities supporting multiple virtual organizations and at the small sites that often lack manpower. We discuss requirements...
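    The consistency check described above amounts to comparing a site-provided storage dump (file name, size, checksum) against the experiment's central catalogue. A minimal version of that comparison, with a made-up CSV layout, might look like this:

```python
import csv

# Compare a site storage dump against the central catalogue, entry by entry.
# The CSV layout (lfn,size,checksum) is an assumption for illustration.
def load(path):
    with open(path, newline="") as f:
        return {row["lfn"]: (int(row["size"]), row["checksum"]) for row in csv.DictReader(f)}

catalogue = load("catalogue_dump.csv")
site = load("site_dump.csv")

missing_at_site = catalogue.keys() - site.keys()
dark_data = site.keys() - catalogue.keys()          # on storage but unknown to the catalogue
mismatched = [lfn for lfn in catalogue.keys() & site.keys() if catalogue[lfn] != site[lfn]]

print(f"missing: {len(missing_at_site)}, dark: {len(dark_data)}, mismatched: {len(mismatched)}")
```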

  12. Enabling data-intensive science with Tactical Storage Systems

    CERN Multimedia

    CERN. Geneva; Marquina, Miguel Angel

    2006-01-01

    Large scale scientific computing requires the ability to share and consume data and storage in complex ways across multiple systems. However, conventional systems constrain users to the fixed abstractions selected by the local system administrator. The result is that users must either move data manually over the wide area or simply be satisfied with the resources of a single cluster. To remedy this situation, we introduce the concept of a tactical storage system (TSS) that allows users to create, reconfigure, and destroy distributed storage systems without special privileges or complex configuration. We have deployed a prototype TSS of 200 disks and 8 TB of storage at the University of Notre Dame and applied it to several problems in astrophysics, high energy physics, and bioinformatics. This talk will focus on novel system structures that support data-intensive science. About the speaker: Douglas Thain is an Assistant Professor of Computer Science and Engineering at the University of Notre Dame. He received ...

  13. Disk storage management for LHCb based on Data Popularity estimator

    CERN Document Server

    INSPIRE-00545541; Charpentier, Philippe; Ustyuzhanin, Andrey

    2015-12-23

    This paper presents an algorithm providing recommendations for optimizing the LHCb data storage. The LHCb data storage system is a hybrid system. All datasets are kept as archives on magnetic tapes. The most popular datasets are kept on disks. The algorithm takes the dataset usage history and metadata (size, type, configuration etc.) to generate a recommendation report. This article presents how we use machine learning algorithms to predict future data popularity. Using these predictions it is possible to estimate which datasets should be removed from disk. We use regression algorithms and time series analysis to find the optimal number of replicas for datasets that are kept on disk. Based on the data popularity and the number of replicas optimization, the algorithm minimizes a loss function to find the optimal data distribution. The loss function represents all requirements for data distribution in the data storage system. We demonstrate how our algorithm helps to save disk space and to reduce waiting times ...

  14. High Throughput WAN Data Transfer with Hadoop-based Storage

    Science.gov (United States)

    Amin, A.; Bockelman, B.; Letts, J.; Levshina, T.; Martin, T.; Pi, H.; Sfiligoi, I.; Thomas, M.; Würthwein, F.

    2011-12-01

    The Hadoop distributed file system (HDFS) has become more popular in recent years as a key building block of integrated grid storage solutions in the field of scientific computing. Wide Area Network (WAN) data transfer is one of the important data operations for large high energy physics experiments to manage, share and process petabyte-scale datasets in a highly distributed grid computing environment. In this paper, we present the experience of high throughput WAN data transfer with an HDFS-based Storage Element. Two protocols, GridFTP and fast data transfer (FDT), are used to characterize the network performance of WAN data transfer.

  15. High Throughput WAN Data Transfer with Hadoop-based Storage

    International Nuclear Information System (INIS)

    Amin, A; Thomas, M; Bockelman, B; Letts, J; Martin, T; Pi, H; Sfiligoi, I; Würthwein, F; Levshina, T

    2011-01-01

    The Hadoop distributed file system (HDFS) has become more popular in recent years as a key building block of integrated grid storage solutions in the field of scientific computing. Wide Area Network (WAN) data transfer is one of the important data operations for large high energy physics experiments to manage, share and process petabyte-scale datasets in a highly distributed grid computing environment. In this paper, we present the experience of high throughput WAN data transfer with an HDFS-based Storage Element. Two protocols, GridFTP and fast data transfer (FDT), are used to characterize the network performance of WAN data transfer.

  16. Storage and Database Management for Big Data

    Science.gov (United States)

    2015-07-27

    ...cloud models that satisfy different problem ... Enterprise Big Data - Interactive - On-demand - Virtualization - Java ... replication. Data loss can only occur if three drives fail prior to any one of the failures being corrected. Hadoop is written in Java and is installed in a ... visible view into a dataset. There are many popular database management systems such as MySQL [4], PostgreSQL [63], and Oracle [5]. Most commonly ...

  17. Using Cloud Storage for NMR Data Distribution

    Science.gov (United States)

    Soulsby, David

    2012-01-01

    An approach using Google Groups as a method for distributing student-acquired NMR data has been implemented. We describe how to configure NMR spectrometer software so that data is uploaded to a laboratory-section-specific Google Group, thereby removing bottlenecks associated with printing and processing at the spectrometer workstation. Outside of…

  18. Chapter 9. Data Management, Storage, and Reporting

    Science.gov (United States)

    Linda A. Spencer; Mary M. Manning; Bryce Rickel

    2013-01-01

    Data collected for a habitat monitoring program must be managed and stored to be accessible for current and future use inside and outside the Forest Service. Information maintenance and dissemination are important to the Forest Service; they are part of the U.S. Department of Agriculture (USDA) guidelines for information quality (USDA 2002) under the Data Quality Act...

  19. Surface-enhanced raman optical data storage system

    Science.gov (United States)

    Vo-Dinh, Tuan

    1994-01-01

    An improved Surface-Enhanced Raman Optical Data Storage System (SERODS) is disclosed. In the improved system, entities capable of existing in multiple reversible states are present on the storage device. Such entities result in changed Surface-Enhanced Raman Scattering (SERS) when localized state changes are effected in less than all of the entities. Therefore, by changing the state of entities in localized regions of a storage device, the SERS emissions in such regions will be changed. When a write-on device is controlled by a data signal, the localized regions of changed SERS emissions will correspond to the data written on the device. The data may be read by illuminating the surface of the storage device with electromagnetic radiation of an appropriate frequency and detecting the corresponding SERS emissions. Data may be deleted by reversing the state changes of entities in regions where the data was initially written. In application, entities may be individual molecules, which allows for the writing of data at the molecular level. A read/write/delete head utilizing near-field quantum techniques can provide for a write/read/delete device capable of effecting state changes in individual molecules, thus providing for the effective storage of data at the molecular level.

  20. Data storage and retrieval for long-term dog studies

    International Nuclear Information System (INIS)

    Watson, C.R.; Trauger, G.M.; McIntyre, J.M.; Slavich, A.L.; Park, J.F.

    1980-01-01

    Over half of the 500,000 records collected on dogs in the last 20 years in our laboratory have been converted from sequential storage on magnetic tape to direct-access disk storage on a PDP 11/70 minicomputer. An interactive storage and retrieval system, based on a commercially available query language, has been developed to make these records more accessible. Data entry and retrieval are now performed by scientists and technicians rather than by keypunch operators and computer specialists. Further conversion awaits scheduled computer enhancement

  1. RAIN: A Bio-Inspired Communication and Data Storage Infrastructure.

    Science.gov (United States)

    Monti, Matteo; Rasmussen, Steen

    2017-01-01

    We summarize the results and perspectives from a companion article, where we presented and evaluated an alternative architecture for data storage in distributed networks. We name the bio-inspired architecture RAIN, and it offers a file storage service that, in contrast with current centralized cloud storage, has privacy by design, is open source, is more secure, is scalable, is more sustainable, has community ownership, is inexpensive, and is potentially faster, more efficient, and more reliable. We propose that a RAIN-style architecture could form the backbone of the Internet of Things, which will likely integrate multiple current and future infrastructures ranging from online services and cryptocurrency to parts of government administration.

  2. Design of Dimensional Model for Clinical Data Storage and Analysis

    Directory of Open Access Journals (Sweden)

    Dipankar SENGUPTA

    2013-06-01

    Full Text Available Current research in the fields of the life and medical sciences is generating large volumes of data on a daily basis. It has thus become a necessity to find solutions for the efficient storage of these data and for correlating and extracting knowledge from them. Clinical data generated in hospitals, clinics and diagnostic centers fall under a similar paradigm. Patient records in various hospitals are increasing at an exponential rate, adding to the problem of data management and storage. A major problem with storage is the varied dimensionality of the data, which ranges from images to numerical form. There is therefore a need for an efficient data model that can handle this multi-dimensionality and store the data with its historical aspect. For this problem within the domain of clinical informatics, we propose a clinical dimensional model design that can be used for the development of a clinical data mart. The model has been designed with temporal storage of patients' data in mind, covering all possible clinical parameters, which can include both textual and image-based data. The availability of these data for each patient can then be used to apply data mining techniques for finding correlations among all the parameters at the level of the individual and the population.
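    To make the proposed design concrete, the following is a minimal sketch of one possible clinical star schema along the lines described above (a temporal fact table joined to patient, parameter, and time dimensions, holding both numeric and image-based values). It uses Python's built-in sqlite3, and all table and column names are illustrative assumptions rather than the published model.

```python
# Minimal star-schema sketch: patient, parameter and time dimensions plus a
# temporal fact table; names are illustrative, not the paper's actual model.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_patient   (patient_id INTEGER PRIMARY KEY, name TEXT, birth_date TEXT);
CREATE TABLE dim_parameter (parameter_id INTEGER PRIMARY KEY, name TEXT, unit TEXT,
                            modality TEXT);            -- 'numeric', 'text' or 'image'
CREATE TABLE dim_time      (time_id INTEGER PRIMARY KEY, visit_date TEXT);
CREATE TABLE fact_observation (
    patient_id   INTEGER REFERENCES dim_patient(patient_id),
    parameter_id INTEGER REFERENCES dim_parameter(parameter_id),
    time_id      INTEGER REFERENCES dim_time(time_id),
    value_num    REAL,                                  -- numeric results
    value_blob   BLOB                                   -- image-based results
);
""")

# Load one patient, one visit and two observations (a number and an 'image').
conn.execute("INSERT INTO dim_patient VALUES (1, 'Patient A', '1980-01-01')")
conn.execute("INSERT INTO dim_parameter VALUES (1, 'haemoglobin', 'g/dL', 'numeric')")
conn.execute("INSERT INTO dim_parameter VALUES (2, 'chest x-ray', NULL, 'image')")
conn.execute("INSERT INTO dim_time VALUES (1, '2013-06-01')")
conn.execute("INSERT INTO fact_observation VALUES (1, 1, 1, 13.5, NULL)")
conn.execute("INSERT INTO fact_observation VALUES (1, 2, 1, NULL, b'raw pixel bytes')")

# Correlate all parameters for one patient over time.
for row in conn.execute("""
    SELECT t.visit_date, p.name, f.value_num
    FROM fact_observation f
    JOIN dim_time t      ON t.time_id = f.time_id
    JOIN dim_parameter p ON p.parameter_id = f.parameter_id
    WHERE f.patient_id = 1 ORDER BY t.visit_date"""):
    print(row)
```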

  3. Utilizing ZFS for the Storage of Acquired Data

    International Nuclear Information System (INIS)

    Pugh, C.; Henderson, P.; Silber, K.; Carroll, T.; Ying, K.

    2009-01-01

    Every day, the amount of data that is acquired from plasma experiments grows dramatically. It has become difficult for systems administrators to keep up with the growing demand for hard drive storage space. In the past, project storage has been supplied using UNIX filesystem (ufs) partitions. In order to increase the size of the disks using this system, users were required to discontinue use of the disk, so the existing data could be transferred to a disk of larger capacity or begin use of a completely new and separate disk, thus creating a segmentation of data storage. With the application of ZFS pools, the data capacity woes are over. ZFS provides simple administration that eliminates the need to unmount to resize, or transfer data to a larger disk. With a storage limit of 16 Exabytes (10^18), ZFS provides immense scalability. Utilizing ZFS as the new project disk file system, users and administrators can eliminate time wasted waiting for data to transfer from one hard drive to another, and also enables more efficient use of disk space, as system administrators need only allocate what is presently required. This paper will discuss the application and benefits of using ZFS as an alternative to traditional data access and storage in the fusion environment.

  4. Move It or Lose It: Cloud-Based Data Storage

    Science.gov (United States)

    Waters, John K.

    2010-01-01

    There was a time when school districts showed little interest in storing or backing up their data to remote servers. Nothing seemed less secure than handing off data to someone else. But in the last few years the buzz around cloud storage has grown louder, and the idea that data backup could be provided as a service has begun to gain traction in…

  5. Researchers wrangle petabytes of data storage with NAS, tape

    CERN Multimedia

    Pariseau, Beth

    2007-01-01

    "Much is made in the enterprise data storage industry about the performance of disk systems over tape drives, but the managers of one data center that has eached the far limits of capacity say otherwise. Budget and performance demands forced them to build access protocols and data management tools for disk systems from scratch."

  6. Megastore: structured storage for Big Data

    Directory of Open Access Journals (Sweden)

    Oswaldo Moscoso Zea

    2012-12-01

    Full Text Available Megastore is one of the main components of Google's data infrastructure. It has enabled the storage and processing of large volumes of data (Big Data) with high scalability, reliability and security. Companies and individuals using this technology benefit at the same time from a stable, highly available service. This article analyses Google's data infrastructure, starting with a review of the core components implemented in recent years up to the creation of Megastore. It also analyses the most important technical aspects implemented in this storage system, which have allowed it to meet the objectives for which it was created.

  7. Volume Holographic Storage of Digital Data Implemented in Photorefractive Media

    Science.gov (United States)

    Heanue, John Frederick

    A holographic data storage system is fundamentally different from conventional storage devices. Information is recorded in a volume, rather than on a two-dimensional surface. Data is transferred in parallel, on a page-by -page basis, rather than serially. These properties, combined with a limited need for mechanical motion, lead to the potential for a storage system with high capacity, fast transfer rate, and short access time. The majority of previous volume holographic storage experiments have involved direct storage and retrieval of pictorial information. Success in the development of a practical holographic storage device requires an understanding of the performance capabilities of a digital system. This thesis presents a number of contributions toward this goal. A description of light diffraction from volume gratings is given. The results are used as the basis for a theoretical and numerical analysis of interpage crosstalk in both angular and wavelength multiplexed holographic storage. An analysis of photorefractive grating formation in photovoltaic media such as lithium niobate is presented along with steady-state expressions for the space-charge field in thermal fixing. Thermal fixing by room temperature recording followed by ion compensation at elevated temperatures is compared to simultaneous recording and compensation at high temperature. In particular, the tradeoff between diffraction efficiency and incomplete Bragg matching is evaluated. An experimental investigation of orthogonal phase code multiplexing is described. Two unique capabilities, the ability to perform arithmetic operations on stored data pages optically, rather than electronically, and encrypted data storage, are demonstrated. A comparison of digital signal representations, or channel codes, is carried out. The codes are compared in terms of bit-error rate performance at constant capacity. A well-known one-dimensional digital detection technique, maximum likelihood sequence estimation, is

  8. Data storage accounting and verification at LHC experiments

    Energy Technology Data Exchange (ETDEWEB)

    Huang, C. H. [Fermilab; Lanciotti, E. [CERN; Magini, N. [CERN; Ratnikova, N. [Moscow, ITEP; Sanchez-Hernandez, A. [CINVESTAV, IPN; Serfon, C. [Munich U.; Wildish, T. [Princeton U.; Zhang, X. [Beijing, Inst. High Energy Phys.

    2012-01-01

    All major experiments at the Large Hadron Collider (LHC) need to measure real storage usage at the Grid sites. This information is equally important for resource management, planning, and operations. To verify the consistency of central catalogs, experiments are asking sites to provide a full list of the files they have on storage, including size, checksum, and other file attributes. Such storage dumps, provided at regular intervals, give a realistic view of the storage resource usage by the experiments. Regular monitoring of the space usage and data verification serve as additional internal checks of the system integrity and performance. Both the importance and the complexity of these tasks increase with the constant growth of the total data volumes during the active data taking period at the LHC. The use of common solutions helps to reduce the maintenance costs, both at the large Tier1 facilities supporting multiple virtual organizations and at the small sites that often lack manpower. We discuss requirements and solutions to the common tasks of data storage accounting and verification, and present experiment-specific strategies and implementations used within the LHC experiments according to their computing models.

  9. Antenna data storage concept for phased array radio astronomical instruments

    Science.gov (United States)

    Gunst, André W.; Kruithof, Gert H.

    2018-04-01

    Low frequency Radio Astronomy instruments like LOFAR and SKA-LOW use arrays of dipole antennas for the collection of radio signals from the sky. Due to the large number of antennas involved, the total data rate produced by all the antennas is enormous. Storage of the antenna data is both economically and technologically infeasible using the current state of the art storage technology. Therefore, real-time processing of the antenna voltage data using beam forming and correlation is applied to achieve a data reduction throughout the signal chain. However, most science could equally well be performed using an archive of raw antenna voltage data coming straight from the A/D converters instead of capturing and processing the antenna data in real time over and over again. Trends on storage and computing technology make such an approach feasible on a time scale of approximately 10 years. The benefits of such a system approach are more science output and a higher flexibility with respect to the science operations. In this paper we present a radically new system concept for a radio telescope based on storage of raw antenna data. LOFAR is used as an example for such a future instrument.

  10. A protect solution for data security in mobile cloud storage

    Science.gov (United States)

    Yu, Xiaojun; Wen, Qiaoyan

    2013-03-01

    It is popular to access cloud storage from mobile devices. However, this application suffers from data security risks, especially data leakage and privacy violations. These risks exist not only in the cloud storage system, but also on the mobile client platform. To reduce the security risk, this paper proposes a new security solution that makes full use of searchable encryption and trusted computing technology. Given the performance limits of mobile devices, it proposes a trusted-proxy-based protection architecture. The basic design idea, deployment model and key flows are detailed. Analysis of the security and performance shows the advantages of the solution.

  11. Biophotopol: A Sustainable Photopolymer for Holographic Data Storage Applications

    Directory of Open Access Journals (Sweden)

    Augusto Beléndez

    2012-05-01

    Full Text Available Photopolymers have proved to be useful for different holographic applications such as holographic data storage or holographic optical elements. However, most photopolymers have certain undesirable features, such as the toxicity of some of their components or their low environmental compatibility. For this reason, the Holography and Optical Processing Group at the University of Alicante developed a new dry photopolymer with low toxicity and high thickness called biophotopol, which is well suited for holographic data storage applications. In this paper we describe our recent studies on biophotopol and the main characteristics of this material.

  12. Optimization of Comb-Drive Actuators: Nanopositioners for probe-based data storage and musical MEMS

    NARCIS (Netherlands)

    Engelen, Johannes Bernardus Charles

    2011-01-01

    The era of infinite storage seems near. To reach it, data storage capabilities need to grow, and new storage technologies must be developed. This thesis studies one aspect of one of the emergent storage technologies: optimizing electrostatic comb-drive actuation for a parallel probe-based data storage

  13. The Analysis of RDF Semantic Data Storage Optimization in Large Data Era

    Science.gov (United States)

    He, Dandan; Wang, Lijuan; Wang, Can

    2018-03-01

    With the continuous development of information technology and network technology in China, the Internet has ushered in the era of large data. To acquire information effectively in the era of large data, it is necessary to optimize the existing RDF semantic data storage and enable efficient queries over various kinds of data. This paper discusses the storage optimization of RDF semantic data in the large data era.

  14. Distributed Scheme to Authenticate Data Storage Security in Cloud Computing

    OpenAIRE

    B. Rakesh; K. Lalitha; M. Ismail; H. Parveen Sultana

    2017-01-01

    Cloud computing is the revolution in the current generation of IT enterprise. Cloud computing displaces database and application software to large data centres, where the management of services and data may not be predictable, whereas conventional solutions for IT services are under proper logical, physical and personnel controls. This attribute, however, brings different security challenges which have not been well understood. It concentrates on cloud data storage security which h...

  15. A novel data storage logic in the cloud.

    Science.gov (United States)

    Mátyás, Bence; Szarka, Máté; Járvás, Gábor; Kusper, Gábor; Argay, István; Fialowski, Alice

    2016-01-01

    Databases that store and manage long-term scientific information related to the life sciences hold huge amounts of quantitative attributes. Introducing a new entity attribute requires modification of the existing data tables and of the programs that use these data tables. The solution is to increase the number of virtual data tables while the number of screens remains the same. The main objective of the present study was to introduce a logic called Joker Tao (JT), which provides universal data storage for cloud-based databases. This means all types of input data can be interpreted as an entity and an attribute at the same time, in the same data table.

  16. Disk storage management for LHCb based on Data Popularity estimator

    Science.gov (United States)

    Hushchyn, Mikhail; Charpentier, Philippe; Ustyuzhanin, Andrey

    2015-12-01

    This paper presents an algorithm providing recommendations for optimizing the LHCb data storage. The LHCb data storage system is a hybrid system. All datasets are kept as archives on magnetic tapes. The most popular datasets are kept on disks. The algorithm takes the dataset usage history and metadata (size, type, configuration etc.) to generate a recommendation report. This article presents how we use machine learning algorithms to predict future data popularity. Using these predictions it is possible to estimate which datasets should be removed from disk. We use regression algorithms and time series analysis to find the optimal number of replicas for datasets that are kept on disk. Based on the data popularity and the number of replicas optimization, the algorithm minimizes a loss function to find the optimal data distribution. The loss function represents all requirements for data distribution in the data storage system. We demonstrate how our algorithm helps to save disk space and to reduce waiting times for jobs using this data.
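    The following toy sketch illustrates the general shape of such a recommendation step (forecast future usage from the access history, then decide how many disk replicas fit a budget). It is not the LHCb implementation, which uses richer metadata, time-series models, and a global loss-function optimization; the thresholds, budget, and dataset names below are made up for the example.

```python
# Toy popularity-based replica recommendation: forecast accesses with a linear
# trend, then keep the most popular datasets on disk within an assumed budget.
import numpy as np

def forecast_accesses(weekly_accesses, horizon=4):
    """Fit a linear trend to the access history and extrapolate `horizon` weeks."""
    weeks = np.arange(len(weekly_accesses))
    slope, intercept = np.polyfit(weeks, weekly_accesses, deg=1)
    future = slope * (len(weekly_accesses) + np.arange(horizon)) + intercept
    return np.clip(future, 0, None).sum()

def recommend(datasets, disk_budget_tb):
    """Assign disk replicas to the most popular datasets until the budget is spent."""
    scored = sorted(datasets, key=lambda d: forecast_accesses(d["history"]), reverse=True)
    plan, used = [], 0.0
    for d in scored:
        popularity = forecast_accesses(d["history"])
        replicas = 2 if popularity > 50 else 1 if popularity > 0 else 0  # illustrative thresholds
        if replicas and used + replicas * d["size_tb"] <= disk_budget_tb:
            plan.append((d["name"], replicas))
            used += replicas * d["size_tb"]
        else:
            plan.append((d["name"], 0))   # archive-only: the dataset stays on tape
    return plan

datasets = [
    {"name": "stripping21", "size_tb": 40, "history": [90, 80, 85, 95, 100, 110]},
    {"name": "sim09-old",   "size_tb": 25, "history": [5, 3, 2, 1, 0, 0]},
]
print(recommend(datasets, disk_budget_tb=100))
```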

  17. Disk storage management for LHCb based on Data Popularity estimator

    International Nuclear Information System (INIS)

    Hushchyn, Mikhail; Charpentier, Philippe; Ustyuzhanin, Andrey

    2015-01-01

    This paper presents an algorithm providing recommendations for optimizing the LHCb data storage. The LHCb data storage system is a hybrid system. All datasets are kept as archives on magnetic tapes. The most popular datasets are kept on disks. The algorithm takes the dataset usage history and metadata (size, type, configuration etc.) to generate a recommendation report. This article presents how we use machine learning algorithms to predict future data popularity. Using these predictions it is possible to estimate which datasets should be removed from disk. We use regression algorithms and time series analysis to find the optimal number of replicas for datasets that are kept on disk. Based on the data popularity and the number of replicas optimization, the algorithm minimizes a loss function to find the optimal data distribution. The loss function represents all requirements for data distribution in the data storage system. We demonstrate how our algorithm helps to save disk space and to reduce waiting times for jobs using this data. (paper)

  18. Alternative Data Storage Solution for Mobile Messaging Services

    Directory of Open Access Journals (Sweden)

    David C. C. Ong

    2007-01-01

    Full Text Available In recent years, mobile devices have become relatively more powerful, with additional features that have the capability to provide multimedia streaming. Better, faster and more reliable data storage solutions in the mobile messaging platform have become more essential with these additional improvements. The existing mobile messaging infrastructure, in particular the data storage platform, has become less proficient in coping with the increased demand for its services. This demand, especially in the mobile messaging area (i.e. SMS – Short Messaging Service, MMS – Multimedia Messaging Service), which may well exceed 250,000 requests per second, means that the need to evaluate competing data management systems has become not only necessary but essential. This paper presents an evaluation of SMS and MMS platforms using different database management systems (DBMS) and recommends the best data management strategies for these platforms.

  19. Shift-Peristrophic Multiplexing for High Density Holographic Data Storage

    Directory of Open Access Journals (Sweden)

    Zenta Ushiyama

    2014-03-01

    Full Text Available Holographic data storage is a promising technology that provides very large data storage capacity, and the multiplexing method plays a significant role in increasing this capacity. Various multiplexing methods have been previously researched. In the present study, we propose a shift-peristrophic multiplexing technique that uses spherical reference waves, and experimentally verify that this method efficiently increases the data capacity. In the proposed method, a series of holograms is recorded with shift multiplexing, in which the recording material is rotated with its axis perpendicular to the material’s surface. By iterating this procedure, multiplicity is shown to improve. This method achieves data recording at densities of more than 1 Tbit/inch². Furthermore, a capacity increase of several TB per disk is expected by maximizing the recording medium performance.

  20. Vector and Raster Data Storage Based on Morton Code

    Science.gov (United States)

    Zhou, G.; Pan, Q.; Yue, T.; Wang, Q.; Sha, H.; Huang, S.; Liu, X.

    2018-05-01

    Even though geomatics is highly developed nowadays, the integration of spatial data in vector and raster formats is still a very tricky problem in geographic information system environments, and there is still no proper way to solve it. This article proposes a method to integrate vector data and raster data. We saved the image data and building vector data of Guilin University of Technology to an Oracle database, then used the ADO interface to connect the database to Visual C++ and converted the row and column numbers of the raster data and the X, Y coordinates of the vector data to Morton codes in the Visual C++ environment. This method stores vector and raster data in an Oracle database and uses Morton codes instead of row/column numbers and X, Y coordinates to mark the position information of the vector and raster data. Using Morton codes to mark geographic information lets data storage make full use of storage space, makes simultaneous analysis of vector and raster data more efficient, and makes visualization of vector and raster data more intuitive. This method is very helpful in situations that require analysing or displaying vector data and raster data at the same time.
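    For reference, a minimal Python sketch of Morton (Z-order) encoding, the bit interleaving that replaces (row, column) or (X, Y) keys with a single integer, is shown below. The 16-bit coordinate width is an illustrative assumption, not a constraint of the paper's method.

```python
# Morton (Z-order) encoding: interleave the bits of two coordinates into one key.
def morton_encode(x: int, y: int, bits: int = 16) -> int:
    """Interleave the bits of x and y (x in even positions, y in odd)."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)
        code |= ((y >> i) & 1) << (2 * i + 1)
    return code

def morton_decode(code: int, bits: int = 16) -> tuple:
    """Recover (x, y) from a Morton code."""
    x = y = 0
    for i in range(bits):
        x |= ((code >> (2 * i)) & 1) << i
        y |= ((code >> (2 * i + 1)) & 1) << i
    return x, y

# A raster cell at column 9, row 5 and a vector vertex at X=9, Y=5 both map to
# the same one-dimensional key, which preserves spatial locality for range queries.
assert morton_decode(morton_encode(9, 5)) == (9, 5)
print(morton_encode(9, 5))   # 99
```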

  1. Towards Efficient Scientific Data Management Using Cloud Storage

    Science.gov (United States)

    He, Qiming

    2013-01-01

    A software prototype allows users to back up and restore data to/from both public and private cloud storage such as Amazon's S3 and NASA's Nebula. Unlike other off-the-shelf tools, this software ensures user data security in the cloud (through encryption), and minimizes users' operating costs by using space- and bandwidth-efficient compression and incremental backup. Parallel data processing utilities have also been developed by using massively scalable cloud computing in conjunction with cloud storage. One of the innovations in this software is using modified open source components to work with a private cloud like NASA Nebula. Another innovation is porting the complex backup-to-cloud software to embedded Linux, running on home networking devices, in order to benefit more users.
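    A minimal sketch of the general "compress, then encrypt, then upload only what changed" pattern described above follows. It is not the prototype's actual code, and upload_to_cloud is a hypothetical callback standing in for an S3- or Nebula-style object PUT.

```python
# Sketch of space- and bandwidth-efficient backup: compress before encrypting
# (encrypted output is incompressible) and upload only changed chunks.
import hashlib
import zlib
from cryptography.fernet import Fernet

def prepare_backup(data: bytes, key: bytes) -> bytes:
    """Compress first, then encrypt."""
    return Fernet(key).encrypt(zlib.compress(data, 9))

def restore_backup(blob: bytes, key: bytes) -> bytes:
    return zlib.decompress(Fernet(key).decrypt(blob))

def incremental_upload(chunks, previous_hashes, key, upload_to_cloud):
    """Upload only chunks whose content hash changed since the last backup."""
    new_hashes = {}
    for name, data in chunks.items():
        digest = hashlib.sha256(data).hexdigest()
        new_hashes[name] = digest
        if previous_hashes.get(name) != digest:
            upload_to_cloud(name, prepare_backup(data, key))
    return new_hashes  # becomes `previous_hashes` for the next run

if __name__ == "__main__":
    key = Fernet.generate_key()
    blob = prepare_backup(b"observation log\n" * 1000, key)
    assert restore_backup(blob, key) == b"observation log\n" * 1000
    uploads = {}  # stand-in for the cloud object store
    incremental_upload({"notes.txt": b"v1"}, {}, key,
                       lambda name, data: uploads.update({name: data}))
```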

  2. Eigenmode multiplexing with SLM for volume holographic data storage

    Science.gov (United States)

    Chen, Guanghao; Miller, Bo E.; Takashima, Yuzuru

    2017-08-01

    The cavity supports orthogonal reference beam families as its eigenmodes while enhancing the reference beam power. Such orthogonal eigenmodes are used as an additional degree of freedom to multiplex data pages, consequently increasing storage densities for volume Holographic Data Storage Systems (HDSS) when the maximum number of multiplexed data pages is limited by geometrical factors. Image-bearing holograms are multiplexed by orthogonal phase code multiplexing via Hermite-Gaussian eigenmodes in a Fe:LiNbO3 medium with a 532 nm laser at multiple Bragg angles by using Liquid Crystal on Silicon (LCOS) spatial light modulators (SLMs) in the reference arms. A total of nine holograms are recorded with three angular positions and three eigenmodes.

  3. Multilevel recording of complex amplitude data pages in a holographic data storage system using digital holography.

    Science.gov (United States)

    Nobukawa, Teruyoshi; Nomura, Takanori

    2016-09-05

    A holographic data storage system using digital holography is proposed to record and retrieve multilevel complex amplitude data pages. Digital holographic techniques are capable of modulating and detecting complex amplitude distribution using current electronic devices. These techniques allow the development of a simple, compact, and stable holographic storage system that mainly consists of a single phase-only spatial light modulator and an image sensor. As a proof-of-principle experiment, complex amplitude data pages with binary amplitude and four-level phase are recorded and retrieved. Experimental results show the feasibility of the proposed holographic data storage system.

  4. Shipping and storage cask data for spent nuclear fuel

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, E.R.; Notz, K.J.

    1988-11-01

    This document is a compilation of data on casks used for the storage and/or transport of commercially generated spent fuel in the US based on publicly available information. In using the information contained in the following data sheets, it should be understood that the data have been assembled from published information, which in some instances was not internally consistent. Moreover, it was sometimes necessary to calculate or infer the values of some attributes from available information. Nor was there always a uniform method of reporting the values of some attributes; for example, an outside surface dose of the loaded cask was sometimes reported to be the maximum acceptable by NRC, while in other cases the maximum actual dose rate expected was reported, and in still other cases the expected average dose rate was reported. A summary comparison of the principal attributes of storage and transportable storage casks is provided and a similar comparison for shipping casks is also shown. References to source data are provided on the individual data sheets for each cask.

  5. Shipping and storage cask data for spent nuclear fuel

    International Nuclear Information System (INIS)

    Johnson, E.R.; Notz, K.J.

    1988-11-01

    This document is a compilation of data on casks used for the storage and/or transport of commercially generated spent fuel in the US based on publicly available information. In using the information contained in the following data sheets, it should be understood that the data have been assembled from published information, which in some instances was not internally consistent. Moreover, it was sometimes necessary to calculate or infer the values of some attributes from available information. Nor was there always a uniform method of reporting the values of some attributes; for example, an outside surface dose of the loaded cask was sometimes reported to be the maximum acceptable by NRC, while in other cases the maximum actual dose rate expected was reported, and in still other cases the expected average dose rate was reported. A summary comparison of the principal attributes of storage and transportable storage casks is provided and a similar comparison for shipping casks is also shown. References to source data are provided on the individual data sheets for each cask

  6. Using Cloud-based Storage Technologies for Earth Science Data

    Science.gov (United States)

    Michaelis, A.; Readey, J.; Votava, P.

    2016-12-01

    Cloud-based infrastructure may offer several key benefits of scalability, built-in redundancy and reduced total cost of ownership as compared with a traditional data center approach. However, most of the tools and software systems developed for NASA data repositories were not developed with a cloud-based infrastructure in mind and do not fully take advantage of commonly available cloud-based technologies. Object storage services are provided through all the leading public (Amazon Web Services, Microsoft Azure, Google Cloud, etc.) and private (OpenStack) clouds, and may provide a more cost-effective means of storing large data collections online. We describe a system that utilizes object storage rather than traditional file-system-based storage to vend earth science data. The system described is not only cost effective, but shows superior performance for running many different analytics tasks in the cloud. To enable compatibility with existing tools and applications, we outline client libraries that are API compatible with existing libraries for HDF5 and NetCDF4. Performance of the system is demonstrated using cloud services running on Amazon Web Services.
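    As a hedged illustration of the object-storage access pattern described above (uploading data granules as objects and reading byte ranges rather than whole files), the following sketch uses boto3 against an S3-compatible endpoint. The bucket and key names are illustrative, and pointing endpoint_url at a private OpenStack deployment is an assumption about configuration rather than a documented feature of the system above.

```python
# Sketch: store array-format granules as objects and read them by byte range,
# the pattern chunked HDF5/NetCDF readers rely on when backed by object storage.
import boto3

s3 = boto3.client("s3")  # add endpoint_url="https://..." for a private, S3-compatible cloud

def put_granule(bucket: str, key: str, local_path: str) -> None:
    """Upload one data granule (e.g. a NetCDF4/HDF5 file) as an object."""
    with open(local_path, "rb") as f:
        s3.put_object(Bucket=bucket, Key=key, Body=f.read())

def get_byte_range(bucket: str, key: str, start: int, end: int) -> bytes:
    """Read only part of an object instead of downloading the whole file."""
    resp = s3.get_object(Bucket=bucket, Key=key, Range=f"bytes={start}-{end}")
    return resp["Body"].read()

# Example usage (assumes the bucket and object exist and credentials are configured):
# put_granule("demo-earth-data", "merra2/2016/tavg1_2d_slv_Nx.nc4", "local.nc4")
# header = get_byte_range("demo-earth-data", "merra2/2016/tavg1_2d_slv_Nx.nc4", 0, 8191)
```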

  7. An intelligent data model for the storage of structured grids

    Science.gov (United States)

    Clyne, John; Norton, Alan

    2013-04-01

    With support from the U.S. National Science Foundation we have developed, and currently maintain, VAPOR: a geosciences-focused, open source visual data analysis package. VAPOR enables highly interactive exploration, as well as qualitative and quantitative analysis of high-resolution simulation outputs using only a commodity, desktop computer. The enabling technology behind VAPOR's ability to interact with a data set, whose size would overwhelm all but the largest analysis computing resources, is a progressive data access file format, called the VAPOR Data Collection (VDC). The VDC is based on the discrete wavelet transform and its information compaction properties. Prior to analysis, raw data undergo a wavelet transform, concentrating the information content into a fraction of the coefficients. The coefficients are then sorted by their information content (magnitude) into a small number of bins. Data are reconstructed by applying an inverse wavelet transform. If all of the coefficient bins are used during reconstruction the process is lossless (up to floating point round-off). If only a subset of the bins are used, an approximation of the original data is produced. A crucial point here is that the principal benefit to reconstruction from a subset of wavelet coefficients is a reduction in I/O. Further, if smaller coefficients are simply discarded, or perhaps stored on more capacious tertiary storage, secondary storage requirements (e.g. disk) can be reduced as well. In practice, these reductions in I/O or storage can be on the order of tens or even hundreds. This talk will briefly describe the VAPOR Data Collection, and will present real world success stories from the geosciences that illustrate how progressive data access enables highly interactive exploration of Big Data.
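    The following is a minimal sketch of the progressive-access idea (wavelet transform, keep only the largest-magnitude coefficients, reconstruct an approximation from the subset). It uses the PyWavelets package on a 1D signal and is not the VDC format itself, whose binning, blocking, and on-disk layout are more elaborate.

```python
# Sketch of wavelet-based progressive access: transform, keep the largest
# coefficients, and reconstruct an approximation from that subset.
import numpy as np
import pywt

def compress(field: np.ndarray, keep_fraction: float = 0.05, wavelet: str = "db4"):
    """Forward transform and zero all but the top `keep_fraction` of coefficients."""
    coeffs = pywt.wavedec(field, wavelet, level=4)
    flat, slices = pywt.coeffs_to_array(coeffs)
    threshold = np.quantile(np.abs(flat), 1.0 - keep_fraction)
    flat[np.abs(flat) < threshold] = 0.0      # these coefficients would not be read from disk
    return flat, slices, wavelet

def reconstruct(flat, slices, wavelet):
    coeffs = pywt.array_to_coeffs(flat, slices, output_format="wavedec")
    return pywt.waverec(coeffs, wavelet)

signal = np.sin(np.linspace(0, 20 * np.pi, 4096)) + 0.01 * np.random.randn(4096)
flat, slices, wv = compress(signal, keep_fraction=0.05)
approx = reconstruct(flat, slices, wv)
print("relative error:", np.linalg.norm(approx[:4096] - signal) / np.linalg.norm(signal))
```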

  8. Benefits and Pitfalls of GRACE Terrestrial Water Storage Data Assimilation

    Science.gov (United States)

    Girotto, Manuela

    2018-01-01

    Satellite observations of terrestrial water storage (TWS) from the Gravity Recovery and Climate Experiment (GRACE) mission have a coarse resolution in time (monthly) and space (roughly 150,000 sq km at midlatitudes) and vertically integrate all water storage components over land, including soil moisture and groundwater. Nonetheless, data assimilation can be used to horizontally downscale and vertically partition GRACE-TWS observations. This presentation illustrates some of the benefits and drawbacks of assimilating TWS observations from GRACE into a land surface model over the continental United States and India. The assimilation scheme yields improved skill metrics for groundwater compared to the no-assimilation simulations. A smaller impact is seen for surface and root-zone soil moisture. Further, GRACE observes TWS depletion associated with anthropogenic groundwater extraction. Results from the assimilation emphasize the importance of representing anthropogenic processes in land surface modeling and data assimilation systems.

  9. Managing high-bandwidth real-time data storage

    Energy Technology Data Exchange (ETDEWEB)

    Bigelow, David D. [Los Alamos National Laboratory; Brandt, Scott A [Los Alamos National Laboratory; Bent, John M [Los Alamos National Laboratory; Chen, Hsing-Bung [Los Alamos National Laboratory

    2009-09-23

    There exist certain systems which generate real-time data at high bandwidth, but do not necessarily require the long-term retention of that data in normal conditions. In some cases, the data may not actually be useful, and in others, there may be too much data to permanently retain in long-term storage whether it is useful or not. However, certain portions of the data may be identified as being vitally important from time to time, and must therefore be retained for further analysis or permanent storage without interrupting the ongoing collection of new data. We have developed a system, Mahanaxar, intended to address this problem. It provides quality of service guarantees for incoming real-time data streams and simultaneous access to already-recorded data on a best-effort basis utilizing any spare bandwidth. It has built in mechanisms for reliability and indexing, can scale upwards to meet increasing bandwidth requirements, and handles both small and large data elements equally well. We will show that a prototype version of this system provides better performance than a flat file (traditional filesystem) based version, particularly with regard to quality of service guarantees and hard real-time requirements.

  10. Computerized system of data acquisition, primary processing and data storage

    International Nuclear Information System (INIS)

    Muniz, F.J.

    1981-05-01

    The present system was proposed in order to collect and conveniently document alphanumeric data. The application of this system in the nuclear area was motivated by the demand originating from the environmental monitoring programme at nuclear installations. This is possible due to the flexibility offered by the system, which can have a general-purpose utilization, with the exception of the normalization circuits. In the nuclear area the collected data will be basically meteorological, of use in the development of atmospheric diffusion models. The data will be useful to estimate the radiation doses to the public resulting from accidental or routine releases of radioactive material to the atmosphere. The evolution of the potential dose received by the public resulting from a hypothetical reactor accident could be calculated from these data as well. From the electronic point of view, the system uses large-scale integration technology and basically consists of the following functional blocks: transducer, normalization circuits, analog multiplexer, analog-to-digital converter, microprocessor, interface to cassette recording, interface to cassette reading, and cassette. (Author)

  11. Data Storage and sharing for the long tail of science

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, B. [Purdue Univ., West Lafayette, IN (United States); Pouchard, L. [Purdue Univ., West Lafayette, IN (United States); Smith, P. M. [Purdue Univ., West Lafayette, IN (United States); Gasc, A. [Purdue Univ., West Lafayette, IN (United States); Pijanowski, B. C. [Purdue Univ., West Lafayette, IN (United States)

    2016-11-21

    Research data infrastructure such as storage must now accommodate new requirements resulting from trends in research data management that require researchers to store their data for the long term and make it available to other researchers. We propose Data Depot, a system and service at Purdue University that provides capabilities for shared space within a group, shared applications, flexible access patterns and ease of transfer. We evaluate Depot as a solution for storing and sharing multi-terabytes of data produced in the long tail of science, with a use case in soundscape ecology studies from the Human-Environment Modeling and Analysis Laboratory. We observe that with the capabilities enabled by Data Depot, researchers can easily deploy fine-grained data access control, manage data transfer and sharing, as well as integrate their workflows into a High Performance Computing environment.

  12. Compressing Control System Data for Efficient Storage and Retrieval

    International Nuclear Information System (INIS)

    Christopher Larrieu

    2003-01-01

    The controls group at the Thomas Jefferson National Accelerator Facility (Jefferson Lab) acquires multiple terabytes of EPICS control system data per year via CZAR, its new archiving system. By heuristically applying a combination of rudimentary compression techniques, in conjunction with several specialized data transformations and algorithms, the CZAR storage engine reduces the size of this data by approximately 88 percent, without any loss of information. While the compression process requires significant memory and processor time, the decompression routine suffers only slightly in this regard.
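    As a hedged illustration of why specialized transformations help before a generic compressor (this is not CZAR's actual algorithm), the sketch below delta-encodes a slowly drifting channel before applying zlib. The synthetic signal and compression level are assumptions made for the example.

```python
# Toy illustration: delta-encoding a slowly varying channel makes the byte
# stream far more compressible for a generic compressor, and stays lossless.
import zlib
import numpy as np

rng = np.random.default_rng(0)
# A slowly drifting signal, similar in character to many archived control channels.
samples = np.cumsum(rng.integers(-2, 3, size=100_000)).astype(np.int32)

raw = samples.tobytes()
deltas = np.diff(samples, prepend=0).astype(np.int32).tobytes()

print("plain zlib   ratio:", len(zlib.compress(raw, 9)) / len(raw))
print("delta + zlib ratio:", len(zlib.compress(deltas, 9)) / len(raw))

# The transform is lossless: undo the delta encoding after decompression.
restored = np.cumsum(np.frombuffer(zlib.decompress(zlib.compress(deltas, 9)), dtype=np.int32))
assert np.array_equal(restored, samples)
```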

  13. Development of climate data storage and processing model

    Science.gov (United States)

    Okladnikov, I. G.; Gordov, E. P.; Titov, A. G.

    2016-11-01

    We present a storage and processing model for climate datasets elaborated in the framework of a virtual research environment (VRE) for climate and environmental monitoring and analysis of the impact of climate change on socio-economic processes on local and regional scales. The model is based on a «shared nothing» distributed computing architecture and assumes the use of a computing network where each computing node is independent and self-sufficient. Each node holds dedicated software for the processing and visualization of geospatial data and provides programming interfaces to communicate with the other nodes. The nodes are interconnected by a local network or the Internet and exchange data and control instructions via SSH connections and web services. Geospatial data are represented by collections of netCDF files stored in a hierarchy of directories within a file system. To speed up data reading and processing, three approaches are proposed: precalculation of intermediate products, distribution of data across multiple storage systems (with or without redundancy), and caching and reuse of previously obtained products. For fast search and retrieval of the required data, a metadata database is developed in accordance with the data storage and processing model. It contains descriptions of the space-time features of the datasets available for processing, their locations, as well as descriptions and run options of the software components for data analysis and visualization. Together, the model and the metadata database will provide a reliable technological basis for the development of a high-performance virtual research environment for climate and environmental monitoring.

  14. Development of EDFSRS: evaluated data files storage and retrieval system

    International Nuclear Information System (INIS)

    Hasegawa, Akira

    1985-07-01

    EDFSRS, the Evaluated Data Files Storage and Retrieval System, has been developed as a complete service system for the evaluated nuclear data files compiled in the three major formats: ENDF/B, UKNDL and KEDAK. The system is intended to give data base administrators efficient loading and maintenance of evaluated nuclear data files, and to give their users efficient retrieval with both ease and confidence. It can give users all of the information available in these three major formats. The system consists of more than fifteen independent programs and some 150 megabytes of data files and index files (the data base) of the loaded data. In addition, it is designed to be operated in on-line TSS (Time Sharing System) mode, so that users can get any information from their desk-top terminals. This report is prepared as a reference manual of the EDFSRS. (author)

  15. Efficient and secure outsourcing of genomic data storage.

    Science.gov (United States)

    Sousa, João Sá; Lefebvre, Cédric; Huang, Zhicong; Raisaro, Jean Louis; Aguilar-Melchor, Carlos; Killijian, Marc-Olivier; Hubaux, Jean-Pierre

    2017-07-26

    Cloud computing is becoming the preferred solution for efficiently dealing with the increasing amount of genomic data. Yet, outsourcing the storage and processing of sensitive information, such as genomic data, comes with important concerns related to privacy and security. This calls for new sophisticated techniques that ensure data protection from untrusted cloud providers and that still enable researchers to obtain useful information. We present a novel privacy-preserving algorithm for fully outsourcing the storage of large genomic data files to a public cloud and enabling researchers to efficiently search for variants of interest. In order to protect data and query confidentiality from possible leakage, our solution exploits optimal encoding for genomic variants and combines it with homomorphic encryption and private information retrieval. Our proposed algorithm is implemented in C++ and was evaluated on real data as part of the 2016 iDash Genome Privacy-Protection Challenge. Results show that our solution outperforms the state-of-the-art solutions and enables researchers to search over millions of encrypted variants in a few seconds. As opposed to prior beliefs that sophisticated privacy-enhancing technologies (PETs) are impractical for real operational settings, our solution demonstrates that, in the case of genomic data, PETs are very efficient enablers.

  16. Technology Assessment of High Capacity Data Storage Systems: Can We Avoid a Data Survivability Crisis

    Science.gov (United States)

    Halem, M.; Shaffer, F.; Palm, N.; Salmon, E.; Raghavan, S.; Kempster, L.

    1998-01-01

    The density of digital storage media in our information-intensive society increases by a factor of four every three years, while the rate at which these data can be migrated to viable long-term storage has been increasing by a factor of only four every nine years. Meanwhile, older data stored on increasingly obsolete media are at considerable risk. When the systems for which the media were designed are no longer serviced by their manufacturers (many of whom are out of business), the data will no longer be accessible. In some cases, older media suffer from a physical breakdown of components - tapes simply lose their magnetic properties after a long time in storage. The scale of the crisis is comparable to that facing the Social Security System. Greater financial and intellectual resources must be devoted to the development and refinement of new storage media and migration technologies in order to preserve as much data as possible.

  17. Cavity enhanced eigenmode multiplexing for volume holographic data storage

    Science.gov (United States)

    Miller, Bo E.; Takashima, Yuzuru

    2017-08-01

    Previously, we proposed and experimentally demonstrated enhanced recording speeds by using a resonant optical cavity to semi-passively increase the reference beam power while recording image bearing holograms. In addition to enhancing the reference beam power the cavity supports the orthogonal reference beam families of its eigenmodes, which can be used as a degree of freedom to multiplex data pages and increase storage densities for volume Holographic Data Storage Systems (HDSS). While keeping the increased recording speed of a cavity enhanced reference arm, image bearing holograms are multiplexed by orthogonal phase code multiplexing via Hermite-Gaussian eigenmodes in a Fe:LiNbO3 medium with a 532 nm laser at two Bragg angles for expedited recording of four multiplexed holograms. We experimentally confirmed write rates are enhanced by an average factor of 1.1, and page crosstalk is about 2.5%. This hybrid multiplexing opens up a pathway to increase storage density while minimizing modifications to current angular multiplexing HDSS.

  18. Dual-Wavelength Sensitized Photopolymer for Holographic Data Storage

    Science.gov (United States)

    Tao, Shiquan; Zhao, Yuxia; Wan, Yuhong; Zhai, Qianli; Liu, Pengfei; Wang, Dayong; Wu, Feipeng

    2010-08-01

    Novel photopolymers for holographic storage were investigated by combining acrylate monomers and/or vinyl monomers as recording media and liquid epoxy resins plus an amine hardener as binder. In order to improve the holographic performance of the material in the blue-green wavelength band, two novel dyes were used as sensitizers. The methods of evaluating the holographic performance of the material, including the shrinkage and noise characteristics, are described in detail. Preliminary experiments show that samples with an optimized composition have good holographic performance, and it is possible to record dual-wavelength holograms simultaneously in this photopolymer by sharing the same optical system; thus the storage density and data rate can be doubled.

  19. Systematic Parameterization, Storage, and Representation of Volumetric DICOM Data.

    Science.gov (United States)

    Fischer, Felix; Selver, M Alper; Gezer, Sinem; Dicle, Oğuz; Hillen, Walter

    Tomographic medical imaging systems produce hundreds to thousands of slices, enabling three-dimensional (3D) analysis. Radiologists process these images through various tools and techniques in order to generate 3D renderings for various applications, such as surgical planning, medical education, and volumetric measurements. To save and store these visualizations, current systems use snapshots or video exporting, which prevents further optimizations and requires the storage of significant additional data. The Grayscale Softcopy Presentation State extension of the Digital Imaging and Communications in Medicine (DICOM) standard resolves this issue for two-dimensional (2D) data by introducing an extensive set of parameters, namely 2D Presentation States (2DPR), that describe how an image should be displayed. 2DPR allows storing these parameters instead of storing parameter applied images, which cause unnecessary duplication of the image data. Since there is currently no corresponding extension for 3D data, in this study, a DICOM-compliant object called 3D presentation states (3DPR) is proposed for the parameterization and storage of 3D medical volumes. To accomplish this, the 3D medical visualization process is divided into four tasks, namely pre-processing, segmentation, post-processing, and rendering. The important parameters of each task are determined. Special focus is given to the compression of segmented data, parameterization of the rendering process, and DICOM-compliant implementation of the 3DPR object. The use of 3DPR was tested in a radiology department on three clinical cases, which require multiple segmentations and visualizations during the workflow of radiologists. The results show that 3DPR can effectively simplify the workload of physicians by directly regenerating 3D renderings without repeating intermediate tasks, increase efficiency by preserving all user interactions, and provide efficient storage as well as transfer of visualized data.

  20. Hybrid data storage system in an HPC exascale environment

    Science.gov (United States)

    Bent, John M.; Faibish, Sorin; Gupta, Uday K.; Tzelnic, Percy; Ting, Dennis P. J.

    2015-08-18

    A computer-executable method, system, and computer program product for managing I/O requests from a compute node in communication with a data storage system, including a first burst buffer node and a second burst buffer node, the computer-executable method, system, and computer program product comprising striping data on the first burst buffer node and the second burst buffer node, wherein a first portion of the data is communicated to the first burst buffer node and a second portion of the data is communicated to the second burst buffer node, processing the first portion of the data at the first burst buffer node, and processing the second portion of the data at the second burst buffer node.
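    A minimal sketch of the striping idea claimed above follows: a write from a compute node is split into stripes, even stripes go to the first burst-buffer node and odd stripes to the second, and each node later drains its portion to a backing store. The class names, stripe size, and in-memory dictionaries are illustrative assumptions, not the patented implementation.

```python
# Sketch: stripe a write across two burst-buffer nodes, then drain each node's
# segments to a (dict-based stand-in for a) parallel file system.
STRIPE = 4 * 1024  # bytes; real burst buffers would use much larger stripes

class BurstBufferNode:
    def __init__(self, name):
        self.name = name
        self.segments = {}            # (file, offset) -> bytes held in fast storage

    def absorb(self, path, offset, data):
        self.segments[(path, offset)] = data

    def drain(self, backing_store):
        """Flushing to the backing file system, modeled here as dict writes."""
        for (path, offset), data in self.segments.items():
            backing_store.setdefault(path, {})[offset] = data

def striped_write(path, data, nodes):
    """Send even stripes to the first node and odd stripes to the second."""
    for i in range(0, len(data), STRIPE):
        nodes[(i // STRIPE) % len(nodes)].absorb(path, i, data[i:i + STRIPE])

bb1, bb2 = BurstBufferNode("bb1"), BurstBufferNode("bb2")
pfs = {}
payload = b"x" * (10 * STRIPE + 123)
striped_write("/scratch/ckpt.0", payload, [bb1, bb2])
bb1.drain(pfs); bb2.drain(pfs)
assert b"".join(v for _, v in sorted(pfs["/scratch/ckpt.0"].items())) == payload
```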

  1. Data Storage for Social Networks A Socially Aware Approach

    CERN Document Server

    Tran, Duc A

    2012-01-01

    Evidenced by the success of Facebook, Twitter, and LinkedIn, online social networks (OSNs) have become ubiquitous, offering novel ways for people to access information and communicate with each other. As the increasing popularity of social networking is undeniable, scalability is an important issue for any OSN that wants to serve a large number of users. Storing user data for the entire network on a single server can quickly lead to a bottleneck, and, consequently, more servers are needed to expand storage capacity and lower data request traffic per server. Adding more servers is just one step

  2. Data Storage and Management for Global Research Data Infrastructures - Status and Perspectives

    Directory of Open Access Journals (Sweden)

    Erwin Laure

    2013-07-01

    Full Text Available In the vision of Global Research Data Infrastructures (GRDIs), data storage and management plays a crucial role. A successful GRDI will require a common globally interoperable distributed data system, formed out of data centres, that incorporates emerging technologies and new scientific data activities. The main challenge is to define common certification and auditing frameworks that will allow storage providers and data communities to build a viable partnership based on trust. To achieve this, it is necessary to find a long-term commitment model that will give financial, legal, and organisational guarantees of digital information preservation. In this article we discuss the state of the art in data storage and management for GRDIs and point out future research directions that need to be tackled to implement GRDIs.

  3. Multiplexed optical data storage and vectorial ray tracing

    Directory of Open Access Journals (Sweden)

    Foreman M.R.

    2010-06-01

    Full Text Available With the motivation of creating a terabyte-sized optical disk, a novel imaging technique is implemented. This technique merges two existing technologies: confocal microscopy and Mueller matrix imaging. Mueller matrix images from a high numerical aperture system are obtained. The acquisition of these images makes the exploration of polarisation properties in a sample possible. The particular case of optical data storage is used as an example in this presentation. Since we encode information into asymmetric data pits (see Figure 1), the study of the polarisation of the scattered light can then be used to recover the orientation of the pit. It is thus possible to multiplex information by changing the angle of the mark. The storage capacity of the system is hence limited by the number of distinct angles that the optical system can resolve. This presentation thus answers the question: what is the current storage capacity of a polarisation-sensitive optical disk? After a brief introduction to polarisation, the decoding method and experimental results are presented so as to provide an answer to this question. With the aim of understanding high NA focusing, an introduction to vectorial ray tracing is then given.

  4. Analysis Report for Exascale Storage Requirements for Scientific Data.

    Energy Technology Data Exchange (ETDEWEB)

    Ruwart, Thomas M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2018-02-01

    Over the next 10 years, the Department of Energy will be transitioning from Petascale to Exascale Computing, with data storage, networking, and infrastructure requirements increasing by three orders of magnitude. The technologies and best practices used today are the result of a relatively slow evolution of ancestral technologies developed in the 1950s and 1960s. These include magnetic tape, magnetic disk, networking, databases, file systems, and operating systems. These technologies will continue to evolve over the next 10 to 15 years on a reasonably predictable path. Experience with the challenges involved in transitioning these fundamental technologies from Terascale to Petascale computing systems has raised questions about how they will scale another 3 or 4 orders of magnitude to meet the requirements imposed by Exascale computing systems. This report focuses on the most concerning scaling issues with data storage systems as they relate to High Performance Computing, and presents options for a path forward. Given the ability to store exponentially increasing amounts of data, far more advanced concepts and use of metadata will be critical to managing data in Exascale computing systems.

  5. Analysis and Research on Spatial Data Storage Model Based on Cloud Computing Platform

    Science.gov (United States)

    Hu, Yong

    2017-12-01

    In this paper, the data processing and storage characteristics of cloud computing are analyzed and studied. On this basis, a cloud computing data storage model based on a BP neural network is proposed. In this model, a server cluster is selected according to the attributes of the data, yielding a spatial data storage model with a load balancing function that is both feasible and offers practical advantages.

  6. Clone-based Data Index in Cloud Storage Systems

    Directory of Open Access Journals (Sweden)

    He Jing

    2016-01-01

    Full Text Available Storage systems have been challenged by the development of cloud computing. Traditional data indexes cannot satisfy the requirements of cloud computing because of the huge index volume and the demand for quick response times. Moreover, because of the increasing size of the data index and its dynamic characteristics, previous approaches, which rebuild the index or fully back it up before the data changes, cannot meet the needs of today’s big data indexing. To solve these problems, we propose a double-layer index structure that overcomes the throughput limitation of a single server. A clone-based B+ tree structure is then proposed to achieve high performance and adapt to dynamic environments. The experimental results show that our clone-based solution is highly efficient.
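
    A toy sketch of the double-layer idea, under simplifying assumptions: a small global layer routes each key to one of several local indexes so that no single server carries the whole index. Plain dictionaries stand in for the clone-based B+ trees of the paper, and all names are hypothetical.

        import bisect

        class DoubleLayerIndex:
            """Global layer: sorted split keys -> server id; local layer: one index per server."""

            def __init__(self, split_keys):
                self.split_keys = sorted(split_keys)                        # e.g. ["g", "n", "t"]
                self.local = [dict() for _ in range(len(split_keys) + 1)]   # stand-ins for B+ trees

            def _server_for(self, key):
                return bisect.bisect_right(self.split_keys, key)

            def put(self, key, value):
                self.local[self._server_for(key)][key] = value

            def get(self, key):
                return self.local[self._server_for(key)].get(key)

        if __name__ == "__main__":
            idx = DoubleLayerIndex(["g", "n", "t"])
            idx.put("cloud", 1)
            idx.put("storage", 2)
            print(idx.get("cloud"), idx.get("storage"))   # -> 1 2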

  7. Phase-image-based content-addressable holographic data storage

    Science.gov (United States)

    John, Renu; Joseph, Joby; Singh, Kehar

    2004-03-01

    We propose and demonstrate the use of phase images for content-addressable holographic data storage. The use of binary phase-based data pages with 0 and π phase changes produces a uniform spectral distribution at the Fourier plane. The absence of a strong DC component at the Fourier plane and the higher intensity of high-order spatial frequencies facilitate better recording of higher spatial frequencies and improve the discrimination capability of the content-addressable memory. This improves associative recall in a holographic memory system and can yield a low number of false hits even for small search arguments. The phase-modulated pixels also allow subtraction among data pixels, leading to better discrimination between similar data pages.

  8. Data exchange system in cooler-storage-ring virtual accelerator

    International Nuclear Information System (INIS)

    Liu Wufeng; Qiao Weimin; Jing Lan; Guo Yuhui

    2009-01-01

    The data exchange system of the cooler-storage-ring (CSR) control system for heavy-ion radiotherapy is introduced for the heavy-ion CSR at Lanzhou (HIRFL-CSR). Using Java, the component object model (COM), Oracle, DSP and FPGA techniques, the system achieves synchronous real-time control of the magnet power supplies and controls beams and their switching among 256 energy levels. It has been used in the commissioning of slow extraction for the main CSR (CSRm), showing stable and reliable performance. (authors)

  9. Optically Addressed Nanostructures for High Density Data Storage

    Science.gov (United States)

    2005-10-14

    Refereed journal publications (fragment of the report record): M. D. Stenner, D. J. Gauthier, and M. A. Neifeld, "The speed of information in a..."; "...profiles for high-density optical data storage," Optics Communications, Vol. 253, pp. 56-69, 2005; M. D. Stenner, D. J. Gauthier, and M. A. Neifeld, "Fast causal information transmission in a medium with a slow group velocity," Physical Review Letters, Vol. 94, February 2005; M. D. Stenner, M. A...

  10. Spectroscopic Feedback for High Density Data Storage and Micromachining

    Science.gov (United States)

    Carr, Christopher W.; Demos, Stavros; Feit, Michael D.; Rubenchik, Alexander M.

    2008-09-16

    Optical breakdown by predetermined laser pulses in transparent dielectrics produces an ionized region of dense plasma confined within the bulk of the material. Such an ionized region is responsible for the broadband radiation that accompanies the desired breakdown process. Spectroscopic monitoring of the accompanying light in real time is utilized to ascertain the morphology of the radiated interaction volume. The method and apparatus presented herein provide for the commercial realization of rapid prototyping of optoelectronic devices, optical three-dimensional data storage devices, and waveguide writing.

  11. NOSQL FOR STORAGE AND RETRIEVAL OF LARGE LIDAR DATA COLLECTIONS

    Directory of Open Access Journals (Sweden)

    J. Boehm

    2015-08-01

    Full Text Available Developments in LiDAR technology over the past decades have made LiDAR a mature and widely accepted source of geospatial information. This in turn has led to an enormous growth in data volume. The central idea behind a file-centric storage of LiDAR point clouds is the observation that large collections of LiDAR data are typically delivered as large collections of files, rather than single files of terabyte size. This split of the dataset, commonly referred to as tiling, was usually done to accommodate a specific processing pipeline, so it makes sense to preserve it. A document-oriented NoSQL database can easily emulate this data partitioning by representing each tile (file) in a separate document. The document stores the metadata of the tile; the actual files are stored in a distributed file system emulated by the NoSQL database. We demonstrate the use of MongoDB, a highly scalable document-oriented NoSQL database, for storing large LiDAR files. MongoDB, like any NoSQL database, allows queries on the attributes of the document. As a special feature, MongoDB also supports spatial queries, so we can query the bounding boxes of the LiDAR tiles. Insertion and retrieval of files on a cloud-based database are compared with native file system and cloud storage transfer speeds.

  12. Nosql for Storage and Retrieval of Large LIDAR Data Collections

    Science.gov (United States)

    Boehm, J.; Liu, K.

    2015-08-01

    Developments in LiDAR technology over the past decades have made LiDAR a mature and widely accepted source of geospatial information. This in turn has led to an enormous growth in data volume. The central idea behind a file-centric storage of LiDAR point clouds is the observation that large collections of LiDAR data are typically delivered as large collections of files, rather than single files of terabyte size. This split of the dataset, commonly referred to as tiling, was usually done to accommodate a specific processing pipeline, so it makes sense to preserve it. A document-oriented NoSQL database can easily emulate this data partitioning by representing each tile (file) in a separate document. The document stores the metadata of the tile; the actual files are stored in a distributed file system emulated by the NoSQL database. We demonstrate the use of MongoDB, a highly scalable document-oriented NoSQL database, for storing large LiDAR files. MongoDB, like any NoSQL database, allows queries on the attributes of the document. As a special feature, MongoDB also supports spatial queries, so we can query the bounding boxes of the LiDAR tiles. Insertion and retrieval of files on a cloud-based database are compared with native file system and cloud storage transfer speeds.
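
    To make the tile-per-document idea concrete, the sketch below uses the standard pymongo driver against an assumed local MongoDB instance: each tile document carries its metadata and a GeoJSON bounding box under a 2dsphere index, so tiles can be selected by spatial extent. Collection, field and file names are illustrative; the point data itself stays in the file store.

        from pymongo import MongoClient, GEOSPHERE

        client = MongoClient("mongodb://localhost:27017")   # assumed local MongoDB instance
        tiles = client.lidar.tiles

        # One document per LiDAR tile: metadata plus its bounding box as a GeoJSON polygon.
        tiles.create_index([("bbox", GEOSPHERE)])
        tiles.insert_one({
            "filename": "tile_001.las",
            "points": 12500000,
            "bbox": {"type": "Polygon",
                     "coordinates": [[[-0.14, 51.50], [-0.12, 51.50],
                                      [-0.12, 51.52], [-0.14, 51.52], [-0.14, 51.50]]]},
        })

        # Spatial query: which tiles intersect a small search rectangle?
        search = {"type": "Polygon",
                  "coordinates": [[[-0.135, 51.505], [-0.125, 51.505],
                                   [-0.125, 51.515], [-0.135, 51.515], [-0.135, 51.505]]]}
        for doc in tiles.find({"bbox": {"$geoIntersects": {"$geometry": search}}}):
            print(doc["filename"])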

  13. Protecting location privacy for outsourced spatial data in cloud storage.

    Science.gov (United States)

    Tian, Feng; Gui, Xiaolin; An, Jian; Yang, Pan; Zhao, Jianqiang; Zhang, Xuejun

    2014-01-01

    As cloud computing services and location-aware devices mature, large amounts of spatial data need to be outsourced to cloud storage providers, so privacy protection for outsourced spatial data is receiving increasing attention from academia and industry. As a spatial transformation method, the Hilbert curve is widely used to protect the location privacy of spatial data, but a thorough security analysis of the standard Hilbert curve (SHC) has seldom been carried out. In this paper, we propose an index modification method for SHC (SHC(∗)) and a density-based space-filling curve (DSC) to improve the security of SHC; both partially break the distance-preserving property of SHC and thereby achieve better security. We formally define an indistinguishability notion and an attack model for measuring the privacy disclosure risk of spatial transformation methods. The evaluation results indicate that SHC(∗) and DSC are more secure than SHC, and that DSC achieves the best index generation performance.
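
    For readers unfamiliar with the transformation under discussion, the sketch below computes the standard Hilbert index of a grid cell (the classic xy-to-index mapping). SHC-based schemes publish such indices instead of raw coordinates; the paper's SHC(∗) and DSC variants then perturb this mapping to weaken its distance-preserving property. This is the textbook algorithm, not the authors' modified curves.

        def xy_to_hilbert(n, x, y):
            """Map cell (x, y) of an n x n grid (n a power of two) to its Hilbert curve index."""
            d = 0
            s = n // 2
            while s > 0:
                rx = 1 if (x & s) > 0 else 0
                ry = 1 if (y & s) > 0 else 0
                d += s * s * ((3 * rx) ^ ry)
                # Rotate/flip the quadrant so deeper levels see the curve in standard orientation.
                if ry == 0:
                    if rx == 1:
                        x = n - 1 - x
                        y = n - 1 - y
                    x, y = y, x
                s //= 2
            return d

        if __name__ == "__main__":
            # Adjacent cells receive consecutive indices -- the distance-preserving
            # property that SHC(∗) and DSC deliberately weaken to reduce privacy leakage.
            for point in [(0, 0), (1, 0), (1, 1), (0, 1), (3, 0)]:
                print(point, "->", xy_to_hilbert(4, *point))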

  14. Storage and analysis of radioisotope scan data using a microcomputer

    Energy Technology Data Exchange (ETDEWEB)

    Crawshaw, I P; Diffey, B L [Dryburn Hospital, Durham (UK)

    1981-08-01

    A data storage system has been created for recording clinical radioisotope scan data on a microcomputer system, located and readily available for use in an imaging department. The input of patient data from the request cards and the results sheets is straightforward as menus and code numbers are used throughout a logical sequence of steps in the program. The questions fall into four categories: patient information, referring centre information, diagnosis and symptoms, and results of the investigation. The main advantage of the analysis program is its flexibility in that it follows the same format as the input program and any combination of criteria required for analysis may be selected. The menus may readily be altered and the programs adapted for use in other hospital departments.

  15. Storage and analysis of radioisotope scan data using a microcomputer

    International Nuclear Information System (INIS)

    Crawshaw, I.P.; Diffey, B.L.

    1981-01-01

    A data storage system has been created for recording clinical radioisotope scan data on a microcomputer system, located and readily available for use in an imaging department. The input of patient data from the request cards and the results sheets is straightforward as menus and code numbers are used throughout a logical sequence of steps in the program. The questions fall into four categories: patient information, referring centre information, diagnosis and symptoms, and results of the investigation. The main advantage of the analysis program is its flexibility in that it follows the same format as the input program and any combination of criteria required for analysis may be selected. The menus may readily be altered and the programs adapted for use in other hospital departments. (U.K.)

  16. Challenges for data storage in medical imaging research.

    Science.gov (United States)

    Langer, Steve G

    2011-04-01

    Researchers in medical imaging have multiple challenges for storing, indexing, maintaining viability, and sharing their data. Addressing all these concerns requires a constellation of tools, but not all of them need to be local to the site. In particular, the data storage challenges faced by researchers can begin to require professional information technology skills. With limited human resources and funds, the medical imaging researcher may be better served with an outsourcing strategy for some management aspects. This paper outlines an approach to manage the main objectives faced by medical imaging scientists whose work includes processing and data mining on non-standard file formats, and relating those files to their DICOM standard descendants. The capacity of the approach scales as the researcher's need grows by leveraging the on-demand provisioning ability of cloud computing.

  17. Federated data storage system prototype for LHC experiments and data intensive science

    Science.gov (United States)

    Kiryanov, A.; Klimentov, A.; Krasnopevtsev, D.; Ryabinkin, E.; Zarochentsev, A.

    2017-10-01

    Rapid increase of data volume from the experiments running at the Large Hadron Collider (LHC) prompted the physics computing community to evaluate new data handling and processing solutions. Russian grid sites and universities’ clusters scattered over a large area aim at the task of uniting their resources for future productive work, at the same time giving an opportunity to support large physics collaborations. In our project we address the fundamental problem of designing a computing architecture to integrate distributed storage resources for LHC experiments and other data-intensive science applications and to provide access to data from heterogeneous computing facilities. Studies include development and implementation of a federated data storage prototype for Worldwide LHC Computing Grid (WLCG) centres of different levels and University clusters within one National Cloud. The prototype is based on computing resources located in Moscow, Dubna, Saint Petersburg, Gatchina and Geneva. This project intends to implement a federated distributed storage for all kinds of operations such as read/write/transfer and access via WAN from Grid centres, university clusters, supercomputers, academic and commercial clouds. The efficiency and performance of the system are demonstrated using synthetic and experiment-specific tests including real data processing and analysis workflows from ATLAS and ALICE experiments, as well as compute-intensive bioinformatics applications (PALEOMIX) running on supercomputers. We present topology and architecture of the designed system, report performance and statistics for different access patterns and show how federated data storage can be used efficiently by physicists and biologists. We also describe how sharing data on a widely distributed storage system can lead to a new computing model and reformations of computing style, for instance how a bioinformatics program running on supercomputers can read/write data from the federated storage.

  18. Online data handling and storage at the CMS experiment

    Science.gov (United States)

    Andre, J.-M.; Andronidis, A.; Behrens, U.; Branson, J.; Chaze, O.; Cittolin, S.; Darlea, G.-L.; Deldicque, C.; Demiragli, Z.; Dobson, M.; Dupont, A.; Erhan, S.; Gigi, D.; Glege, F.; Gómez-Ceballos, G.; Hegeman, J.; Holzner, A.; Jimenez-Estupiñán, R.; Masetti, L.; Meijers, F.; Meschi, E.; Mommsen, RK; Morovic, S.; Nuñez-Barranco-Fernández, C.; O'Dell, V.; Orsini, L.; Paus, C.; Petrucci, A.; Pieri, M.; Racz, A.; Roberts, P.; Sakulin, H.; Schwick, C.; Stieger, B.; Sumorok, K.; Veverka, J.; Zaza, S.; Zejdl, P.

    2015-12-01

    During the LHC Long Shutdown 1, the CMS Data Acquisition (DAQ) system underwent a partial redesign to replace obsolete network equipment, use more homogeneous switching technologies, and support new detector back-end electronics. The software and hardware infrastructure to provide input, execute the High Level Trigger (HLT) algorithms and deal with output data transport and storage has also been redesigned to be completely file-based. All the metadata needed for bookkeeping are stored in files as well, in the form of small documents using the JSON encoding. The Storage and Transfer System (STS) is responsible for aggregating these files produced by the HLT, storing them temporarily and transferring them to the T0 facility at CERN for subsequent offline processing. The STS merger service aggregates the output files from the HLT from ∼62 sources produced with an aggregate rate of ∼2GB/s. An estimated bandwidth of 7GB/s in concurrent read/write mode is needed. Furthermore, the STS has to be able to store several days of continuous running, so an estimated 250TB of total usable disk space is required. In this article we present the various technological and implementation choices of the three components of the STS: the distributed file system, the merger service and the transfer system.

  19. Online Data Handling and Storage at the CMS Experiment

    Energy Technology Data Exchange (ETDEWEB)

    Andre, J. M.; et al.

    2015-12-23

    During the LHC Long Shutdown 1, the CMS Data Acquisition (DAQ) system underwent a partial redesign to replace obsolete network equipment, use more homogeneous switching technologies, and support new detector back-end electronics. The software and hardware infrastructure to provide input, execute the High Level Trigger (HLT) algorithms and deal with output data transport and storage has also been redesigned to be completely file-based. All the metadata needed for bookkeeping are stored in files as well, in the form of small documents using the JSON encoding. The Storage and Transfer System (STS) is responsible for aggregating these files produced by the HLT, storing them temporarily and transferring them to the T0 facility at CERN for subsequent offline processing. The STS merger service aggregates the output files from the HLT from ~62 sources produced with an aggregate rate of ~2GB/s. An estimated bandwidth of 7GB/s in concurrent read/write mode is needed. Furthermore, the STS has to be able to store several days of continuous running, so an estimated 250TB of total usable disk space is required. In this article we present the various technological and implementation choices of the three components of the STS: the distributed file system, the merger service and the transfer system.

  20. Online data handling and storage at the CMS experiment

    CERN Document Server

    Andre, Jean-marc Olivier; Behrens, Ulf; Branson, James; Chaze, Olivier; Demiragli, Zeynep; Dobson, Marc; Dupont, Aymeric; Erhan, Samim; Gigi, Dominique; Glege, Frank; Gomez Ceballos, Guillelmo; Hegeman, Jeroen Guido; Holzner, Andre Georg; Jimenez Estupinan, Raul; Masetti, Lorenzo; Meijers, Franciscus; Meschi, Emilio; Mommsen, Remigius; Morovic, Srecko; Nunez Barranco Fernandez, Carlos; O'Dell, Vivian; Orsini, Luciano; Paus, Christoph Maria Ernst; Petrucci, Andrea; Pieri, Marco; Racz, Attila; Roberts, Penelope Amelia; Sakulin, Hannes; Schwick, Christoph; Stieger, Benjamin Bastian; Sumorok, Konstanty; Veverka, Jan; Zaza, Salvatore; Zejdl, Petr

    2015-01-01

    During the LHC Long Shutdown 1, the CMS Data Acquisition (DAQ) system underwent a partial redesign to replace obsolete network equipment, use more homogeneous switching technologies, and support new detector back-end electronics. The software and hardware infrastructure to provide input, execute the High Level Trigger (HLT) algorithms and deal with output data transport and storage has also been redesigned to be completely file-based. All the metadata needed for bookkeeping are stored in files as well, in the form of small 'documents' using the JSON encoding. The Storage and Transfer System (STS) is responsible for aggregating these files produced by the HLT, storing them temporarily and transferring them to the T0 facility at CERN for subsequent offline processing. The STS merger service aggregates the output files from the HLT from ~62 sources produced with an aggregate rate of ~2GB/s. An estimated bandwidth of 7GB/s in concurrent read/write mode is needed. Furthermore, the STS has to be able to store ...

  1. Online data handling and storage at the CMS experiment

    International Nuclear Information System (INIS)

    Andre, J-M; Andronidis, A; Chaze, O; Deldicque, C; Dobson, M; Dupont, A; Gigi, D; Glege, F; Hegeman, J; Jimenez-Estupiñán, R; Masetti, L; Meijers, F; Behrens, U; Branson, J; Cittolin, S; Holzner, A; Darlea, G-L; Demiragli, Z; Gómez-Ceballos, G; Erhan, S

    2015-01-01

    During the LHC Long Shutdown 1, the CMS Data Acquisition (DAQ) system underwent a partial redesign to replace obsolete network equipment, use more homogeneous switching technologies, and support new detector back-end electronics. The software and hardware infrastructure to provide input, execute the High Level Trigger (HLT) algorithms and deal with output data transport and storage has also been redesigned to be completely file-based. All the metadata needed for bookkeeping are stored in files as well, in the form of small documents using the JSON encoding. The Storage and Transfer System (STS) is responsible for aggregating these files produced by the HLT, storing them temporarily and transferring them to the T0 facility at CERN for subsequent offline processing. The STS merger service aggregates the output files from the HLT from ∼62 sources produced with an aggregate rate of ∼2GB/s. An estimated bandwidth of 7GB/s in concurrent read/write mode is needed. Furthermore, the STS has to be able to store several days of continuous running, so an estimated 250TB of total usable disk space is required. In this article we present the various technological and implementation choices of the three components of the STS: the distributed file system, the merger service and the transfer system. (paper)
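
    The ~250 TB figure quoted in the records above follows from simple arithmetic on the ~2 GB/s aggregate HLT output; the sketch below reproduces it, with the assumed number of days of buffering as the only free parameter (the actual requirement depends on the LHC duty cycle).

        # Back-of-the-envelope check of the STS sizing quoted above.
        aggregate_rate_gb_s = 2        # ~2 GB/s of HLT output
        days_of_buffering = 1.5        # assumed; "several days" in the text

        seconds = days_of_buffering * 24 * 3600
        required_tb = aggregate_rate_gb_s * seconds / 1000   # GB -> TB
        print(f"{required_tb:.0f} TB needed for {days_of_buffering} days at "
              f"{aggregate_rate_gb_s} GB/s of continuous running")
        # ~259 TB, of the same order as the ~250 TB of usable disk quoted for the STS.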

  2. Holographic data storage: science fiction or science fact?

    Science.gov (United States)

    Anderson, Ken; Ayres, Mark; Askham, Fred; Sissom, Brad

    2014-09-01

    To compete in the archive and backup industries, holographic data storage must be highly competitive in four critical areas: total cost of ownership (TCO), cost/TB, capacity/footprint, and transfer rate. New holographic technology advancements by Akonia Holographics have enabled the potential for ultra-high capacity holographic storage devices that are capable of world record bit densities of over 2-4Tbit/in2, up to 200MB/s transfer rates, and media costs less than $10/TB in the next few years. Additional advantages include more than a 3x lower TCO than LTO, a 3.5x decrease in volumetric footprint, 30ms random access times, and 50 year archive life. At these bit densities, 4.5 Petabytes of uncompressed user data could be stored in a 19" rack system. A demonstration platform based on these new advances has been designed and built by Akonia to progressively demonstrate bit densities of 2Tb/in2, 4Tb/in2, and 8Tb/in2 over the next year.

  3. Robust Secure Authentication and Data Storage with Perfect Secrecy

    Directory of Open Access Journals (Sweden)

    Sebastian Baur

    2018-04-01

    Full Text Available We consider an authentication process that makes use of biometric data or the output of a physical unclonable function (PUF), respectively, from an information theoretical point of view. We analyse different definitions of achievability for the authentication model. For the secrecy of the key generated for authentication, these definitions differ in their requirements. In the first work on PUF based authentication, weak secrecy has been used and the corresponding capacity regions have been characterized. The disadvantages of weak secrecy are well known. The ultimate performance criteria for the key are perfect secrecy together with uniform distribution of the key. We derive the corresponding capacity region. We show that, for perfect secrecy and uniform distribution of the key, we can achieve the same rates as for weak secrecy together with a weaker requirement on the distribution of the key. In the classical works on PUF based authentication, it is assumed that the source statistics are known perfectly. This requirement is rarely met in applications. That is why the model is generalized to a compound model, taking into account source uncertainty. We also derive the capacity region for the compound model requiring perfect secrecy. Additionally, we consider results for secure storage using a biometric or PUF source that follow directly from the results for authentication. We also generalize known results for this problem by weakening the assumption concerning the distribution of the data that shall be stored. This allows us to combine source compression and secure storage.

  4. Two-Level Verification of Data Integrity for Data Storage in Cloud Computing

    Science.gov (United States)

    Xu, Guangwei; Chen, Chunlin; Wang, Hongya; Zang, Zhuping; Pang, Mugen; Jiang, Ping

    Data storage in cloud computing can save capital expenditure and relieve the burden of storage management for users. As the loss or corruption of stored files may happen, many researchers focus on the verification of data integrity. However, massive users often bring large numbers of verifying tasks for the auditor. Moreover, users also need to pay extra fees for these verifying tasks beyond the storage fee. Therefore, we propose a two-level verification of data integrity to alleviate these problems. The key idea is to routinely verify the data integrity by users and arbitrate the challenge between the user and the cloud provider by the auditor according to the MACs and ϕ values. The extensive performance simulations show that the proposed scheme obviously decreases the auditor's verifying tasks and the ratio of wrong arbitration.

  5. All-optical signal processing data communication and storage applications

    CERN Document Server

    Eggleton, Benjamin

    2015-01-01

    This book provides a comprehensive review of the state of the art of optical signal processing technologies and devices. It presents breakthrough solutions for enabling a pervasive use of optics in data communication and signal storage applications, and presents optical signal processing as a solution to overcome the capacity crunch in communication networks. The book content ranges from the development of innovative materials and devices, such as graphene and slow light structures, to the use of nonlinear optics for secure quantum information processing and overcoming the classical Shannon limit on channel capacity and microwave signal processing. Although it holds the promise of a substantial speed improvement, in today’s communication infrastructure optics remains largely confined to the signal transport layer, as it lags behind electronics as far as signal processing is concerned. This situation will change in the near future as the tremendous growth of data traffic requires energy efficient and ful...

  6. Lossless compression of waveform data for efficient storage and transmission

    International Nuclear Information System (INIS)

    Stearns, S.D.; Tan, Li Zhe; Magotra, Neeraj

    1993-01-01

    Compression of waveform data is significant in many engineering and research areas since it can be used to reduce data storage and transmission bandwidth requirements. For example, seismic data are widely recorded and transmitted so that analysis can be performed on large amounts of data for numerous applications such as petroleum exploration, determination of the earth's core structure, seismic event detection and discrimination of underground nuclear explosions, etc. This paper describes a technique for lossless waveform data compression. The technique consists of two stages. The first stage is a modified form of linear prediction with discrete coefficients and the second stage is bi-level sequence coding. The linear predictor generates an error or residue sequence in a way such that exact reconstruction of the original data sequence can be accomplished with a simple algorithm. The residue sequence is essentially white Gaussian for seismic or other similar waveform data. Bi-level sequence coding, in which two sample sizes are chosen and the residue sequence is encoded into subsequences that alternate from one level to the other, further compresses the residue sequence. The principal feature of the two-stage data compression algorithm is that it is lossless, that is, it allows exact, bit-for-bit recovery of the original data sequence. The performance of the lossless compression algorithm at each stage is analyzed. The advantages of using bi-level sequence coding in the second stage are its simplicity of implementation, its effectiveness on data with large amplitude variations, and its near-optimal performance in encoding Gaussian sequences. Applications of the two-stage technique to typical seismic data indicate that an average number of compressed bits per sample close to the lower bound is achievable in practical situations
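
    A minimal sketch of the two-stage idea described above: a first-order integer predictor whose residue sequence can be inverted exactly, followed by a much simplified stand-in for the bi-level sequence coding stage that merely charges a small word to residues that fit it and a large word to the rest. The predictor order and word sizes are illustrative, not those of the paper.

        def predict_residues(samples):
            """Stage 1: first-order integer predictor; residue[i] = x[i] - x[i-1]."""
            prev, residues = 0, []
            for x in samples:
                residues.append(x - prev)
                prev = x
            return residues

        def reconstruct(residues):
            """Exact (lossless) inverse of predict_residues."""
            prev, samples = 0, []
            for r in residues:
                prev += r
                samples.append(prev)
            return samples

        def bilevel_cost_bits(residues, small=4, large=16):
            """Stage 2 (simplified): small residues cost `small` bits, the rest `large` bits."""
            lo = 1 << (small - 1)
            return sum(small if -lo <= r < lo else large for r in residues)

        if __name__ == "__main__":
            data = [100, 102, 101, 105, 110, 111, 109, 300, 301, 303]
            res = predict_residues(data)
            assert reconstruct(res) == data          # bit-for-bit recovery
            print("residues:", res)
            print("coded size:", bilevel_cost_bits(res), "bits vs", 16 * len(data), "bits raw")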

  7. Technology Assessment of High Capacity Data Storage Systems: Can We Avoid a Data Survivability Crisis?

    Science.gov (United States)

    Halem, M.; Shaffer, F.; Palm, N.; Salmon, E.; Raghavan, S.; Kempster, L.

    1998-01-01

    This technology assessment of long-term high capacity data storage systems identifies an emerging crisis of severe proportions related to preserving important historical data in science, healthcare, manufacturing, finance and other fields. For the last 50 years, the information revolution, which has engulfed all major institutions of modern society, centered itself on data: their collection, storage, retrieval, transmission, analysis and presentation. The transformation of long term historical data records into information concepts, according to Drucker, is the next stage in this revolution towards building the new information based scientific and business foundations. For this to occur, data survivability, reliability and evolvability of long term storage media and systems pose formidable technological challenges. Unlike the Y2K problem, where the clock is ticking and a crisis is set to go off at a specific time, large capacity data storage repositories face a crisis similar to the social security system in that the seriousness of the problem emerges after a decade or two. The essence of the storage crisis is as follows: since it could take a decade to migrate a petabyte of data to a new media for preservation, and the life expectancy of the storage media itself is only a decade, then it may not be possible to complete the transfer before an irrecoverable data loss occurs. Over the last two decades, a number of anecdotal crises have occurred where vital scientific and business data were lost or would have been lost if not for major expenditures of resources and funds to save this data, much like what is happening today to solve the Y2K problem. A prime example was the joint NASA/NSF/NOAA effort to rescue eight years worth of TOVS/AVHRR data from an obsolete system, without which the valuable 20-year long satellite record of global warming would not exist. Current storage systems solutions to long-term data survivability rest on scalable architectures

  8. "Recent experiences and future expectations in data storage technology"

    Science.gov (United States)

    Pfister, Jack

    1990-08-01

    For more than 10 years the conventional media for High Energy Physics has been 9 track magnetic tape in various densities. More recently, especially in Europe, the IBM 3480 technology has been adopted while in the United States, especially at Fermilab, 8 mm is being used by the largest experiments as a primary recording media and where possible they are using 8 mm for the production, analysis and distribution of data summary tapes. VHS and Digital Audio tape have recurrently appeared but seem to serve primarily as a back-up storage media. The reasons for what appear to be a radical departure are many. Economics (media and controllers are inexpensive), form factor (two gigabytes per shirt pocket), and convenience (fewer mounts/dismounts per minute) are dominant among the reasons. The traditional data media suppliers seem to have been content to evolve the traditional media at their own pace with only modest enhancements primarily in "value engineering" of extant products. Meanwhile, start-up companies providing small systems and workstations sought other media both to reduce the price of their offerings and respond to the real need of lower cost back-up for lower cost systems. This is happening in a market context where traditional computer systems vendors were leaving the tape market altogether or shifting to "3480" technology, which has certainly created a climate for reconsideration and change. The newest data storage products, in most cases, are not coming from the technologies developed by the computing industry but from the audio and video industry. Just where these flopticals, opticals, 19 mm tape and the new underlying technologies, such as "digital paper", may fit in the HEP computing requirement picture will be reviewed. What these technologies do for and to HEP will be discussed along with some suggestions for a methodology for tracking and evaluating extant and emerging technologies.

  9. Proactive replica checking to assure reliability of data in cloud storage with minimum replication

    Science.gov (United States)

    Murarka, Damini; Maheswari, G. Uma

    2017-11-01

    The two major issues for cloud storage systems are data reliability and storage cost. For data reliability protection, the multi-replica strategy mostly used in current clouds incurs huge storage consumption, leading to a large storage cost for applications in the cloud. This paper presents a cost-efficient data reliability mechanism named PRCR to cut back cloud storage consumption. PRCR ensures the reliability of large cloud data with minimum replication, which can also serve as a cost-effective benchmark for replication. Evaluation shows that, compared with the conventional three-replica approach, PRCR needs only a fraction of the storage (as little as one-third), hence considerably reducing the cloud storage cost.

  10. Eternal 5D optical data storage in glass (Conference Presentation)

    Science.gov (United States)

    Kazansky, Peter G.; Cerkauskaite, Ausra; Drevinskas, Rokas; Zhang, Jingyu

    2016-09-01

    A decade ago it was discovered that during femtosecond laser writing, self-organized subwavelength structures with record small features of 20 nm could be created in the volume of silica glass. On the macroscopic scale the self-assembled nanostructure behaves as a uniaxial optical crystal with negative birefringence. The optical anisotropy, which results from the alignment of nano-platelets, referred to as form birefringence, is of the same order of magnitude as the positive birefringence in crystalline quartz. The two independent parameters describing birefringence, the slow axis orientation (4th dimension) and the strength of retardance (5th dimension), are explored for the optical encoding of information in addition to three spatial coordinates. The slow axis orientation and the retardance are independently manipulated by the polarization and intensity of the femtosecond laser beam. The data optically encoded into five dimensions is successfully retrieved by quantitative birefringence measurements. The storage allows unprecedented parameters including hundreds of terabytes per disc data capacity and thermal stability up to 1000°C. Even at elevated temperatures of 160°C, the extrapolated decay time of nanogratings is comparable with the age of the Universe - 13.8 billion years. The recording of the digital documents, which will survive the human race, including the eternal copies of the Universal Declaration of Human Rights, Newton's Opticks, the King James Bible and Magna Carta, is a vital step towards an eternal archive. Additionally, a number of projects (such as Time Capsule to Mars, MoonMail, and the Google Lunar XPRIZE) could benefit from the technique's extreme durability, which fulfills a crucial requirement for storage on the Moon or Mars.

  11. Parallel file system performances in fusion data storage

    International Nuclear Information System (INIS)

    Iannone, F.; Podda, S.; Bracco, G.; Manduchi, G.; Maslennikov, A.; Migliori, S.; Wolkersdorfer, K.

    2012-01-01

    High I/O flow rates, up to 10 GB/s, are required in large fusion Tokamak experiments like ITER where hundreds of nodes store simultaneously large amounts of data acquired during the plasma discharges. Typical network topologies such as linear arrays (systolic), rings, meshes (2-D arrays), tori (3-D arrays), trees, butterfly, hypercube in combination with high speed data transports like Infiniband or 10G-Ethernet, are the main areas in which the effort to overcome the so-called parallel I/O bottlenecks is most focused. The high I/O flow rates were modelled in an emulated testbed based on the parallel file systems such as Lustre and GPFS, commonly used in High Performance Computing. The test runs on High Performance Computing–For Fusion (8640 cores) and ENEA CRESCO (3392 cores) supercomputers. Message Passing Interface based applications were developed to emulate parallel I/O on Lustre and GPFS using data archival and access solutions like MDSPLUS and Universal Access Layer. These methods of data storage organization are widely diffused in nuclear fusion experiments and are being developed within the EFDA Integrated Tokamak Modelling – Task Force; the authors tried to evaluate their behaviour in a realistic emulation setup.

  12. Parallel file system performances in fusion data storage

    Energy Technology Data Exchange (ETDEWEB)

    Iannone, F., E-mail: francesco.iannone@enea.it [Associazione EURATOM-ENEA sulla Fusione, C.R.ENEA Frascati, via E.Fermi, 45 - 00044 Frascati, Rome (Italy); Podda, S.; Bracco, G. [ENEA Information Communication Tecnologies, Lungotevere Thaon di Revel, 76 - 00196 Rome (Italy); Manduchi, G. [Associazione EURATOM-ENEA sulla Fusione, Consorzio RFX, Corso Stati Uniti, 4 - 35127 Padua (Italy); Maslennikov, A. [CASPUR Inter-University Consortium for the Application of Super-Computing for Research, via dei Tizii, 6b - 00185 Rome (Italy); Migliori, S. [ENEA Information Communication Tecnologies, Lungotevere Thaon di Revel, 76 - 00196 Rome (Italy); Wolkersdorfer, K. [Juelich Supercomputing Centre-FZJ, D-52425 Juelich (Germany)

    2012-12-15

    High I/O flow rates, up to 10 GB/s, are required in large fusion Tokamak experiments like ITER where hundreds of nodes store simultaneously large amounts of data acquired during the plasma discharges. Typical network topologies such as linear arrays (systolic), rings, meshes (2-D arrays), tori (3-D arrays), trees, butterfly, hypercube in combination with high speed data transports like Infiniband or 10G-Ethernet, are the main areas in which the effort to overcome the so-called parallel I/O bottlenecks is most focused. The high I/O flow rates were modelled in an emulated testbed based on the parallel file systems such as Lustre and GPFS, commonly used in High Performance Computing. The test runs on High Performance Computing-For Fusion (8640 cores) and ENEA CRESCO (3392 cores) supercomputers. Message Passing Interface based applications were developed to emulate parallel I/O on Lustre and GPFS using data archival and access solutions like MDSPLUS and Universal Access Layer. These methods of data storage organization are widely diffused in nuclear fusion experiments and are being developed within the EFDA Integrated Tokamak Modelling - Task Force; the authors tried to evaluate their behaviour in a realistic emulation setup.
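
    As an illustration of the kind of MPI-based emulation mentioned above, the sketch below has every rank write its own contiguous slice of a shared file with MPI-IO. It assumes mpi4py and NumPy are available and that the file lives on a parallel file system such as Lustre or GPFS; it is, of course, far simpler than the MDSplus/UAL-based applications used in the paper.

        from mpi4py import MPI
        import numpy as np

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        BLOCK = 1 << 20                        # 1 MiB per rank per write; illustrative
        data = np.full(BLOCK, rank, dtype=np.uint8)

        # Each rank writes its own contiguous region of the shared file.
        fh = MPI.File.Open(comm, "emulated_shot.dat",
                           MPI.MODE_WRONLY | MPI.MODE_CREATE)
        t0 = MPI.Wtime()
        fh.Write_at_all(rank * BLOCK, data)    # collective write at a per-rank offset
        fh.Close()
        elapsed = comm.reduce(MPI.Wtime() - t0, op=MPI.MAX, root=0)

        if rank == 0:
            mb = size * BLOCK / 1e6
            print(f"{mb:.0f} MB written by {size} ranks in {elapsed:.3f} s")

    Run, for example, with "mpirun -n 4 python emulate_io.py"; on a parallel file system the measured time reflects aggregate write bandwidth rather than that of a single node.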

  13. Converged photonic data storage and switch platform for exascale disaggregated data centers

    Science.gov (United States)

    Pitwon, R.; Wang, K.; Worrall, A.

    2017-02-01

    We report on a converged optically enabled Ethernet storage, switch and compute platform, which could support future disaggregated data center architectures. The platform includes optically enabled Ethernet switch controllers, an advanced electro-optical midplane and optically interchangeable generic end node devices. We demonstrate system level performance using optically enabled Ethernet disk drives and micro-servers across optical links of varied lengths.

  14. Distributed Storage Algorithm for Geospatial Image Data Based on Data Access Patterns.

    Directory of Open Access Journals (Sweden)

    Shaoming Pan

    Full Text Available Declustering techniques are widely used in distributed environments to reduce query response time through parallel I/O by splitting large files into several small blocks and then distributing those blocks among multiple storage nodes. However, many small geospatial image data files cannot be further split for distributed storage. In this paper, we propose a complete theoretical system for the distributed storage of small geospatial image data files based on mining the access patterns of geospatial image data using their historical access log information. First, an algorithm is developed to construct an access correlation matrix based on the analysis of the log information, which reveals the patterns of access to the geospatial image data. Then, a practical heuristic algorithm is developed to determine a reasonable solution based on the access correlation matrix. Finally, a number of comparative experiments are presented, demonstrating that our algorithm achieves a total parallel access probability approximately 10-15% higher than that of other algorithms and that the performance can be further improved by more than 20% by simultaneously applying a copy storage strategy. These experiments show that the algorithm can be applied in distributed environments to help realize parallel I/O and thereby improve system performance.

  15. Distributed Storage Algorithm for Geospatial Image Data Based on Data Access Patterns.

    Science.gov (United States)

    Pan, Shaoming; Li, Yongkai; Xu, Zhengquan; Chong, Yanwen

    2015-01-01

    Declustering techniques are widely used in distributed environments to reduce query response time through parallel I/O by splitting large files into several small blocks and then distributing those blocks among multiple storage nodes. However, many small geospatial image data files cannot be further split for distributed storage. In this paper, we propose a complete theoretical system for the distributed storage of small geospatial image data files based on mining the access patterns of geospatial image data using their historical access log information. First, an algorithm is developed to construct an access correlation matrix based on the analysis of the log information, which reveals the patterns of access to the geospatial image data. Then, a practical heuristic algorithm is developed to determine a reasonable solution based on the access correlation matrix. Finally, a number of comparative experiments are presented, demonstrating that our algorithm achieves a total parallel access probability approximately 10-15% higher than that of other algorithms and that the performance can be further improved by more than 20% by simultaneously applying a copy storage strategy. These experiments show that the algorithm can be applied in distributed environments to help realize parallel I/O and thereby improve system performance.
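
    A toy version of the approach, under simplifying assumptions: the access correlation matrix is built by counting how often two tiles appear in the same logged request, and a greedy heuristic then places each tile on the node where it is least correlated with the tiles already stored there, so strongly co-accessed tiles can be read in parallel. The log format and node count are invented for illustration.

        from collections import defaultdict
        from itertools import combinations

        # Hypothetical access log: each entry lists the tiles touched by one request.
        log = [["t1", "t2"], ["t1", "t2", "t3"], ["t3", "t4"], ["t1", "t3"], ["t2", "t3"]]
        NODES = 2

        # Step 1: build the access correlation matrix (co-access counts) from the log.
        corr = defaultdict(int)
        for request in log:
            for a, b in combinations(sorted(set(request)), 2):
                corr[(a, b)] += 1

        def correlation(t, u):
            return corr.get(tuple(sorted((t, u))), 0)

        # Step 2: greedy heuristic -- put each tile on the node where it is least
        # correlated with the tiles already placed, spreading co-accessed tiles out.
        placement = {}
        for t in sorted({tile for req in log for tile in req}):
            best = min(range(NODES),
                       key=lambda n: sum(correlation(t, u)
                                         for u, node in placement.items() if node == n))
            placement[t] = best

        print(placement)   # strongly co-accessed tiles such as t1 and t2 land on different nodes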

  16. The Storage of Thermal Reactor Safety Analysis data (STRESA)

    International Nuclear Information System (INIS)

    Tanarro Colodron, J.

    2016-01-01

    Full text: Storage of Thermal Reactor Safety Analysis data (STRESA) is an online information system that contains three technical databases: 1) European Nuclear Research Facilities, open to all online visitors; 2) Nuclear Experiments, available only to registered users; 3) Results Data, the core content of the information system, whose availability depends on the role and organisation of each user. Its main purpose is to facilitate the exchange of experimental data produced by large Euratom-funded scientific projects addressing severe accidents, providing at the same time a secure repository for this information. Due to its purpose and architecture, it has become an important asset for networks of excellence such as SARNET or NUGENIA. The Severe Accident Research Network of Excellence (SARNET) was set up in 2004 under the aegis of the Euratom research Framework Programmes to study severe accidents in water-cooled nuclear power plants. Coordinated by the IRSN, SARNET unites 43 organizations involved in research on nuclear reactor safety in 18 European countries plus the USA, Canada, South Korea and India. In 2013, SARNET became fully integrated in the Technical Area N2 (TA2), named “Severe accidents”, of the NUGENIA association, devoted to R&D on fission technology of Generation II and III. (author)

  17. Concept of data storage prototype for Super-C-Tau factory detector

    International Nuclear Information System (INIS)

    Maximov, D.A.

    2017-01-01

    The physics program of experiments at the Super c-τ factory, with a peak luminosity of 10^35 cm^-2 s^-1, leads to high requirements for the Data Acquisition and Data Storage systems. Detector data storage is one of the key components of the detector infrastructure, so it must be a reliable, highly available and fault-tolerant shared storage. From the end user's point of view it is mostly oriented toward sequential but mixed read and write operations and is planned to store large data blocks (files). According to the CDR of the Super-C-Tau factory detector, data storage must have very high performance (up to 1 Tbps in both directions simultaneously) and significant volume (tens to hundreds of petabytes). It was decided to build a series of prototypes with growing capabilities to investigate storage and neighbouring technologies. The first data storage prototype is aimed at developing and testing the basic components of the detector data storage system, such as storage devices, networks and software, and is designed to work with data rates of order 10 Gbps. It is estimated that about 5 modern computers with about 50 disks in total should be enough to achieve the required performance. The prototype will be based on the Ceph storage technology. Ceph is a distributed storage system which makes it possible to create storage solutions with a very flexible design, high availability and scalability.

  18. An evaluation of Oracle for persistent data storage and analysis of LHC physics data

    International Nuclear Information System (INIS)

    Grancher, E.; Marczukajtis, M.

    2001-01-01

    CERN's IT/DB group is currently exploring the possibility of using Oracle to store LHC physics data. This paper presents preliminary results from this work, concentrating on two aspects: the storage of RAW data and the analysis of TAG data. The RAW data part of the study discusses the throughput that one can achieve with the Oracle database system, the options for storing the data and an estimation of the associated overheads. The TAG data analysis focuses on the use of new and extended indexing features of Oracle to perform efficient cuts on the data. The tests were performed with Oracle 8.1.7.

  19. A Privacy-Preserving Outsourcing Data Storage Scheme with Fragile Digital Watermarking-Based Data Auditing

    Directory of Open Access Journals (Sweden)

    Xinyue Cao

    2016-01-01

    Full Text Available Cloud storage has been recognized as a popular solution to the rising storage costs that IT enterprises face on behalf of their users. However, outsourcing data to cloud service providers (CSPs) may leak sensitive private information, as the data is out of the user's control. How to ensure the integrity and privacy of outsourced data has therefore become a big challenge. Encryption and data auditing provide a solution to this challenge. In this paper, we propose a privacy-preserving and auditing-supporting outsourced data storage scheme using encryption and digital watermarking. A logistic-map-based chaotic encryption algorithm, which offers fast operation and good encryption strength, is used to preserve the privacy of the outsourced data. A local histogram-shifting digital watermarking algorithm is used to protect data integrity; it has a high payload and allows the original image to be restored losslessly once the data is verified as intact. Experiments show that our scheme is secure and feasible.
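
    A minimal sketch of the logistic-map idea used for privacy in the scheme above: the chaotic map x -> r*x*(1 - x) is iterated from a secret key (x0, r) to produce a keystream that is XORed with the data. This is only a didactic illustration of chaotic stream encryption, not the authors' exact algorithm, and it omits the watermarking-based auditing entirely; a production system would use a vetted cipher.

        def logistic_keystream(x0, r, length, burn_in=1000):
            """Generate `length` keystream bytes from the logistic map x <- r*x*(1-x)."""
            x = x0
            for _ in range(burn_in):          # discard the transient iterations
                x = r * x * (1 - x)
            stream = bytearray()
            for _ in range(length):
                x = r * x * (1 - x)
                stream.append(int(x * 256) % 256)
            return bytes(stream)

        def xor_crypt(data, key=(0.4321, 3.9999)):
            """Encrypt or decrypt by XOR with the chaotic keystream (same call for both)."""
            ks = logistic_keystream(key[0], key[1], len(data))
            return bytes(b ^ k for b, k in zip(data, ks))

        if __name__ == "__main__":
            plain = b"outsource me, but keep me private"
            cipher = xor_crypt(plain)
            assert xor_crypt(cipher) == plain     # decrypting recovers the data exactly
            print(cipher.hex())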

  20. Dendronized macromonomers for three-dimensional data storage

    DEFF Research Database (Denmark)

    Khan, A.; Daugaard, Anders Egede; Bayles, A.

    2009-01-01

    A series of dendritic macromonomers have been synthesized and utilized as the photoactive component in holographic storage systems leading to high performance, low shrinkage materials.

  1. Rewritable three-dimensional holographic data storage via optical forces

    Energy Technology Data Exchange (ETDEWEB)

    Yetisen, Ali K., E-mail: ayetisen@mgh.harvard.edu [Harvard Medical School and Wellman Center for Photomedicine, Massachusetts General Hospital, 65 Landsdowne Street, Cambridge, Massachusetts 02139 (United States); Harvard-MIT Division of Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139 (United States); Montelongo, Yunuen [Department of Chemistry, Imperial College London, South Kensington Campus, London SW7 2AZ (United Kingdom); Butt, Haider [Nanotechnology Laboratory, School of Engineering Sciences, University of Birmingham, Birmingham B15 2TT (United Kingdom)

    2016-08-08

    The development of nanostructures that can be reversibly arranged and assembled into 3D patterns may enable optical tunability. However, current dynamic recording materials such as photorefractive polymers cannot be used to store information permanently while also retaining configurability. Here, we describe the synthesis and optimization of a silver nanoparticle doped poly(2-hydroxyethyl methacrylate-co-methacrylic acid) recording medium for reversibly recording 3D holograms. We theoretically and experimentally demonstrate organizing nanoparticles into 3D assemblies in the recording medium using optical forces produced by the gradients of standing waves. The nanoparticles in the recording medium are organized by multiple nanosecond laser pulses to produce reconfigurable slanted multilayer structures. We demonstrate the capability of producing rewritable optical elements such as multilayer Bragg diffraction gratings, 1D photonic crystals, and 3D multiplexed optical gratings. We also show that 3D virtual holograms can be reversibly recorded. This recording strategy may have applications in reconfigurable optical elements, data storage devices, and dynamic holographic displays.

  2. Cloud Storage and Bioinformatics in a private cloud deployment: Lessons for Data Intensive research

    OpenAIRE

    Chang, Victor; Walters, Robert John; Wills, Gary

    2013-01-01

    This paper describes service portability for a private cloud deployment, including a detailed case study about Cloud Storage and bioinformatics services developed as part of the Cloud Computing Adoption Framework (CCAF). Our Cloud Storage design and deployment is based on Storage Area Network (SAN) technologies, details of which include functionalities, technical implementation, architecture and user support. Experiments for data services (backup automation, data recovery and data migration) ...

  3. A Highly Scalable Data Service (HSDS) using Cloud-based Storage Technologies for Earth Science Data

    Science.gov (United States)

    Michaelis, A.; Readey, J.; Votava, P.; Henderson, J.; Willmore, F.

    2017-12-01

    Cloud-based infrastructure may offer several key benefits of scalability, built-in redundancy, security mechanisms and reduced total cost of ownership as compared with a traditional data center approach. However, most of the tools and legacy software systems developed for online data repositories within the federal government were not developed with a cloud-based infrastructure in mind and do not fully take advantage of commonly available cloud-based technologies. Moreover, services based on object storage are well established and provided through all the leading cloud service providers (Amazon Web Services, Microsoft Azure, Google Cloud, etc.), which can often provide unmatched "scale-out" capabilities and data availability to a large and growing consumer base at a price point unachievable with in-house solutions. We describe a system that utilizes object storage rather than traditional file-system-based storage to vend earth science data. The system described is not only cost effective, but shows a performance advantage for running many different analytics tasks in the cloud. To enable compatibility with existing tools and applications, we outline client libraries that are API compatible with existing libraries for HDF5 and NetCDF4. Performance of the system is demonstrated using cloud services running on Amazon Web Services.
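
    To make the object-storage pattern concrete, the sketch below stores and retrieves one array chunk through the S3 API with boto3; the bucket and key names are invented, and a service such as the one described would hide this put/get layer behind its HDF5/NetCDF4-compatible client libraries rather than expose S3 calls to users.

        import io
        import boto3
        import numpy as np

        s3 = boto3.client("s3")                      # credentials and region from the environment
        BUCKET = "example-earth-science-data"        # hypothetical bucket name

        # Write one chunk of a gridded variable as a single object.
        chunk = np.random.rand(100, 100).astype("float32")
        buf = io.BytesIO()
        np.save(buf, chunk)
        s3.put_object(Bucket=BUCKET, Key="tasmax/chunk_0003_0007.npy", Body=buf.getvalue())

        # Any number of analysis workers can now fetch chunks independently ("scale-out").
        obj = s3.get_object(Bucket=BUCKET, Key="tasmax/chunk_0003_0007.npy")
        restored = np.load(io.BytesIO(obj["Body"].read()))
        print(restored.shape, restored.dtype)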

  4. Heavy vehicle simulator operations: protocol for instrumentation, data collection and data storage - 2nd draft

    CSIR Research Space (South Africa)

    Jones, DJ

    2002-09-01

    Full Text Available The instrumentation used is discussed under the relevant sections. Keywords: Accelerated Pavement Testing (APT), Heavy Vehicle Simulator (HVS). Proposals for implementation: follow the protocol in all future HVS testing and update it as required. The protocol discusses staffing, site selection and establishment, and data collection, analysis and storage. Accelerated Pavement Testing (APT) can be described as a controlled application...

  5. An Open-Source Data Storage and Visualization Back End for Experimental Data

    DEFF Research Database (Denmark)

    Nielsen, Kenneth; Andersen, Thomas; Jensen, Robert

    2014-01-01

    In this article, a flexible free and open-source software system for data logging and presentation will be described. The system is highly modular and adaptable and can be used in any laboratory in which continuous and/or ad hoc measurements require centralized storage. A presentation component for the data back end has furthermore been written that enables live visualization of data on any device capable of displaying Web pages. The system consists of three parts: data-logging clients, a data server, and a data presentation Web site. The logging of data from independent clients leads to high... and to interfere with the experiment if needed. The data stored consist both of specific measurements and of continuously logged system parameters. The latter is crucial to a variety of automation and surveillance features, and three cases of such features are described: monitoring system health, getting status...

  6. ABOUT THE GENERAL CONCEPT OF THE UNIVERSAL STORAGE SYSTEM AND PRACTICE-ORIENTED DATA PROCESSING

    Directory of Open Access Journals (Sweden)

    L. V. Rudikova

    2017-01-01

    Full Text Available The evolution of approaches to accumulating data in a warehouse and subsequently applying Data Mining is a promising direction, in particular as the Belarusian segment of such IT developments is taking shape. The article describes a general concept for creating a system of storage and practice-oriented data analysis based on data warehousing technology. The main idea of the universal system design, at the storage layer and in working with data, is to use an extended data warehouse built on a universal platform for stored data, which grants access to storage and subsequent analysis of data of different structures and subject domains, with connection points (nodes) and extended functionality that allows the data structure to be chosen for storage and later intrasystem integration. The general architecture of the universal system for storage and analysis of practice-oriented data and its structural elements are described. The main components of the universal system for storing and processing practice-oriented data are: online data sources, the ETL process, the data warehouse, the analysis subsystem, and users. An important place in the system is occupied by analytical data processing, information search, document storage, and a software interface for accessing the functionality of the system from the outside. A universal system based on the described concept will make it possible to collect information from different subject domains, obtain analytical summaries, process data, and apply appropriate Data Mining methods and algorithms.

  7. Computer system for environmental sample analysis and data storage and analysis

    International Nuclear Information System (INIS)

    Brauer, F.P.; Fager, J.E.

    1976-01-01

    A minicomputer-based environmental sample analysis and data storage system has been developed. The system is used for analytical data acquisition, computation, storage of analytical results, and tabulation of selected or derived results for data analysis, interpretation and reporting. This paper discusses the structure, performance and applications of the system

  8. Storage and Retrieval of Encrypted Data Blocks with In-Line Message Authentication Codes

    NARCIS (Netherlands)

    Bosch, H.G.P.; McLellan Jr, Hubert Rae; Mullender, Sape J.

    2007-01-01

    Techniques are disclosed for in-line storage of message authentication codes with respective encrypted data blocks. In one aspect, a given data block is encrypted and a message authentication code is generated for the encrypted data block. A target address is determined for storage of the encrypted
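    A minimal sketch of the general idea, not the patented scheme itself: encrypt a data block, compute a MAC over the ciphertext, and store the MAC in-line with the block so both can be written and fetched as one unit. The keys below are throwaway values for illustration only.

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
    from cryptography.hazmat.primitives import hashes, hmac

    enc_key = os.urandom(32)   # AES-256 key
    mac_key = os.urandom(32)   # separate key for the MAC

    def seal_block(plaintext: bytes) -> bytes:
        nonce = os.urandom(16)
        encryptor = Cipher(algorithms.AES(enc_key), modes.CTR(nonce)).encryptor()
        ciphertext = encryptor.update(plaintext) + encryptor.finalize()
        tag = hmac.HMAC(mac_key, hashes.SHA256())
        tag.update(nonce + ciphertext)
        # Layout: nonce | ciphertext | MAC -- the MAC lives in-line with the block.
        return nonce + ciphertext + tag.finalize()

    def open_block(stored: bytes) -> bytes:
        nonce, ciphertext, mac = stored[:16], stored[16:-32], stored[-32:]
        check = hmac.HMAC(mac_key, hashes.SHA256())
        check.update(nonce + ciphertext)
        check.verify(mac)  # raises InvalidSignature if the block was tampered with
        decryptor = Cipher(algorithms.AES(enc_key), modes.CTR(nonce)).decryptor()
        return decryptor.update(ciphertext) + decryptor.finalize()

    block = seal_block(b"example data block")
    assert open_block(block) == b"example data block"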

  9. Changes in cod muscle proteins during frozen storage revealed by proteome analysis and multivariate data analysis

    DEFF Research Database (Denmark)

    Kjærsgård, Inger Vibeke Holst; Nørrelykke, M.R.; Jessen, Flemming

    2006-01-01

    Multivariate data analysis has been combined with proteomics to enhance the recovery of information from 2-DE of cod muscle proteins during different storage conditions. Proteins were extracted according to 11 different storage conditions and samples were resolved by 2-DE. Data generated by 2-DE...... was subjected to principal component analysis (PCA) and discriminant partial least squares regression (DPLSR). Applying PCA to 2-DE data revealed the samples to form groups according to frozen storage time, whereas differences due to different storage temperatures or chilled storage in modified atmosphere...... light chain 1, 2 and 3, triose-phosphate isomerase, glyceraldehyde-3-phosphate dehydrogenase, aldolase A and two α-actin fragments, and a nucleoside diphosphate kinase B fragment to change in concentration during frozen storage. Application of proteomics, multivariate data analysis and MS/MS to analyse...
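    A hedged illustration of the multivariate step described above: PCA applied to a spot-volume matrix from 2-DE gels, with rows as samples (storage conditions) and columns as protein spots. The data below are random stand-ins, not the cod muscle measurements.

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    n_samples, n_spots = 11, 300          # e.g. 11 storage conditions
    spot_volumes = rng.random((n_samples, n_spots))

    pca = PCA(n_components=2)
    scores = pca.fit_transform(spot_volumes)   # sample coordinates (grouping by storage time)
    loadings = pca.components_                 # spot contributions to each component
    print(scores.shape, pca.explained_variance_ratio_)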

  10. SERODS: a new medium for high-density optical data storage

    Science.gov (United States)

    Vo-Dinh, Tuan; Stokes, David L.

    1998-10-01

    A new optical data storage technology based on the surface-enhanced Raman scattering (SERS) effect has been developed for high-density optical memory and three-dimensional data storage. With the surface-enhanced Raman optical data storage (SERODS) technology, the molecular interactions between the optical layer molecules and the nanostructured metal substrate are modified by the writing laser, changing their SERS properties to encode information as bits. Since the SERS properties are extremely sensitive to molecular nano-environments, very small 'spectrochemical holes' approaching the diffraction limit can be produced in the writing process. The SERODS device uses a reading laser to induce the SERS emission of molecules on the disk and a photometric detector tuned to the frequency of the Raman spectrum to retrieve the stored information. The results illustrate that SERODS is capable of three-dimensional data storage and has the potential to achieve higher storage density than currently available optical data storage systems.

  11. Advanced radiographic scanning, enhancement and electronic data storage

    International Nuclear Information System (INIS)

    Savoie, C.; Rivest, D.

    2003-01-01

    It is a well-known fact that radiographs deteriorate with time, and substantial cost is attributed to cataloguing and storage. To eliminate deterioration issues and save time retrieving radiographs, laser scanning techniques were developed in conjunction with viewing and enhancement software, allowing radiographs to be scanned and stored electronically for future reference. Today's radiographic laser scanners are capable of capturing images with an optical density of up to 4.1 at 256 grey levels and resolutions up to 4096 pixels per line. An industrial software interface was developed for the nondestructive testing industry so that certain parameters, such as scan resolution, number of scans, file format and save location, can be adjusted as needed. Once the radiographs have been scanned, the TIFF images are stored or retrieved into Radiance software (developed by Rivest Technologies Inc.), which helps to properly interpret the radiographs. Radiance was developed to allow the user to quickly verify a radiograph's correctness or enhance its defects for comparison and future evaluation. Radiance also allows the user to zoom, measure and annotate areas of interest. The physical costs associated with cataloguing, storing and retrieving radiographs can be eliminated: radiographs can now be retrieved and viewed from CD media or a dedicated hard drive at will. For continuous searches and/or field access, dedicated hard drives controlled by a server would be the media of choice. All scanned radiographs are archived to CD media (CD-R). Laser scanning with a proper acquisition interface and easy-to-use viewing software permits a qualified user to identify areas of interest and share this information with his or her colleagues via e-mail or web data access. (author)

  12. Integrated Storage and Management of Vector and Raster Data Based on Oracle Database

    Directory of Open Access Journals (Sweden)

    WU Zheng

    2017-05-01

    Full Text Available At present, there are many problems in the storage and management of multi-source heterogeneous spatial data, such as difficult data transfer, the lack of unified storage and low efficiency. By combining relational database and spatial data engine technology, an approach for the integrated storage and management of vector and raster data on the basis of Oracle is proposed in this paper. The approach first establishes an integrated storage model for vector and raster data and optimizes the retrieval mechanism, then designs a framework for seamless data transfer, and finally realizes the unified storage and efficient management of multi-source heterogeneous data. A comparison of experimental results with ArcSDE, the internationally leading software of this kind, shows that the proposed approach has higher data transfer performance and better query and retrieval efficiency.

  13. Phenothiazine based polymers for energy and data storage application

    Energy Technology Data Exchange (ETDEWEB)

    Golriz, Seyed Ahmad Ali

    2013-03-15

    charge and discharge cycles. In addition to applications in batteries the bistability of phenothiazine polymers for high density data storage purposes was studied. Using the conductive mode of scanning force microscopy (SFM), nano-scaled patterning of spin-coated polymer films induced by electrochemical oxidation was successfully demonstrated. The scanning probe experiments revealed differences in the conductive states of written patterns before and after oxidation with no significant change in topography. Remarkably, the patterns were stable with respect to the storage time as well as mechanical wear. Finally, new synthetic approaches towards mechanically nanowear stable and redox active surfaces were established. Via grafting from methods based on Atom Transfer Radical Polymerization (ATRP), redox active polymer brushes with phenothiazine moieties were prepared and characterized by SFM and X-ray techniques. In particular, a synthetic route based on polymer brush structures with activated ester functionality appeared as a very promising and versatile fabrication method. The activated ester brushes were used for attachment of phenothiazine moieties in a successive step. By using crosslinkable diamine moieties, polymer brushes with redox functionalities and with increased surface wear resistance were successfully synthesized. In summary, this work offers deep insights into the electronic properties of polymers with phenothiazine redox active moieties. Furthermore, the applicability of phenothiazine polymers for electronic devices was explored and improved from synthetic polymer chemistry point of view.

  14. Integrated data acquisition, storage, retrieval and processing using the COMPASS DataBase (CDB)

    Energy Technology Data Exchange (ETDEWEB)

    Urban, J., E-mail: urban@ipp.cas.cz [Institute of Plasma Physics AS CR, v.v.i., Za Slovankou 3, 182 00 Praha 8 (Czech Republic); Pipek, J.; Hron, M. [Institute of Plasma Physics AS CR, v.v.i., Za Slovankou 3, 182 00 Praha 8 (Czech Republic); Janky, F.; Papřok, R.; Peterka, M. [Institute of Plasma Physics AS CR, v.v.i., Za Slovankou 3, 182 00 Praha 8 (Czech Republic); Department of Surface and Plasma Science, Faculty of Mathematics and Physics, Charles University in Prague, V Holešovičkách 2, 180 00 Praha 8 (Czech Republic); Duarte, A.S. [Instituto de Plasmas e Fusão Nuclear, Instituto Superior Técnico, Universidade Técnica de Lisboa, 1049-001 Lisboa (Portugal)

    2014-05-15

    Highlights: • CDB is used as a new data storage solution for the COMPASS tokamak. • The software is light weight, open, fast and easily extensible and scalable. • CDB seamlessly integrates with any data acquisition system. • Rich metadata are stored for physics signals. • Data can be processed automatically, based on dependence rules. - Abstract: We present a complex data handling system for the COMPASS tokamak, operated by IPP ASCR Prague, Czech Republic [1]. The system, called CDB (COMPASS DataBase), integrates different data sources as an assortment of data acquisition hardware and software from different vendors is used. Based on widely available open source technologies wherever possible, CDB is vendor and platform independent and it can be easily scaled and distributed. The data is directly stored and retrieved using a standard NAS (Network Attached Storage), hence independent of the particular technology; the description of the data (the metadata) is recorded in a relational database. Database structure is general and enables the inclusion of multi-dimensional data signals in multiple revisions (no data is overwritten). This design is inherently distributed as the work is off-loaded to the clients. Both NAS and database can be implemented and optimized for fast local access as well as secure remote access. CDB is implemented in Python language; bindings for Java, C/C++, IDL and Matlab are provided. Independent data acquisitions systems as well as nodes managed by FireSignal [2] are all integrated using CDB. An automated data post-processing server is a part of CDB. Based on dependency rules, the server executes, in parallel if possible, prescribed post-processing tasks.

  15. Evaluation of the Huawei UDS cloud storage system for CERN specific data

    International Nuclear Information System (INIS)

    Resines, M Zotes; Hughes, J; Wang, L; Heikkila, S S; Duellmann, D; Adde, G; Toebbicke, R

    2014-01-01

    Cloud storage is an emerging architecture aiming to provide increased scalability and access performance compared to more traditional solutions. CERN is evaluating this promise using Huawei UDS and OpenStack SWIFT storage deployments, focusing on the needs of high-energy physics. Both deployed setups implement S3, one of the protocols that are emerging as a standard in the cloud storage market. A set of client machines is used to generate I/O load patterns to evaluate the storage system performance. The presented read and write test results indicate scalability from both the metadata and data perspectives. Further, the Huawei UDS cloud storage is shown to be able to recover from a major failure of losing 16 disks. Both cloud storages are finally demonstrated to function as back-end storage systems to a filesystem, which is used to deliver high energy physics software.

  16. Evaluation of the Huawei UDS cloud storage system for CERN specific data

    Science.gov (United States)

    Zotes Resines, M.; Heikkila, S. S.; Duellmann, D.; Adde, G.; Toebbicke, R.; Hughes, J.; Wang, L.

    2014-06-01

    Cloud storage is an emerging architecture aiming to provide increased scalability and access performance compared to more traditional solutions. CERN is evaluating this promise using Huawei UDS and OpenStack SWIFT storage deployments, focusing on the needs of high-energy physics. Both deployed setups implement S3, one of the protocols that are emerging as a standard in the cloud storage market. A set of client machines is used to generate I/O load patterns to evaluate the storage system performance. The presented read and write test results indicate scalability from both the metadata and data perspectives. Further, the Huawei UDS cloud storage is shown to be able to recover from a major failure of losing 16 disks. Both cloud storages are finally demonstrated to function as back-end storage systems to a filesystem, which is used to deliver high energy physics software.

  17. Long-time data storage: relevant time scales

    NARCIS (Netherlands)

    Elwenspoek, Michael Curt

    2011-01-01

    Dynamic processes relevant for long-time storage of information about human kind are discussed, ranging from biological and geological processes to the lifecycle of stars and the expansion of the universe. Major results are that life will end ultimately and that the remaining time that the earth is habitable for complex life is about half a billion years.

  18. Rewritable 3D bit optical data storage in a PMMA-based photorefractive polymer

    Energy Technology Data Exchange (ETDEWEB)

    Day, D.; Gu, M. [Swinburne Univ. of Tech., Hawthorn, Vic. (Australia). Centre for Micro-Photonics; Smallridge, A. [Victoria Univ., Melbourne (Australia). School of Life Sciences and Technology

    2001-07-04

    A cheap, compact, and rewritable high-density optical data storage system for CD and DVD applications is presented by the authors. Continuous-wave illumination under two-photon excitation in a new poly(methylmethacrylate) (PMMA) based photorefractive polymer allows 3D bit storage of sub-Tbyte data. (orig.)

  19. 10 CFR 95.25 - Protection of National Security Information and Restricted Data in storage.

    Science.gov (United States)

    2010-01-01

    ... 10 Energy 2 2010-01-01 2010-01-01 false Protection of National Security Information and Restricted Data in storage. 95.25 Section 95.25 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) FACILITY SECURITY... Protection of National Security Information and Restricted Data in storage. (a) Secret matter, while...

  20. AN APPROACH TO REDUCE THE STORAGE REQUIREMENT FOR BIOMETRIC DATA IN AADHAR PROJECT

    Directory of Open Access Journals (Sweden)

    T. Sivakumar

    2013-02-01

    Full Text Available AADHAR is an Indian Government project to provide a unique identification to each citizen of India. The objective of the project is to collect personal details and biometric traits from each individual. Biometric traits such as iris, face and fingerprint are being collected for authentication, and all the information will be stored in a centralized data repository. Considering the storage requirement for the biometric data of the entire population of India, approximately 20,218 TB of storage space will be required. Since ten fingerprints are stored per person, fingerprint details will take most of the space. In this paper, the storage requirement for the biometric data in the AADHAR project is analyzed and a method is proposed to reduce storage by cropping the original biometric image before storing it. This method can reduce the storage space of the biometric data drastically. All the measurements given in this paper are approximate only.
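    A rough sketch of the proposed storage reduction, assuming the usable region of the print has already been located: crop that region from the captured biometric image before it is archived. The file names and crop box below are hypothetical; a real system would determine the region automatically.

    from PIL import Image

    img = Image.open("fingerprint_raw.png")          # e.g. a 500x500 scanner capture
    left, upper, right, lower = 100, 100, 400, 400   # bounding box of the usable print
    cropped = img.crop((left, upper, right, lower))
    cropped.save("fingerprint_cropped.png")

    print(img.size, "->", cropped.size)              # smaller image, smaller footprint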

  1. Archiving and retrieval of experimental data using SAN based centralized storage system for SST-1

    Energy Technology Data Exchange (ETDEWEB)

    Bhandarkar, Manisha, E-mail: manisha@ipr.res.in; Masand, Harish; Kumar, Aveg; Patel, Kirit; Dhongde, Jasraj; Gulati, Hitesh; Mahajan, Kirti; Chudasama, Hitesh; Pradhan, Subrata

    2016-11-15

    Highlights: • A SAN (Storage Area Network) based centralized data storage system for SST-1 is envisaged to address the need for central availability of the SST-1 storage system, so that authenticated users can archive and retrieve experimental data 24 × 7. • The SAN based data storage system has been designed/configured with a 3-tiered architecture and the GFS cluster file system with multipath support. • The adopted SAN based data storage for SST-1 is modular and robust, and allows future expandability. • Important considerations include handling the varied data writing speeds from different subsystems to central storage, simultaneous read access to bulk experimental as well as essential diagnostic data, the life expectancy of the data, how often data will be retrieved and how fast it will be needed, and how much historical data should be maintained in storage. - Abstract: A SAN (Storage Area Network, a high-speed, block level storage device) based centralized data storage system for SST-1 (Steady State superconducting Tokamak) is envisaged to address the need for central availability of SST-1 operation and experimental data for archival as well as retrieval [2]. Considering the initial data volume requirement, a ∼10 TB (Terabytes) capacity SAN based data storage system has been configured/installed with an optical fiber backbone, with compatibility considerations for the existing Ethernet network of SST-1. The SAN based data storage system has been designed/configured with a 3-tiered architecture and the GFS (Global File System) cluster file system with multipath support. Tier-1, of ∼3 TB (frequent access and low data storage capacity), comprises Fiber Channel (FC) based hard disks for optimum throughput. Tier-2, of ∼6 TB (less frequent access and high data storage capacity), comprises SATA based hard disks. Tier-3 will be planned later to store offline historical data. In the SAN configuration two tightly coupled storage servers (with cluster configuration) are

  2. Archiving and retrieval of experimental data using SAN based centralized storage system for SST-1

    International Nuclear Information System (INIS)

    Bhandarkar, Manisha; Masand, Harish; Kumar, Aveg; Patel, Kirit; Dhongde, Jasraj; Gulati, Hitesh; Mahajan, Kirti; Chudasama, Hitesh; Pradhan, Subrata

    2016-01-01

    Highlights: • A SAN (Storage Area Network) based centralized data storage system for SST-1 is envisaged to address the need for central availability of the SST-1 storage system, so that authenticated users can archive and retrieve experimental data 24 × 7. • The SAN based data storage system has been designed/configured with a 3-tiered architecture and the GFS cluster file system with multipath support. • The adopted SAN based data storage for SST-1 is modular and robust, and allows future expandability. • Important considerations include handling the varied data writing speeds from different subsystems to central storage, simultaneous read access to bulk experimental as well as essential diagnostic data, the life expectancy of the data, how often data will be retrieved and how fast it will be needed, and how much historical data should be maintained in storage. - Abstract: A SAN (Storage Area Network, a high-speed, block level storage device) based centralized data storage system for SST-1 (Steady State superconducting Tokamak) is envisaged to address the need for central availability of SST-1 operation and experimental data for archival as well as retrieval [2]. Considering the initial data volume requirement, a ∼10 TB (Terabytes) capacity SAN based data storage system has been configured/installed with an optical fiber backbone, with compatibility considerations for the existing Ethernet network of SST-1. The SAN based data storage system has been designed/configured with a 3-tiered architecture and the GFS (Global File System) cluster file system with multipath support. Tier-1, of ∼3 TB (frequent access and low data storage capacity), comprises Fiber Channel (FC) based hard disks for optimum throughput. Tier-2, of ∼6 TB (less frequent access and high data storage capacity), comprises SATA based hard disks. Tier-3 will be planned later to store offline historical data. In the SAN configuration two tightly coupled storage servers (with cluster configuration) are

  3. Adaptive data migration scheme with facilitator database and multi-tier distributed storage in LHD

    International Nuclear Information System (INIS)

    Nakanishi, Hideya; Masaki, Ohsuna; Mamoru, Kojima; Setsuo, Imazu; Miki, Nonomura; Kenji, Watanabe; Masayoshi, Moriya; Yoshio, Nagayama; Kazuo, Kawahata

    2008-01-01

    The recent 'data explosion' induces a demand for highly flexible storage extension and data migration. The data amount of LHD plasma diagnostics has grown to 4.6 times what it was three years before. Frequent migration or replication among many distributed storage volumes becomes mandatory and thus increases the human operational costs. To reduce them computationally, a new adaptive migration scheme has been developed on LHD's multi-tier distributed storage. So-called HSM (Hierarchical Storage Management) software usually adopts a low-level cache mechanism or simple watermarks for triggering data stage-in and stage-out between two storage devices. The new scheme, however, can deal with a number of distributed storage systems through a facilitator database that manages all data locations together with their access histories and retrieval priorities. Not only inter-tier migration but also intra-tier replication and moving are manageable, which is a big help in extending or replacing storage equipment. The access history of each data object is also utilized to optimize the volume size of the fast but costly RAID, in addition to the normal cache effect for frequently retrieved data. The new scheme has been verified to be effective, so that LHD's multi-tier distributed storage and other next-generation experiments can obtain this flexible expandability
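    A toy sketch of the migration idea, not the LHD implementation: a facilitator table records where each data object lives and when it was last retrieved; objects idle for too long are demoted to a slower tier, while frequently retrieved ones are promoted back to fast RAID. The object names, tiers, and thresholds are made up.

    import time

    facilitator = {
        # object id: {"tier": ..., "last_access": unix time, "hits": ...}
        "shot-123/diag-A": {"tier": "raid", "last_access": time.time() - 40 * 86400, "hits": 2},
        "shot-456/diag-B": {"tier": "tape", "last_access": time.time() - 3600, "hits": 57},
    }

    DEMOTE_AFTER = 30 * 86400   # 30 days without access -> move off fast RAID
    PROMOTE_HITS = 10           # frequently retrieved -> stage back to RAID

    def plan_migrations(now=None):
        now = now or time.time()
        actions = []
        for oid, meta in facilitator.items():
            idle = now - meta["last_access"]
            if meta["tier"] == "raid" and idle > DEMOTE_AFTER:
                actions.append((oid, "raid", "tape"))
            elif meta["tier"] == "tape" and meta["hits"] >= PROMOTE_HITS:
                actions.append((oid, "tape", "raid"))
        return actions

    for oid, src, dst in plan_migrations():
        print(f"migrate {oid}: {src} -> {dst}")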

  4. INFORMATION SECURITY AND SECURE SEARCH OVER ENCRYPTED DATA IN CLOUD STORAGE SERVICES

    OpenAIRE

    Mr. A Mustagees Shaikh *; Prof. Nitin B. Raut

    2016-01-01

    Cloud computing is widely used as the next-generation architecture of IT enterprises, providing convenient remote access to data storage and application services. Cloud storage can potentially bring great economic savings for data owners and users, but data owners have wide concerns that their private data may be exposed to or mishandled by cloud providers. Hence, end-to-end encryption techniques and a fuzzy fingerprint technique have been used as solutions for secure cloud data st...

  5. Toward Transparent Data Management in Multi-layer Storage Hierarchy for HPC Systems

    Energy Technology Data Exchange (ETDEWEB)

    Wadhwa, Bharti [Virginia Polytechnic Inst. and State Univ. (Virginia Tech), Blacksburg, VA (United States). Dept. of Computer Science; Byna, Suren [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Butt, Ali R. [Virginia Polytechnic Inst. and State Univ. (Virginia Tech), Blacksburg, VA (United States). Dept. of Computer Science

    2018-04-17

    Upcoming exascale high performance computing (HPC) systems are expected to comprise a multi-tier storage hierarchy, and thus will necessitate innovative storage and I/O mechanisms. Traditional disk and block-based interfaces and file systems face severe challenges in utilizing the capabilities of storage hierarchies due to the lack of hierarchy support and semantic interfaces. Object-based and semantically rich data abstractions for scientific data management on large-scale systems offer a sustainable solution to these challenges. Such data abstractions can also simplify users' involvement in data movement. Here, we take the first steps toward realizing such an object abstraction and explore storage mechanisms for these objects to enhance I/O performance, especially for scientific applications. We explore how an object-based interface can facilitate next-generation scalable computing systems by presenting the mapping of data I/O from two real-world HPC scientific use cases: a plasma physics simulation code (VPIC) and a cosmology simulation code (HACC). Our storage model stores data objects in different physical organizations to support data movement across layers of the memory/storage hierarchy. Our implementation scales well to 16K parallel processes and, compared to the state of the art such as MPI-IO and HDF5, our object-based data abstractions and data placement strategy in a multi-level storage hierarchy achieve up to 7X I/O performance improvement for scientific data.
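    As a hedged sketch of the object-based abstraction discussed above (not the authors' storage model): a multidimensional dataset is split into chunks, each stored as a named object, so chunks can in principle be placed on and fetched from different layers of a storage hierarchy. The in-memory dict stands in for an object store, and the object names are hypothetical.

    import numpy as np

    object_store = {}   # object name -> bytes

    def put_array(name: str, data: np.ndarray, chunk_rows: int):
        # Split the array row-wise into chunk objects plus a small metadata object.
        for i in range(0, data.shape[0], chunk_rows):
            chunk = np.ascontiguousarray(data[i:i + chunk_rows])
            object_store[f"{name}/chunk-{i // chunk_rows}"] = chunk.tobytes()
        object_store[f"{name}/meta"] = repr((data.shape, str(data.dtype), chunk_rows)).encode()

    def get_chunk(name: str, index: int, dtype, ncols: int) -> np.ndarray:
        raw = object_store[f"{name}/chunk-{index}"]
        return np.frombuffer(raw, dtype=dtype).reshape(-1, ncols)

    field = np.arange(20.0).reshape(10, 2)        # toy "simulation output"
    put_array("vpic/fields/E", field, chunk_rows=4)
    print(get_chunk("vpic/fields/E", 1, np.float64, 2))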

  6. Systematic Parameterization, Storage, and Representation of Volumetric DICOM Data

    OpenAIRE

    Fischer, Felix; Selver, M. Alper; Gezer, Sinem; Dicle, Oğuz; Hillen, Walter

    2015-01-01

    Tomographic medical imaging systems produce hundreds to thousands of slices, enabling three-dimensional (3D) analysis. Radiologists process these images through various tools and techniques in order to generate 3D renderings for various applications, such as surgical planning, medical education, and volumetric measurements. To save and store these visualizations, current systems use snapshots or video exporting, which prevents further optimizations and requires the storage of significant addi...

  7. The National Institute on Aging Genetics of Alzheimer’s Disease Data Storage Site (NIAGADS)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The National Institute on Aging Genetics of Alzheimer's Disease Data Storage Site (NIAGADS) is a national genetics data repository facilitating access to genotypic...

  8. Holographic memory for high-density data storage and high-speed pattern recognition

    Science.gov (United States)

    Gu, Claire

    2002-09-01

    As computers and the internet become faster and faster, more and more information is transmitted, received, and stored everyday. The demand for high density and fast access time data storage is pushing scientists and engineers to explore all possible approaches including magnetic, mechanical, optical, etc. Optical data storage has already demonstrated its potential in the competition against other storage technologies. CD and DVD are showing their advantages in the computer and entertainment market. What motivated the use of optical waves to store and access information is the same as the motivation for optical communication. Light or an optical wave has an enormous capacity (or bandwidth) to carry information because of its short wavelength and parallel nature. In optical storage, there are two types of mechanism, namely localized and holographic memories. What gives the holographic data storage an advantage over localized bit storage is the natural ability to read the stored information in parallel, therefore, meeting the demand for fast access. Another unique feature that makes the holographic data storage attractive is that it is capable of performing associative recall at an incomparable speed. Therefore, volume holographic memory is particularly suitable for high-density data storage and high-speed pattern recognition. In this paper, we review previous works on volume holographic memories and discuss the challenges for this technology to become a reality.

  9. Secure data storage by three-dimensional absorbers in highly scattering volume medium

    International Nuclear Information System (INIS)

    Matoba, Osamu; Matsuki, Shinichiro; Nitta, Kouichi

    2008-01-01

    A novel data storage in a volume medium with a high scattering coefficient is proposed for data security applications. Three-dimensional absorbers are used as data. These absorbers cannot be measured by an interferometer when the scattering in the volume medium is strong enough. We present a method to reconstruct the three-dimensional absorbers and give numerical results to show the effectiveness of the proposed data storage.

  10. Technology Assessment of High Capacity Data Storage Systems: Can We Avoid a Data Survivability Crisis?

    Science.gov (United States)

    Halem, Milton

    1999-01-01

    In a recent address at the California Science Center in Los Angeles, Vice President Al Gore articulated a Digital Earth Vision. That vision spoke to developing a multi-resolution, three-dimensional visual representation of the planet into which we can roam and zoom into vast quantities of embedded geo-referenced data. The vision was not limited to moving through space, but also allowing travel over a time-line, which can be set for days, years, centuries, or even geological epochs. A working group of Federal Agencies, developing a coordinated program to implement the Vice President's vision, developed the definition of the Digital Earth as a visual representation of our planet that enables a person to explore and interact with the vast amounts of natural and cultural geo-referenced information gathered about the Earth. One of the challenges identified by the agencies was whether the technology existed that would be available to permanently store and deliver all the digital data that enterprises might want to save for decades and centuries. Satellite digital data is growing by Moore's Law as is the growth of computer generated data. Similarly, the density of digital storage media in our information-intensive society is also increasing by a factor of four every three years. The technological bottleneck is that the bandwidth for transferring data is only growing at a factor of four every nine years. This implies that the migration of data to viable long-term storage is growing more slowly. The implication is that older data stored on increasingly obsolete media are at considerable risk if they cannot be continuously migrated to media with longer life times. Another problem occurs when the software and hardware systems for which the media were designed are no longer serviced by their manufacturers. Many instances exist where support for these systems are phased out after mergers or even in going out of business. In addition, survivability of older media can suffer from

  11. Hybrid Swarm Intelligence Optimization Approach for Optimal Data Storage Position Identification in Wireless Sensor Networks

    Science.gov (United States)

    Mohanasundaram, Ranganathan; Periasamy, Pappampalayam Sanmugam

    2015-01-01

    The current high-profile debate with regard to data storage and its growth has made it a strategic task in the world of networking. It mainly depends on the sensor nodes called producers, base stations, and also the consumers (users and sensor nodes) that retrieve and use the data. The main concern dealt with here is to find an optimal data storage position in wireless sensor networks. Earlier works did not utilize swarm intelligence based optimization approaches to find the optimal data storage positions. To achieve this goal, an efficient swarm intelligence approach is used to choose suitable positions for a storage node. Thus, a hybrid particle swarm optimization algorithm has been used to find suitable positions for storage nodes while the total energy cost of data transmission is minimized. Clustering-based distributed data storage is utilized to solve the clustering problem using the fuzzy C-means algorithm. This research work also considers the data rates and locations of multiple producers and consumers to find optimal data storage positions. The algorithm is implemented in a network simulator and the experimental results show that the proposed clustering and swarm intelligence based ODS strategy is more effective than earlier approaches. PMID:25734182
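    A simplified sketch of the optimization idea (not the paper's hybrid algorithm): plain particle swarm optimization searching for a storage-node position that minimizes the energy proxy sum of data rate times distance to the producers and consumers. The node positions, data rates, and PSO constants are arbitrary illustrations.

    import numpy as np

    rng = np.random.default_rng(1)
    nodes = rng.uniform(0, 100, size=(12, 2))    # producer/consumer coordinates
    rates = rng.uniform(1, 5, size=12)           # data rates to/from each node

    def cost(pos):
        # Energy proxy: rate-weighted sum of distances to the storage position.
        return float(np.sum(rates * np.linalg.norm(nodes - pos, axis=1)))

    n_particles, iters, w, c1, c2 = 20, 100, 0.7, 1.5, 1.5
    x = rng.uniform(0, 100, size=(n_particles, 2))        # particle positions
    v = np.zeros_like(x)                                  # velocities
    pbest = x.copy()
    pbest_val = np.array([cost(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()

    for _ in range(iters):
        r1, r2 = rng.random((n_particles, 1)), rng.random((n_particles, 1))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.array([cost(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()

    print("storage node position:", gbest, "cost:", cost(gbest))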

  12. Identifying Non-Volatile Data Storage Areas: Unique Notebook Identification Information as Digital Evidence

    Directory of Open Access Journals (Sweden)

    Nikica Budimir

    2007-03-01

    Full Text Available The research reported in this paper introduces new techniques to aid in the identification of recovered notebook computers so they may be returned to the rightful owner. We identify non-volatile data storage areas as a means of facilitating the safe storing of computer identification information. A forensic proof of concept tool has been designed to test the feasibility of several storage locations identified within this work to hold the data needed to uniquely identify a computer. The tool was used to perform the creation and extraction of created information in order to allow the analysis of the non-volatile storage locations as valid storage areas capable of holding and preserving the data created within them.  While the format of the information used to identify the machine itself is important, this research only discusses the insertion, storage and ability to retain such information.

  13. 21 CFR 58.190 - Storage and retrieval of records and data.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 1 2010-04-01 2010-04-01 false Storage and retrieval of records and data. 58.190 Section 58.190 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES..., protocols, specimens, and interim and final reports. Conditions of storage shall minimize deterioration of...

  14. Rewritable azobenzene polyester for polarization holographic data storage

    DEFF Research Database (Denmark)

    Kerekes, A; Sajti, Sz.; Loerincz, Emoeke

    2000-01-01

    Optical storage properties of thin azobenzene side-chain polyester films were examined by polarization holographic measurements. The new amorphous polyester film is a candidate material for a rewritable holographic memory system. The temporal formation of anisotropic and topographic...... gratings was studied for films with and without a hard protective layer. We showed that the dominant contribution to the diffraction efficiency comes from the anisotropy for exposures below 1 s, even at high incident intensity. The usage of the same wavelength for writing, reading...

  15. Empirical Analysis of Using Erasure Coding in Outsourcing Data Storage With Provable Security

    Science.gov (United States)

    2016-06-01

    As computing and communication technologies become powerful and advanced, people are exchanging a huge amount of data, and they are demanding more storage... [Naval Postgraduate School thesis, Monterey, California: Empirical Analysis of Using Erasure Coding in Outsourcing Data Storage with Provable Security, 2016.]

  16. dCache: Big Data storage for HEP communities and beyond

    International Nuclear Information System (INIS)

    Millar, A P; Bernardt, C; Fuhrmann, P; Mkrtchyan, T; Petersen, A; Schwank, K; Behrmann, G; Litvintsev, D; Rossi, A

    2014-01-01

    With over ten years in production use, the dCache data storage system has evolved to match the ever-changing landscape of storage technologies, with new solutions to both existing problems and new challenges. In this paper, we present three areas of innovation in dCache: providing efficient access to data with NFS v4.1 pNFS, adoption of CDMI and WebDAV as an alternative to SRM for managing data, and integration with alternative authentication mechanisms.

  17. Sharing Privacy Protected and Statistically Sound Clinical Research Data Using Outsourced Data Storage

    Directory of Open Access Journals (Sweden)

    Geontae Noh

    2014-01-01

    Full Text Available It is critical to scientific progress to share clinical research data stored in outsourced generally available cloud computing services. Researchers are able to obtain valuable information that they would not otherwise be able to access; however, privacy concerns arise when sharing clinical data in these outsourced publicly available data storage services. HIPAA requires researchers to deidentify private information when disclosing clinical data for research purposes and describes two available methods for doing so. Unfortunately, both techniques degrade statistical accuracy. Therefore, the need to protect privacy presents a significant problem for data sharing between hospitals and researchers. In this paper, we propose a controlled secure aggregation protocol to secure both privacy and accuracy when researchers outsource their clinical research data for sharing. Since clinical data must remain private beyond a patient’s lifetime, we take advantage of lattice-based homomorphic encryption to guarantee long-term security against quantum computing attacks. Using lattice-based homomorphic encryption, we design an aggregation protocol that aggregates outsourced ciphertexts under distinct public keys. It enables researchers to get aggregated results from outsourced ciphertexts of distinct researchers. To the best of our knowledge, our protocol is the first aggregation protocol which can aggregate ciphertexts which are encrypted with distinct public keys.

  18. The design of data storage system based on Lustre for EAST

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Feng, E-mail: wangfeng@ipp.ac.cn [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, Anhui (China); Chen, Ying; Li, Shi [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, Anhui (China); Yang, Fei [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, Anhui (China); Department of Computer Science, Anhui Medical University, Hefei, Anhui (China); Xiao, Bingjia [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, Anhui (China); School of Nuclear Science and Technology, University of Science and Technology of China, Hefei, Anhui (China)

    2016-11-15

    Highlights: • A high performance data storage system based on Lustre and an InfiniBand network has been designed and implemented on the EAST tokamak. • The acquired data are stored into the MDSplus database continuously on the Lustre storage system during a discharge. • The high performance computing clusters are interconnected with the data acquisition and storage system by Lustre and the InfiniBand network. - Abstract: Quasi-steady-state operation is one of the main purposes of the EAST tokamak, and discharge pulses of more than 400 s have been achieved in past campaigns. The acquired data amount increases continuously with the discharge length. At the same time, to meet the requirements of upgraded and improved diagnostic systems, more and more data acquisition channels have come into service, and some new diagnostic systems require data acquisition at sampling rates above 10 MSPS. In the last campaign in 2014, the data stream was about 2000 MB/s and the total data amount was more than 100 TB. Storing this huge amount of data continuously has become a major problem, and a new data storage system based on Lustre has been designed to solve it. All the storage nodes and servers are connected to an InfiniBand FDR 56 Gbps network. The maximum parallel throughput of the total storage system is about 10 GB/s. It is easy to expand the storage system by adding I/O nodes when more capacity and performance are required in the future. The new data storage system will be applied in the next campaign of EAST. The system details are given in the paper.

  19. The design of data storage system based on Lustre for EAST

    International Nuclear Information System (INIS)

    Wang, Feng; Chen, Ying; Li, Shi; Yang, Fei; Xiao, Bingjia

    2016-01-01

    Highlights: • A high performance data storage system based on Lustre and an InfiniBand network has been designed and implemented on the EAST tokamak. • The acquired data are stored into the MDSplus database continuously on the Lustre storage system during a discharge. • The high performance computing clusters are interconnected with the data acquisition and storage system by Lustre and the InfiniBand network. - Abstract: Quasi-steady-state operation is one of the main purposes of the EAST tokamak, and discharge pulses of more than 400 s have been achieved in past campaigns. The acquired data amount increases continuously with the discharge length. At the same time, to meet the requirements of upgraded and improved diagnostic systems, more and more data acquisition channels have come into service, and some new diagnostic systems require data acquisition at sampling rates above 10 MSPS. In the last campaign in 2014, the data stream was about 2000 MB/s and the total data amount was more than 100 TB. Storing this huge amount of data continuously has become a major problem, and a new data storage system based on Lustre has been designed to solve it. All the storage nodes and servers are connected to an InfiniBand FDR 56 Gbps network. The maximum parallel throughput of the total storage system is about 10 GB/s. It is easy to expand the storage system by adding I/O nodes when more capacity and performance are required in the future. The new data storage system will be applied in the next campaign of EAST. The system details are given in the paper.

  20. A Hybrid Multilevel Storage Architecture for Electric Power Dispatching Big Data

    Science.gov (United States)

    Yan, Hu; Huang, Bibin; Hong, Bowen; Hu, Jing

    2017-10-01

    Electric power dispatching is the center of the whole power system. Over a long period of operation, the power dispatching center has accumulated a large amount of data. These data are currently stored in different power-sector professional systems and form many isolated islands of information. Integrating these data and performing comprehensive analysis can greatly improve the intelligence level of power dispatching. In this paper, a hybrid multilevel storage architecture for electric power dispatching big data is proposed. It introduces a relational database and a NoSQL database to establish a panoramic power grid data center, effectively meeting the storage needs of power dispatching big data, including unified storage of structured and unstructured data, fast access to massive real-time data, data version management, and so on. It can be a solid foundation for follow-up in-depth analysis of power dispatching big data.

  1. Long-Time Data Storage: Relevant Time Scales

    Directory of Open Access Journals (Sweden)

    Miko C. Elwenspoek

    2011-02-01

    Full Text Available Dynamic processes relevant for long-time storage of information about human kind are discussed, ranging from biological and geological processes to the lifecycle of stars and the expansion of the universe. Major results are that life will end ultimately and the remaining time that the earth is habitable for complex life is about half a billion years. A system retrieved within the next million years will be read by beings very closely related to Homo sapiens. During this time the surface of the earth will change making it risky to place a small number of large memory systems on earth; the option to place it on the moon might be more favorable. For much longer timescales both options do not seem feasible because of geological processes on the earth and the flux of small meteorites to the moon.

  2. Could Blobs Fuel Storage-Based Convergence between HPC and Big Data?

    Energy Technology Data Exchange (ETDEWEB)

    Matri, Pierre; Alforov, Yevhen; Brandon, Alvaro; Kuhn, Michael; Carns, Philip; Ludwig, Thomas

    2017-09-05

    The increasingly large data sets processed on HPC platforms raise major challenges for the underlying storage layer. A promising alternative to POSIX-IO-compliant file systems are simpler blobs (binary large objects), or object storage systems. Such systems offer lower overhead and better performance at the cost of largely unused features such as file hierarchies or permissions. Similarly, blobs are increasingly considered for replacing distributed file systems for big data analytics or as a base for storage abstractions such as key-value stores or time-series databases. This growing interest in object storage on HPC and big data platforms raises the question: are blobs the right level of abstraction to enable storage-based convergence between HPC and Big Data? In this paper we study the impact of blob-based storage for real-world applications in HPC and cloud environments. The results show that blob-based storage convergence is possible, leading to a significant performance improvement on both platforms.

  3. A secure and efficient audit mechanism for dynamic shared data in cloud storage.

    Science.gov (United States)

    Kwon, Ohmin; Koo, Dongyoung; Shin, Yongjoo; Yoon, Hyunsoo

    2014-01-01

    With popularization of cloud services, multiple users easily share and update their data through cloud storage. For data integrity and consistency in the cloud storage, the audit mechanisms were proposed. However, existing approaches have some security vulnerabilities and require a lot of computational overheads. This paper proposes a secure and efficient audit mechanism for dynamic shared data in cloud storage. The proposed scheme prevents a malicious cloud service provider from deceiving an auditor. Moreover, it devises a new index table management method and reduces the auditing cost by employing less complex operations. We prove the resistance against some attacks and show less computation cost and shorter time for auditing when compared with conventional approaches. The results present that the proposed scheme is secure and efficient for cloud storage services managing dynamic shared data.

  4. A Secure and Efficient Audit Mechanism for Dynamic Shared Data in Cloud Storage

    Science.gov (United States)

    2014-01-01

    With popularization of cloud services, multiple users easily share and update their data through cloud storage. For data integrity and consistency in the cloud storage, the audit mechanisms were proposed. However, existing approaches have some security vulnerabilities and require a lot of computational overheads. This paper proposes a secure and efficient audit mechanism for dynamic shared data in cloud storage. The proposed scheme prevents a malicious cloud service provider from deceiving an auditor. Moreover, it devises a new index table management method and reduces the auditing cost by employing less complex operations. We prove the resistance against some attacks and show less computation cost and shorter time for auditing when compared with conventional approaches. The results present that the proposed scheme is secure and efficient for cloud storage services managing dynamic shared data. PMID:24959630

  5. A Survey on the Architectures of Data Security in Cloud Storage Infrastructure

    OpenAIRE

    T.Brindha; R.S.Shaji; G.P.Rajesh

    2013-01-01

    Cloud computing is a most alluring technology that facilitates conducive, on-demand network access based on the requirement of users with nominal effort on management and interaction among cloud providers. The cloud storage serves as a dependable platform for long term storage needs which enables the users to move the data to the cloud in a rapid and secure manner. It assists activities and government agencies considerably decrease their economic overhead of data organization, as they can sto...

  6. Privacy-Preserving Outsourced Auditing Scheme for Dynamic Data Storage in Cloud

    OpenAIRE

    Tu, Tengfei; Rao, Lu; Zhang, Hua; Wen, Qiaoyan; Xiao, Jia

    2017-01-01

    As information technology develops, cloud storage has been widely accepted for keeping volumes of data. Remote data auditing scheme enables cloud user to confirm the integrity of her outsourced file via the auditing against cloud storage, without downloading the file from cloud. In view of the significant computational cost caused by the auditing process, outsourced auditing model is proposed to make user outsource the heavy auditing task to third party auditor (TPA). Although the first outso...

  7. Cost-effective data storage/archival subsystem for functional PACS

    Science.gov (United States)

    Chen, Y. P.; Kim, Yongmin

    1993-09-01

    Not the least of the requirements of a workable PACS is the ability to store and archive vast amounts of information. A medium-size hospital will generate between 1 and 2 TBytes of data annually on a fully functional PACS. A high-speed image transmission network coupled with a comparably high-speed central data storage unit can make local memory and magnetic disks in the PACS workstations less critical and, in an extreme case, unnecessary. Under these circumstances, the capacity and performance of the central data storage subsystem and database are critical in determining the response time at the workstations, thus significantly affecting clinical acceptability. The central data storage subsystem not only needs to provide sufficient capacity to store about ten days' worth of images (five days' worth of new studies and, on the average, about one comparison study for each new study), but must also supply images to the requesting workstation in a timely fashion. The database must provide fast retrieval responses upon users' requests for images. This paper analyzes the advantages and disadvantages of multiple parallel transfer disks versus RAID disks for the short-term central data storage subsystem, as well as an optical disk jukebox versus a digital tape subsystem for the long-term archive. Furthermore, an example of a high-performance, cost-effective storage subsystem that integrates both RAID disks and a high-speed digital tape subsystem as a PACS data storage/archival unit is presented.

  8. A privacy-preserving solution for compressed storage and selective retrieval of genomic data.

    Science.gov (United States)

    Huang, Zhicong; Ayday, Erman; Lin, Huang; Aiyar, Raeka S; Molyneaux, Adam; Xu, Zhenyu; Fellay, Jacques; Steinmetz, Lars M; Hubaux, Jean-Pierre

    2016-12-01

    In clinical genomics, the continuous evolution of bioinformatic algorithms and sequencing platforms makes it beneficial to store patients' complete aligned genomic data in addition to variant calls relative to a reference sequence. Due to the large size of human genome sequence data files (varying from 30 GB to 200 GB depending on coverage), two major challenges facing genomics laboratories are the costs of storage and the efficiency of the initial data processing. In addition, privacy of genomic data is becoming an increasingly serious concern, yet no standard data storage solutions exist that enable compression, encryption, and selective retrieval. Here we present a privacy-preserving solution named SECRAM (Selective retrieval on Encrypted and Compressed Reference-oriented Alignment Map) for the secure storage of compressed aligned genomic data. Our solution enables selective retrieval of encrypted data and improves the efficiency of downstream analysis (e.g., variant calling). Compared with BAM, the de facto standard for storing aligned genomic data, SECRAM uses 18% less storage. Compared with CRAM, one of the most compressed nonencrypted formats (using 34% less storage than BAM), SECRAM maintains efficient compression and downstream data processing, while allowing for unprecedented levels of security in genomic data storage. Compared with previous work, the distinguishing features of SECRAM are that (1) it is position-based instead of read-based, and (2) it allows random querying of a subregion from a BAM-like file in an encrypted form. Our method thus offers a space-saving, privacy-preserving, and effective solution for the storage of clinical genomic data. © 2016 Huang et al.; Published by Cold Spring Harbor Laboratory Press.

  9. A guide to reliability data collection, validation and storage

    International Nuclear Information System (INIS)

    Stevens, B.

    1986-01-01

    The EuReDatA Working Group produced a basic document that addressed many of the problems associated with the design of a suitable data collection scheme to achieve pre-defined objectives. The book that resulted from this work describes the need for reliability data, data sources and collection procedures, component description and classification, form design, data management, updating and checking procedures, the estimation of failure rates, availability and utilisation factors, and uncertainties in reliability parameters. (DG)

  10. Data needs for long-term dry storage of LWR fuel. Interim report

    International Nuclear Information System (INIS)

    Einziger, R.E.; Baldwin, D.L.; Pitman, S.G.

    1998-04-01

    The NRC approved dry storage of spent fuel in an inert environment for a period of 20 years pursuant to 10CFR72. However, at-reactor dry storage of spent LWR fuel may need to be implemented for periods of time significantly longer than the NRC's original 20-year license period, largely due to uncertainty as to the date the US DOE will begin accepting commercial spent fuel. This factor is leading utilities to plan not only for life-of-plant spent-fuel storage during reactor operation but also for the contingency of a lengthy post-shutdown storage. To meet NRC standards, dry storage must (1) maintain subcriticality, (2) prevent release of radioactive material above acceptable limits, (3) ensure that radiation rates and doses do not exceed acceptable limits, and (4) maintain retrievability of the stored radioactive material. In light of these requirements, this study evaluates the potential for storing spent LWR fuel for up to 100 years. It also identifies major uncertainties as well as the data required to eliminate them. Results show that the lower radiation fields and temperatures after 20 years of dry storage promote acceptable fuel behavior and the extension of storage for up to 100 years. Potential changes in the properties of dry storage system components, other than spent-fuel assemblies, must still be evaluated

  11. Using RFID to Enhance Security in Off-Site Data Storage

    Science.gov (United States)

    Lopez-Carmona, Miguel A.; Marsa-Maestre, Ivan; de la Hoz, Enrique; Velasco, Juan R.

    2010-01-01

    Off-site data storage is one of the most widely used strategies in enterprises of all sizes to improve business continuity. In medium-to-large size enterprises, the off-site data storage processes are usually outsourced to specialized providers. However, outsourcing the storage of critical business information assets raises serious security considerations, some of which are usually either disregarded or incorrectly addressed by service providers. This article reviews these security considerations and presents a radio frequency identification (RFID)-based, off-site, data storage management system specifically designed to address security issues. The system relies on a set of security mechanisms or controls that are arranged in security layers or tiers to balance security requirements with usability and costs. The system has been successfully implemented, deployed and put into production. In addition, an experimental comparison with classical bar-code-based systems is provided, demonstrating the system’s benefits in terms of efficiency and failure prevention. PMID:22163638

  12. Using RFID to enhance security in off-site data storage.

    Science.gov (United States)

    Lopez-Carmona, Miguel A; Marsa-Maestre, Ivan; de la Hoz, Enrique; Velasco, Juan R

    2010-01-01

    Off-site data storage is one of the most widely used strategies in enterprises of all sizes to improve business continuity. In medium-to-large size enterprises, the off-site data storage processes are usually outsourced to specialized providers. However, outsourcing the storage of critical business information assets raises serious security considerations, some of which are usually either disregarded or incorrectly addressed by service providers. This article reviews these security considerations and presents a radio frequency identification (RFID)-based, off-site, data storage management system specifically designed to address security issues. The system relies on a set of security mechanisms or controls that are arranged in security layers or tiers to balance security requirements with usability and costs. The system has been successfully implemented, deployed and put into production. In addition, an experimental comparison with classical bar-code-based systems is provided, demonstrating the system's benefits in terms of efficiency and failure prevention.

  13. Site characterization data for Solid Waste Storage Area 6

    International Nuclear Information System (INIS)

    Boegly, W.J. Jr.

    1984-12-01

    Currently, the only operating shallow land burial site for low-level radioactive waste at the Oak Ridge National Laboratory (ORNL) is Solid Waste Storage Area No. 6 (SWSA-6). In 1984, the US Department of Energy (DOE) issued Order 5820.2, Radioactive Waste Management, which establishes policies and guidelines by which DOE manages its radioactive waste, waste by-products, and radioactively contaminated surplus facilities. The ORNL Operations Division has given high priority to characterization of SWSA-6 because of the need for continued operation under DOE 5820.2. The purpose of this report is to compile existing information on the geologic and hydrologic conditions in SWSA-6 for use in further studies related to assessing compliance with 5820.2. Burial operations in SWSA-6 began in 1969 on a limited scale, and full operation was initiated in 1973. Since that time, ca. 29,100 m³ of low-level waste containing ca. 251,000 Ci of activity has been buried in SWSA-6. No transuranic waste has been disposed of in SWSA-6; rather, this waste is retrievably stored in SWSA-5. Estimates of the remaining usable space in SWSA-6 vary; however, in 1982 sufficient useful land was reported for about 10 more years of operation. Analysis of the information available on SWSA-6 indicates that more information is required to evaluate the surface water hydrology, the geology at depths below the burial trenches, and the nature and extent of soils within the site. Also, a monitoring network will be required to allow detection of potential contaminant movement in groundwater. Although these are the most obvious needs, a number of specific measurements must be made to evaluate the spatial heterogeneity of the site and to provide background information for geohydrological modeling. Some indication of the nature of these measurements is included

  14. Measuring and processing measured data in the MAW and HTR fuel element storage experiment. Pt. 2

    International Nuclear Information System (INIS)

    Henze, R.

    1987-01-01

    The central data collection plant for the MAW experimental storage in the Asse salt mine consists of 3 components: a) Front end computers assigned to the experiment for data collection, with few and simple components for the difficult ambient conditions underground. b) An overground central computer, which carries out the tasks of intermediate data storage, display at site, monitoring of the experiment, alarms and remote data transmission for final evaluation. c) A local network connects the front end computers to the central computer. It should take over network tasks (data transmission reports) from the front end computers and should make a flexible implementation of new experiments possible. (orig./RB) [de

  15. CMS users data management service integration and first experiences with its NoSQL data storage

    International Nuclear Information System (INIS)

    Riahi, H; Spiga, D; Cinquilli, M; Boccali, T; Ciangottini, D; Santocchia, A; Hernàndez, J M; Konstantinov, P; Mascheroni, M

    2014-01-01

    The distributed data analysis workflow in CMS assumes that jobs run in a different location to where their results are finally stored. Typically the user outputs must be transferred from one site to another by a dedicated CMS service, AsyncStageOut. This new service was originally developed to address the inefficient use of CMS computing resources caused by transferring the analysis job outputs synchronously to the remote site as soon as they are produced on the job execution node. The AsyncStageOut is designed as a thin application relying only on the NoSQL database (CouchDB) as input and data storage. It has progressed from a limited prototype to a highly adaptable service which manages and monitors all the steps in handling user files, namely file transfer and publication. The AsyncStageOut is integrated with the Common CMS/Atlas Analysis Framework. It is foreseen to manage nearly 200k user files per day from close to 1000 individual users per month with minimal delays, while providing real-time monitoring and reports to users and service operators and remaining highly available. The associated data volume represents a new set of challenges in the areas of database scalability and service performance and efficiency. In this paper, we present an overview of the AsyncStageOut model and the integration strategy with the Common Analysis Framework. The motivations for using the NoSQL technology are also presented, as well as the data design and the techniques used for efficient indexing and monitoring of the data. We describe the deployment model for the high availability and scalability of the service. We also discuss the hardware requirements and the results achieved as they were determined by testing with actual data and realistic loads during the commissioning and the initial production phase with the Common Analysis Framework.
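
    The abstract above contains no code; the following is a minimal, illustrative sketch of the kind of interaction it describes: recording one file-transfer state document in CouchDB through its standard HTTP API. The host, database name, document fields, and states are assumptions for illustration, not the actual AsyncStageOut schema.

    ```python
    # Minimal sketch (not the actual AsyncStageOut code): store one document
    # describing a pending user-file transfer in CouchDB via its HTTP API.
    # Host, database and field names are illustrative assumptions.
    import requests

    COUCH = "http://localhost:5984"
    DB = "asyncstageout_files"          # hypothetical database name

    def record_transfer(doc_id, user, source_lfn, dest_site):
        """Create one document describing a pending user-file transfer."""
        doc = {
            "user": user,
            "source_lfn": source_lfn,   # logical file name produced by the job
            "destination": dest_site,
            "state": "new",             # e.g. new -> acquired -> done/failed
        }
        r = requests.put(f"{COUCH}/{DB}/{doc_id}", json=doc)
        r.raise_for_status()
        return r.json()

    # Example call (names are made up):
    # record_transfer("user1_job42_out.root", "user1",
    #                 "/store/user/user1/out.root", "T2_XX_Site")
    ```

    A monitoring layer would then query views over such documents (for example grouped by state or by user); the specifics of those views are again only illustrative here.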

  16. CMS users data management service integration and first experiences with its NoSQL data storage

    Science.gov (United States)

    Riahi, H.; Spiga, D.; Boccali, T.; Ciangottini, D.; Cinquilli, M.; Hernàndez, J. M.; Konstantinov, P.; Mascheroni, M.; Santocchia, A.

    2014-06-01

    The distributed data analysis workflow in CMS assumes that jobs run in a different location to where their results are finally stored. Typically the user outputs must be transferred from one site to another by a dedicated CMS service, AsyncStageOut. This new service was originally developed to address the inefficient use of CMS computing resources caused by transferring the analysis job outputs synchronously to the remote site as soon as they are produced on the job execution node. The AsyncStageOut is designed as a thin application relying only on the NoSQL database (CouchDB) as input and data storage. It has progressed from a limited prototype to a highly adaptable service which manages and monitors all the steps in handling user files, namely file transfer and publication. The AsyncStageOut is integrated with the Common CMS/Atlas Analysis Framework. It is foreseen to manage nearly 200k user files per day from close to 1000 individual users per month with minimal delays, while providing real-time monitoring and reports to users and service operators and remaining highly available. The associated data volume represents a new set of challenges in the areas of database scalability and service performance and efficiency. In this paper, we present an overview of the AsyncStageOut model and the integration strategy with the Common Analysis Framework. The motivations for using the NoSQL technology are also presented, as well as the data design and the techniques used for efficient indexing and monitoring of the data. We describe the deployment model for the high availability and scalability of the service. We also discuss the hardware requirements and the results achieved as they were determined by testing with actual data and realistic loads during the commissioning and the initial production phase with the Common Analysis Framework.

  17. EOS as the present and future solution for data storage at CERN

    CERN Document Server

    Peters, AJ; Adde, G

    2015-01-01

    EOS is an open source distributed disk storage system in production since 2011 at CERN. Development focus has been on low-latency analysis use cases for LHC(1) and non-LHC experiments and life-cycle management using JBOD(2) hardware for multi-PB storage installations. The EOS design implies a split of hot and cold storage and introduced a change to the traditional HSM(3)-functionality-based workflows at CERN. The 2015 deployment brings storage at CERN to a new scale and is foreseen to exceed 100 PB of disk storage in a distributed environment using tens of thousands of (heterogeneous) hard drives. EOS has brought to CERN major improvements compared to past storage solutions by allowing quick changes in the quality of service of the storage pools. This allows the data centre to quickly meet the changing performance and reliability requirements of the LHC experiments with minimal data movements and dynamic reconfiguration. For example, the software stack has met the specific needs of the dual computing centre set-...

  18. Data base management system configuration specification. [computer storage devices

    Science.gov (United States)

    Neiers, J. W.

    1979-01-01

    The functional requirements and the configuration of the data base management system are described. Techniques and technology which will enable more efficient and timely transfer of useful data from the sensor to the user, extraction of information by the user, and exchange of information among the users are demonstrated.

  19. Comparison of Decadal Water Storage Trends from Global Hydrological Models and GRACE Satellite Data

    Science.gov (United States)

    Scanlon, B. R.; Zhang, Z. Z.; Save, H.; Sun, A. Y.; Mueller Schmied, H.; Van Beek, L. P.; Wiese, D. N.; Wada, Y.; Long, D.; Reedy, R. C.; Doll, P. M.; Longuevergne, L.

    2017-12-01

    Global hydrology is increasingly being evaluated using models; however, the reliability of these global models is not well known. In this study we compared decadal trends (2002-2014) in land water storage from 7 global models (WGHM, PCR-GLOBWB, and GLDAS: NOAH, MOSAIC, VIC, CLM, and CLSM) to storage trends from new GRACE satellite mascon solutions (CSR-M and JPL-M). The analysis was conducted over 186 river basins, representing about 60% of the global land area. Modeled total water storage trends agree with GRACE-derived trends that are within ±0.5 km³/yr but greatly underestimate large declining and rising trends outside this range. Large declining trends are found mostly in intensively irrigated basins and in some basins in northern latitudes. Rising trends are found in basins with little or no irrigation and are generally related to increasing trends in precipitation. The largest decline is found in the Ganges (-12 km³/yr) and the largest rise in the Amazon (43 km³/yr). Differences between models and GRACE are greatest in large basins (>0.5×10⁶ km²), mostly in humid regions. There is very little agreement in storage trends between the models and GRACE, or among the models themselves, with mostly low r² values; the GRACE data suggest that these basins store water over decadal timescales in a way that is underrepresented by the models. The storage capacity in the modeled soil and groundwater compartments may be insufficient to accommodate the range in water storage variations shown by GRACE data. The inability of the models to capture the large storage trends indicates that model projections of climate and human-induced changes in water storage may be mostly underestimated. Future GRACE and model studies should try to reduce the various sources of uncertainty in water storage trends and should consider expanding the modeled storage capacity of the soil profiles and their interaction with groundwater.
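
    As a rough illustration of how a decadal storage trend of the kind compared above can be computed, the sketch below fits an ordinary least-squares line to a synthetic monthly series of basin-averaged storage anomalies; the series, units, and numbers are invented for the example and are not the study's data.

    ```python
    # Illustrative trend fit: least-squares linear trend of monthly total water
    # storage anomalies over 2002-2014 (the series here is synthetic).
    import numpy as np

    months = np.arange(12 * 13)                       # 156 monthly samples
    twsa_km3 = -1.0 * (months / 12.0) + 5 * np.sin(2 * np.pi * months / 12)

    slope_per_month, intercept = np.polyfit(months, twsa_km3, 1)
    trend_km3_per_year = slope_per_month * 12
    print(f"fitted storage trend: {trend_km3_per_year:.2f} km^3/yr")
    # ~ -1 km^3/yr for this toy series
    ```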

  20. Cavity-enhanced eigenmode and angular hybrid multiplexing in holographic data storage systems.

    Science.gov (United States)

    Miller, Bo E; Takashima, Yuzuru

    2016-12-26

    Resonant optical cavities have been demonstrated to improve energy efficiencies in Holographic Data Storage Systems (HDSS). The orthogonal reference beams supported as cavity eigenmodes can provide another multiplexing degree of freedom to push storage densities toward the limit of 3D optical data storage. While keeping the increased energy efficiency of a cavity enhanced reference arm, image bearing holograms are multiplexed by orthogonal phase code multiplexing via Hermite-Gaussian eigenmodes in a Fe:LiNbO3 medium with a 532 nm laser at two Bragg angles. We experimentally confirmed write rates are enhanced by an average factor of 1.1, and page crosstalk is about 2.5%. This hybrid multiplexing opens up a pathway to increase storage density while minimizing modification of current angular multiplexing HDSS.

  1. Data systems and computer science space data systems: Onboard memory and storage

    Science.gov (United States)

    Shull, Tom

    1991-01-01

    The topics are presented in viewgraph form and include the following: technical objectives; technology challenges; state-of-the-art assessment; mass storage comparison; SODR drive and system concepts; program description; vertical Bloch line (VBL) device concept; relationship to external programs; and backup charts for memory and storage.

  2. Description of SODAR data storage. WISE project WP2

    International Nuclear Information System (INIS)

    Barhorst, S.A.M.; Verhoef, J.P.; Van der Werff, P.A.; Eecen, P.J.

    2003-10-01

    The partners in the WISE project are investigating whether application of the SODAR (sonic detection and ranging) measurement technique in wind energy experimental work is feasible as a replacement for cup anemometers and wind direction sensors mounted on tall meteorological masts both from the view of accuracy and cost. In Work Package 2 (WP2) of the WISE project extensive controlled experiments with the SODAR have been performed. For example, SODAR measurements have been compared with measurements from nearby masts and different brands of SODARs have been compared. Part of the work package was the creation of a database to gather the measured SODAR data. The database was created by ECN in order to enable further analysis by the partners in the project. The database structure that has been defined by ECN is described in full detail. The database is based on SQL (structured query language), and care is taken that data that is unchanged during a measurement period is stored only once. The logic behind the structure is described and the relations between the various tables are described. Up to now the description of the database is limited to include SODAR data measured close to a meteorological mast. Power measurements from wind turbines are not yet included. However, the database can easily be extended to include these data. The data measured by means of the ECN SODAR have been completely re-processed. A new directory structure was defined which is accessible from both the Unix (Linux) and the Microsoft Windows platform. The processed and validated data have been stored in a database to make retrieval of specific data sets possible. The database is also accessible from the Windows platform. The defined format is available for the WISE project, so that the database containing data from all partners can be created.
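
    A minimal sketch of the normalisation idea mentioned above (storing once the information that does not change during a measurement period, and referencing it from the individual profiles) is shown below; the table and column names are illustrative assumptions, not the actual ECN schema.

    ```python
    # Sketch of the "store unchanged data only once" idea with SQLite:
    # one row per measurement period, referenced by many wind-profile rows.
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
    CREATE TABLE measurement_period (
        period_id   INTEGER PRIMARY KEY,
        sodar_brand TEXT,
        location    TEXT,
        start_utc   TEXT,
        end_utc     TEXT
    );
    CREATE TABLE wind_profile (
        profile_id  INTEGER PRIMARY KEY,
        period_id   INTEGER REFERENCES measurement_period(period_id),
        time_utc    TEXT,
        height_m    REAL,
        speed_ms    REAL,
        direction   REAL
    );
    """)
    ```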

  3. Data Blocks : Hybrid OLTP and OLAP on compressed storage using both vectorization and compilation

    NARCIS (Netherlands)

    Lang, Harald; Mühlbauer, Tobias; Funke, Florian; Boncz, Peter; Neumann, Thomas; Kemper, Alfons

    2016-01-01

    This work aims at reducing the main-memory footprint in high performance hybrid OLTP&OLAP databases, while retaining high query performance and transactional throughput. For this purpose, an innovative compressed columnar storage format for cold data, called Data Blocks is introduced. Data Blocks

  4. CMS users data management service integration and first experiences with its NoSQL data storage

    CERN Document Server

    Riahi, H; Cinquilli, M; Hernandez, J M; Konstantinov, P; Mascheroni, M; Santocchia, A

    2014-01-01

    The distributed data analysis workflow in CMS assumes that jobs run in a different location to where their results are finally stored. Typically the user outputs must be transferred from one site to another by a dedicated CMS service, AsyncStageOut. This new service was originally developed to address the inefficient use of CMS computing resources caused by transferring the analysis job outputs synchronously to the remote site as soon as they are produced on the job execution node. The AsyncStageOut is designed as a thin application relying only on the NoSQL database (CouchDB) as input and data storage. It has progressed from a limited prototype to a highly adaptable service which manages and monitors all the steps in handling user files, namely file transfer and publication. The AsyncStageOut is integrated with the Common CMS/Atlas Analysis Framework. It is foreseen to manage nearly 200k user files per day from close to 1000 individual users per month with minimal delays, while providing real-time monitoring and repor...

  5. Bio-Cryptography Based Secured Data Replication Management in Cloud Storage

    OpenAIRE

    Elango Pitchai

    2016-01-01

    Cloud computing is a new way of providing economical and efficient storage. A single data mart storage system is less secure because the data remain under a single data mart. This can lead to data loss due to different causes like hacking, server failure, etc. If an attacker chooses to attack a specific client, he can aim at a fixed cloud provider and try to gain access to the client’s information. This makes for an easy job for attackers; both inside and outside attackers get the benefit of ...

  6. Geolocating fish using Hidden Markov Models and Data Storage Tags

    DEFF Research Database (Denmark)

    Thygesen, Uffe Høgsbro; Pedersen, Martin Wæver; Madsen, Henrik

    2009-01-01

    Geolocation of fish based on data from archival tags typically requires a statistical analysis to reduce the effect of measurement errors. In this paper we present a novel technique for this analysis, one based on Hidden Markov Models (HMM's). We assume that the actual path of the fish is generated...... by a biased random walk. The HMM methodology produces, for each time step, the probability that the fish resides in each grid cell. Because there is no Monte Carlo step in our technique, we are able to estimate parameters within the likelihood framework. The method does not require the distribution...... of inference in state-space models of animals. The technique can be applied to geolocation based on light, on tidal patterns, or measurement of other variables that vary with space. We illustrate the method through application to a simulated data set where geolocation relies on depth data exclusively....
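
    The sketch below illustrates the grid-based HMM filter idea described above: a time update that spreads probability with a random-walk kernel, followed by a measurement update that weights each grid cell by the likelihood of the observation (here, a depth measurement compared against a bathymetry grid). The arrays, kernel, and noise level are toy assumptions, not the authors' implementation, and the kernel would be asymmetric for a biased walk.

    ```python
    # Toy grid-based HMM filter step: predict with a random-walk kernel, then
    # update with the likelihood of an observed depth against a bathymetry map.
    import numpy as np
    from scipy.ndimage import convolve

    bathymetry = np.random.uniform(10, 200, size=(50, 50))   # toy depth map [m]
    prob = np.full((50, 50), 1.0 / 2500)                      # uniform prior

    kernel = np.array([[0.05, 0.10, 0.05],
                       [0.10, 0.40, 0.10],
                       [0.05, 0.10, 0.05]])                   # random-walk step

    def hmm_step(prob, observed_depth, sigma=5.0):
        predicted = convolve(prob, kernel, mode="constant")   # time update
        likelihood = np.exp(-0.5 * ((bathymetry - observed_depth) / sigma) ** 2)
        posterior = predicted * likelihood                    # measurement update
        return posterior / posterior.sum()

    prob = hmm_step(prob, observed_depth=80.0)   # per-cell residence probability
    ```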

  7. Securing the Data Storage and Processing in Cloud Computing Environment

    Science.gov (United States)

    Owens, Rodney

    2013-01-01

    Organizations increasingly utilize cloud computing architectures to reduce costs and energy consumption both in the data warehouse and on mobile devices by better utilizing the computing resources available. However, the security and privacy issues with publicly available cloud computing infrastructures have not been studied to a sufficient depth…

  8. Effective Data Backup System Using Storage Area Network Solution

    African Journals Online (AJOL)

    PROF. OLIVER OSUAGWA

    2015-06-01

    Jun 1, 2015 ... One of the most crucial benefits of the computer system is its ability to manage data sent to it for processing. ... mirroring, backup and restore, archival and retrieval of ... enable schools and corporate organizations to create a ...

  9. Survey and alignment data analysis for the ALS storage ring

    International Nuclear Information System (INIS)

    Keller, R.

    1993-05-01

    The survey and alignment effort for the Advanced Light Source (ALS) accelerator complex has been described elsewhere. Data analysis for this task comprises the creation of ideal data, comparison of measured coordinates with ideal ones, and computation of alignment values, taking into account the effects caused by finite observation accuracy. A novel approach has been taken, using personal computer spreadsheets rather than more conventional programming methods. This approach was induced by the necessities to create and frequently refine the analysis procedures while measurements were already underway, and further by hardware constraints that limited the use of an available surveying code. A major benefit consists in the ability to identify and deal with discrepancies that occasionally arise when different techniques are used to observe the same object, in a timely and efficient manner. As a result of the performed survey and alignment work, the ALS lattice magnets have been positioned with accuracies well exceeding the original specifications

  10. StorNet: Integrated Dynamic Storage and Network Resource Provisioning and Management for Automated Data Transfers

    International Nuclear Information System (INIS)

    Gu Junmin; Natarajan, Vijaya; Shoshani, Arie; Sim, Alex; Katramatos, Dimitrios; Liu Xin; Yu Dantong; Bradley, Scott; McKee, Shawn

    2011-01-01

    StorNet is a joint project of Brookhaven National Laboratory (BNL) and Lawrence Berkeley National Laboratory (LBNL) to research, design, and develop an integrated end-to-end resource provisioning and management framework for high-performance data transfers. The StorNet framework leverages heterogeneous network protocols and storage types in a federated computing environment to provide the capability of predictable, efficient delivery of high-bandwidth data transfers for data intensive applications. The framework incorporates functional modules to perform such data transfers through storage and network bandwidth co-scheduling, storage and network resource provisioning, and performance monitoring, and is based on LBNL's BeStMan/SRM, BNL's TeraPaths, and ESNet's OSCARS systems.

  11. Statistical analyses of the magnet data for the advanced photon source storage ring magnets

    International Nuclear Information System (INIS)

    Kim, S.H.; Carnegie, D.W.; Doose, C.; Hogrefe, R.; Kim, K.; Merl, R.

    1995-01-01

    The statistics of the measured magnetic data of 80 dipole, 400 quadrupole, and 280 sextupole magnets of conventional resistive designs for the APS storage ring are summarized. In order to accommodate the vacuum chamber, the curved dipole has a C-type cross section and the quadrupole and sextupole cross sections have 180° and 120° symmetries, respectively. The data statistics include the integrated main fields, multipole coefficients, magnetic and mechanical axes, and roll angles of the main fields. The average and rms values of the measured magnet data meet the storage ring requirements.

  12. A Secure and Effective Anonymous Integrity Checking Protocol for Data Storage in Multicloud

    Directory of Open Access Journals (Sweden)

    Lingwei Song

    2015-01-01

    How to verify the integrity of outsourced data is an important problem in cloud storage. Most previous work focuses on three aspects: providing data dynamics, public verifiability, and privacy against verifiers with the help of a third-party auditor. In this paper, we propose an identity-based data storage and integrity verification protocol on untrusted clouds. The proposed protocol can guarantee fair results without any third-party verifying auditor. The theoretical analysis and simulation results show that our protocols are secure and efficient.

  13. Smart Collection and Storage Method for Network Traffic Data

    Science.gov (United States)

    2014-09-01

    to the root of an incident or understand what goes on in a network may mean looking at data from weeks, months, or even years ago, as has been the... [remainder of the record is a spilled table fragment listing capture file formats (e.g., SuSE .pcap, HP-UX nettl .trc0) with sample sizes and projected storage requirements ranging from tens to thousands of terabytes]

  14. Xbox one file system data storage: A forensic analysis

    OpenAIRE

    Gravel, Caitlin Elizabeth

    2015-01-01

    The purpose of this research was to answer the question, how does the file system of the Xbox One store data on its hard disk? This question is the main focus of the exploratory research and results sought. The research is focused on digital forensic investigators and experts. An out of the box Xbox One gaming console was used in the research. Three test cases were created as viable scenarios an investigator could come across in a search and seizure of evidence. The three test cases were then...

  15. Towards Blockchain-based Auditable Storage and Sharing of IoT Data

    OpenAIRE

    Shafagh , Hossein; Hithnawi , Anwar; Duquennoy , Simon

    2017-01-01

    Today the cloud plays a central role in storing, processing, and distributing data. Despite contributing to the rapid development of various applications, including the IoT, the current centralized storage architecture has led to a myriad of isolated data silos and is preventing the full potential of holistic data-driven analytics for IoT data. In this abstract, we advocate a data-centric design for IoT with a focus on resilience, sharing, and auditable protection of ...

  16. The Design and Application of Data Storage System in Miyun Satellite Ground Station

    Science.gov (United States)

    Xue, Xiping; Su, Yan; Zhang, Hongbo; Liu, Bin; Yao, Meijuan; Zhao, Shu

    2015-04-01

    China launched the Chang'E-3 satellite in 2013, achieving the first soft landing on the Moon by a Chinese lunar probe. The Miyun satellite ground station first used a SAN storage network system based on StorNext shared-file-system software in the Chang'E-3 mission. System performance fully meets the data storage requirements of the Miyun ground station. The StorNext file system is a high-performance shared file system that allows multiple servers running different operating systems to access it at the same time and supports access to data over a variety of topologies, such as SAN and LAN. StorNext is focused on data protection and big data management; Quantum reports that more than 70,000 StorNext file system licenses have been sold worldwide and that its customer base is growing, which marks its leading position in big data management. The responsibilities of the Miyun satellite ground station are the reception of Chang'E-3 satellite downlink data and the management of local data storage. The station mainly carries out exploration mission management and the receiving and management of observation data, and provides comprehensive, centralized monitoring and control of the data receiving equipment. The ground station applied a StorNext-based SAN storage network system to receive and manage data reliably. The computer system at the Miyun ground station is composed of operational servers, application workstations and other storage equipment, so the storage system needs a shared file system that supports heterogeneous operating systems. In practical applications, 10 nodes simultaneously write data to the file system through 16 channels, and the maximum data transfer rate of each channel is up to 15 MB/s. Thus the network throughput of the file system must be no less than 240 MB/s. At the same time, the maximum capacity of each data file is up to 810 GB. The planned storage system requires that 10 nodes simultaneously write data to the file system through 16
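
    The throughput requirement quoted above follows directly from the stated channel count and per-channel rate; the snippet below is just that back-of-the-envelope check.

    ```python
    # Sizing check: 16 concurrent write channels at a peak of 15 MB/s each
    # must be sustained by the shared file system.
    channels = 16
    peak_rate_mb_s = 15
    required_throughput_mb_s = channels * peak_rate_mb_s
    print(required_throughput_mb_s)   # 240 MB/s, matching the stated requirement
    ```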

  17. Development of an integrated data storage and retrieval system for TEC

    International Nuclear Information System (INIS)

    Kemmerling, G.; Blom, H.; Busch, P.; Kooijman, W.; Korten, M.; Laat, C.T.A.M. de; Lourens, W.; Meer, E. van der; Nideroest, B.; Oomens, A.A.M.; Wijnoltz, F.; Zwoll, K.

    2000-01-01

    The database system for the storage and retrieval of experimental and technical data at TEXTOR-94 has to be revised. A new database has to be developed, which complies with future performance and multiplatform requirements. The concept presented here is based on the commercial object database Objectivity. Objectivity allows a flexible object-oriented data design and is able to cope with the large amount of data, which is expected to be about 1 TByte per year. Furthermore, it offers the possibility of data distribution over several hosts. Thus, parallel data storage from the frontend to the database is possible and can be used to achieve the required storage performance of 200 MByte per min. In order to store configurational and experimental data, an object model is under design. It is aimed at describing the device-specific information and the acquired data in a common way such that different approaches for data access may be applied. Several methods are foreseen for remote access. In addition to the C++ and Java interfaces already included in Objectivity/DB, CORBA and socket-based C interfaces are currently under development. This could also allow access from otherwise unsupported platforms and enable existing legacy applications to integrate the database for storage and retrieval of data with a minimum of code changes.

  18. Impact of Data Placement on Resilience in Large-Scale Object Storage Systems

    Energy Technology Data Exchange (ETDEWEB)

    Carns, Philip; Harms, Kevin; Jenkins, John; Mubarak, Misbah; Ross, Robert; Carothers, Christopher

    2016-05-02

    Distributed object storage architectures have become the de facto standard for high-performance storage in big data, cloud, and HPC computing. Object storage deployments using commodity hardware to reduce costs often employ object replication as a method to achieve data resilience. Repairing object replicas after failure is a daunting task for systems with thousands of servers and billions of objects, however, and it is increasingly difficult to evaluate such scenarios at scale on real-world systems. Resilience and availability are both compromised if objects are not repaired in a timely manner. In this work we leverage a high-fidelity discrete-event simulation model to investigate replica reconstruction on large-scale object storage systems with thousands of servers, billions of objects, and petabytes of data. We evaluate the behavior of CRUSH, a well-known object placement algorithm, and identify configuration scenarios in which aggregate rebuild performance is constrained by object placement policies. After determining the root cause of this bottleneck, we then propose enhancements to CRUSH and the usage policies atop it to enable scalable replica reconstruction. We use these methods to demonstrate a simulated aggregate rebuild rate of 410 GiB/s (within 5% of projected ideal linear scaling) on a 1,024-node commodity storage system. We also uncover an unexpected phenomenon in rebuild performance based on the characteristics of the data stored on the system.
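
    The sketch below is not the CRUSH algorithm itself; under assumed names and sizes, it only illustrates the general flavour of deterministic hash-based replica placement, in which the placement policy alone decides which servers hold a given object's replicas and therefore which servers can take part in rebuilding them after a failure.

    ```python
    # Toy deterministic replica placement (NOT CRUSH): each object id maps to a
    # fixed set of servers, so placement alone determines rebuild participants.
    import hashlib

    SERVERS = [f"server{i:04d}" for i in range(1024)]
    REPLICAS = 3

    def place(object_id: str):
        """Return the servers holding the replicas of one object."""
        h = int(hashlib.sha1(object_id.encode()).hexdigest(), 16)
        first = h % len(SERVERS)
        return [SERVERS[(first + k) % len(SERVERS)] for k in range(REPLICAS)]

    print(place("object-000042"))
    ```

    A naive policy like this one (replicas on consecutive servers) concentrates rebuild traffic on a failed server's immediate neighbours, which is exactly the kind of placement-induced bottleneck the study above investigates.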

  19. Data Blocks: hybrid OLTP and OLAP on compressed storage using both vectorization and compilation

    NARCIS (Netherlands)

    H. Lang (Harald); T. Mühlbauer; F. Funke; P.A. Boncz (Peter); T. Neumann (Thomas); A. Kemper (Alfons)

    2016-01-01

    This work aims at reducing the main-memory footprint in high performance hybrid OLTP & OLAP databases, while retaining high query performance and transactional throughput. For this purpose, an innovative compressed columnar storage format for cold data, called Data Blocks, is introduced.

  20. Computer program for storage and retrieval of thermal-stability data for explosives

    International Nuclear Information System (INIS)

    Ashcraft, R.W.

    1981-06-01

    A computer program for storage and retrieval of thermal stability data has been written in HP Basic for the HP-9845 system. The data library is stored on a 9885 flexible disk. A program listing and sample outputs are included as appendices

  1. Sensor data storage performance: SQL or NoSQL, physical or virtual

    NARCIS (Netherlands)

    Veen, J.S. van der; Waaij, B.D. van der; Meijer, R.J.

    2012-01-01

    Sensors are used to monitor certain aspects of the physical or virtual world and databases are typically used to store the data that these sensors provide. The use of sensors is increasing, which leads to an increasing demand on sensor data storage platforms. Some sensor monitoring applications need

  2. A pre-research on GWAC massive catalog data storage and processing system

    NARCIS (Netherlands)

    M. Wan (Meng); C. Wu (Chao); Y. Zhang (Ying); Y. Xu (Yang); J. Wei (Jianyan)

    2016-01-01

    GWAC (Ground Wide Angle Camera) poses huge challenges in large-scale catalogue storage and real-time processing of quick search of transients among wide field-of-view time-series data. Firstly, this paper proposes the concept of using databases’ capabilities of fast data processing and

  3. Partial storage optimization and load control strategy of cloud data centers.

    Science.gov (United States)

    Al Nuaimi, Klaithem; Mohamed, Nader; Al Nuaimi, Mariam; Al-Jaroodi, Jameela

    2015-01-01

    We present a novel approach to solve the cloud storage issues and provide a fast load balancing algorithm. Our approach is based on partitioning and concurrent dual direction download of the files from multiple cloud nodes. Partitions of the files are saved on the cloud rather than the full files, which provide a good optimization to the cloud storage usage. Only partial replication is used in this algorithm to ensure the reliability and availability of the data. Our focus is to improve the performance and optimize the storage usage by providing the DaaS on the cloud. This algorithm solves the problem of having to fully replicate large data sets, which uses up a lot of precious space on the cloud nodes. Reducing the space needed will help in reducing the cost of providing such space. Moreover, performance is also increased since multiple cloud servers will collaborate to provide the data to the cloud clients in a faster manner.
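
    A minimal sketch of the dual-direction idea described above is given below: two sources serve the same partition, one read from the start forward and one from the end backward, until the cursors meet. In a real system the two readers would run concurrently against different cloud nodes; here local byte buffers stand in for the nodes and the reads are interleaved sequentially for clarity.

    ```python
    # Toy dual-direction download: fill a buffer from the front (source A) and
    # from the back (source B) until the two cursors meet in the middle.
    def dual_direction_download(source_a, source_b, size, block=4):
        data = bytearray(size)
        front, back = 0, size
        while front < back:
            n = min(block, back - front)
            data[front:front + n] = source_a[front:front + n]   # forward reader
            front += n
            if front >= back:
                break
            n = min(block, back - front)
            data[back - n:back] = source_b[back - n:back]       # backward reader
            back -= n
        return bytes(data)

    blob = bytes(range(100))
    assert dual_direction_download(blob, blob, len(blob)) == blob
    ```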

  4. Analysing I/O bottlenecks in LHC data analysis on grid storage resources

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    We describe recent I/O testing frameworks that we have developed and applied within the UK GridPP Collaboration, the ATLAS experiment and the DPM team, for a variety of distinct purposes. These include benchmarking vendor supplied storage products, discovering scaling limits of SRM solutions, tuning of storage systems for experiment data analysis, evaluating file access protocols, and exploring IO read patterns of experiment software and their underlying event data models. With multiple grid sites now dealing with petabytes of data, such studies are becoming increasingly essential. We describe how the tests build, and improve, on previous work and contrast how the use-cases differ. We also detail the results obtained and the implications for storage hardware, middleware and experiment software.

  5. Analysing I/O bottlenecks in LHC data analysis on grid storage resources

    International Nuclear Information System (INIS)

    Bhimji, W; Clark, P; Doidge, M; Hellmich, M P; Skipsey, S; Vukotic, I

    2012-01-01

    We describe recent I/O testing frameworks that we have developed and applied within the UK GridPP Collaboration, the ATLAS experiment and the DPM team, for a variety of distinct purposes. These include benchmarking vendor supplied storage products, discovering scaling limits of SRM solutions, tuning of storage systems for experiment data analysis, evaluating file access protocols, and exploring I/O read patterns of experiment software and their underlying event data models. With multiple grid sites now dealing with petabytes of data, such studies are becoming essential. We describe how the tests build, and improve, on previous work and contrast how the use-cases differ. We also detail the results obtained and the implications for storage hardware, middleware and experiment software.

  6. Partial Storage Optimization and Load Control Strategy of Cloud Data Centers

    Science.gov (United States)

    2015-01-01

    We present a novel approach to solve the cloud storage issues and provide a fast load balancing algorithm. Our approach is based on partitioning and concurrent dual direction download of the files from multiple cloud nodes. Partitions of the files are saved on the cloud rather than the full files, which provide a good optimization to the cloud storage usage. Only partial replication is used in this algorithm to ensure the reliability and availability of the data. Our focus is to improve the performance and optimize the storage usage by providing the DaaS on the cloud. This algorithm solves the problem of having to fully replicate large data sets, which uses up a lot of precious space on the cloud nodes. Reducing the space needed will help in reducing the cost of providing such space. Moreover, performance is also increased since multiple cloud servers will collaborate to provide the data to the cloud clients in a faster manner. PMID:25973444

  7. Design and Verification of Remote Sensing Image Data Center Storage Architecture Based on Hadoop

    Science.gov (United States)

    Tang, D.; Zhou, X.; Jing, Y.; Cong, W.; Li, C.

    2018-04-01

    The data center is a data processing and application concept proposed in recent years. It is a new processing approach based on data, parallel computing, and compatibility with different hardware clusters. While optimizing the data storage management structure, it fully utilizes cluster computing nodes and improves the efficiency of parallel data applications. This paper used mature Hadoop technology to build a large-scale distributed image management architecture for remote sensing imagery. Using MapReduce parallel processing technology, it invokes many computing nodes to process image storage blocks and pyramids in the background, improving the efficiency of image reading and application and meeting the need for concurrent, multi-user, high-speed access to remotely sensed data. The rationality, reliability and superiority of the system design were verified by building an actual Hadoop service system, testing the storage efficiency for different image data and multiple users, and analyzing how the distributed storage architecture improves the application efficiency of remote sensing images.
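
    The sketch below illustrates, with assumed tile sizes and numpy arrays standing in for real imagery, the tiling idea behind such a storage scheme: a scene is cut into fixed-size blocks keyed by (pyramid level, row, column), which is the natural unit for distributed workers such as MapReduce tasks to process in parallel. It is not the paper's implementation.

    ```python
    # Toy tiling of one pyramid level into fixed-size storage blocks.
    import numpy as np

    def to_tiles(image, tile=256, level=0):
        """Yield ((level, row, col), block) pairs for one pyramid level."""
        rows, cols = image.shape
        for r in range(0, rows, tile):
            for c in range(0, cols, tile):
                yield (level, r // tile, c // tile), image[r:r + tile, c:c + tile]

    scene = np.zeros((1024, 1024), dtype=np.uint16)   # stand-in for real imagery
    keys = [k for k, _ in to_tiles(scene)]
    print(len(keys))   # 16 tiles of 256x256 for a 1024x1024 scene
    ```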

  8. Analyzing the Impact of Storage Shortage on Data Availability in Decentralized Online Social Networks

    Directory of Open Access Journals (Sweden)

    Songling Fu

    2014-01-01

    Maintaining data availability is one of the biggest challenges in decentralized online social networks (DOSNs). The existing work often assumes that the friends of a user can always contribute sufficient storage capacity to store all data. However, this assumption is not always true in today’s online social networks (OSNs) because users nowadays often access the OSNs from smart mobile devices. The limitation of the storage capacity in mobile devices may jeopardize the data availability. Therefore, it is desired to know the relation between the storage capacity contributed by the OSN users and the level of data availability that the OSNs can achieve. This paper addresses this issue. In this paper, the data availability model over storage capacity is established. Further, a novel method is proposed to predict the data availability on the fly. Extensive simulation experiments have been conducted to evaluate the effectiveness of the data availability model and the on-the-fly prediction.

  9. [Carbon storage of forest stands in Shandong Province estimated by forestry inventory data].

    Science.gov (United States)

    Li, Shi-Mei; Yang, Chuan-Qiang; Wang, Hong-Nian; Ge, Li-Qiang

    2014-08-01

    Based on the 7th forestry inventory data of Shandong Province, this paper estimated the carbon storage and carbon density of forest stands, and analyzed their distribution characteristics according to dominant tree species, age groups and forest category using the volume-derived biomass method and average-biomass method. In 2007, the total carbon storage of the forest stands was 25.27 Tg, of which the coniferous forests, mixed conifer broad-leaved forests, and broad-leaved forests accounted for 8.6%, 2.0% and 89.4%, respectively. The carbon storage of forest age groups followed the sequence of young forests > middle-aged forests > mature forests > near-mature forests > over-mature forests. The carbon storage of young forests and middle-aged forests accounted for 69.3% of the total carbon storage. Timber forests, non-timber product forests and protection forests accounted for 37.1%, 36.3% and 24.8% of the total carbon storage, respectively. The average carbon density of forest stands in Shandong Province was 10.59 t·hm⁻², which was lower than the national average level. This phenomenon was attributed to the imperfect structure of forest types and age groups, i.e., the notably higher percentage of timber forests and non-timber product forests and the excessively higher percentage of young forests and middle-aged forests than mature forests.
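
    As a worked illustration of the volume-derived biomass method named above, the snippet below converts stand volume to biomass with a linear expansion equation and then to carbon with a carbon fraction; the coefficients and the 0.5 carbon fraction are placeholder assumptions, not the values used in the study.

    ```python
    # Volume-derived biomass method, sketched with placeholder coefficients.
    def stand_carbon(volume_m3, a=0.9, b=10.0, carbon_fraction=0.5):
        biomass_t = a * volume_m3 + b        # volume-derived biomass (tonnes)
        return biomass_t * carbon_fraction   # carbon storage (tonnes C)

    area_ha = 1000.0
    carbon_t = stand_carbon(volume_m3=25000.0)
    print(carbon_t / area_ha)                # carbon density in t C per hectare
    ```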

  10. The realization of the storage of XML and middleware-based data of electronic medical records

    International Nuclear Information System (INIS)

    Liu Shuzhen; Gu Peidi; Luo Yanlin

    2007-01-01

    This paper uses XML and middleware technology to design and implement a unified electronic medical record storage and archive management system and presents a common storage management model. XML is used to describe the structure of electronic medical records and to transform the medical data from traditional 'business-centered' medical information into unified 'patient-centered' XML documents, while middleware technology shields the different database types used by the hospital departments and integrates the medical data scattered across those databases, which is conducive to information sharing between different hospitals. (authors)
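
    The sketch below illustrates the 'patient-centered' XML idea with Python's standard ElementTree: data held by separate business-oriented systems are re-expressed as one XML document per patient. The element and attribute names, and the example values, are illustrative assumptions, not the schema used by the authors.

    ```python
    # Toy "patient-centered" XML record assembled from department-level data.
    import xml.etree.ElementTree as ET

    record = ET.Element("patient", id="P000123")
    ET.SubElement(record, "demographics", name="EXAMPLE PATIENT", birth="1970-01-01")
    visit = ET.SubElement(record, "visit", date="2007-03-01", department="radiology")
    ET.SubElement(visit, "report").text = "example radiology report text"

    print(ET.tostring(record, encoding="unicode"))
    ```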

  11. Structured storage in ATLAS Distributed Data Management: use cases and experiences

    International Nuclear Information System (INIS)

    Lassnig, Mario; Garonne, Vincent; Beermann, Thomas; Dimitrov, Gancho; Canali, Luca; Molfetas, Angelos; Zang Donal; Azzurra Chinzer, Lisa

    2012-01-01

    The distributed data management system of the high-energy physics experiment ATLAS has a critical dependency on the Oracle Relational Database Management System. Recently, however, the increased appearance of data warehouse-like workload in the experiment has put considerable and increasing strain on the Oracle database. In particular, the analysis of archived data, and the aggregation of data for summary purposes has been especially demanding. For this reason, structured storage systems were evaluated to offload the Oracle database, and to handle processing of data in a non-transactional way. This includes distributed file systems like HDFS that support parallel execution of computational tasks on distributed data, as well as non-relational databases like HBase, Cassandra, or MongoDB. In this paper, the most important analysis and aggregation use cases of the data management system are presented, and how structured storage systems were established to process them.

  12. Development of Data Storage System for Portable Multichannel Analyzer using SD Card

    International Nuclear Information System (INIS)

    Suksompong, Tanate; Ngernvijit, Narippawaj; Sudprasert, Wanwisa

    2009-07-01

    Full text: The development of a data storage system for a portable multichannel analyzer (MCA) focused on the application of an SD card as a storage device, instead of older devices whose capacity could not easily be extended. The entire work consisted of two parts: the first part was the study of pulse detection through the design of the input pulse detecting circuit. The second part dealt with accuracy testing of the data storage system for the portable MCA, consisting of the design of the connecting circuit between the microcontroller and the SD card, the transfer of input pulse data to the SD card, and the ability of the data storage system to support radiation detection. It was found that the input pulse detecting circuit could detect the maximum voltage of the input pulse; the signal was then transferred to the microcontroller for data processing. The microcontroller could connect to the SD card via SPI mode. The portable MCA could correctly verify input signals ranging from 0.2 to 5.0 volts. The SD card could store the data as an .xls file which could easily be accessed by compatible software such as Microsoft Excel.

  13. ESGF and WDCC: The Double Structure of the Digital Data Storage at DKRZ

    Science.gov (United States)

    Toussaint, F.; Höck, H.

    2016-12-01

    For several years now, digital repositories in climate science have faced new challenges: international projects are global collaborations, and data storage has in parallel moved to federated, distributed storage systems like ESGF. For long-term archival (LTA) storage, on the other hand, communities, funders, and data users make stronger demands on data and metadata quality to facilitate data use and reuse. At DKRZ, this situation led to a twofold data dissemination system, which influences the administration, workflows, and sustainability of the data. The ESGF system is focused on the needs of users as partners in global projects. It includes replication tools, detailed global project standards, and efficient search for the data to download. In contrast, DKRZ's classical CERA LTA storage aims at long-term data holding and data curation as well as data reuse, requiring high metadata quality standards. In addition, a Digital Object Identifier publication service for the direct integration of research data in scientific publications has been implemented for LTA data. The editorial process at DKRZ-LTA ensures the quality of metadata and research data. The DOI and a citation code are provided and afterwards registered under DataCite's (datacite.org) regulations. In the overall data life cycle, continuous reliability of data and metadata quality is essential to allow data handling at the petabyte level, long-term usability of the data, and adequate publication of the results. These considerations lead to the question "What is quality?" - with respect to the data, the repository itself, the publisher, and the user. Global consensus is needed for these assessments, as the phases of the end-to-end workflow gear into each other: for data and metadata, checks need to go hand in hand with the processes of production and storage. The results can be judged following a Quality Maturity Matrix (QMM). Repositories can be certified according to their trustworthiness.

  14. An emerging network storage management standard: Media error monitoring and reporting information (MEMRI) - to determine optical tape data integrity

    Science.gov (United States)

    Podio, Fernando; Vollrath, William; Williams, Joel; Kobler, Ben; Crouse, Don

    1998-01-01

    Sophisticated network storage management applications are rapidly evolving to satisfy a market demand for highly reliable data storage systems with large data storage capacities and performance requirements. To preserve a high degree of data integrity, these applications must rely on intelligent data storage devices that can provide reliable indicators of data degradation. Error correction activity generally occurs within storage devices without notification to the host. Early indicators of degradation and media error monitoring and reporting (MEMR) techniques implemented in data storage devices allow network storage management applications to notify system administrators of these events and to take appropriate corrective actions before catastrophic errors occur. Although MEMR techniques have been implemented in data storage devices for many years, until 1996 no MEMR standards existed. In 1996 the American National Standards Institute (ANSI) approved the only known (world-wide) industry standard specifying MEMR techniques to verify stored data on optical disks. This industry standard was developed under the auspices of the Association for Information and Image Management (AIIM). A recently formed AIIM Optical Tape Subcommittee initiated the development of another data integrity standard specifying a set of media error monitoring tools and media error monitoring information (MEMRI) to verify stored data on optical tape media. This paper discusses the need for intelligent storage devices that can provide data integrity metadata, the content of the existing data integrity standard for optical disks, and the content of the MEMRI standard being developed by the AIIM Optical Tape Subcommittee.

  15. Data storage

    CERN Multimedia

    McIntosh, E.

    1972-01-01

    The workload of the CERN central computer system is dominated, in terms of number of jobs, by short jobs submitted from, terminals and remote batch stations. The arrival of the CDC 7600, which will be even more easily accessible and will give the user his results more quickly, is likely to accentuate this domination.

  16. Archiving and Managing Remote Sensing Data using State of the Art Storage Technologies

    Science.gov (United States)

    Lakshmi, B.; Chandrasekhara Reddy, C.; Kishore, S. V. S. R. K.

    2014-11-01

    Integrated Multi-mission Ground Segment for Earth Observation Satellites (IMGEOS) was established with an objective to eliminate human interaction to the maximum extent. All emergency data products will be delivered within an hour of acquisition through FTP delivery. All other standard data products will be delivered through FTP within a day. The IMGEOS activity was envisaged to reengineer the entire chain of operations at the ground segment facilities of NRSC at Shadnagar and Balanagar campuses to adopt an integrated multi-mission approach. To achieve this, the Information Technology Infrastructure was consolidated by implementing virtualized tiered storage and network computing infrastructure in a newly built Data Centre at Shadnagar Campus. One important activity that influences all other activities in the integrated multi-mission approach is the design of appropriate storage and network architecture for realizing all the envisaged operations in a highly streamlined, reliable and secure environment. Storage was consolidated based on the major factors like accessibility, long term data protection, availability, manageability and scalability. The broad operational activities are reception of satellite data, quick look, generation of browse, production of standard and value-added data products, production chain management, data quality evaluation, quality control and product dissemination. For each of these activities, there are numerous other detailed sub-activities and pre-requisite tasks that need to be implemented to support the above operations. The IMGEOS architecture has taken care of choosing the right technology for the given data sizes, their movement and long-term lossless retention policies. Operational costs of the solution are kept to the minimum possible. Scalability of the solution is also ensured. The main function of the storage is to receive and store the acquired satellite data, facilitate high-speed availability of the data for further

  17. Meta-Key: A Secure Data-Sharing Protocol under Blockchain-Based Decentralised Storage Architecture

    OpenAIRE

    Fu, Yue

    2017-01-01

    In this paper a secure data-sharing protocol under blockchain-based decentralised storage architecture is proposed, which fulfils users who need to share their encrypted data on-cloud. It implements a remote data-sharing mechanism that enables data owners to share their encrypted data to other users without revealing the original key. Nor do they have to download on-cloud data with re-encryption and re-uploading. Data security as well as efficiency are ensured by symmetric encryption, whose k...

  18. Data compilation report: Gas and liquid samples from K West Basin fuel storage canisters

    International Nuclear Information System (INIS)

    Trimble, D.J.

    1995-01-01

    Forty-one gas and liquid samples were taken from spent fuel storage canisters in the K West Basin during a March 1995 sampling campaign. (Spent fuel from the N Reactor is stored in sealed canisters at the bottom of the K West Basin.) A description of the sampling process, gamma energy analysis data, and quantitative gas mass spectroscopy data are documented. This documentation does not include data analysis

  19. The Earthscope USArray Array Network Facility (ANF): Evolution of Data Acquisition, Processing, and Storage Systems

    Science.gov (United States)

    Davis, G. A.; Battistuz, B.; Foley, S.; Vernon, F. L.; Eakins, J. A.

    2009-12-01

    Since April 2004 the Earthscope USArray Transportable Array (TA) network has grown to over 400 broadband seismic stations that stream multi-channel data in near real-time to the Array Network Facility in San Diego. In total, over 1.7 terabytes per year of 24-bit, 40 samples-per-second seismic and state of health data is recorded from the stations. The ANF provides analysts access to real-time and archived data, as well as state-of-health data, metadata, and interactive tools for station engineers and the public via a website. Additional processing and recovery of missing data from on-site recorders (balers) at the stations is performed before the final data is transmitted to the IRIS Data Management Center (DMC). Assembly of the final data set requires additional storage and processing capabilities to combine the real-time data with baler data. The infrastructure supporting these diverse computational and storage needs currently consists of twelve virtualized Sun Solaris Zones executing on nine physical server systems. The servers are protected against failure by redundant power, storage, and networking connections. Storage needs are provided by a hybrid iSCSI and Fiber Channel Storage Area Network (SAN) with access to over 40 terabytes of RAID 5 and 6 storage. Processing tasks are assigned to systems based on parallelization and floating-point calculation needs. On-site buffering at the data-loggers provide protection in case of short-term network or hardware problems, while backup acquisition systems at the San Diego Supercomputer Center and the DMC protect against catastrophic failure of the primary site. Configuration management and monitoring of these systems is accomplished with open-source (Cfengine, Nagios, Solaris Community Software) and commercial tools (Intermapper). In the evolution from a single server to multiple virtualized server instances, Sun Cluster software was evaluated and found to be unstable in our environment. Shared filesystem

  20. Comparison of data file and storage configurations for efficient temporal access of satellite image data

    CSIR Research Space (South Africa)

    Bachoo, A

    2009-01-01

    Traditional storage formats store such a series of images as a sequence of individual files, with each file internally storing the pixels in their spatial order. Consequently, the construction of a time series profile of a single pixel requires reading from...
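
    The access-pattern problem described above can be illustrated as follows: with one file per date, a single pixel's temporal profile touches every file, whereas re-organising the same values so that all dates of a pixel are contiguous turns the profile into one read. In this sketch, numpy arrays stand in for the per-date image files; the sizes are arbitrary assumptions.

    ```python
    # Per-date layout vs. time-major layout for extracting one pixel's time series.
    import numpy as np

    dates, rows, cols = 46, 400, 400
    per_date = [np.random.rand(rows, cols) for _ in range(dates)]   # one "file" per date

    # traditional layout: one read per file to build the temporal profile of (r, c)
    r, c = 120, 300
    profile_slow = np.array([img[r, c] for img in per_date])

    # time-major re-organisation: all dates of a pixel are contiguous on axis 2
    cube = np.stack(per_date, axis=-1)
    profile_fast = cube[r, c, :]          # single contiguous read

    assert np.allclose(profile_slow, profile_fast)
    ```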

  1. Edge-Based Efficient Search over Encrypted Data Mobile Cloud Storage.

    Science.gov (United States)

    Guo, Yeting; Liu, Fang; Cai, Zhiping; Xiao, Nong; Zhao, Ziming

    2018-04-13

    Smart sensor-equipped mobile devices sense, collect, and process data generated by the edge network to achieve intelligent control, but such mobile devices usually have limited storage and computing resources. Mobile cloud storage provides a promising solution owing to its rich storage resources, great accessibility, and low cost. But it also brings a risk of information leakage. The encryption of sensitive data is the basic step to resist the risk. However, deploying a high complexity encryption and decryption algorithm on mobile devices will greatly increase the burden of terminal operation and the difficulty to implement the necessary privacy protection algorithm. In this paper, we propose ENSURE (EfficieNt and SecURE), an efficient and secure encrypted search architecture over mobile cloud storage. ENSURE is inspired by edge computing. It allows mobile devices to offload the computation intensive task onto the edge server to achieve a high efficiency. Besides, to protect data security, it reduces the information acquisition of untrusted cloud by hiding the relevance between query keyword and search results from the cloud. Experiments on a real data set show that ENSURE reduces the computation time by 15% to 49% and saves the energy consumption by 38% to 69% per query.
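
    The sketch below is not the ENSURE protocol itself; it only illustrates the general idea of keyword search over encrypted data that the abstract builds on: the client derives keyed keyword tokens, an index maps tokens to document identifiers, and the stored documents themselves are symmetrically encrypted, so the storage side never sees plaintext keywords or their relevance to results. All names and keys are illustrative.

    ```python
    # Toy keyed-token index for searching without revealing plaintext keywords.
    import hmac
    import hashlib

    KEY = b"client-secret-key"   # held by the client only

    def token(keyword: str) -> str:
        return hmac.new(KEY, keyword.encode(), hashlib.sha256).hexdigest()

    # index built on the trusted (client/edge) side; values are document ids
    index = {token("holography"): ["doc7", "doc12"], token("storage"): ["doc3"]}

    def search(keyword: str):
        return index.get(token(keyword), [])

    print(search("storage"))   # ['doc3']
    ```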

  2. Unleashed Microactuators electrostatic wireless actuation for probe-based data storage

    NARCIS (Netherlands)

    Hoexum, A.M.

    2007-01-01

    Summary A hierarchical overview of the currently available data storage systems for desktop computer systems can be visualised as a pyramid in which the height represents both the price per bit and the access rate. The width of the pyramid represents the capacity of the medium. At the bottom slow,

  3. KEYNOTE ADDRESS: The role of standards in the emerging optical digital data disk storage systems market

    Science.gov (United States)

    Bainbridge, Ross C.

    1984-09-01

    The Institute for Computer Sciences and Technology at the National Bureau of Standards is pleased to cooperate with the International Society for Optical Engineering and to join with the other distinguished organizations in cosponsoring this conference on applications of optical digital data disk storage systems.

  4. myPhyloDB: a local web server for the storage and analysis of metagenomics data

    Science.gov (United States)

    myPhyloDB is a user-friendly personal database with a browser-interface designed to facilitate the storage, processing, analysis, and distribution of metagenomics data. MyPhyloDB archives raw sequencing files, and allows for easy selection of project(s)/sample(s) of any combination from all availab...

  5. Evaluating water storage variations in the MENA region using GRACE satellite data

    KAUST Repository

    Lopez, Oliver; Houborg, Rasmus; McCabe, Matthew

    2013-01-01

    estimates of water storage and fluxes over areas covering a minimum of 150,000 km2 (length scales of a few hundred kilometers) and thus prove to be a valuable tool for regional water resources management, particularly for areas with a lack of in-situ data

  6. Land Water Storage within the Congo Basin Inferred from GRACE Satellite Gravity Data

    Science.gov (United States)

    Crowley, John W.; Mitrovica, Jerry X.; Bailey, Richard C.; Tamisiea, Mark E.; Davis, James L.

    2006-01-01

    GRACE satellite gravity data are used to estimate terrestrial (surface plus ground) water storage within the Congo Basin in Africa for the period April 2002 - May 2006. These estimates exhibit significant seasonal (30 ± 6 mm of equivalent water thickness) and long-term trends, the latter yielding a total loss of approximately 280 km³ of water over the 50-month span of data. We also combine GRACE and precipitation data sets (CMAP, TRMM) to explore the relative contributions of the source term to the seasonal hydrological balance within the Congo Basin. We find that the seasonal water storage tends to saturate for anomalies greater than 30-44 mm of equivalent water thickness. Furthermore, precipitation contributed roughly three times the peak water storage after anomalously rainy seasons, in early 2003 and 2005, implying an approximately 60-70% loss from runoff and evapotranspiration. Finally, a comparison of residual land water storage (monthly estimates minus best-fitting trends) in the Congo and Amazon Basins shows an anticorrelation, in agreement with the 'see-saw' variability inferred by others from runoff data.
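
    The conversion used implicitly above, from an equivalent-water-thickness anomaly in millimetres to a water volume in cubic kilometres, is shown below; the Congo Basin area used here (about 3.7 million km²) is an approximate, assumed value for illustration.

    ```python
    # Equivalent water thickness (mm) over a basin area (km^2) -> volume (km^3).
    def ewt_mm_to_km3(thickness_mm, area_km2=3.7e6):
        return thickness_mm * 1e-6 * area_km2   # mm -> km, times area

    print(ewt_mm_to_km3(30))   # a 30 mm seasonal anomaly corresponds to ~111 km^3
    ```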

  7. GraphStore: A Distributed Graph Storage System for Big Data Networks

    Science.gov (United States)

    Martha, VenkataSwamy

    2013-01-01

    Networks, such as social networks, are a universal solution for modeling complex problems in real time, especially in the Big Data community. While previous studies have attempted to enhance network processing algorithms, none have paved a path for the development of a persistent storage system. The proposed solution, GraphStore, provides an…

  8. Pulse-modulated multilevel data storage in an organic ferroelectric resistive memory diode

    NARCIS (Netherlands)

    Lee, J.; Breemen, A.J.J.M. van; Khikhlovskyi, V.; Kemerink, M.; Janssen, R.A.J.; Gelinck, G.H.

    2016-01-01

    We demonstrate multilevel data storage in organic ferroelectric resistive memory diodes consisting of a phase-separated blend of P(VDF-TrFE) and a semiconducting polymer. The dynamic behaviour of the organic ferroelectric memory diode can be described in terms of the inhomogeneous field mechanism

  9. Edge-Based Efficient Search over Encrypted Data Mobile Cloud Storage

    Directory of Open Access Journals (Sweden)

    Yeting Guo

    2018-04-01

    Smart sensor-equipped mobile devices sense, collect, and process data generated by the edge network to achieve intelligent control, but such mobile devices usually have limited storage and computing resources. Mobile cloud storage provides a promising solution owing to its rich storage resources, great accessibility, and low cost. But it also brings a risk of information leakage. The encryption of sensitive data is the basic step to resist the risk. However, deploying a high complexity encryption and decryption algorithm on mobile devices will greatly increase the burden of terminal operation and the difficulty to implement the necessary privacy protection algorithm. In this paper, we propose ENSURE (EfficieNt and SecURE), an efficient and secure encrypted search architecture over mobile cloud storage. ENSURE is inspired by edge computing. It allows mobile devices to offload the computation intensive task onto the edge server to achieve a high efficiency. Besides, to protect data security, it reduces the information acquisition of untrusted cloud by hiding the relevance between query keyword and search results from the cloud. Experiments on a real data set show that ENSURE reduces the computation time by 15% to 49% and saves the energy consumption by 38% to 69% per query.

  10. Edge-Based Efficient Search over Encrypted Data Mobile Cloud Storage

    Science.gov (United States)

    Liu, Fang; Cai, Zhiping; Xiao, Nong; Zhao, Ziming

    2018-01-01

    Smart sensor-equipped mobile devices sense, collect, and process data generated by the edge network to achieve intelligent control, but such mobile devices usually have limited storage and computing resources. Mobile cloud storage provides a promising solution owing to its rich storage resources, great accessibility, and low cost. But it also brings a risk of information leakage. The encryption of sensitive data is the basic step to resist the risk. However, deploying a high complexity encryption and decryption algorithm on mobile devices will greatly increase the burden of terminal operation and the difficulty to implement the necessary privacy protection algorithm. In this paper, we propose ENSURE (EfficieNt and SecURE), an efficient and secure encrypted search architecture over mobile cloud storage. ENSURE is inspired by edge computing. It allows mobile devices to offload the computation intensive task onto the edge server to achieve a high efficiency. Besides, to protect data security, it reduces the information acquisition of untrusted cloud by hiding the relevance between query keyword and search results from the cloud. Experiments on a real data set show that ENSURE reduces the computation time by 15% to 49% and saves the energy consumption by 38% to 69% per query. PMID:29652810

  11. Effective grouping for energy and performance: Construction of adaptive, sustainable, and maintainable data storage

    Science.gov (United States)

    Essary, David S.

    The performance gap between processors and storage systems has been increasingly critical over the years. Yet the performance disparity remains, and further, storage energy consumption is rapidly becoming a new critical problem. While smarter caching and predictive techniques do much to alleviate this disparity, the problem persists, and data storage remains a growing contributor to latency and energy consumption. Attempts have been made at data layout maintenance, or intelligent physical placement of data, yet in practice, basic heuristics remain predominant. Problems that early studies sought to solve via layout strategies were proven to be NP-Hard, and data layout maintenance today remains more art than science. With unknown potential and a domain inherently full of uncertainty, layout maintenance persists as an area largely untapped by modern systems. But uncertainty in workloads does not imply randomness; access patterns have exhibited repeatable, stable behavior. Predictive information can be gathered, analyzed, and exploited to improve data layouts. Our goal is a dynamic, robust, sustainable predictive engine, aimed at improving existing layouts by replicating data at the storage device level. We present a comprehensive discussion of the design and construction of such a predictive engine, including workload evaluation, where we present and evaluate classical workloads as well as our own highly detailed traces collected over an extended period. We demonstrate significant gains through an initial static grouping mechanism, and compare against an optimal grouping method of our own construction, and further show significant improvement over competing techniques. We also explore and illustrate the challenges faced when moving from static to dynamic (i.e. online) grouping, and provide motivation and solutions for addressing these challenges. These challenges include metadata storage, appropriate predictive collocation, online performance, and physical placement

  12. Cloud object store for archive storage of high performance computing data using decoupling middleware

    Science.gov (United States)

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2015-06-30

    Cloud object storage is enabled for archived data, such as checkpoints and results, of high performance computing applications using a middleware process. A plurality of archived files, such as checkpoint files and results, generated by a plurality of processes in a parallel computing system are stored by obtaining the plurality of archived files from the parallel computing system; converting the plurality of archived files to objects using a log structured file system middleware process; and providing the objects for storage in a cloud object storage system. The plurality of processes may run, for example, on a plurality of compute nodes. The log structured file system middleware process may be embodied, for example, as a Parallel Log-Structured File System (PLFS). The log structured file system middleware process optionally executes on a burst buffer node.

  13. Determination of the size of an imaging data storage device at a full PACS hospital

    International Nuclear Information System (INIS)

    Cha, S. J.; Kim, Y. H.; Hur, G.

    2000-01-01

    To determine the appropriate size of short- and long-term storage devices, bearing in mind the design factors involved and the installation costs. The number of radiologic studies quoted is the number undertaken during a one-year period at a university hospital with 650 beds, and reflects the actual number of each type of examination performed at a full PACS hospital. The average daily number of outpatients was 1586, while that of inpatients was 639.5. The numbers of radiologic studies performed were as follows: 378 among 189 outpatients, and 165 among 41 inpatients. The average daily number of examinations was 543, comprising 460 CR, 30 ultrasonograms, 25 CT, 8 MRI, and 20 others. The total amount of digital image data was 17.4 GB per day, while the amount of short-term data with lossless compression was 6.7 GB per day. Over the 14-day short-term storage period, the amount of image data held in the disk array was 93.7 GB. The amount of data stored mid-term (1 year), with lossy compression, was 369.1 GB. The amounts of data stored as long-term cache and as educational images were 38.7 GB and 30 GB, respectively. The total size of the disk array was 531.5 GB. A device suitable for the long-term storage of images, for at least five years, requires a capacity of 1845.5 GB. At a full PACS hospital with 600 beds, the minimum disk space required for the short- and mid-term storage of image data in a disk array is 540 GB. The capacity required for long-term storage (at least five years) is 1900 GB. (author)
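
    The sizing arithmetic in this abstract can be reproduced directly; the daily volumes, retention periods, and compression assumptions below are the ones quoted above, and the rounding up to 540 GB and 1900 GB is the authors'. A minimal sketch:

      # Storage sizing arithmetic from the abstract above.
      raw_per_day_gb      = 17.4   # all modalities, uncompressed, per day
      lossless_per_day_gb = 6.7    # short-term copy, lossless compression, per day
      short_term_days     = 14
      mid_term_gb         = 369.1  # 1 year of lossy-compressed data
      cache_gb            = 38.7   # long-term cache
      teaching_gb         = 30.0   # educational images

      short_term_gb  = lossless_per_day_gb * short_term_days            # ~93.8 GB
      disk_array_gb  = short_term_gb + mid_term_gb + cache_gb + teaching_gb
      long_term_5y_gb = mid_term_gb * 5                                 # ~1845.5 GB

      print(f"short-term (14 d): {short_term_gb:.1f} GB")
      print(f"disk array total : {disk_array_gb:.1f} GB")    # ~531.5 GB, rounded up to 540 GB
      print(f"long-term (5 y)  : {long_term_5y_gb:.1f} GB")  # rounded up to 1900 GB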

  14. Revised cloud storage structure for light-weight data archiving in LHD

    International Nuclear Information System (INIS)

    Nakanishi, Hideya; Masaki, Ohsuna; Mamoru, Kojima; Setsuo, Imazu; Miki, Nonomura; Masahiko, Emoto; Takashi, Yamamoto; Yoshio, Nagayama; Takahisa, Ozeki; Noriyoshi, Nakajima; Katsumi, Ida; Osamu, Kaneko

    2014-01-01

    Highlights: • GlusterFS is adopted to replace IznaStor cloud storage in LHD. • GlusterFS and OpenStack/Swift are compared. • SSD-based GlusterFS distributed replicated volume is separated from normal RAID storage. • LABCOM system changes the storage technology every 4 years for cost efficiency. - Abstract: The LHD data archiving system has newly selected the GlusterFS distributed filesystem to replace the present cloud storage software named “IznaStor/dSS”. Even though the prior software provided many favorable functionalities, such as hot plug-and-play node insertion, internal auto-replication of data files, and symmetric load balancing between all member nodes, it proved poor at recovering from an accidental malfunction of a storage node. Once a failure happened, the recovery process usually took at least several days, or sometimes more than a week, with a heavy CPU load. In some cases the nodes fell into the so-called “split-brain” or “amnesia” condition and could not recover from it. Since the recovery time depends strongly on the capacity of the failed node, individual HDD management is more desirable than large HDD arrays. In addition, the dynamic mutual awareness of data location information can be dropped if some other static data distribution method is applied. In this study, the candidate middleware “OpenStack/Swift” and “GlusterFS” were tested using the real mass of LHD data for more than half a year, and GlusterFS was finally selected to replace the present IznaStor. It implements only limited cloud storage functionality in a simplified RAID10-like structure, which may consequently provide lighter-weight read/write ability. Since the LABCOM data system is implemented to be independent of the storage structure, it is easy to unplug IznaStor and plug in the new GlusterFS. The effective I/O speed is also confirmed to be on the same level as the estimated one from raw

  15. Revised cloud storage structure for light-weight data archiving in LHD

    Energy Technology Data Exchange (ETDEWEB)

    Nakanishi, Hideya, E-mail: nakanisi@nifs.ac.jp [National Institute for Fusion Science, 322-6 Oroshi-cho, Toki, Gifu 509-5292 (Japan); Masaki, Ohsuna; Mamoru, Kojima; Setsuo, Imazu; Miki, Nonomura; Masahiko, Emoto; Takashi, Yamamoto; Yoshio, Nagayama [National Institute for Fusion Science, 322-6 Oroshi-cho, Toki, Gifu 509-5292 (Japan); Takahisa, Ozeki [Japan Atomic Energy Agency, 801-1 Mukoyama, Naka, Ibaraki 311-0193 (Japan); Noriyoshi, Nakajima; Katsumi, Ida; Osamu, Kaneko [National Institute for Fusion Science, 322-6 Oroshi-cho, Toki, Gifu 509-5292 (Japan)

    2014-05-15

    Highlights: • GlusterFS is adopted to replace IznaStor cloud storage in LHD. • GlusterFS and OpenStack/Swift are compared. • SSD-based GlusterFS distributed replicated volume is separated from normal RAID storage. • LABCOM system changes the storage technology every 4 years for cost efficiency. - Abstract: The LHD data archiving system has newly selected the GlusterFS distributed filesystem to replace the present cloud storage software named “IznaStor/dSS”. Even though the prior software provided many favorable functionalities, such as hot plug-and-play node insertion, internal auto-replication of data files, and symmetric load balancing between all member nodes, it proved poor at recovering from an accidental malfunction of a storage node. Once a failure happened, the recovery process usually took at least several days, or sometimes more than a week, with a heavy CPU load. In some cases the nodes fell into the so-called “split-brain” or “amnesia” condition and could not recover from it. Since the recovery time depends strongly on the capacity of the failed node, individual HDD management is more desirable than large HDD arrays. In addition, the dynamic mutual awareness of data location information can be dropped if some other static data distribution method is applied. In this study, the candidate middleware “OpenStack/Swift” and “GlusterFS” were tested using the real mass of LHD data for more than half a year, and GlusterFS was finally selected to replace the present IznaStor. It implements only limited cloud storage functionality in a simplified RAID10-like structure, which may consequently provide lighter-weight read/write ability. Since the LABCOM data system is implemented to be independent of the storage structure, it is easy to unplug IznaStor and plug in the new GlusterFS. The effective I/O speed is also confirmed to be on the same level as the estimated one from raw

  16. NEON's Eddy-Covariance Storage Exchange: from Tower to Data Portal

    Science.gov (United States)

    Durden, N. P.; Luo, H.; Xu, K.; Metzger, S.; Durden, D.

    2017-12-01

    NEON's eddy-covariance storage exchange system (ECSE) consists of a suite of sensors including temperature sensors, a CO2 and H2O gas analyzer, and isotopic CO2 and H2O analyzers. NEON's ECSE was developed to provide the vertical profile measurements of temperature, CO2 and H2O concentrations, the stable isotope ratios in CO2 (δ13C) and H2O (δ18O and δ2H) in the atmosphere. The profiles of temperature and concentrations of CO2 and H2O are key to calculating storage fluxes for eddy-covariance tower sites. Storage fluxes have a strong diurnal cycle and can be large in magnitude, especially at temporal scales less than one day. However, the storage term is often neglected in flux computations. To obtain accurate eddy-covariance fluxes, the storage fluxes are calculated and incorporated into the calculations of net surface-atmosphere ecosystem exchange of heat, CO2, and H2O for each NEON tower site. Once the ECSE raw data (Level 0, or L0) is retrieved at NEON's headquarters, it is preconditioned through a sequence of unit conversion, time regularization, and plausibility tests. By utilizing NEON's eddy4R framework (Metzger et al., 2017), higher-level data products are generated, including: Level 1 (L1): Measurement-level specific averages of temperature and concentrations of CO2 and H2O. Level 2 (L2): Time rate of change of temperature and concentrations of CO2 and H2O over 30 min at each measurement level along the vertical tower profile. Level 3 (L3): Time rate of change of temperature and concentrations of CO2 and H2O over 30 min (L2), spatially interpolated along the vertical tower profile. Level 4 (L4): Storage fluxes of heat, CO2, and H2O calculated from the integrated time rate of change spatially interpolated profile (L3). The L4 storage fluxes are combined with turbulent fluxes to calculate the net surface-atmosphere ecosystem exchange of heat, CO2, and H2O. Moreover, a final quality flag and uncertainty budget are produced individually for each data stream
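
    The L2-to-L4 step described above amounts to integrating the time rate of change of CO2 (or H2O, or temperature) over the height of the tower profile. The sketch below is a simplified illustration only; the measurement heights, rates of change, and units are assumed, and the real eddy4R processing includes interpolation choices, unit handling, and quality flagging not shown here.

      # Simplified illustration of the L2 -> L4 storage-flux step (assumed values).
      import numpy as np

      heights_m = np.array([0.5, 2.0, 8.0, 20.0, 40.0])     # profile measurement levels (assumed)
      # L2: 30-min rate of change of CO2 molar density at each level, umol m^-3 s^-1 (assumed)
      dcdt_levels = np.array([0.012, 0.010, 0.007, 0.004, 0.002])

      # L3: spatially interpolate the rates onto a fine vertical grid up to the tower top.
      z = np.linspace(0.0, heights_m[-1], 200)
      dcdt_profile = np.interp(z, heights_m, dcdt_levels)

      # L4: storage flux = vertical integral of dc/dt, umol m^-2 s^-1 (trapezoid rule).
      storage_flux = np.sum(0.5 * (dcdt_profile[1:] + dcdt_profile[:-1]) * np.diff(z))
      print(f"CO2 storage flux ~ {storage_flux:.3f} umol m^-2 s^-1")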

  17. Using Object Storage Technology vs Vendor Neutral Archives for an Image Data Repository Infrastructure.

    Science.gov (United States)

    Bialecki, Brian; Park, James; Tilkin, Mike

    2016-08-01

    The intent of this project was to use object storage and its database, which has the ability to add custom extensible metadata to an imaging object being stored within the system, to harness the power of its search capabilities, and to close the technology gap that healthcare faces. This creates a non-disruptive tool that can be used natively by both legacy systems and the healthcare systems of today which leverage more advanced storage technologies. The base infrastructure can be populated alongside current workflows without any interruption to the delivery of services. In certain use cases, this technology can be seen as a true alternative to the VNA (Vendor Neutral Archive) systems implemented by healthcare today. The scalability, security, and ability to process complex objects make this more than just storage for image data and a commodity to be consumed by PACS (Picture Archiving and Communication System) and workstations. Object storage is a smart technology that can be leveraged to create vendor independence, standards compliance, and a data repository that can be mined for truly relevant content by adding additional context to search capabilities. This functionality can lead to efficiencies in workflow and a wealth of minable data to improve outcomes into the future.
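
    As an illustration of the core idea (custom, extensible metadata stored alongside the imaging object itself), the sketch below writes an imaging file to a generic S3-compatible object store with user-defined key-value metadata and reads the metadata back without touching the pixel data. It is not the system described in the article; the endpoint, bucket, key, and metadata fields are hypothetical.

      # Illustrative only: extensible metadata on an object in a generic
      # S3-compatible store (endpoint, bucket, key, and fields are hypothetical).
      import boto3

      s3 = boto3.client("s3", endpoint_url="https://objectstore.example.org")

      dicom_bytes = b"\x00" * 1024   # placeholder for the DICOM file contents
      s3.put_object(
          Bucket="imaging-repository",
          Key="radiology/2016/study_001.dcm",
          Body=dicom_bytes,
          Metadata={                  # custom, extensible key-value metadata
              "modality": "CT",
              "body-part": "CHEST",
              "study-uid": "1.2.840.113619.2.55.3",
          },
      )

      # The metadata travels with the object and can be read back (or indexed by a
      # search layer) without downloading or parsing the pixel data.
      head = s3.head_object(Bucket="imaging-repository",
                            Key="radiology/2016/study_001.dcm")
      print(head["Metadata"])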

  18. Using RFID to Enhance Security in Off-Site Data Storage

    Directory of Open Access Journals (Sweden)

    Enrique de la Hoz

    2010-08-01

    Off-site data storage is one of the most widely used strategies in enterprises of all sizes to improve business continuity. In medium-to-large size enterprises, the off-site data storage processes are usually outsourced to specialized providers. However, outsourcing the storage of critical business information assets raises serious security considerations, some of which are usually either disregarded or incorrectly addressed by service providers. This article reviews these security considerations and presents a radio frequency identification (RFID)-based, off-site data storage management system specifically designed to address security issues. The system relies on a set of security mechanisms or controls that are arranged in security layers or tiers to balance security requirements with usability and costs. The system has been successfully implemented, deployed and put into production. In addition, an experimental comparison with classical bar-code-based systems is provided, demonstrating the system’s benefits in terms of efficiency and failure prevention.

  19. Selective phase masking to reduce material saturation in holographic data storage systems

    Science.gov (United States)

    Phillips, Seth; Fair, Ivan

    2014-09-01

    Emerging networks and applications require enormous data storage. Holographic techniques promise high-capacity storage, given resolution of a few remaining technical issues. In this paper, we propose a technique to overcome one such issue: mitigation of large magnitude peaks in the stored image that cause material saturation resulting in readout errors. We consider the use of ternary data symbols, with modulation in amplitude and phase, and use a phase mask during the encoding stage to reduce the probability of large peaks arising in the stored Fourier domain image. An appropriate mask is selected from a predefined set of pseudo-random masks by computing the Fourier transform of the raw data array as well as the data array multiplied by each mask. The data array or masked array with the lowest Fourier domain peak values is recorded. On readout, the recorded array is multiplied by the mask used during recording to recover the original data array. Simulations are presented that demonstrate the benefit of this approach, and provide insight into the appropriate number of phase masks to use in high capacity holographic data storage systems.
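
    A minimal numerical sketch of the mask-selection step described above: compute the Fourier transform of the raw data page and of the page multiplied by each predefined pseudo-random phase mask, then record whichever candidate has the smallest Fourier-domain peak. Page size, mask count, and the ternary alphabet below are assumptions, not the authors' parameters.

      # Minimal sketch of selective phase masking (assumed sizes and alphabet).
      import numpy as np

      rng = np.random.default_rng(0)
      N, n_masks = 64, 8

      # Ternary data page: symbols from {0, +1, -1} (amplitude/phase modulation).
      data = rng.choice([0.0, 1.0, -1.0], size=(N, N))

      # Predefined set of pseudo-random phase masks (unit-magnitude phasors).
      masks = np.exp(1j * rng.uniform(0, 2 * np.pi, size=(n_masks, N, N)))

      def fourier_peak(page):
          """Largest magnitude in the Fourier-domain image of a data page."""
          return np.abs(np.fft.fft2(page)).max()

      candidates = [data] + [data * m for m in masks]
      peaks = [fourier_peak(c) for c in candidates]
      best = int(np.argmin(peaks))

      print("unmasked peak:", peaks[0])
      print("best candidate:", best, "peak:", peaks[best])
      # On readout, multiplying the recovered array by the conjugate of the selected
      # mask (whose index is stored with the page) restores the original symbols.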

  20. Managing security and privacy concerns over data storage in healthcare research.

    Science.gov (United States)

    Mackenzie, Isla S; Mantay, Brian J; McDonnell, Patrick G; Wei, Li; MacDonald, Thomas M

    2011-08-01

    Issues surrounding data security and privacy are of great importance when handling sensitive health-related data for research. The emphasis in the past has been on balancing the risks to individuals with the benefit to society of the use of databases for research. However, a new way of looking at such issues is that by optimising procedures and policies regarding security and privacy of data to the extent that there is no appreciable risk to the privacy of individuals, we can create a 'win-win' situation in which everyone benefits, and pharmacoepidemiological research can flourish with public support. We discuss holistic measures, involving both information technology and people, taken to improve the security and privacy of data storage. After an internal review, we commissioned an external audit by an independent consultant with a view to optimising our data storage and handling procedures. Improvements to our policies and procedures were implemented as a result of the audit. By optimising our storage of data, we hope to inspire public confidence and hence cooperation with the use of health care data in research. Copyright © 2011 John Wiley & Sons, Ltd.

  1. An Empirical Study on Android for Saving Non-shared Data on Public Storage

    OpenAIRE

    Liu, Xiangyu; Zhou, Zhe; Diao, Wenrui; Li, Zhou; Zhang, Kehuan

    2014-01-01

    With millions of apps that can be downloaded from official or third-party markets, Android has become one of the most popular mobile platforms today. These apps help people in all kinds of ways and thus have access to lots of users' data that in general fall into three categories: sensitive data, data to be shared with other apps, and non-sensitive data not to be shared with others. For the first and second type of data, Android has provided very good storage models: an app's private sensitive...

  2. Compression and decompression of digital seismic waveform data for storage and communication

    International Nuclear Information System (INIS)

    Bhadauria, Y.S.; Kumar, Vijai

    1991-01-01

    Two different classes of data compression schemes, namely physical data compression schemes and logical data compression schemes, are examined for their use in storage and communication of digital seismic waveform data. In physical data compression schemes, the physical size of the waveform is reduced. One, therefore, gets only a broad picture of the original waveform when the data are retrieved and the waveform is reconstituted. Correlation between original and decompressed waveform varies inversely with the data compression ratio. In the logical data compression schemes, the data are stored in a logically encoded form. Storage of unnecessary characters like blank space is avoided. On decompression, the original data are retrieved and compression error is nil. Three algorithms of logical data compression schemes have been developed and studied. These are: 1) optimum formatting schemes, 2) differential bit reduction scheme, and 3) six bit compression scheme. Results of the above three algorithms of the logical compression class are compared with those of physical compression schemes reported in literature. It is found that for all types of data, the six bit compression scheme gives the highest value of data compression ratio. (author). 6 refs., 8 figs., 1 appendix, 2 tabs
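
    The abstract does not spell out the three logical schemes, so the sketch below only illustrates the general idea behind differential (delta) encoding: neighbouring waveform samples are strongly correlated, so storing first differences yields small integers that need fewer bits, and decompression recovers the original samples exactly. It is an assumption-laden illustration, not the paper's six bit or differential bit reduction algorithm.

      # Generic lossless delta-encoding sketch (not the paper's exact schemes).
      import numpy as np

      def delta_encode(samples: np.ndarray) -> np.ndarray:
          # First element keeps the first sample; the rest are successive differences.
          return np.diff(samples, prepend=0)

      def delta_decode(deltas: np.ndarray) -> np.ndarray:
          # Cumulative sum exactly inverts the differencing.
          return np.cumsum(deltas)

      samples = np.array([1200, 1203, 1207, 1206, 1199, 1190], dtype=np.int32)
      deltas = delta_encode(samples)
      restored = delta_decode(deltas)

      assert np.array_equal(samples, restored)   # lossless: compression error is nil
      print("samples:", samples.tolist())
      print("deltas :", deltas.tolist())         # small values -> fewer bits to store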

  3. On-Chip Fluorescence Switching System for Constructing a Rewritable Random Access Data Storage Device.

    Science.gov (United States)

    Nguyen, Hoang Hiep; Park, Jeho; Hwang, Seungwoo; Kwon, Oh Seok; Lee, Chang-Soo; Shin, Yong-Beom; Ha, Tai Hwan; Kim, Moonil

    2018-01-10

    We report the development of an on-chip fluorescence switching system based on DNA strand displacement and DNA hybridization for the construction of a rewritable and randomly accessible data storage device. In this study, the feasibility and potential effectiveness of our proposed system were evaluated with a series of wet experiments involving 40 bits (5 bytes) of data encoding a five-character text (KRIBB). Also, a flexible data rewriting function was achieved by converting fluorescence signals between "ON" and "OFF" through DNA strand displacement and hybridization events. In addition, the proposed system was successfully validated on a microfluidic chip which could further facilitate the encoding and decoding process of data. To the best of our knowledge, this is the first report on the use of DNA hybridization and DNA strand displacement in the field of data storage devices. Taken together, our results demonstrated that DNA-based fluorescence switching could be applicable to construct a rewritable and randomly accessible data storage device through controllable DNA manipulations.
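
    The 40-bit figure follows directly from encoding the five characters of "KRIBB" at 8 bits each; the mapping of fluorescence ON/OFF states to binary 1/0 in the snippet below is an assumed convention for illustration.

      # 5 characters x 8 bits = 40 bits; ON/OFF -> 1/0 is an assumed convention.
      text = "KRIBB"
      bits = "".join(f"{ord(c):08b}" for c in text)

      print(len(bits), "bits")   # 40
      print(bits)                # '0100101101010010...' (K = 0x4B, R = 0x52, ...)

      decoded = "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))
      assert decoded == text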

  4. Rack Aware Data Placement for Network Consumption in Erasure-Coded Clustered Storage Systems

    Directory of Open Access Journals (Sweden)

    Bilin Shao

    2018-06-01

    The amount of encoded data replication in an erasure-coded clustered storage system has a great impact on bandwidth consumption and network latency, mostly during data reconstruction. To address the causes of excess data transmission between racks, a rack-aware data block placement method is proposed. In order to ensure rack-level fault tolerance and reduce the frequency and amount of cross-rack data transmission during data reconstruction, the method uses partial data block concentration to store the data blocks of a file in fewer racks. Theoretical analysis and simulation results show that the proposed strategy greatly reduces the frequency and data volume of cross-rack transmission during data reconstruction. At the same time, it performs better than the typical random distribution method in terms of network usage and data reconstruction efficiency.
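
    The abstract describes "partial data block concentration" only at a high level, so the sketch below shows one plausible reading under stated assumptions: with a (k, m) erasure code a stripe survives the loss of any m blocks, so rack-level fault tolerance only requires that no rack hold more than m blocks of a stripe, and packing exactly m blocks per rack uses the fewest racks and keeps most reconstruction reads local. This is not necessarily the authors' exact placement rule.

      # Hedged sketch of "partial data block concentration" (one plausible reading).
      import math

      def place_stripe(k: int, m: int, racks: list[str]) -> dict[str, list[str]]:
          blocks = [f"D{i}" for i in range(k)] + [f"P{i}" for i in range(m)]
          racks_needed = math.ceil(len(blocks) / m)
          if racks_needed > len(racks):
              raise ValueError("not enough racks for rack-level fault tolerance")
          placement: dict[str, list[str]] = {r: [] for r in racks[:racks_needed]}
          for i, blk in enumerate(blocks):
              placement[racks[i // m]].append(blk)   # at most m blocks per rack
          return placement

      # Example: an RS(6, 3) stripe concentrated onto three racks, 3 blocks each;
      # losing any single rack loses at most m = 3 blocks, so the stripe survives.
      print(place_stripe(6, 3, ["r0", "r1", "r2", "r3"]))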

  5. Conceptual design report: Nuclear materials storage facility renovation. Part 7, Estimate data

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-07-14

    The Nuclear Materials Storage Facility (NMSF) at the Los Alamos National Laboratory (LANL) was a Fiscal Year (FY) 1984 line-item project completed in 1987 that has never been operated because of major design and construction deficiencies. This renovation project, which will correct those deficiencies and allow operation of the facility, is proposed as an FY 97 line item. The mission of the project is to provide centralized intermediate and long-term storage of special nuclear materials (SNM) associated with defined LANL programmatic missions and to establish a centralized SNM shipping and receiving location for Technical Area (TA)-55 at LANL. Based on current projections, existing storage space for SNM at other locations at LANL will be loaded to capacity by approximately 2002. This will adversely affect LANL's ability to meet its mission requirements in the future. The affected missions include LANL's weapons research, development, and testing (WRD&T) program; special materials recovery; stockpile surveillance/evaluation; advanced fuels and heat sources development and production; and safe, secure storage of existing nuclear materials inventories. The problem is further exacerbated by LANL's inability to ship any materials offsite because of the lack of receiver sites for material and regulatory issues. Correction of the current deficiencies and enhancement of the facility will provide centralized storage close to a nuclear materials processing facility. The project will enable long-term, cost-effective storage in a secure environment with reduced radiation exposure to workers, and eliminate potential exposures to the public. This report is organized according to the sections and subsections outlined by Attachment III-2 of DOE Document AL 4700.1, Project Management System. It is organized into seven parts. This document, Part VII - Estimate Data, contains the project cost estimate information.

  6. Conceptual design report: Nuclear materials storage facility renovation. Part 7, Estimate data

    International Nuclear Information System (INIS)

    1995-01-01

    The Nuclear Materials Storage Facility (NMSF) at the Los Alamos National Laboratory (LANL) was a Fiscal Year (FY) 1984 line-item project completed in 1987 that has never been operated because of major design and construction deficiencies. This renovation project, which will correct those deficiencies and allow operation of the facility, is proposed as an FY 97 line item. The mission of the project is to provide centralized intermediate and long-term storage of special nuclear materials (SNM) associated with defined LANL programmatic missions and to establish a centralized SNM shipping and receiving location for Technical Area (TA)-55 at LANL. Based on current projections, existing storage space for SNM at other locations at LANL will be loaded to capacity by approximately 2002. This will adversely affect LANL's ability to meet its mission requirements in the future. The affected missions include LANL's weapons research, development, and testing (WRD&T) program; special materials recovery; stockpile surveillance/evaluation; advanced fuels and heat sources development and production; and safe, secure storage of existing nuclear materials inventories. The problem is further exacerbated by LANL's inability to ship any materials offsite because of the lack of receiver sites for material and regulatory issues. Correction of the current deficiencies and enhancement of the facility will provide centralized storage close to a nuclear materials processing facility. The project will enable long-term, cost-effective storage in a secure environment with reduced radiation exposure to workers, and eliminate potential exposures to the public. This report is organized according to the sections and subsections outlined by Attachment III-2 of DOE Document AL 4700.1, Project Management System. It is organized into seven parts. This document, Part VII - Estimate Data, contains the project cost estimate information

  7. The TDR: A Repository for Long Term Storage of Geophysical Data and Metadata

    Science.gov (United States)

    Wilson, A.; Baltzer, T.; Caron, J.

    2006-12-01

    For many years Unidata has provided easy, low cost data access to universities and research labs. Historically Unidata technology provided access to data in near real time. In recent years Unidata has additionally turned to providing middleware to serve longer term data and associated metadata via its THREDDS technology, the most recent offering being the THREDDS Data Server (TDS). The TDS provides middleware for metadata access and management, OPeNDAP data access, and integration with the Unidata Integrated Data Viewer (IDV), among other benefits. The TDS was designed to support rolling archives of data, that is, data that exist only for a relatively short, predefined time window. Now we are creating an addition to the TDS, called the THREDDS Data Repository (TDR), which allows users to store and retrieve data and other objects for an arbitrarily long time period. Data in the TDR can also be served by the TDS. The TDR performs important functions of locating storage for the data, moving the data to and from the repository, assigning unique identifiers, and generating metadata. The TDR framework supports pluggable components that allow tailoring an implementation for a particular application. The Linked Environments for Atmospheric Discovery (LEAD) project provides an excellent use case for the TDR. LEAD is a multi-institutional Large Information Technology Research project funded by the National Science Foundation (NSF). The goal of LEAD is to create a framework based on Grid and Web Services to support mesoscale meteorology research and education. This includes capabilities such as launching forecast models, mining data for meteorological phenomena, and dynamic workflows that are automatically reconfigurable in response to changing weather. LEAD presents unique challenges in managing and storing large data volumes from real-time observational systems as well as data that are dynamically created during the execution of adaptive workflows. For example, in order to

  8. Sustainable storage of data. Energy conservation by sustainable storage in colleges; Duurzame opslag van data. Energiebesparing door duurzame opslag binnen het hoger onderwijs

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2012-11-15

    SURFnet, the Dutch collaborative organization of colleges and universities in the field of ICT, issued another innovation scheme in the field of sustainability and ICT for 2012. The aim of the innovation scheme is to encourage institutions to start projects that contribute structurally to sustainability by means of, or with, ICT. In this context, the College of Arnhem and Nijmegen (Hogeschool van Arnhem en Nijmegen, HAN) carried out a project investigating the possibilities for saving energy through sustainable storage of data within its educational institution.

  9. Hierarchical storage of large volume of multidetector CT data using distributed servers

    Science.gov (United States)

    Ratib, Osman; Rosset, Antoine; Heuberger, Joris; Bandon, David

    2006-03-01

    Multidetector scanners and hybrid multimodality scanners have the ability to generate large numbers of high-resolution images resulting in very large data sets. In most cases, these datasets are generated for the sole purpose of generating secondary processed images and 3D rendered images as well as oblique and curved multiplanar reformatted images. It is therefore not essential to archive the original images after they have been processed. We have developed an architecture of distributed archive servers for temporary storage of large image datasets for 3D rendering and image processing without the need for long-term storage in the PACS archive. With the relatively low cost of storage devices it is possible to configure these servers to hold several months or even years of data, long enough for allowing subsequent re-processing if required by specific clinical situations. We tested the latest generation of RAID servers provided by Apple computers with a capacity of 5 TBytes. We implemented peer-to-peer data access software based on our Open-Source image management software called OsiriX, allowing remote workstations to directly access DICOM image files located on the server through a new technology called "bonjour". This architecture offers a seamless integration of multiple servers and workstations without the need for central database or complex workflow management tools. It allows efficient access to image data from multiple workstations for image analysis and visualization without the need for image data transfer. It provides a convenient alternative to centralized PACS architecture while avoiding complex and time-consuming data transfer and storage.

  10. Data acquisition, storage and control architecture for the SuperNova Acceleration Probe

    International Nuclear Information System (INIS)

    Prosser, Alan; Fermilab; Cardoso, Guilherme; Chramowicz, John; Marriner, John; Rivera, Ryan; Turqueti, Marcos; Fermilab

    2007-01-01

    The SuperNova Acceleration Probe (SNAP) instrument is being designed to collect image and spectroscopic data for the study of dark energy in the universe. In this paper, we describe a distributed architecture for the data acquisition system which interfaces to visible light and infrared imaging detectors. The architecture includes the use of NAND flash memory for the storage of exposures in a file system. Also described is an FPGA-based lossless data compression algorithm with a configurable pre-scaler based on a novel square root data compression method to improve compression performance. The required interactions of the distributed elements with an instrument control unit will be described as well

  11. Water storage changes in North America retrieved from GRACE gravity and GPS data

    Directory of Open Access Journals (Sweden)

    Hansheng Wang

    2015-07-01

    As global warming continues, the monitoring of changes in terrestrial water storage becomes increasingly important since it plays a critical role in understanding global change and water resource management. In North America as elsewhere in the world, changes in water resources strongly impact agriculture and animal husbandry. From a combination of Gravity Recovery and Climate Experiment (GRACE) gravity and Global Positioning System (GPS) data, it was recently found that water storage from August, 2002 to March, 2011 recovered after the extreme Canadian Prairies drought between 1999 and 2005. In this paper, we use GRACE monthly gravity data of Release 5 to track the water storage change from August, 2002 to June, 2014. In the Canadian Prairies and the Great Lakes areas, the total water storage is found to have increased during the last decade at a rate of 73.8 ± 14.5 Gt/a, which is larger than that found in the previous study due to the longer time span of GRACE observations used and the reduction of the leakage error. We also find a long-term decrease of water storage at a rate of −12.0 ± 4.2 Gt/a in Ungava Peninsula, possibly due to permafrost degradation and less snow accumulation during the winter in the region. In addition, the effect of total mass gain in the surveyed area, on present-day sea level, amounts to −0.18 mm/a, and thus should be taken into account in studies of global sea level change.
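
    The quoted −0.18 mm/a sea-level contribution can be sanity-checked with simple arithmetic: 1 Gt of water occupies about 1 km^3, and spreading the net continental gain over the ocean area (about 3.61e8 km^2, a standard approximation not given in the abstract) removes roughly 0.17 mm/a from the sea level.

      # Rough check of the -0.18 mm/a sea-level figure quoted above.
      gain_prairies_great_lakes = 73.8    # Gt/a
      trend_ungava              = -12.0   # Gt/a
      net_land_gain = gain_prairies_great_lakes + trend_ungava   # ~61.8 Gt/a ~ km^3/a of water

      ocean_area_km2 = 3.61e8             # assumed standard ocean area
      sea_level_mm_per_a = -(net_land_gain / ocean_area_km2) * 1e6   # km -> mm

      print(f"net land water gain    : {net_land_gain:.1f} Gt/a")
      print(f"sea-level contribution : {sea_level_mm_per_a:.2f} mm/a")   # about -0.17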

  12. Use of DBMS-10 for storage and retrieval of evaluated nuclear data files

    International Nuclear Information System (INIS)

    Dunford, C.L.

    1977-01-01

    The use of a data base management system (DBMS) for storage of, and retrieval from, the many scientific data bases maintained by the National Nuclear Data Center is currently being investigated. It would appear that a commercially available DBMS package would save the Center considerable money and manpower when adding new data files to the library and in the long-term maintenance of current data files. Current DBMS technology and experience with an internal DBMS system suggests an inherent inefficiency in processing large data networks where significant portions are accessed in a sequential manner. Such a file is the Evaluated Nuclear Data File (ENDF/B), which contains many large data tables, each one normally accessed in a sequential manner. After gaining some experience and success in small applications of the commercially available DBMS package, DBMS-10, on the Center's DECsystem-10 computer, it was decided to select a large data base as a test case before making a final decision on the implementation of DBMS-10 for all data bases. The obvious approach is to utilize the DBMS to index a random-access file. In this way one is able to increase the storage and retrieval efficiency at the one-time cost of additional programming effort. 2 figures
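
    The "index a random-access file" approach described here can be illustrated with a small sketch in which a lightweight relational database holds (material, offset, length) records while the bulk tables stay in a flat binary file read with seek(). SQLite stands in for DBMS-10, and the file layout and field names are purely illustrative.

      # Illustration of indexing a random-access file with a small database
      # (SQLite stands in for DBMS-10; layout and names are illustrative).
      import sqlite3

      db = sqlite3.connect(":memory:")
      db.execute("CREATE TABLE idx (material TEXT, mf INTEGER, offset INTEGER, length INTEGER)")

      # Write two "data tables" sequentially into one flat file, recording where each starts.
      with open("endf_tables.bin", "wb") as f:
          for material, mf, payload in [("U-235", 3, b"\x01" * 512), ("Pu-239", 3, b"\x02" * 256)]:
              offset = f.tell()
              f.write(payload)
              db.execute("INSERT INTO idx VALUES (?, ?, ?, ?)", (material, mf, offset, len(payload)))

      # Retrieval: look up the offset in the index, then seek straight to the table.
      offset, length = db.execute(
          "SELECT offset, length FROM idx WHERE material = ?", ("Pu-239",)
      ).fetchone()
      with open("endf_tables.bin", "rb") as f:
          f.seek(offset)
          table = f.read(length)
      print(len(table), "bytes read without scanning the file sequentially")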

  13. Use of DBMS-10 for storage and retrieval of evaluated nuclear data files

    International Nuclear Information System (INIS)

    Dunford, C.L.

    1978-01-01

    The use of a data base management system (DBMS) for storage of, and retrieval from, the many scientific data bases maintained by the National Nuclear Data Center is currently being investigated. It would appear that a commercially available DBMS package would save the Center considerable money and manpower when adding new data files to our library and in the long-term maintenance of our current data files. Current DBMS technology and experience with our internal DBMS system suggests an inherent inefficiency in processing large data networks where significant portions are accessed in a sequential manner. Such a file is the Evaluated Nuclear Data File (ENDF/B) which contains many large data tables, each one normally accessed in a sequential manner. After gaining some experience and success in small applications of the commercially available DBMS package, DBMS-10, on the Center's DECsystem-10 computer, it was decided to select one of our large data bases as a test case before making a final decision on the implementation of DBMS-10 for all our data bases. The obvious approach is to utilize the DBMS to index a random access file. In this way one is able to increase the storage and retrieval efficiency at the one-time cost of additional programming effort

  14. Monitoring of large-scale federated data storage: XRootD and beyond

    International Nuclear Information System (INIS)

    Andreeva, J; Beche, A; Arias, D Diguez; Giordano, D; Saiz, P; Tuckett, D; Belov, S; Oleynik, D; Petrosyan, A; Tadel, M; Vukotic, I

    2014-01-01

    The computing models of the LHC experiments are gradually moving from hierarchical data models with centrally managed data pre-placement towards federated storage which provides seamless access to data files independently of their location and dramatically improves recovery due to fail-over mechanisms. Construction of the data federations and understanding the impact of the new approach to data management on user analysis requires complete and detailed monitoring. Monitoring functionality should cover the status of all components of the federated storage, measuring data traffic and data access performance, as well as being able to detect any kind of inefficiencies and to provide hints for resource optimization and effective data distribution policy. Data mining of the collected monitoring data provides a deep insight into new usage patterns. In the WLCG context, there are several federations currently based on the XRootD technology. This paper will focus on monitoring for the ATLAS and CMS XRootD federations implemented in the Experiment Dashboard monitoring framework. Both federations consist of many dozens of sites accessed by many hundreds of clients and they continue to grow in size. Handling of the monitoring flow generated by these systems has to be well optimized in order to achieve the required performance. Furthermore, this paper demonstrates that the XRootD monitoring architecture is sufficiently generic to be easily adapted for other technologies, such as HTTP/WebDAV dynamic federations.

  15. Large-scale electrophysiology: acquisition, compression, encryption, and storage of big data.

    Science.gov (United States)

    Brinkmann, Benjamin H; Bower, Mark R; Stengel, Keith A; Worrell, Gregory A; Stead, Matt

    2009-05-30

    The use of large-scale electrophysiology to obtain high spatiotemporal resolution brain recordings (>100 channels) capable of probing the range of neural activity from local field potential oscillations to single-neuron action potentials presents new challenges for data acquisition, storage, and analysis. Our group is currently performing continuous, long-term electrophysiological recordings in human subjects undergoing evaluation for epilepsy surgery using hybrid intracranial electrodes composed of up to 320 micro- and clinical macroelectrode arrays. DC-capable amplifiers, sampling at 32 kHz per channel with 18 bits of A/D resolution, are capable of resolving extracellular voltages spanning single-neuron action potentials, high frequency oscillations, and high amplitude ultra-slow activity, but this approach generates 3 terabytes of data per day (at 4 bytes per sample) using current data formats. Data compression can provide several practical benefits, but only if data can be compressed and appended to files in real-time in a format that allows random access to data segments of varying size. Here we describe a state-of-the-art, scalable, electrophysiology platform designed for acquisition, compression, encryption, and storage of large-scale data. Data are stored in a file format that incorporates lossless data compression using range-encoded differences, a 32-bit cyclically redundant checksum to ensure data integrity, and 128-bit encryption for protection of patient information.
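
    The roughly 3 terabytes per day quoted above follows directly from the recording parameters given in the abstract (320 channels, 32 kHz sampling, 4 bytes per stored sample):

      # The ~3 TB/day figure follows from the parameters quoted in the abstract.
      channels         = 320
      sample_rate_hz   = 32_000
      bytes_per_sample = 4          # 18-bit samples stored in 4-byte words
      seconds_per_day  = 86_400

      bytes_per_day = channels * sample_rate_hz * bytes_per_sample * seconds_per_day
      print(f"{bytes_per_day / 1e12:.2f} TB/day")    # ~3.54 TB (decimal) per day
      print(f"{bytes_per_day / 2**40:.2f} TiB/day")  # ~3.22 TiB per day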

  16. Mahanaxar: quality of service guarantees in high-bandwidth, real-time streaming data storage

    Energy Technology Data Exchange (ETDEWEB)

    Bigelow, David [Los Alamos National Laboratory; Bent, John [Los Alamos National Laboratory; Chen, Hsing-Bung [Los Alamos National Laboratory; Brandt, Scott [UCSC

    2010-04-05

    Large radio telescopes, cyber-security systems monitoring real-time network traffic, and others have specialized data storage needs: guaranteed capture of an ultra-high-bandwidth data stream, retention of the data long enough to determine what is 'interesting,' retention of interesting data indefinitely, and concurrent read/write access to determine what data is interesting, without interrupting the ongoing capture of incoming data. Mahanaxar addresses this problem. Mahanaxar guarantees streaming real-time data capture at (nearly) the full rate of the raw device, allows concurrent read and write access to the device on a best-effort basis without interrupting the data capture, and retains data as long as possible given the available storage. It has built in mechanisms for reliability and indexing, can scale to meet arbitrary bandwidth requirements, and handles both small and large data elements equally well. Results from our prototype implementation shows that Mahanaxar provides both better guarantees and better performance than traditional file systems.

  17. Improving the analysis, storage and sharing of neuroimaging data using relational databases and distributed computing.

    Science.gov (United States)

    Hasson, Uri; Skipper, Jeremy I; Wilde, Michael J; Nusbaum, Howard C; Small, Steven L

    2008-01-15

    The increasingly complex research questions addressed by neuroimaging research impose substantial demands on computational infrastructures. These infrastructures need to support management of massive amounts of data in a way that affords rapid and precise data analysis, to allow collaborative research, and to achieve these aims securely and with minimum management overhead. Here we present an approach that overcomes many current limitations in data analysis and data sharing. This approach is based on open source database management systems that support complex data queries as an integral part of data analysis, flexible data sharing, and parallel and distributed data processing using cluster computing and Grid computing resources. We assess the strengths of these approaches as compared to current frameworks based on storage of binary or text files. We then describe in detail the implementation of such a system and provide a concrete description of how it was used to enable a complex analysis of fMRI time series data.

  18. A Note on Interfacing Object Warehouses and Mass Storage Systems for Data Mining Applications

    Science.gov (United States)

    Grossman, Robert L.; Northcutt, Dave

    1996-01-01

    Data mining is the automatic discovery of patterns, associations, and anomalies in data sets. Data mining requires numerically and statistically intensive queries. Our assumption is that data mining requires a specialized data management infrastructure to support the aforementioned intensive queries, but because of the sizes of data involved, this infrastructure is layered over a hierarchical storage system. In this paper, we discuss the architecture of a system which is layered for modularity, but exploits specialized lightweight services to maintain efficiency. Rather than use a full-functioned database, for example, we use lightweight object services specialized for data mining. We propose using information repositories between layers so that components on either side of the layer can access information in the repositories to assist in making decisions about data layout, the caching and migration of data, the scheduling of queries, and related matters.

  19. Data on the no-load performance analysis of a tomato postharvest storage system.

    Science.gov (United States)

    Ayomide, Orhewere B; Ajayi, Oluseyi O; Banjo, Solomon O; Ajayi, Adesola A

    2017-08-01

    In the present investigation, original and detailed empirical data on the transfer of heat in a tomato postharvest storage system were presented. No-load tests were performed for a period of 96 h. The heat distribution at different locations, namely the top, middle and bottom of the system, was acquired at a time interval of 30 min for the test period. The humidity inside the system was taken into consideration. Thus, no-load tests with and without the introduction of humidity were carried out, and data showing the effect of a rise in humidity level on temperature distribution were acquired. The temperatures at the external mechanical cooling components were also acquired and could be used for the performance analysis of the storage system.

  20. Spatially coupled low-density parity-check error correction for holographic data storage

    Science.gov (United States)

    Ishii, Norihiko; Katano, Yutaro; Muroi, Tetsuhiko; Kinoshita, Nobuhiro

    2017-09-01

    The spatially coupled low-density parity-check (SC-LDPC) code was considered for holographic data storage. The superiority of SC-LDPC was studied by simulation. The simulations show that the performance of SC-LDPC depends on the lifting number, and when the lifting number is over 100, SC-LDPC shows better error correctability compared with irregular LDPC. SC-LDPC is applied to the 5:9 modulation code, which is one of the differential codes. The error-free point is near 2.8 dB and over 10^-1 can be corrected in simulation. From these simulation results, this error correction code can be applied to actual holographic data storage test equipment. Results showed that 8 × 10^-2 can be corrected; furthermore, it works effectively and shows good error correctability.

  1. Oxidation of graphene 'bow tie' nanofuses for permanent, write-once-read-many data storage devices.

    Science.gov (United States)

    Pearson, A C; Jamieson, S; Linford, M R; Lunt, B M; Davis, R C

    2013-04-05

    We have fabricated nanoscale fuses from CVD graphene sheets with a 'bow tie' geometry for write-once-read-many data storage applications. The fuses are programmed using thermal oxidation driven by Joule heating. Fuses that were 250 nm wide with 2.5 μm between contact pads were programmed with average voltages and powers of 4.9 V and 2.1 mW, respectively. The required voltages and powers decrease with decreasing fuse sizes. Graphene shows extreme chemical and electronic stability; fuses require temperatures of about 400 °C for oxidation, indicating that they are excellent candidates for permanent data storage. To further demonstrate this stability, fuses were subjected to applied biases in excess of typical read voltages; stable currents were observed when a voltage of 10 V was applied to the devices in the off state and 1 V in the on state for 90 h each.

  2. UV-Photodimerization in Uracil-substituted dendrimers for high density data storage

    DEFF Research Database (Denmark)

    Lohse, Brian; Vestberg, Robert; Ivanov, Mario Tonev

    2007-01-01

    Two series of uracil-functionalized dendritic macromolecules based on poly(amidoamine) (PAMAM) and 2,2-bis(hydroxymethylpropionic acid) bis-MPA backbones were prepared and their photoinduced (2 pi+2 pi) cycloaddition reactions upon exposure to UV light at 257 nm examined. Dendrimers up to 4th...... generation were synthesized and investigated as potential materials for high capacity optical data storage with their dimerization efficiency compared to uracil as a reference compound. This allows the impact of increasing the generation number of the dendrimers, both the number of chromophores, as well...... nm with an intensity of 70 mW/cm^2 could be obtained suggesting future use as recording media for optical data storage. (c) 2007 Wiley Periodicals, Inc....

  3. Geochemical modelling of CO2-water-rock interactions for carbon storage : data requirements and outputs

    International Nuclear Information System (INIS)

    Kirste, D.

    2008-01-01

    A geochemical model was used to predict the short-term and long-term behaviour of carbon dioxide (CO2), formation water, and reservoir mineralogy at a carbon sequestration site. Data requirements for the geochemical model included detailed mineral petrography; formation water chemistry; thermodynamic and kinetic data for mineral phases; and rock and reservoir physical characteristics. The model was used to determine the types of outputs expected for potential CO2 storage sites and natural analogues. Reaction path modelling was conducted to determine the total reactivity or CO2 storage capability of the rock by applying static equilibrium and kinetic simulations. Potential product phases were identified using the modelling technique, which also enabled the identification of the chemical evolution of the system. Results of the modelling study demonstrated that changes in porosity and permeability over time should be considered during the site selection process.

  4. International Network Performance and Security Testing Based on Distributed Abyss Storage Cluster and Draft of Data Lake Framework

    Directory of Open Access Journals (Sweden)

    ByungRae Cha

    2018-01-01

    The megatrends and Industry 4.0 in ICT (Information Communication & Technology) are concentrated in IoT (Internet of Things), BigData, CPS (Cyber Physical System), and AI (Artificial Intelligence). These megatrends do not operate independently, and mass storage technology is essential, as large computing technology is needed in the background to support them. In order to evaluate the performance of high-capacity storage based on open-source Ceph, we carry out network performance tests of Abyss storage with domestic and overseas sites using KOREN (Korea Advanced Research Network). Storage media and network bonding are also tested to evaluate the performance of the storage itself. Additionally, a security test is demonstrated with Cuckoo sandbox and Yara malware detection across the Abyss storage cluster and overseas sites. Lastly, we propose a draft design of a Data Lake framework in order to solve the garbage dump problem.

  5. Logic operations and data storage using vortex magnetization states in mesoscopic permalloy rings, and optical readout

    Energy Technology Data Exchange (ETDEWEB)

    Bowden, S R; Gibson, U J, E-mail: u.gibson@dartmouth.ed [Thayer School of Engineering, Dartmouth College, Hanover, NH 03755-8000 (United States)

    2010-01-01

    Optical coatings applied to one-half of thin film magnetic rings allow real-time readout of the chirality of the vortex state of micro- and nanomagnetic structures by breaking the symmetry of the optical signal. We use this technique to demonstrate data storage, operation of a NOT gate that uses exchange interactions between slightly overlapping rings, and to investigate the use of chains of rings as connecting wires for linking gates.

  6. Design of a Mission Data Storage and Retrieval System for NASA Dryden Flight Research Center

    Science.gov (United States)

    Lux, Jessica; Downing, Bob; Sheldon, Jack

    2007-01-01

    The Western Aeronautical Test Range (WATR) at the NASA Dryden Flight Research Center (DFRC) employs the WATR Integrated Next Generation System (WINGS) for the processing and display of aeronautical flight data. This report discusses the post-mission segment of the WINGS architecture. A team designed and implemented a system for the near- and long-term storage and distribution of mission data for flight projects at DFRC, providing the user with intelligent access to data. Discussed are the legacy system, an industry survey, system operational concept, high-level system features, and initial design efforts.

  7. Measuring Mangrove Type, Structure And Carbon Storage With UAVSAR And ALOS/PALSAR Data

    Science.gov (United States)

    Fatoyinbo, T. E.; Cornforth, W.; Pinto, N.; Simard, M.; Pettorelli, N.

    2011-12-01

    Mangrove forests provide a great number of ecosystem services ranging from shoreline protection (e.g. against erosion, tsunamis and storms), nutrient cycling, fisheries production, building materials and habitat. Mangrove forests have been shown to store very large amounts of carbon, both above and belowground, with storage capacities even greater than tropical rainforests. But as a result of their location and economic value, they are among the most rapidly changing landscapes in the world. Mangrove extent is limited (1) to tidally influenced coastal areas and (2) to tropical and subtropical regions. This can lead to difficulties mapping mangrove type (such as degraded vs non degraded, scrub vs tall, dense vs sparse) because of cloud cover and limited access to high-resolution optical data. To accurately quantify the effect of land use and climate change on tropical wetland ecosystems, we must develop effective mapping methodologies that take into account not only extent, but also the structure and health of the ecosystem. This must be done by including Synthetic Aperture Radar (SAR) data. In this research, we used L-band Synthetic Aperture Radar data from the ALOS/PALSAR and UAVSAR instruments over selected sites in the Americas (Sierpe, Costa Rica and Everglades, Florida) and Asia (Sundarbans). In particular, we used the SAR data in combination with other remotely sensed data and field data to (1) map mangrove extent, (2) determine mangrove type, health and adjacent land use, and (3) estimate aboveground biomass and carbon storage for entire mangrove systems. We used different classification methodologies such as polarimetric decomposition, unsupervised classification and image segmentation to map mangrove type. Because of the high resolution of the radar data, and its ability to interact with forest volume, we are able to identify mangrove zones and differentiate between mangroves and other forests/land uses. We also integrated InSAR data (SRTM

  8. A Toolkit For Storage Qos Provisioning For Data-Intensive Applications

    Directory of Open Access Journals (Sweden)

    Renata Słota

    2012-01-01

    Full Text Available This paper describes a programming toolkit developed in the PL-Grid project, named QStorMan, which supports storage QoS provisioning for data-intensive applications in distributed environments. QStorMan exploits knowledge-oriented methods for matching storage resources to non-functional requirements, which are defined for a data-intensive application. In order to support various usage scenarios, QStorMan provides two interfaces: programming libraries and a web portal. The interfaces allow the requirements to be defined either directly in the application source code or by using an intuitive graphical interface. The first way provides finer granularity, e.g., each portion of data processed by an application can define a different set of requirements. The second method is aimed at supporting legacy applications, whose source code cannot be modified. The toolkit has been evaluated using synthetic benchmarks and the production infrastructure of PL-Grid, in particular its storage infrastructure, which utilizes the Lustre file system.

  9. nmrML: A Community Supported Open Data Standard for the Description, Storage, and Exchange of NMR Data.

    Science.gov (United States)

    Schober, Daniel; Jacob, Daniel; Wilson, Michael; Cruz, Joseph A; Marcu, Ana; Grant, Jason R; Moing, Annick; Deborde, Catherine; de Figueiredo, Luis F; Haug, Kenneth; Rocca-Serra, Philippe; Easton, John; Ebbels, Timothy M D; Hao, Jie; Ludwig, Christian; Günther, Ulrich L; Rosato, Antonio; Klein, Matthias S; Lewis, Ian A; Luchinat, Claudio; Jones, Andrew R; Grauslys, Arturas; Larralde, Martin; Yokochi, Masashi; Kobayashi, Naohiro; Porzel, Andrea; Griffin, Julian L; Viant, Mark R; Wishart, David S; Steinbeck, Christoph; Salek, Reza M; Neumann, Steffen

    2018-01-02

    NMR is a widely used analytical technique with a growing number of repositories available. As a result, demands for a vendor-agnostic, open data format for long-term archiving of NMR data have emerged with the aim to ease and encourage sharing, comparison, and reuse of NMR data. Here we present nmrML, an open XML-based exchange and storage format for NMR spectral data. The nmrML format is intended to be fully compatible with existing NMR data for chemical, biochemical, and metabolomics experiments. nmrML can capture raw NMR data, spectral data acquisition parameters, and where available spectral metadata, such as chemical structures associated with spectral assignments. The nmrML format is compatible with pure-compound NMR data for reference spectral libraries as well as NMR data from complex biomixtures, i.e., metabolomics experiments. To facilitate format conversions, we provide nmrML converters for Bruker, JEOL and Agilent/Varian vendor formats. In addition, easy-to-use Web-based spectral viewing, processing, and spectral assignment tools that read and write nmrML have been developed. Software libraries and Web services for data validation are available for tool developers and end-users. The nmrML format has already been adopted for capturing and disseminating NMR data for small molecules by several open source data processing tools and metabolomics reference spectral libraries, e.g., serving as storage format for the MetaboLights data repository. The nmrML open access data standard has been endorsed by the Metabolomics Standards Initiative (MSI), and we here encourage user participation and feedback to increase usability and make it a successful standard.

  10. A split-path schema-based RFID data storage model in supply chain management.

    Science.gov (United States)

    Fan, Hua; Wu, Quanyuan; Lin, Yisong; Zhang, Jianfeng

    2013-05-03

    In modern supply chain management systems, Radio Frequency IDentification (RFID) technology has become an indispensable sensor technology and massive RFID data sets are expected to become commonplace. More and more space and time are needed to store and process such huge amounts of RFID data, and there is an increasing realization that the existing approaches cannot satisfy the requirements of RFID data management. In this paper, we present a split-path schema-based RFID data storage model. With a data separation mechanism, the massive RFID data produced in supply chain management systems can be stored and processed more efficiently. Then a tree structure-based path splitting approach is proposed to intelligently and automatically split the movement paths of products. Furthermore, based on the proposed new storage model, we design the relational schema to store the path information and time information of tags, and some typical query templates and SQL statements are defined. Finally, we conduct various experiments to measure the effect and performance of our model and demonstrate that it performs significantly better than the baseline approach in both the data expression and path-oriented RFID data query performance.
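
    The relational schema and query templates referred to above are not reproduced in the abstract; the sketch below is a hypothetical illustration of the idea, storing a split movement path and tag timing information in two tables and running one path-oriented query. It uses SQLite purely for self-containment, and all table and column names are invented rather than taken from the paper.

      # Illustrative sketch of a path/time relational schema and a
      # path-oriented query, in the spirit of the split-path storage model
      # described above. Table and column names are hypothetical.
      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.executescript("""
      CREATE TABLE path (
          path_id   INTEGER PRIMARY KEY,
          path_str  TEXT NOT NULL          -- e.g. 'factory/warehouse/retailer'
      );
      CREATE TABLE tag_event (
          tag_id    TEXT NOT NULL,
          path_id   INTEGER REFERENCES path(path_id),
          t_in      TEXT NOT NULL,         -- arrival timestamp at the path node
          t_out     TEXT                   -- departure timestamp
      );
      """)
      conn.execute("INSERT INTO path VALUES (1, 'factory/warehouse/retailer')")
      conn.execute("INSERT INTO tag_event VALUES ('EPC-0001', 1, '2013-05-01T08:00', '2013-05-02T10:00')")

      # Path-oriented query template: all tags whose movement path passed
      # through the warehouse node.
      rows = conn.execute("""
          SELECT e.tag_id, p.path_str, e.t_in, e.t_out
          FROM tag_event e JOIN path p USING (path_id)
          WHERE p.path_str LIKE '%warehouse%'
      """).fetchall()
      print(rows)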

  11. BRISK--research-oriented storage kit for biology-related data.

    Science.gov (United States)

    Tan, Alan; Tripp, Ben; Daley, Denise

    2011-09-01

    In genetic science, large-scale international research collaborations represent a growing trend. These collaborations have demanding and challenging database, storage, retrieval and communication needs. These studies typically involve demographic and clinical data, in addition to the results from numerous genomic studies (omics studies) such as gene expression, eQTL, genome-wide association and methylation studies, which present numerous challenges, thus the need for data integration platforms that can handle these complex data structures. Inefficient methods of data transfer and access control still plague research collaboration. As science becomes more and more collaborative in nature, the need for a system that adequately manages data sharing becomes paramount. Biology-Related Information Storage Kit (BRISK) is a package of several web-based data management tools that provide a cohesive data integration and management platform. It was specifically designed to provide the architecture necessary to promote collaboration and expedite data sharing between scientists. The software, documentation, Java source code and demo are available at http://genapha.icapture.ubc.ca/brisk/index.jsp. BRISK was developed in Java, and tested on an Apache Tomcat 6 server with a MySQL database. denise.daley@hli.ubc.ca.

  12. A Split-Path Schema-Based RFID Data Storage Model in Supply Chain Management

    Directory of Open Access Journals (Sweden)

    Jianfeng Zhang

    2013-05-01

    Full Text Available In modern supply chain management systems, Radio Frequency IDentification (RFID) technology has become an indispensable sensor technology and massive RFID data sets are expected to become commonplace. More and more space and time are needed to store and process such huge amounts of RFID data, and there is an increasing realization that the existing approaches cannot satisfy the requirements of RFID data management. In this paper, we present a split-path schema-based RFID data storage model. With a data separation mechanism, the massive RFID data produced in supply chain management systems can be stored and processed more efficiently. Then a tree structure-based path splitting approach is proposed to intelligently and automatically split the movement paths of products. Furthermore, based on the proposed new storage model, we design the relational schema to store the path information and time information of tags, and some typical query templates and SQL statements are defined. Finally, we conduct various experiments to measure the effect and performance of our model and demonstrate that it performs significantly better than the baseline approach in both the data expression and path-oriented RFID data query performance.

  13. Petaminer: Using ROOT for efficient data storage in MySQL database

    Science.gov (United States)

    Cranshaw, J.; Malon, D.; Vaniachine, A.; Fine, V.; Lauret, J.; Hamill, P.

    2010-04-01

    High Energy and Nuclear Physics (HENP) experiments store Petabytes of event data and Terabytes of calibration data in ROOT files. The Petaminer project is developing a custom MySQL storage engine to enable the MySQL query processor to directly access experimental data stored in ROOT files. Our project is addressing the problem of efficient navigation to PetaBytes of HENP experimental data described with event-level TAG metadata, which is required by data intensive physics communities such as the LHC and RHIC experiments. Physicists need to be able to compose a metadata query and rapidly retrieve the set of matching events, where improved efficiency will facilitate the discovery process by permitting rapid iterations of data evaluation and retrieval. Our custom MySQL storage engine enables the MySQL query processor to directly access TAG data stored in ROOT TTrees. As ROOT TTrees are column-oriented, reading them directly provides improved performance over traditional row-oriented TAG databases. Leveraging the flexible and powerful SQL query language to access data stored in ROOT TTrees, the Petaminer approach enables rich MySQL index-building capabilities for further performance optimization.

  14. Petaminer: Using ROOT for efficient data storage in MySQL database

    International Nuclear Information System (INIS)

    Cranshaw, J; Malon, D; Vaniachine, A; Fine, V; Lauret, J; Hamill, P

    2010-01-01

    High Energy and Nuclear Physics (HENP) experiments store Petabytes of event data and Terabytes of calibration data in ROOT files. The Petaminer project is developing a custom MySQL storage engine to enable the MySQL query processor to directly access experimental data stored in ROOT files. Our project is addressing the problem of efficient navigation to PetaBytes of HENP experimental data described with event-level TAG metadata, which is required by data intensive physics communities such as the LHC and RHIC experiments. Physicists need to be able to compose a metadata query and rapidly retrieve the set of matching events, where improved efficiency will facilitate the discovery process by permitting rapid iterations of data evaluation and retrieval. Our custom MySQL storage engine enables the MySQL query processor to directly access TAG data stored in ROOT TTrees. As ROOT TTrees are column-oriented, reading them directly provides improved performance over traditional row-oriented TAG databases. Leveraging the flexible and powerful SQL query language to access data stored in ROOT TTrees, the Petaminer approach enables rich MySQL index-building capabilities for further performance optimization.

  15. Carbon storage estimation of main forestry ecosystems in Northwest Yunnan Province using remote sensing data

    Science.gov (United States)

    Wang, Jinliang; Wang, Xiaohua; Yue, Cairong; Xu, Tian-shu; Cheng, Pengfei

    2014-05-01

    Estimating regional forest organic carbon pools has become a hot issue in the study of the forest ecosystem carbon cycle. The forest ecosystems in Shangri-La County, Northwest Yunnan Province, are well preserved, and the area of Picea likiangensis, Quercus aquifolioides, Pinus densata and Pinus yunnanensis amounts to 80% of the total arboreal forest area in Shangri-La County. Based on field measurements, remote sensing data and GIS analysis, three models were established for carbon storage estimation. The remote sensing information model with the highest accuracy was used to calculate the carbon storage of the four main forest ecosystems. The results showed: (1) the total carbon storage of the four forest ecosystems in Shangri-La is 302.984 TgC, of which the tree layer, shrub layer, herb layer, litter layer and soil layer account for 60.196 TgC, 5.433 TgC, 1.080 TgC, 3.582 TgC and 232.692 TgC, i.e. 19.87%, 1.79%, 0.36%, 1.18% and 76.80% of the total carbon storage, respectively. (2) The order of carbon storage from high to low is soil layer, tree layer, shrub layer, litter layer and herb layer for the four main forest ecosystems. (3) The total average carbon density of the four main forest ecosystems is 403.480 t/hm2, and the carbon densities of Picea likiangensis, Quercus aquifolioides, Pinus densata and Pinus yunnanensis are 576.889 t/hm2, 326.947 t/hm2, 279.993 t/hm2 and 255.792 t/hm2, respectively.

  16. Comparative analysis on operation strategies of CCHP system with cool thermal storage for a data center

    International Nuclear Information System (INIS)

    Song, Xu; Liu, Liuchen; Zhu, Tong; Zhang, Tao; Wu, Zhu

    2016-01-01

    Highlights: • Load characteristics of the data center make a good match with CCHP systems. • TRNSYS models were used to simulate the discussed CCHP system in a data center. • Comprehensive system performance under two operation strategies was evaluated. • Cool thermal storage was introduced to reuse the energy surplus of the FEL system. • Suitable principles of equipment selection for an FEL system are proposed. - Abstract: Combined Cooling, Heating, and Power (CCHP) systems with cool thermal storage can provide an appropriate energy supply for data centers. In this work, we evaluate the CCHP system performance under two different operation strategies, i.e., following thermal load (FTL) and following electric load (FEL). The evaluation is performed through a case study using the TRNSYS software. In the FEL system, the amount of cool thermal energy generated by the absorption chillers is larger than the cooling load, and it can therefore be stored and reused at off-peak times. Results indicate that systems under both operation strategies have advantages in terms of energy saving and environmental protection. The largest reductions in primary energy consumption, CO2 emissions and operation cost for the FEL system are 18.5%, 37.4% and 46.5%, respectively. Besides, the system performance is closely dependent on the equipment selection. The relation between the amount of energy recovered through cool thermal storage and the primary energy consumption has also been taken into account. Moreover, the introduction of cool thermal storage can adjust the heat-to-power ratio on the energy supply side close to that on the consumer side and consequently promote system flexibility and energy efficiency.

  17. EXP-PAC: providing comparative analysis and storage of next generation gene expression data.

    Science.gov (United States)

    Church, Philip C; Goscinski, Andrzej; Lefèvre, Christophe

    2012-07-01

    Microarrays and, more recently, RNA sequencing have led to an increase in available gene expression data. How to manage and store these data is becoming a key issue. In response we have developed EXP-PAC, a web-based software package for the storage, management and analysis of gene expression and sequence data. Unique to this package are SQL-based querying of gene expression data sets, distributed normalization of raw gene expression data, and analysis of gene expression data across experiments and species. This package has been populated with lactation data in the international milk genomic consortium web portal (http://milkgenomics.org/). Source code is also available which can be hosted on a Windows, Linux or Mac APACHE server connected to a private or public network (http://mamsap.it.deakin.edu.au/~pcc/Release/EXP_PAC.html). Copyright © 2012 Elsevier Inc. All rights reserved.

  18. Architecture and Implementation of a Scalable Sensor Data Storage and Analysis System Using Cloud Computing and Big Data Technologies

    Directory of Open Access Journals (Sweden)

    Galip Aydin

    2015-01-01

    Full Text Available Sensors are becoming ubiquitous. From almost any type of industrial application to intelligent vehicles, smart city applications, and healthcare applications, we see a steady growth in the usage of various types of sensors. The rate of increase in the amount of data produced by these sensors is even more dramatic, since sensors usually produce data continuously. It is crucial for these data to be stored for future reference and analyzed to find valuable information, such as fault diagnosis information. In this paper we describe a scalable and distributed architecture for sensor data collection, storage, and analysis. The system uses several open source technologies and runs on a cluster of virtual servers. We use GPS sensors as the data source and run machine-learning algorithms for data analysis.

  19. Privacy-Preserving Outsourced Auditing Scheme for Dynamic Data Storage in Cloud

    Directory of Open Access Journals (Sweden)

    Tengfei Tu

    2017-01-01

    Full Text Available As information technology develops, cloud storage has been widely accepted for keeping volumes of data. A remote data auditing scheme enables a cloud user to confirm the integrity of her outsourced file by auditing against cloud storage, without downloading the file from the cloud. In view of the significant computational cost caused by the auditing process, the outsourced auditing model is proposed to let the user outsource the heavy auditing task to a third party auditor (TPA). Although the first outsourced auditing scheme can protect against a malicious TPA, it gives the TPA read access to the user's outsourced data, which is a potential risk for user data privacy. In this paper, we introduce the notion of User Focus for outsourced auditing, which emphasizes letting the user control her own data. Based on User Focus, our proposed scheme not only prevents the user's data from leaking to the TPA without depending on data encryption, but also avoids the use of an additional independent random source that is very difficult to obtain in practice. We also describe how to make our scheme support dynamic updates. According to the security analysis and experimental evaluations, our proposed scheme is provably secure and significantly efficient.

  20. Evaluation of Big Data Containers for Popular Storage, Retrieval, and Computation Primitives in Earth Science Analysis

    Science.gov (United States)

    Das, K.; Clune, T.; Kuo, K. S.; Mattmann, C. A.; Huang, T.; Duffy, D.; Yang, C. P.; Habermann, T.

    2015-12-01

    Data containers are infrastructures that facilitate storage, retrieval, and analysis of data sets. Big data applications in Earth Science require a mix of processing techniques, data sources and storage formats that are supported by different data containers. Some of the most popular data containers used in Earth Science studies are Hadoop, Spark, SciDB, AsterixDB, and RasDaMan. These containers optimize different aspects of the data processing pipeline and are, therefore, suitable for different types of applications. These containers are expected to undergo rapid evolution and the ability to re-test, as they evolve, is very important to ensure the containers are up to date and ready to be deployed to handle large volumes of observational data and model output. Our goal is to develop an evaluation plan for these containers to assess their suitability for Earth Science data processing needs. We have identified a selection of test cases that are relevant to most data processing exercises in Earth Science applications and we aim to evaluate these systems for optimal performance against each of these test cases. The use cases identified as part of this study are (i) data fetching, (ii) data preparation for multivariate analysis, (iii) data normalization, (iv) distance (kernel) computation, and (v) optimization. In this study we develop a set of metrics for performance evaluation, define the specifics of governance, and test the plan on current versions of the data containers. The test plan and the design mechanism are expandable to allow repeated testing with both new containers and upgraded versions of the ones mentioned above, so that we can gauge their utility as they evolve.
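
    As a flavour of how such primitives can be timed outside any particular container, the sketch below benchmarks two of the listed test cases (normalization and distance computation) on a synthetic array with NumPy. Array sizes are arbitrary and this is only a container-agnostic stand-in; a real evaluation would execute the equivalent operations inside each data container under test.

      # Minimal timing sketch for two of the primitives named above:
      # (iii) normalization and (iv) distance (kernel) computation.
      import time
      import numpy as np

      def timed(label, fn, *args):
          start = time.perf_counter()
          result = fn(*args)
          print(f"{label}: {time.perf_counter() - start:.3f} s")
          return result

      rng = np.random.default_rng(0)
      data = rng.normal(size=(5000, 50))      # stand-in for a fetched variable

      # (iii) normalization: zero mean, unit variance per column
      normed = timed("normalize", lambda x: (x - x.mean(0)) / x.std(0), data)

      # (iv) distance computation: squared Euclidean distances between rows
      def pairwise_sq_dists(x):
          sq = (x ** 2).sum(axis=1)
          return sq[:, None] + sq[None, :] - 2.0 * x @ x.T

      dists = timed("pairwise distances", pairwise_sq_dists, normed)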

  1. dCache data storage system implementations at a Tier-2 centre

    Energy Technology Data Exchange (ETDEWEB)

    Tsigenov, Oleg; Nowack, Andreas; Kress, Thomas [III. Physikalisches Institut B, RWTH Aachen (Germany)

    2009-07-01

    The experimental high energy physics groups of the RWTH Aachen University operate one of the largest Grid Tier-2 sites in the world and offer more than 2000 modern CPU cores and about 550 TB of disk space mainly to the CMS experiment and to a lesser extent to the Auger and IceCube collaborations. Running such a large data cluster requires a flexible storage system with high performance. We use dCache for this purpose and are integrated into the dCache support team to the benefit of the German Grid sites. Recently, a storage pre-production cluster has been built to study the setup and the behavior of novel dCache features within Chimera without interfering with the production system. This talk gives an overview of the practical experience gained with dCache on both the production and the testbed cluster and discusses future plans.

  2. Analysis of the influence of input data uncertainties on determining the reliability of reservoir storage capacity

    Directory of Open Access Journals (Sweden)

    Marton Daniel

    2015-12-01

    Full Text Available The paper presents a sensitivity analysis of the influence of uncertainties in the input hydrological, morphological and operating data required for a proposal of active reservoir conservation storage capacity and the values achieved. By introducing uncertainties into the considered inputs of the water management analysis of a reservoir, the analysed reservoir storage capacity is also affected by uncertainties, as are the values of water outflows from the reservoir and the hydrological reliabilities. A simulation model of reservoir behaviour incorporating this kind of calculation has been compiled, as described below. The model allows the results to be evaluated with uncertainties taken into consideration, contributing to a reduction in the occurrence of failures or water shortages during reservoir operation in low-water and dry periods.

  3. ROOT - A C++ Framework for Petabyte Data Storage, Statistical Analysis and Visualization

    CERN Document Server

    Naumann, Axel; Ballintijn, Maarten; Bellenot, Bertrand; Biskup, Marek; Brun, Rene; Buncic, Nenad; Canal, Philippe; Casadei, Diego; Couet, Olivier; Fine, Valery; Franco, Leandro; Ganis, Gerardo; Gheata, Andrei; Gonzalez Maline, David; Goto, Masaharu; Iwaszkiewicz, Jan; Kreshuk, Anna; Marcos Segura, Diego; Maunder, Richard; Moneta, Lorenzo; Offermann, Eddy; Onuchin, Valeriy; Panacek, Suzanne; Rademakers, Fons; Russo, Paul; Tadel, Matevz

    2009-01-01

    ROOT is an object-oriented C++ framework conceived in the high-energy physics (HEP) community, designed for storing and analyzing petabytes of data in an efficient way. Any instance of a C++ class can be stored into a ROOT file in a machine-independent compressed binary format. In ROOT the TTree object container is optimized for statistical data analysis over very large data sets by using vertical data storage techniques. These containers can span a large number of files on local disks, the web, or a number of different shared file systems. In order to analyze this data, the user can choose from a wide set of mathematical and statistical functions, including linear algebra classes, numerical algorithms such as integration and minimization, and various methods for performing regression analysis (fitting). In particular, the RooFit package allows the user to perform complex data modeling and fitting while the RooStats library provides abstractions and implementations for advance...

  4. Development of noSQL data storage for the ATLAS PanDA Monitoring System

    International Nuclear Information System (INIS)

    Potekhin, M

    2012-01-01

    For several years the PanDA Workload Management System has been the basis for distributed production and analysis for the ATLAS experiment at the LHC. Since the start of data taking, PanDA usage has ramped up steadily, typically exceeding 500k completed jobs/day by June 2011. The associated monitoring data volume has been rising as well, to levels that present a new set of challenges in the areas of database scalability and monitoring system performance and efficiency. These challenges are being met with an R&D effort aimed at implementing a scalable and efficient monitoring data storage based on a noSQL solution (Cassandra). We present our motivations for using this technology, as well as the data design and the techniques used for efficient indexing of the data. We also discuss the hardware requirements as they were determined by testing with actual data and realistic loads.

  5. A novel data storage logic in the cloud [version 3; referees: 2 approved, 1 not approved]

    Directory of Open Access Journals (Sweden)

    Bence Mátyás

    2017-08-01

    Full Text Available Databases which store and manage long-term scientific information related to life science are used to store huge amounts of quantitative attributes. Introduction of a new entity attribute requires modification of the existing data tables and the programs that use these data tables. A feasible solution is increasing the virtual data tables while the number of screens remains the same. The main objective of the present study was to introduce a logic called Joker Tao (JT), which provides universal data storage for cloud-based databases. This means that all types of input data can be interpreted as an entity and an attribute at the same time, in the same data table.

  6. Bookshelf: a simple curation system for the storage of biomolecular simulation data.

    Science.gov (United States)

    Vohra, Shabana; Hall, Benjamin A; Holdbrook, Daniel A; Khalid, Syma; Biggin, Philip C

    2010-01-01

    Molecular dynamics simulations can now routinely generate data sets of several hundred gigabytes in size. Generating such data has become easier over recent years and the rate of data production is likely to increase rapidly in the near future. One major problem associated with this vast amount of data is how to store it in a way that allows it to be easily retrieved at a later date. The obvious answer to this problem is a database. However, a key issue in the development and maintenance of such a database is its sustainability, which in turn depends on the ease of the deposition and retrieval process. Encouraging users to care about metadata is difficult, and thus the success of any storage system will ultimately depend on how well the system is used by end-users. In this respect we suggest that even a minimal amount of metadata, if stored in a sensible fashion, is useful, if only at the level of individual research groups. We discuss here a simple database system, which we call 'Bookshelf', that uses Python in conjunction with a MySQL database to provide an extremely simple system for curating and keeping track of molecular simulation data. It provides a user-friendly, scriptable solution to the common problem amongst biomolecular simulation laboratories: the storage, logging and subsequent retrieval of large numbers of simulations. Download URL: http://sbcb.bioch.ox.ac.uk/bookshelf/
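
    The abstract describes only the general design, so the sketch below is a minimal, hypothetical "bookshelf"-style table for simulation metadata: a few descriptive fields per run plus the path where the trajectory lives. It uses SQLite so the example is self-contained, whereas Bookshelf itself runs on MySQL behind a web front end; the field names and example record are invented.

      # Minimal curation sketch: log one row of metadata per simulation and
      # list what the group has on disk. SQLite stands in for MySQL here.
      import sqlite3

      conn = sqlite3.connect("simulations.db")
      conn.execute("""
      CREATE TABLE IF NOT EXISTS simulation (
          sim_id      INTEGER PRIMARY KEY,
          system      TEXT NOT NULL,      -- e.g. protein / lipid composition
          length_ns   REAL,
          engine      TEXT,               -- e.g. GROMACS, NAMD
          path        TEXT NOT NULL       -- where the trajectory lives
      )""")
      conn.execute(
          "INSERT INTO simulation (system, length_ns, engine, path) VALUES (?, ?, ?, ?)",
          ("OmpA in POPC bilayer", 100.0, "GROMACS", "/data/group/ompa_popc/run1"),
      )
      conn.commit()

      for row in conn.execute("SELECT sim_id, system, length_ns, path FROM simulation"):
          print(row)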

  7. Three-dimensional integration of nanotechnologies for computing and data storage on a single chip

    Science.gov (United States)

    Shulaker, Max M.; Hills, Gage; Park, Rebecca S.; Howe, Roger T.; Saraswat, Krishna; Wong, H.-S. Philip; Mitra, Subhasish

    2017-07-01

    The computing demands of future data-intensive applications will greatly exceed the capabilities of current electronics, and are unlikely to be met by isolated improvements in transistors, data storage technologies or integrated circuit architectures alone. Instead, transformative nanosystems, which use new nanotechnologies to simultaneously realize improved devices and new integrated circuit architectures, are required. Here we present a prototype of such a transformative nanosystem. It consists of more than one million resistive random-access memory cells and more than two million carbon-nanotube field-effect transistors—promising new nanotechnologies for use in energy-efficient digital logic circuits and for dense data storage—fabricated on vertically stacked layers in a single chip. Unlike conventional integrated circuit architectures, the layered fabrication realizes a three-dimensional integrated circuit architecture with fine-grained and dense vertical connectivity between layers of computing, data storage, and input and output (in this instance, sensing). As a result, our nanosystem can capture massive amounts of data every second, store it directly on-chip, perform in situ processing of the captured data, and produce ‘highly processed’ information. As a working prototype, our nanosystem senses and classifies ambient gases. Furthermore, because the layers are fabricated on top of silicon logic circuitry, our nanosystem is compatible with existing infrastructure for silicon-based technologies. Such complex nano-electronic systems will be essential for future high-performance and highly energy-efficient electronic systems.

  8. The Grid Enabled Mass Storage System (GEMMS): the Storage and Data management system used at the INFN Tier1 at CNAF.

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    The storage solution currently used in production at the INFN Tier-1 at CNAF, is the result of several years of case studies, software development and tests. This solution, called the Grid Enabled Mass Storage System (GEMSS), is based on a custom integration between a fast and reliable parallel filesystem (IBM GPFS), with a complete integrated tape backend based on TIVOLI TSM Hierarchical storage management (HSM) and the Storage Resource Manager (StoRM), providing access to grid users through a standard SRM interface. Since the start of the operations of the Large Hadron Collider (LHC), all the LHC experiments have been using GEMMS at CNAF for both the fast access to data on disk and the long-term tape archive. Moreover, during the last year, GEMSS has become the standard solution for all the other experiments hosted at CNAF, allowing the definitive consolidation of the data storage layer. Our choice has proved to be successful in the last two years of production with constant enhancements in the software re...

  9. Intersecting-storage-rings inclusive data and the charge ratio of cosmic-ray muons

    CERN Document Server

    Yen, E

    1973-01-01

    The μ+/μ- ratio at sea level has been calculated by Frazer et al (1972) using the hypothesis of limiting fragmentation together with the inclusive data below 30 GeV/c. They obtained a value of μ+/μ- ≈ 1.56, to be compared with the experimental value of 1.2 to 1.4. The ratio has been calculated using the recent ISR (CERN Intersecting Storage Rings) data, and a value of μ+/μ- ≈ 1.40 is obtained, in good agreement with the experimental result. (8 refs).

  10. The use of historical data storage and retrieval systems at nuclear power plants

    International Nuclear Information System (INIS)

    Langen, P.A.

    1984-01-01

    In order to assist the nuclear plant operator in the assessment of useful historical plant information, C-E has developed the Historical Data Storage and Retrieval (HDSR) system, which will record, store, recall, and display historical information as it is needed by plant personnel. The system has been designed to respond to the user's needs under a variety of situations. The user is offered the choice of viewing historical data on color video displays as groups or on computer printouts as logs. The graphical representation is based upon a sectoring concept that provides a zoom-in enlargement of sections of the HDSR graphs

  11. Adaptation of PyFlag to Efficient Analysis of Overtaken Computer Data Storage

    Directory of Open Access Journals (Sweden)

    Aleksander Byrski

    2010-03-01

    Full Text Available Based on existing software aimed at supporting investigations through the analysis of computer data storage seized during an investigation (PyFlag), an extension is proposed involving the introduction of dedicated components for data identification and filtering. Hash codes for popular software contained in the NIST/NSRL database are considered in order to exclude unwanted files while searching and to classify them into several categories. The extension allows for further analysis, e.g. using artificial intelligence methods. The considerations are illustrated by an overview of the system's design.
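
    A central ingredient of the proposed extension is screening files against hashes of known software from the NIST/NSRL reference set. The sketch below shows the basic mechanics in Python; the example hash value and the scanned directory are hypothetical placeholders, and a real deployment would load the full NSRL hash set rather than a single entry.

      # Sketch of hash-based filtering of known (ignorable) files, as used
      # when screening a seized data store against the NIST/NSRL set.
      import hashlib
      from pathlib import Path

      def sha1_of(path: Path, block: int = 1 << 20) -> str:
          h = hashlib.sha1()
          with path.open("rb") as f:
              while chunk := f.read(block):
                  h.update(chunk)
          return h.hexdigest().upper()

      # In practice this set would be loaded from the NSRL RDS distribution;
      # the value below is a placeholder.
      known_hashes = {"3395856CE81F2B7382DEE72602F798B642F14140"}

      def unknown_files(root: str):
          """Yield files whose SHA-1 is not in the known-hash set."""
          for p in Path(root).rglob("*"):
              if p.is_file() and sha1_of(p) not in known_hashes:
                  yield p

      for f in unknown_files("/evidence/image_mount"):   # hypothetical mount point
          print(f)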

  12. Data security in genomics: A review of Australian privacy requirements and their relation to cryptography in data storage.

    Science.gov (United States)

    Schlosberg, Arran

    2016-01-01

    The advent of next-generation sequencing (NGS) brings with it a need to manage large volumes of patient data in a manner that is compliant with both privacy laws and long-term archival needs. Outside of the realm of genomics there is a need in the broader medical community to store data, and although radiology aside the volume may be less than that of NGS, the concepts discussed herein are similarly relevant. The relation of so-called "privacy principles" to data protection and cryptographic techniques is explored with regards to the archival and backup storage of health data in Australia, and an example implementation of secure management of genomic archives is proposed with regards to this relation. Readers are presented with sufficient detail to have informed discussions - when implementing laboratory data protocols - with experts in the fields.

  13. Parametric Grid Information in the DOE Knowledge Base: Data Preparation, Storage, and Access

    International Nuclear Information System (INIS)

    Hipp, James R.; Moore, Susan G.; Myers, Stephen C.; Schultz, Craig A.; Shepherd, Ellen; Young, Christopher J.

    1999-01-01

    The parametric grid capability of the Knowledge Base provides an efficient, robust way to store and access interpolatable information which is needed to monitor the Comprehensive Nuclear Test Ban Treaty. To meet both the accuracy and performance requirements of operational monitoring systems, we use a new approach which combines the error estimation of kriging with the speed and robustness of Natural Neighbor Interpolation (NNI). The method involves three basic steps: data preparation (DP), data storage (DS), and data access (DA). The goal of data preparation is to process a set of raw data points to produce a sufficient basis for accurate NNI of value and error estimates in the Data Access step. This basis includes a set of nodes and their connectedness, collectively known as a tessellation, and the corresponding values and errors that map to each node, which we call surfaces. In many cases, the raw data point distribution is not sufficiently dense to guarantee accurate error estimates from the NNI, so the original data set must be densified using a newly developed interpolation technique known as Modified Bayesian Kriging. Once appropriate kriging parameters have been determined by variogram analysis, the optimum basis for NNI is determined in a process we call mesh refinement, which involves iterative kriging, new node insertion, and Delaunay triangle smoothing. The process terminates when an NNI basis has been calculated which will fit the kriged values within a specified tolerance. In the data storage step, the tessellations and surfaces are stored in the Knowledge Base, currently in a binary flatfile format but perhaps in the future in a spatially-indexed database. Finally, in the data access step, a client application makes a request for an interpolated value, which triggers a data fetch from the Knowledge Base through the libKBI interface, a walking triangle search for the containing triangle, and finally the NNI interpolation.
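
    The data-preparation step above begins by building a tessellation (a set of nodes plus their connectedness) over the data points, and the data-access step locates the triangle containing a query point. The sketch below illustrates just those two pieces with SciPy's Delaunay triangulation; the node coordinates are synthetic, and the kriging, densification and natural-neighbor steps of the actual method are not reproduced here.

      # Build a Delaunay tessellation over synthetic (lon, lat) nodes and
      # locate the triangle containing a query point, analogous to the
      # "walking triangle" search described above.
      import numpy as np
      from scipy.spatial import Delaunay

      rng = np.random.default_rng(42)
      nodes = rng.uniform(low=[-120.0, 30.0], high=[-100.0, 45.0], size=(200, 2))
      tess = Delaunay(nodes)

      query = np.array([[-110.0, 38.0]])
      simplex = tess.find_simplex(query)[0]   # -1 would mean outside the hull
      vertices = tess.simplices[simplex]      # node indices of that triangle
      print("containing triangle uses nodes:", vertices, nodes[vertices])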

  14. Ensuring Data Storage Security in Tree cast Routing Architecture for Sensor Networks

    Science.gov (United States)

    Kumar, K. E. Naresh; Sagar, U. Vidya; Waheed, Mohd. Abdul

    2010-10-01

    Recent advances in technology have made possible low-cost, low-power wireless sensors with efficient energy consumption. A network of such nodes can coordinate among themselves for distributed sensing and processing of certain data. We therefore propose an architecture that provides a stateless solution for efficient routing in wireless sensor networks. This type of architecture is known as Tree Cast. We propose a unique method of address allocation, building up multiple disjoint trees which are geographically intertwined and rooted at the data sink. Using these trees, messages can be routed to and from the sink node without maintaining any routing state in the sensor nodes. In contrast to traditional solutions, where the IT services are under proper physical, logical and personnel controls, this routing architecture moves the application software and databases to the large data centers, where the management of the data and services may not be fully trustworthy. This unique attribute, however, poses many new security challenges which have not been well understood. In this paper, we focus on data storage security, which has always been an important aspect of quality of service. To ensure the correctness of users' data in this architecture, we propose an effective and flexible distributed scheme with two salient features, in contrast to its predecessors. By utilizing the homomorphic token with distributed verification of erasure-coded data, our scheme achieves the integration of storage correctness insurance and data error localization, i.e., the identification of misbehaving server(s). Unlike most prior works, the new scheme further supports secure and efficient dynamic operations on data blocks, including data update, delete and append. Extensive security and performance analysis shows that the proposed scheme is highly efficient and resilient against Byzantine failure, malicious data modification attack, and even server

  15. Threshold response using modulated continuous wave illumination for multilayer 3D optical data storage

    Science.gov (United States)

    Saini, A.; Christenson, C. W.; Khattab, T. A.; Wang, R.; Twieg, R. J.; Singer, K. D.

    2017-01-01

    In order to achieve a high capacity 3D optical data storage medium, a nonlinear or threshold writing process is necessary to localize data in the axial dimension. To this end, commercial multilayer discs use thermal ablation of metal films or phase change materials to realize such a threshold process. This paper addresses a threshold writing mechanism relevant to recently reported fluorescence-based data storage in dye-doped co-extruded multilayer films. To gain understanding of the essential physics, single layer spun coat films were used so that the data is easily accessible by analytical techniques. Data were written by attenuating the fluorescence using nanosecond-range exposure times from a 488 nm continuous wave laser overlapping with the single photon absorption spectrum. The threshold writing process was studied over a range of exposure times and intensities, and with different fluorescent dyes. It was found that all of the dyes have a common temperature threshold where fluorescence begins to attenuate, and the physical nature of the thermal process was investigated.

  16. Random Linear Network Coding is Key to Data Survival in Highly Dynamic Distributed Storage

    DEFF Research Database (Denmark)

    Sipos, Marton A.; Fitzek, Frank; Roetter, Daniel Enrique Lucani

    2015-01-01

    Distributed storage solutions have become widespread due to their ability to store large amounts of data reliably across a network of unreliable nodes, by employing repair mechanisms to prevent data loss. Conventional systems rely on static designs with a central control entity to oversee and control the repair process. Given the large costs for maintaining and cooling large data centers, our work proposes and studies the feasibility of a fully decentralized system that can store data even on unreliable and, sometimes, unavailable mobile devices. This imposes new challenges on the design, as the number of available nodes varies greatly over time and keeping track of the system's state becomes unfeasible. As a consequence, conventional erasure correction approaches are ill-suited for maintaining data integrity. In this highly dynamic context, random linear network coding (RLNC) provides...

  17. The Grid Enabled Mass Storage System (GEMSS): the Storage and Data management system used at the INFN Tier1 at CNAF

    International Nuclear Information System (INIS)

    Ricci, Pier Paolo; Cavalli, Alessandro; Dell'Agnello, Luca; Gregori, Daniele; Prosperini, Andrea; Rinaldi, Lorenzo; Sapunenko, Vladimir; Bonacorsi, Daniele; Vagnoni, Vincenzo

    2012-01-01

    The storage system currently used in production at the INFN Tier1 at CNAF is the result of several years of case studies, software development and tests. This solution, called the Grid Enabled Mass Storage System (GEMSS), is based on a custom integration between a fast and reliable parallel filesystem (the IBM General Parallel File System, GPFS), with a complete integrated tape backend based on the Tivoli Storage Manager (TSM), which provides Hierarchical Storage Management (HSM) capabilities, and the Grid Storage Resource Manager (StoRM), providing access to grid users through a standard SRM interface. Since the start of the Large Hadron Collider (LHC) operation, all LHC experiments have been using GEMSS at CNAF for both disk data access and long-term archival on tape media. Moreover, during last year, GEMSS has become the standard solution for all other experiments hosted at CNAF, allowing the definitive consolidation of the data storage layer. Our choice has proved to be very successful during the last two years of production with continuous enhancements, accurate monitoring and effective customizations according to the end-user requests. In this paper a description of the system is reported, addressing recent developments and giving an overview of the administration and monitoring tools. We also discuss the solutions adopted in order to grant the maximum availability of the service and the latest optimization features within the data access process. Finally, we summarize the main results obtained during these last years of activity from the perspective of some of the end-users, showing the reliability and the high performances that can be achieved using GEMSS.

  18. MiMiR: a comprehensive solution for storage, annotation and exchange of microarray data

    Directory of Open Access Journals (Sweden)

    Rahman Fatimah

    2005-11-01

    Full Text Available Abstract Background The generation of large amounts of microarray data presents challenges for data collection, annotation, exchange and analysis. Although there are now widely accepted formats, minimum standards for data content and ontologies for microarray data, only a few groups are using them together to build and populate large-scale databases. Structured environments for data management are crucial for making full use of these data. Description The MiMiR database provides a comprehensive infrastructure for microarray data annotation, storage and exchange and is based on the MAGE format. MiMiR is MIAME-supportive, customised for use with data generated on the Affymetrix platform and includes a tool for data annotation using ontologies. Detailed information on the experiment, methods, reagents and signal intensity data can be captured in a systematic format. Report screens permit the user to query the database, to view annotation on individual experiments and to obtain summary statistics. MiMiR has tools for automatic upload of the data from the microarray scanner and export to databases using MAGE-ML. Conclusion MiMiR facilitates microarray data management, annotation and exchange, in line with international guidelines. The database is valuable for underpinning research activities and promotes a systematic approach to data handling. Copies of MiMiR are freely available to academic groups under licence.

  19. Frameworks for management, storage and preparation of large data volumes (Big Data)

    Directory of Open Access Journals (Sweden)

    Marco Antonio Almeida Pamiño

    2017-05-01

    Full Text Available Weather systems like the World Meteorological Organization's Global Information System need to store different kinds of images, data and files. Big Data and its 3V paradigm can provide a suitable solution to this problem. This tutorial presents some concepts around the Hadoop framework, the de facto standard implementation of Big Data, and how to store semi-structured data generated by automatic weather stations using this framework. Finally, a formal method to generate weather reports using Hadoop's ecosystem frameworks is presented.
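
    As one concrete way of landing semi-structured station reports in the Hadoop ecosystem, the sketch below uses PySpark to read line-delimited JSON observations and write a daily aggregate to HDFS as Parquet. The paths, field names and aggregation are hypothetical; the tutorial itself discusses the framework choices rather than this particular pipeline.

      # Minimal PySpark sketch: ingest semi-structured weather-station JSON
      # and store a curated daily aggregate as Parquet on HDFS.
      from pyspark.sql import SparkSession
      from pyspark.sql import functions as F

      spark = SparkSession.builder.appName("aws-ingest").getOrCreate()

      # One JSON object per line, e.g. {"station": "SBGR", "ts": "...", "temp_c": 23.1}
      reports = spark.read.json("hdfs:///raw/aws/2017/05/*.json")

      daily = (reports
               .withColumn("day", F.to_date("ts"))
               .groupBy("station", "day")
               .agg(F.avg("temp_c").alias("mean_temp_c"),
                    F.count("*").alias("n_obs")))

      daily.write.mode("overwrite").partitionBy("day").parquet("hdfs:///curated/aws_daily")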

  20. Re-organizing Earth Observation Data Storage to Support Temporal Analysis of Big Data

    Science.gov (United States)

    Lynnes, C.

    2017-12-01

    The Earth Observing System Data and Information System archives many datasets that are critical to understanding long-term variations in Earth science properties. Thus, some of these are large, multi-decadal datasets. Yet the challenge in long time series analysis comes less from the sheer volume than from the data organization, which is typically one (or a small number of) time steps per file. The overhead of opening and inventorying complex, API-driven data formats such as Hierarchical Data Format introduces a small latency at each time step, which nonetheless adds up for datasets with O(10^6) single-timestep files. Several approaches to reorganizing the data can mitigate this overhead by an order of magnitude: pre-aggregating data along the time axis (time-chunking); storing the data in a highly distributed file system; or storing data in distributed columnar databases. Storing a second copy of the data incurs extra costs, so some selection criteria must be employed, driven by expected or actual usage by the end-user community and balanced against the extra cost.
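
    A minimal sketch of the time-chunking idea, assuming the granules are netCDF files with one timestep each: open them as a single lazy dataset with xarray and rewrite the variable with chunks that are long along the time axis. The file pattern, variable name, dimension order and chunk sizes are hypothetical, and this is a sketch of the concept rather than the EOSDIS implementation.

      # Concatenate many single-timestep granules along time and rewrite with
      # time-friendly chunking (assumes a variable "precip" with dimensions
      # ordered (time, y, x)).
      import glob
      import xarray as xr

      files = sorted(glob.glob("granules/precip_*.nc"))   # one timestep per file
      ds = xr.open_mfdataset(files, combine="by_coords")  # lazily concatenated

      encoding = {
          "precip": {
              "chunksizes": (len(ds.time), 16, 16),  # long in time, small in space
              "zlib": True, "complevel": 4,
          }
      }
      ds.to_netcdf("precip_timechunked.nc", encoding=encoding)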

  1. A Survey on Data Storage and Information Discovery in the WSANs-Based Edge Computing Systems.

    Science.gov (United States)

    Ma, Xingpo; Liang, Junbin; Liu, Renping; Ni, Wei; Li, Yin; Li, Ran; Ma, Wenpeng; Qi, Chuanda

    2018-02-10

    In the post-Cloud era, the proliferation of Internet of Things (IoT) has pushed the horizon of Edge computing, which is a new computing paradigm in which data are processed at the edge of the network. As the important systems of Edge computing, wireless sensor and actuator networks (WSANs) play an important role in collecting and processing the sensing data from the surrounding environment as well as taking actions on the events happening in the environment. In WSANs, in-network data storage and information discovery schemes with high energy efficiency, high load balance and low latency are needed because of the limited resources of the sensor nodes and the real-time requirement of some specific applications, such as putting out a big fire in a forest. In this article, the existing schemes of WSANs on data storage and information discovery are surveyed with detailed analysis on their advancements and shortcomings, and possible solutions are proposed on how to achieve high efficiency, good load balance, and perfect real-time performances at the same time, hoping that it can provide a good reference for the future research of the WSANs-based Edge computing systems.

  2. A Survey on Data Storage and Information Discovery in the WSANs-Based Edge Computing Systems

    Science.gov (United States)

    Liang, Junbin; Liu, Renping; Ni, Wei; Li, Yin; Li, Ran; Ma, Wenpeng; Qi, Chuanda

    2018-01-01

    In the post-Cloud era, the proliferation of Internet of Things (IoT) has pushed the horizon of Edge computing, which is a new computing paradigm with data processed at the edge of the network. As the important systems of Edge computing, wireless sensor and actuator networks (WSANs) play an important role in collecting and processing the sensing data from the surrounding environment as well as taking actions on the events happening in the environment. In WSANs, in-network data storage and information discovery schemes with high energy efficiency, high load balance and low latency are needed because of the limited resources of the sensor nodes and the real-time requirement of some specific applications, such as putting out a big fire in a forest. In this article, the existing schemes of WSANs on data storage and information discovery are surveyed with detailed analysis on their advancements and shortcomings, and possible solutions are proposed on how to achieve high efficiency, good load balance, and perfect real-time performances at the same time, hoping that it can provide a good reference for the future research of the WSANs-based Edge computing systems. PMID:29439442

  3. A Survey on Data Storage and Information Discovery in the WSANs-Based Edge Computing Systems

    Directory of Open Access Journals (Sweden)

    Xingpo Ma

    2018-02-01

    Full Text Available In the post-Cloud era, the proliferation of Internet of Things (IoT) has pushed the horizon of Edge computing, which is a new computing paradigm with data processed at the edge of the network. As the important systems of Edge computing, wireless sensor and actuator networks (WSANs) play an important role in collecting and processing the sensing data from the surrounding environment as well as taking actions on the events happening in the environment. In WSANs, in-network data storage and information discovery schemes with high energy efficiency, high load balance and low latency are needed because of the limited resources of the sensor nodes and the real-time requirement of some specific applications, such as putting out a big fire in a forest. In this article, the existing schemes of WSANs on data storage and information discovery are surveyed with detailed analysis on their advancements and shortcomings, and possible solutions are proposed on how to achieve high efficiency, good load balance, and perfect real-time performances at the same time, hoping that it can provide a good reference for the future research of the WSANs-based Edge computing systems.

  4. From data storage towards decision making: LHC technical data integration and analysis

    CERN Document Server

    Marsili, A; Nordt, A; Sapinski, M

    2011-01-01

    The monitoring of the beam conditions, equipment conditions and measurements from the beam instrumentation devices in CERN's Large Hadron Collider (LHC) produces more than 100 Gb/day of data. Such a big quantity of data is unprecedented in accelerator monitoring and new developments are needed to access, process, combine and analyse data from different equipment. The Beam Loss Monitoring (BLM) system has been one of the most reliable systems in the LHC during its 2010 run, issuing beam dumps when the detected losses were above the defined abort thresholds. Furthermore, the BLM system was able to detect and study unexpected losses, requiring intensive offline analysis. This article describes the techniques developed to: access the data produced (about 50000 values/s); access relevant system layout information; access, combine and display different machine data.

  5. From data storage towards decision making: LHC technical data integration and analysis

    International Nuclear Information System (INIS)

    Marsili, A.; Holzer, E.B.; Nordt, A.; Sapinski, M.

    2012-01-01

    The monitoring of the beam conditions, equipment conditions and measurements from the beam instrumentation devices in CERN's Large Hadron Collider (LHC) produces more than 100 Gb/day of data. Such a big quantity of data is unprecedented in accelerator monitoring and new developments are needed to access, process, combine and analyse data from different devices. The Beam Loss Monitoring (BLM) system has been one of the most reliable systems in the LHC during its 2010 run, issuing beam dumps when the detected losses were above the defined abort thresholds. Furthermore, the BLM system was able to detect and study unexpected losses, requiring intensive offline analysis. This article describes the techniques developed to: access the data produced (about 50000 values/s); access relevant system layout information; and access, combine and display different machine data. (authors)

  6. Evaluating water storage variations in the MENA region using GRACE satellite data

    KAUST Repository

    Lopez, Oliver

    2013-12-01

    Terrestrial water storage (TWS) variations over large river basins can be derived from temporal gravity field variations observed by the Gravity Recovery and Climate Experiment (GRACE) satellites. These signals are useful for determining accurate estimates of water storage and fluxes over areas covering a minimum of 150,000 km2 (length scales of a few hundred kilometers) and thus prove to be a valuable tool for regional water resources management, particularly for areas with a lack of in-situ data availability or inconsistent monitoring, such as the Middle East and North Africa (MENA) region. This already stressed arid region is particularly vulnerable to climate change and overdraft of its non-renewable freshwater sources, and thus guidance in managing its resources is valuable. An inter-comparison of different GRACE-derived TWS products was done in order to provide a quantitative assessment of their uncertainty and their utility for diagnosing spatio-temporal variability in water storage over the MENA region. Different processing approaches for the inter-satellite tracking data from the GRACE mission have resulted in the development of TWS products, with resolutions in time from 10 days to 1 month and in space from 0.5 to 1 degree global gridded data, while some of them use input from land surface models in order to restore the original signal amplitudes. These processing differences and the difficulties in recovering the mass change signals over arid regions will be addressed. Output from the different products will be evaluated and compared over basins inside the MENA region, and compared to output from land surface models.

  7. Evaluating Water Storage Variations in the MENA region using GRACE Satellite Data

    Science.gov (United States)

    Lopez, O.; Houborg, R.; McCabe, M. F.

    2013-12-01

    Terrestrial water storage (TWS) variations over large river basins can be derived from temporal gravity field variations observed by the Gravity Recovery and Climate Experiment (GRACE) satellites. These signals are useful for determining accurate estimates of water storage and fluxes over areas covering a minimum of 150,000 km2 (length scales of a few hundred kilometers) and thus prove to be a valuable tool for regional water resources management, particularly for areas with a lack of in-situ data availability or inconsistent monitoring, such as the Middle East and North Africa (MENA) region. This already stressed arid region is particularly vulnerable to climate change and overdraft of its non-renewable freshwater sources, and guidance in managing its resources is therefore valuable. An inter-comparison of different GRACE-derived TWS products was carried out in order to provide a quantitative assessment of their uncertainty and their utility for diagnosing spatio-temporal variability in water storage over the MENA region. Different processing approaches for the inter-satellite tracking data from the GRACE mission have resulted in the development of TWS products, with resolutions in time from 10 days to 1 month and in space from 0.5 to 1 degree global gridded data, while some of them use input from land surface models in order to restore the original signal amplitudes. These processing differences and the difficulties in recovering the mass change signals over arid regions will be addressed. Output from the different products will be evaluated and compared over basins inside the MENA region, and compared to output from land surface models.

  8. Eosin blue dye based poly(methacrylate) films for data storage

    Science.gov (United States)

    Sankar, Deepa; Palanisamy, P. K.; Manickasundaram, S.; Kannan, P.

    2006-06-01

    Eosin dye based poly(methacrylates) with variation in the number of methylene spacers have been prepared by a free radical polymerization process. The utility of the polymers for high-density optical data storage using holography has been studied by grating formation with the 514.5 nm line of the Argon ion laser as the source. The influence of various parameters on the diffraction efficiency of the polymers has been studied. The effect of increasing the number of methylene spacers attached to the eosin blue dye on the diffraction efficiency of the grating formed is also discussed. Optical microscopic observations showing grating formation in the polymers are also presented.

  9. A study of data representation in Hadoop to optimize data storage and search performance for the ATLAS EventIndex

    Science.gov (United States)

    Baranowski, Z.; Canali, L.; Toebbicke, R.; Hrivnac, J.; Barberis, D.

    2017-10-01

    This paper reports on the activities aimed at improving the architecture and performance of the ATLAS EventIndex implementation in Hadoop. The EventIndex contains tens of billions of event records, each of which consists of ∼100 bytes, all having the same probability of being searched or counted. Data formats represent one important area for optimizing the performance and storage footprint of applications based on Hadoop. This work reports on the production usage and on tests using several data formats including Map Files, Apache Parquet, Avro, and various compression algorithms. The query engine also plays a critical role in the architecture. We also report on the use of HBase for the EventIndex, focusing on the optimizations performed in production and on the scalability tests. Additional engines that have been tested include Cloudera Impala, in particular for its SQL interface and its optimizations for data warehouse workloads and reports.
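
    As a rough illustration of the format/compression trade-off discussed above (not the EventIndex code itself), the following Python sketch writes a batch of synthetic ~100-byte event records to Apache Parquet with two compression codecs and compares the resulting file sizes; the field names and record layout are hypothetical.

```python
# Illustrative sketch: compare Parquet footprints for synthetic event records.
# Requires pyarrow; field names and sizes are hypothetical, not the EventIndex schema.
import os
import pyarrow as pa
import pyarrow.parquet as pq

n = 100_000
table = pa.table({
    "run_number":   pa.array(range(n), type=pa.int32()),
    "event_number": pa.array(range(n), type=pa.int64()),
    "guid":         pa.array([f"GUID-{i:032d}" for i in range(n)]),
    "trigger_mask": pa.array([i % 256 for i in range(n)], type=pa.int32()),
})

for codec in ("snappy", "gzip"):
    path = f"eventindex_sample_{codec}.parquet"
    pq.write_table(table, path, compression=codec)
    print(codec, os.path.getsize(path), "bytes")
```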

  10. A study of data representation in Hadoop to optimise data storage and search performance for the ATLAS EventIndex

    CERN Document Server

    AUTHOR|(CDS)2078799; The ATLAS collaboration; Canali, Luca; Toebbicke, Rainer; Hrivnac, Julius; Barberis, Dario

    2017-01-01

    This paper reports on the activities aimed at improving the architecture and performance of the ATLAS EventIndex implementation in Hadoop. The EventIndex contains tens of billions of event records, each of which consists of ∼100 bytes, all having the same probability of being searched or counted. Data formats represent one important area for optimizing the performance and storage footprint of applications based on Hadoop. This work reports on the production usage and on tests using several data formats including Map Files, Apache Parquet, Avro, and various compression algorithms. The query engine also plays a critical role in the architecture. We also report on the use of HBase for the EventIndex, focusing on the optimizations performed in production and on the scalability tests. Additional engines that have been tested include Cloudera Impala, in particular for its SQL interface and its optimizations for data warehouse workloads and reports.

  11. A study of data representations in Hadoop to optimize data storage and search performance of the ATLAS EventIndex

    CERN Document Server

    Baranowski, Zbigniew; The ATLAS collaboration

    2016-01-01

    This paper reports on the activities aimed at improving the architecture and performance of the ATLAS EventIndex implementation in Hadoop. The EventIndex contains tens of billions of event records, each consisting of ~100 bytes, all having the same probability of being searched or counted. Data formats represent one important area for optimizing the performance and storage footprint of applications based on Hadoop. This work reports on the production usage and on tests using several data formats including Map Files, Apache Parquet, Avro, and various compression algorithms. The query engine also plays a critical role in the architecture. This paper reports on the use of HBase for the EventIndex, focusing on the optimizations performed in production and on the scalability tests. Additional engines that have been tested include Cloudera Impala, in particular for its SQL interface and its optimizations for data warehouse workloads and reports.

  12. The Experiment Data Depot: A Web-Based Software Tool for Biological Experimental Data Storage, Sharing, and Visualization

    DEFF Research Database (Denmark)

    Morell, William C.; Birkel, Garrett W.; Forrer, Mark

    2017-01-01

    Although recent advances in synthetic biology allow us to produce biological designs more efficiently than ever, our ability to predict the end result of these designs is still nascent. Predictive models require large amounts of high quality data to be parametrized and tested, which are not generally available... ...algorithms. In this paper, we describe EDD and showcase its utility for three different use cases: storage of characterized synthetic biology parts, leveraging proteomics data to improve biofuel yield, and the use of extracellular metabolite concentrations to predict intracellular metabolic fluxes.

  13. Data collection and storage in long-term ecological and evolutionary studies: The Mongoose 2000 system.

    Science.gov (United States)

    Marshall, Harry H; Griffiths, David J; Mwanguhya, Francis; Businge, Robert; Griffiths, Amber G F; Kyabulima, Solomon; Mwesige, Kenneth; Sanderson, Jennifer L; Thompson, Faye J; Vitikainen, Emma I K; Cant, Michael A

    2018-01-01

    Studying ecological and evolutionary processes in the natural world often requires research projects to follow multiple individuals in the wild over many years. These projects have provided significant advances but may also be hampered by the need to accurately and efficiently collect and store multiple streams of data from multiple individuals concurrently. The increase in the availability and sophistication of portable computers (smartphones and tablets) and the applications that run on them has the potential to address many of these data collection and storage issues. In this paper we describe the challenges faced by one such long-term, individual-based research project: the Banded Mongoose Research Project in Uganda. We describe a system we have developed called Mongoose 2000 that utilises the potential of apps and portable computers to meet these challenges. We discuss the benefits and limitations of employing such a system in a long-term research project. The app and source code for the Mongoose 2000 system are freely available and we detail how it might be used to aid data collection and storage in other long-term individual-based projects.

  14. Estimating continental water storage variations in Central Asia area using GRACE data

    International Nuclear Information System (INIS)

    Dapeng, Mu; Zhongchang, Sun; Jinyun, Guo

    2014-01-01

    The goal of the GRACE satellite mission is to determine time-variations of the Earth's gravity, and particularly the effects of fluid mass redistributions at the surface of the Earth. This paper uses GRACE Level-2 RL05 data provided by CSR to estimate water storage variations of four river basins in the Central Asia area for the period from 2003 to 2011. We apply a two-step filtering method to reduce the errors in GRACE data, which combines a Gaussian averaging function and an empirical de-correlation method. We use the GLDAS hydrological model to validate the results from GRACE. A special averaging approach is performed to reduce the errors in GLDAS. The GRACE results for the first three basins are consistent with the GLDAS hydrological model. In the Tarim River basin, there is more discrepancy between GRACE and GLDAS. Precipitation data from weather stations indicate that the results of GRACE are more plausible. We use spectral analysis to obtain the main periods of the GRACE and GLDAS time series and then use least squares adjustment to determine the amplitude and phase. The results show that water storage in Central Asia is decreasing.
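
    As a minimal sketch of the final step described above (and not the authors' code), the amplitude and phase of the annual cycle in a storage time series can be estimated by least squares once the dominant period is known; the synthetic monthly series below stands in for a GRACE or GLDAS basin average.

```python
# Minimal sketch: fit trend + annual cycle to a monthly storage series by least squares.
import numpy as np

t = np.arange(108) / 12.0                      # ~9 years of monthly epochs (in years)
y = -0.8 * t + 3.0 * np.cos(2 * np.pi * t - 1.2) + np.random.normal(0, 0.5, t.size)

# Design matrix: offset, linear trend, annual cosine and sine terms.
A = np.column_stack([np.ones_like(t), t,
                     np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

amplitude = np.hypot(coef[2], coef[3])         # annual amplitude
phase = np.arctan2(coef[3], coef[2])           # annual phase (radians)
print(f"trend = {coef[1]:.2f} cm/yr, amplitude = {amplitude:.2f} cm, phase = {phase:.2f} rad")
```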

  15. Assimilating GRACE terrestrial water storage data into a conceptual hydrology model for the River Rhine

    Science.gov (United States)

    Widiastuti, E.; Steele-Dunne, S. C.; Gunter, B.; Weerts, A.; van de Giesen, N.

    2009-12-01

    Terrestrial water storage (TWS) is a key component of the terrestrial and global hydrological cycles, and plays a major role in the Earth's climate. The Gravity Recovery and Climate Experiment (GRACE) twin satellite mission provided the first space-based dataset of TWS variations, albeit with coarse resolution and limited accuracy. Here, we examine the value of assimilating GRACE observations into a well-calibrated conceptual hydrology model of the Rhine river basin. In this study, the ensemble Kalman filter (EnKF) and smoother (EnKS) were applied to assimilate the GRACE TWS variation data into the HBV-96 rainfall run-off model, from February 2003 to December 2006. Two GRACE datasets were used, the DMT-1 models produced at TU Delft, and the CSR-RL04 models produced by UT-Austin. Each center uses its own data processing and filtering methods, yielding two different estimates of TWS variations and therefore two sets of assimilated TWS estimates. To validate the results, the model-estimated discharge after data assimilation was compared with measured discharge at several stations. As expected, the updated TWS was generally somewhere between the modeled and observed TWS in both experiments and the variance was also lower than both the prior error covariance and the assumed GRACE observation error. However, the impact on the discharge was found to depend heavily on the assimilation strategy used, in particular on how the TWS increments were applied to the individual storage terms of the hydrology model.
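
    The analysis step of the ensemble Kalman filter used in such assimilation experiments can be written in a few lines; the sketch below is a generic stochastic EnKF update on synthetic ensembles, with an assumed observation operator and error variance, and is not the HBV-96/GRACE implementation itself.

```python
# Generic stochastic EnKF analysis step (illustrative, not the HBV-96 setup).
import numpy as np

rng = np.random.default_rng(0)
n_state, n_obs, n_ens = 5, 1, 50        # e.g. model storage terms, one TWS observation

X = rng.normal(size=(n_state, n_ens))   # forecast ensemble (state anomalies)
H = np.ones((n_obs, n_state)) / n_state # observation operator: TWS = mean of storages (assumed)
R = np.array([[0.5]])                   # assumed GRACE observation error variance
y = np.array([0.3])                     # observed TWS anomaly

Xm = X.mean(axis=1, keepdims=True)
Xp = X - Xm                             # ensemble perturbations
P = Xp @ Xp.T / (n_ens - 1)             # sample forecast covariance
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain

# Perturb the observation for each member (stochastic EnKF) and update the ensemble.
Y = y[:, None] + rng.normal(0, np.sqrt(R[0, 0]), size=(n_obs, n_ens))
X_analysis = X + K @ (Y - H @ X)
print("analysis ensemble mean:", X_analysis.mean(axis=1))
```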

  16. Assessment of shielding analysis methods, codes, and data for spent fuel transport/storage applications

    International Nuclear Information System (INIS)

    Parks, C.V.; Broadhead, B.L.; Hermann, O.W.; Tang, J.S.; Cramer, S.N.; Gauthey, J.C.; Kirk, B.L.; Roussin, R.W.

    1988-07-01

    This report provides a preliminary assessment of the computational tools and existing methods used to obtain radiation dose rates from shielded spent nuclear fuel and high-level radioactive waste (HLW). Particular emphasis is placed on analysis tools and techniques applicable to facilities/equipment designed for the transport or storage of spent nuclear fuel or HLW. Applications to cask transport, storage, and facility handling are considered. The report reviews the analytic techniques for generating appropriate radiation sources, evaluating the radiation transport through the shield, and calculating the dose at a desired point or surface exterior to the shield. Discrete ordinates, Monte Carlo, and point kernel methods for evaluating radiation transport are reviewed, along with existing codes and data that utilize these methods. A literature survey was employed to select a cadre of codes and data libraries to be reviewed. The selection process was based on specific criteria presented in the report. Separate summaries were written for several codes (or family of codes) that provided information on the method of solution, limitations and advantages, availability, data access, ease of use, and known accuracy. For each data library, the summary covers the source of the data, applicability of these data, and known verification efforts. Finally, the report discusses the overall status of spent fuel shielding analysis techniques and attempts to illustrate areas where inaccuracy and/or uncertainty exist. The report notes the advantages and limitations of several analysis procedures and illustrates the importance of using adequate cross-section data sets. Additional work is recommended to enable final selection/validation of analysis tools that will best meet the US Department of Energy's requirements for use in developing a viable HLW management system. 188 refs., 16 figs., 27 tabs

  17. Towards regional, error-bounded landscape carbon storage estimates for data-deficient areas of the world.

    Directory of Open Access Journals (Sweden)

    Simon Willcock

    Full Text Available Monitoring landscape carbon storage is critical for supporting and validating climate change mitigation policies. These may be aimed at reducing deforestation and degradation, or increasing terrestrial carbon storage at local, regional and global levels. However, due to data-deficiencies, default global carbon storage values for given land cover types such as 'lowland tropical forest' are often used, termed 'Tier 1 type' analyses by the Intergovernmental Panel on Climate Change (IPCC). Such estimates may be erroneous when used at regional scales. Furthermore, uncertainty assessments are rarely provided, leading to estimates of land cover change carbon fluxes of unknown precision which may undermine efforts to properly evaluate land cover policies aimed at altering land cover dynamics. Here, we present a repeatable method to estimate carbon storage values and associated 95% confidence intervals (CI) for all five IPCC carbon pools (aboveground live carbon, litter, coarse woody debris, belowground live carbon and soil carbon) for data-deficient regions, using a combination of existing inventory data and systematic literature searches, weighted to ensure the final values are regionally specific. The method meets the IPCC 'Tier 2' reporting standard. We use this method to estimate carbon storage over an area of 33.9 million hectares of eastern Tanzania, reporting values for 30 land cover types. We estimate that this area stored 6.33 (5.92-6.74) Pg C in the year 2000. Carbon storage estimates for the same study area extracted from five published Africa-wide or global studies show a mean carbon storage value of ∼50% of that reported using our regional values, with four of the five studies reporting lower carbon storage values. This suggests that carbon storage may have been underestimated for this region of Africa. Our study demonstrates the importance of obtaining regionally appropriate carbon storage estimates, and shows how such values can be produced.

  18. Derived data storage and exchange workflow for large-scale neuroimaging analyses on the BIRN grid

    Directory of Open Access Journals (Sweden)

    David B Keator

    2009-09-01

    Full Text Available Organizing and annotating biomedical data in structured ways has gained much interest and focus in the last 30 years. Driven by decreases in digital storage costs and advances in genetics sequencing, imaging, electronic data collection, and microarray technologies, data is being collected at an alarming rate. The need to store and exchange data in meaningful ways in support of data analysis, hypothesis testing and future collaborative use is pervasive. Because trans-disciplinary projects rely on effective use of data from many domains, there is genuine interest in the informatics community in how best to store and combine this data while maintaining a high level of data quality. The difficulties in sharing and combining raw data become amplified after post-processing and/or data analysis in which the new dataset of interest is a function of the original data and may have been collected by multiple collaborating sites. Simple meta-data, documenting which subject and version of data were used for a particular analysis, becomes complicated by the heterogeneity of the collecting sites, yet is critically important to the interpretation and reuse of derived results. This manuscript will present a case study of using the XML-Based Clinical Experiment Data Exchange (XCEDE) schema and the Human Imaging Database (HID) in the Biomedical Informatics Research Network's (BIRN) distributed environment to document and exchange derived data. The discussion includes an overview of the data structures used in both the XML and the database representations, insight into the design considerations, and the extensibility of the design to support additional analysis streams.

  19. myPhyloDB: a local web server for the storage and analysis of metagenomic data.

    Science.gov (United States)

    Manter, Daniel K; Korsa, Matthew; Tebbe, Caleb; Delgado, Jorge A

    2016-01-01

    myPhyloDB v.1.1.2 is a user-friendly personal database with a browser-interface designed to facilitate the storage, processing, analysis, and distribution of microbial community populations (e.g. 16S metagenomics data). MyPhyloDB archives raw sequencing files, and allows for easy selection of project(s)/sample(s) of any combination from all available data in the database. The data processing capabilities of myPhyloDB are also flexible enough to allow the upload and storage of pre-processed data, or use the built-in Mothur pipeline to automate the processing of raw sequencing data. myPhyloDB provides several analytical (e.g. analysis of covariance, t-tests, linear regression, differential abundance (DESeq2), and principal coordinates analysis (PCoA)) and normalization (rarefaction, DESeq2, and proportion) tools for the comparative analysis of taxonomic abundance, species richness and species diversity for projects of various types (e.g. human-associated, human gut microbiome, air, soil, and water) for any taxonomic level(s) desired. Finally, since myPhyloDB is a local web-server, users can quickly distribute data between colleagues and end-users by simply granting others access to their personal myPhyloDB database. myPhyloDB is available at http://www.ars.usda.gov/services/software/download.htm?softwareid=472 and more information along with tutorials can be found on our website http://www.myphylodb.org. Database URL: http://www.myphylodb.org. Published by Oxford University Press 2016. This work is written by US Government employees and is in the public domain in the United States.

  20. Computerization of reporting and data storage using automatic coding method in the department of radiology

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Byung Hee; Lee, Kyung Sang; Kim, Woo Ho; Han, Joon Koo; Choi, Byung Ihn; Han, Man Chung [College of Medicine, Seoul National Univ., Seoul (Korea, Republic of)

    1990-10-15

    The authors developed a computer program for use in printing reports as well as data storage and retrieval in the Radiology department. This program used an IBM PC AT and was written in the dBASE III Plus language. The automatic coding method of the ACR code, developed by Kim et al., was applied in this program, and the framework of this program is the same as that developed for the surgical pathology department. The working sheet, which contained the name card for X-ray film identification and the results of previous radiologic studies, was printed during registration. The word processing function was applied for issuing the formal report of a radiologic study, and data storage was carried out during the typewriting of the report. Two kinds of data files were stored on the hard disk: the temporary file contained full information and the permanent file contained the patient's identification data and ACR codes. Searching for a specific case was performed by chart number, patient's name, date of study, or ACR code within a second. All the cases were arranged by ACR codes of procedure code, anatomy code, and pathology code. All new data were copied to a diskette automatically after each day's work, so that data could be restored in case of hard disk failure. The main advantage of this program in comparison with a larger computer system is its low price. Based on the experience in the Seoul District Armed Forces General Hospital, we assume that this program provides a solution to various problems in the radiology department where a large computer system with well designed software is not available.

  1. Inspection of commercial optical devices for data storage using a three Gaussian beam microscope interferometer

    International Nuclear Information System (INIS)

    Flores, J. Mauricio; Cywiak, Moises; Servin, Manuel; Juarez P, Lorenzo

    2008-01-01

    Recently, an interferometric profilometer based on the heterodyning of three Gaussian beams has been reported. This microscope interferometer, called a three Gaussian beam interferometer, has been used to profile high quality optical surfaces that exhibit constant reflectivity with high vertical resolution and lateral resolution near λ. We report the use of this interferometer to measure the profiles of two commercially available optical surfaces for data storage, namely, the compact disk (CD-R) and the digital versatile disk (DVD-R). We include experimental results from a one-dimensional radial scan of these devices without data marks. The measurements are taken by placing the devices with the polycarbonate surface facing the probe beam of the interferometer. This microscope interferometer is unique when compared with other optical measuring instruments because it uses narrowband detection, filters out undesirable noisy signals, and because the amplitude of the output voltage signal is basically proportional to the local vertical height of the surface under test, thus detecting with high sensitivity. We show that the resulting profiles, measured with this interferometer across the polycarbonate layer, provide valuable information about the track profiles, making this interferometer a suitable tool for quality control of surface storage devices

  2. Efficient storage, retrieval and analysis of poker hands: An adaptive data framework

    Directory of Open Access Journals (Sweden)

    Gorawski Marcin

    2017-12-01

    Full Text Available In online gambling, poker hands are one of the most popular and fundamental units of the game state and can be considered objects comprising all the events that pertain to a single hand played. In a situation where tens of millions of poker hands are produced daily and need to be stored and analysed quickly, the use of relational databases no longer provides high scalability and performance stability. The purpose of this paper is to present an efficient way of storing and retrieving poker hands in a big data environment. We propose a new, read-optimised storage model that offers significant data access improvements over traditional database systems as well as the existing Hadoop file formats such as ORC, RCFile or SequenceFile. Through index-oriented partition elimination, our file format allows the number of file splits that need to be accessed to be reduced, and improves query response times by up to three orders of magnitude in comparison with other approaches. In addition, our file format supports a range of new indexing structures to facilitate fast row retrieval at a split level. Both index types operate independently of the Hive execution context and allow other big data computational frameworks such as MapReduce or Spark to benefit from the optimized data access path to the hand information. Moreover, we present a detailed analysis of our storage model and its supporting index structures, and how they are organised in the overall data framework. We also describe in detail how predicate based expression trees are used to build effective file-level execution plans. Our experimental tests conducted on a production cluster, holding nearly 40 billion hands spanning over 4000 partitions, show that multi-way partition pruning outperforms other existing file formats, resulting in faster query execution times and better cluster utilisation.
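
    The partition-elimination idea described above is conceptually similar to predicate push-down over a partitioned columnar dataset. The sketch below illustrates that general pattern with pyarrow's dataset API; the path, partitioning scheme and column names are hypothetical, and this is not the storage model proposed in the paper.

```python
# Illustrative only: predicate push-down over a Hive-partitioned Parquet dataset,
# so that only partitions matching the filter are opened and scanned.
# The path and column names are hypothetical.
import pyarrow.dataset as ds

hands = ds.dataset("/data/poker_hands", format="parquet", partitioning="hive")

# Only files under e.g. table_id=42/game_date=2017-12-01/ are read.
subset = hands.to_table(
    filter=(ds.field("table_id") == 42) & (ds.field("game_date") == "2017-12-01"),
    columns=["hand_id", "player_id", "pot_size"],
)
print(subset.num_rows)
```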

  3. Integrated Data Acquisition, Storage, Retrieval and Processing Using the COMPASS DataBase (CDB)

    Czech Academy of Sciences Publication Activity Database

    Urban, Jakub; Pipek, Jan; Hron, Martin; Janky, Filip; Papřok, Richard; Peterka, Matěj; Duarte, A.S.

    2014-01-01

    Roč. 89, č. 5 (2014), s. 712-716 ISSN 0920-3796. [Ninth IAEA TM on Control, Data Acquisition, and Remote Participation for Fusion Research. Hefei, 06.05.2013-10.05.2013] R&D Projects: GA ČR GP13-38121P; GA ČR GAP205/11/2470; GA MŠk(CZ) LM2011021 Institutional support: RVO:61389021 Keywords : tokamak * CODAC * database Subject RIV: BL - Plasma and Gas Discharge Physics Impact factor: 1.152, year: 2014 http://dx.doi.org/10.1016/j.fusengdes.2014.03.032

  4. Data storage for managing the health enterprise and achieving business continuity.

    Science.gov (United States)

    Hinegardner, Sam

    2003-01-01

    As organizations move away from a silo mentality to a vision of enterprise-level information, more healthcare IT departments are rejecting the idea of information storage as an isolated, system-by-system solution. IT executives want storage solutions that act as a strategic element of an IT infrastructure, centralizing storage management activities to effectively reduce operational overhead and costs. This article focuses on three areas of enterprise storage: tape, disk, and disaster avoidance.

  5. Modular routing interface for simultaneous list mode and histogramming mode storage of coincident data

    International Nuclear Information System (INIS)

    D'Achard van Eschut, J.F.M.; Nationaal Inst. voor Kernfysica en Hoge-Energiefysica

    1985-01-01

    A routing interface has been developed and built for successive storage of the digital output of four 13-bit ADCs, within 6 μs, into selected parts of two 16K CAMAC histogramming modules and, if an event trigger is applied, simultaneously into four 64-word-deep (16-bit) first-in first-out (FIFO) CAMAC modules. In this way it is possible to accumulate single spectra on-line and, at the same time, write coincident data in list mode to magnetic tape under control of a computer. Additional routing interfaces can be used in parallel so that extensive data-collecting systems can be set up to store multi-parameter events. (orig.)

  6. Thermally-treated Pt-coated silicon AFM tips for wear resistance in ferroelectric data storage

    International Nuclear Information System (INIS)

    Bhushan, Bharat; Palacio, Manuel; Kwak, Kwang Joo

    2008-01-01

    In ferroelectric data storage, a conductive atomic force microscopy (AFM) probe with a noble metal coating is placed in contact with a lead zirconate titanate (PZT) film. The understanding and improvement of probe tip wear, particularly at high velocities, is needed for high data rate recording. A commercial Pt-coated silicon AFM probe was thermally treated in order to form platinum silicide at the near-surface. Nanoindentation, nanoscratch and wear experiments were performed to evaluate the mechanical properties and wear performance at high velocities. The thermally treated tip exhibited lower wear than the untreated tip. The tip wear mechanism is adhesive and abrasive wear with some evidence of impact wear. The enhancement in mechanical properties and wear resistance in the thermally treated film is attributed to silicide formation in the near-surface. Auger electron spectroscopy and electrical resistivity measurements confirm the formation of platinum silicide. This study advances the understanding of thin film nanoscale surface interactions

  7. Data on the changes of the mussels' metabolic profile under different cold storage conditions

    Directory of Open Access Journals (Sweden)

    Violetta Aru

    2016-06-01

    Full Text Available One of the main problems of seafood marketing is the ease with which fish and shellfish undergo deterioration after death. 1H NMR spectroscopy and microbiological analysis were applied to gain in-depth insight into the effects of cold storage (4 °C and 0 °C) on the spoilage of the mussel Mytilus galloprovincialis. This data article provides information on the average distribution of the microbial loads in mussel specimens and on the acquisition, processing, and multivariate analysis of the 1H NMR spectra from the hydrosoluble phase of stored mussels. This data article relates to the research article entitled "Metabolomics analysis of shucked mussels' freshness" (Aru et al., 2016 [1]).

  8. A distributed big data storage and data mining framework for solar-generated electricity quantity forecasting

    Science.gov (United States)

    Wang, Jianzong; Chen, Yanjun; Hua, Rui; Wang, Peng; Fu, Jia

    2012-02-01

    Photovoltaics is a method of generating electrical power by converting solar radiation into direct current electricity using semiconductors that exhibit the photovoltaic effect. Photovoltaic power generation employs solar panels composed of a number of solar cells containing a photovoltaic material. Due to the growing demand for renewable energy sources, the manufacturing of solar cells and photovoltaic arrays has advanced considerably in recent years. Solar photovoltaics are growing rapidly, albeit from a small base, to a total global capacity of 40,000 MW at the end of 2010. More than 100 countries use solar photovoltaics. Driven by advances in technology and increases in manufacturing scale and sophistication, the cost of photovoltaics has declined steadily since the first solar cells were manufactured. Net metering and financial incentives, such as preferential feed-in tariffs for solar-generated electricity, have supported solar photovoltaic installations in many countries. However, the power generated by solar photovoltaics is affected dramatically by the weather and other natural factors. Accurate prediction of photovoltaic energy is important for intelligent power dispatch, in order to reduce energy dissipation and maintain the security of the power grid. In this paper, we propose a big data system, the Solar Photovoltaic Power Forecasting System (SPPFS), to calculate and predict the power according to real-time conditions. In this system, a distributed mixed database is used to speed up the collection, storage and analysis of meteorological data. In order to improve the accuracy of power prediction, a neural network algorithm has been incorporated into SPPFS. Extensive experiments show that the framework provides higher forecast accuracy (error rate below 15%) and low computing latency by deploying the mixed distributed database architecture for solar-generated electricity.

  9. Development of noSQL data storage for the ATLAS PanDA Monitoring System

    International Nuclear Information System (INIS)

    Ito, H; Potekhin, M; Wenaus, T

    2012-01-01

    For several years the PanDA Workload Management System has been the basis for distributed production and analysis for the ATLAS experiment at the LHC. Since the start of data taking PanDA usage has ramped up steadily, typically exceeding 500k completed jobs/day by June 2011. The associated monitoring data volume has been rising as well, to levels that present a new set of challenges in the areas of database scalability and monitoring system performance and efficiency. These challenges are being met with an R and D effort aimed at implementing a scalable and efficient monitoring data storage based on a noSQL solution (Cassandra). We present our motivations for using this technology, as well as data design and the techniques used for efficient indexing of the data. We also discuss the hardware requirements as they were determined by testing with actual data and realistic rate of queries. In conclusion, we present our experience with operating a Cassandra cluster over an extended period of time and with data load adequate for planned application.
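
    A common pattern for making such monitoring time series efficiently indexable in Cassandra is to bucket rows by a coarse time unit in the partition key and order them by timestamp within each partition. The sketch below illustrates that general pattern with the DataStax Python driver, using a hypothetical schema rather than the actual PanDA one.

```python
# Illustrative Cassandra schema for job-monitoring time series (hypothetical, not PanDA's).
# Requires a reachable Cassandra node and the DataStax driver: pip install cassandra-driver
from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])
session = cluster.connect()

session.execute("""
    CREATE KEYSPACE IF NOT EXISTS monitoring
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
""")
session.execute("""
    CREATE TABLE IF NOT EXISTS monitoring.job_states (
        day     text,          -- coarse time bucket, part of the partition key
        site    text,
        ts      timestamp,     -- clustering column orders rows within a partition
        job_id  bigint,
        state   text,
        PRIMARY KEY ((day, site), ts, job_id)
    )
""")

# One sample row; queries for a given day and site touch a single partition.
session.execute(
    "INSERT INTO monitoring.job_states (day, site, ts, job_id, state) "
    "VALUES (%s, %s, toTimestamp(now()), %s, %s)",
    ("2011-06-01", "BNL", 123456789, "finished"),
)
cluster.shutdown()
```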

  10. Development of noSQL data storage for the ATLAS PanDA Monitoring System

    Science.gov (United States)

    Ito, H.; Potekhin, M.; Wenaus, T.

    2012-12-01

    For several years the PanDA Workload Management System has been the basis for distributed production and analysis for the ATLAS experiment at the LHC. Since the start of data taking PanDA usage has ramped up steadily, typically exceeding 500k completed jobs/day by June 2011. The associated monitoring data volume has been rising as well, to levels that present a new set of challenges in the areas of database scalability and monitoring system performance and efficiency. These challenges are being met with an R&D effort aimed at implementing a scalable and efficient monitoring data storage based on a noSQL solution (Cassandra). We present our motivations for using this technology, as well as data design and the techniques used for efficient indexing of the data. We also discuss the hardware requirements as they were determined by testing with actual data and realistic rate of queries. In conclusion, we present our experience with operating a Cassandra cluster over an extended period of time and with data load adequate for planned application.

  11. Privacy-Aware Relevant Data Access with Semantically Enriched Search Queries for Untrusted Cloud Storage Services.

    Science.gov (United States)

    Pervez, Zeeshan; Ahmad, Mahmood; Khattak, Asad Masood; Lee, Sungyoung; Chung, Tae Choong

    2016-01-01

    Privacy-aware search of outsourced data ensures relevant data access in the untrusted domain of a public cloud service provider. A subscriber of a public cloud storage service can determine the presence or absence of a particular keyword by submitting a search query in the form of a trapdoor. However, these trapdoor-based search queries are limited in functionality and cannot be used to identify secure outsourced data which contains semantically equivalent information. In addition, trapdoor-based methodologies are confined to pre-defined trapdoors and prevent subscribers from searching outsourced data with arbitrarily defined search criteria. To solve the problem of relevant data access, we have proposed an index-based privacy-aware search methodology that ensures semantic retrieval of data from an untrusted domain. This method ensures oblivious execution of a search query and enables authorized subscribers to formulate conjunctive search queries without relying on predefined trapdoors. A security analysis of our proposed methodology shows that, in a conspired attack, unauthorized subscribers and untrusted cloud service providers cannot deduce any information that can lead to the potential loss of data privacy. A computational time analysis on commodity hardware demonstrates that our proposed methodology requires moderate computational resources to model a privacy-aware search query and for its oblivious evaluation on a cloud service provider.
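
    The trapdoor-based searching that this work extends can be sketched with keyed hashes: the cloud stores only keyed digests of keywords and the subscriber submits digests of the query terms, so matching happens without revealing plaintext. The code below is a generic illustration of that baseline idea, not the index-based conjunctive scheme proposed in the paper.

```python
# Generic keyword-trapdoor sketch (not the paper's scheme): the cloud stores only
# keyed digests of keywords, and matching happens on digests instead of plaintext.
import hmac
import hashlib

KEY = b"subscriber-secret-key"          # never shared with the cloud provider

def trapdoor(keyword: str) -> str:
    return hmac.new(KEY, keyword.lower().encode(), hashlib.sha256).hexdigest()

# Index built client-side before outsourcing: document id -> set of keyword digests.
encrypted_index = {
    "doc-001": {trapdoor("invoice"), trapdoor("2016"), trapdoor("contract")},
    "doc-002": {trapdoor("meeting"), trapdoor("notes")},
}

def search(index, query_trapdoors):
    """Server-side matching: return ids whose digest sets contain all query digests."""
    return [doc for doc, kws in index.items() if query_trapdoors <= kws]

print(search(encrypted_index, {trapdoor("invoice"), trapdoor("contract")}))
```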

  12. Decentralized data storage and processing in the context of the LHC experiments at CERN

    Energy Technology Data Exchange (ETDEWEB)

    Blomer, Jakob Johannes

    2012-06-01

    The computing facilities used to process data for the experiments at the Large Hadron Collider (LHC) at CERN are scattered around the world. The embarrassingly parallel workload allows for use of various computing resources, such as computer centers comprising the Worldwide LHC Computing Grid, commercial and institutional cloud resources, as well as individual home PCs in "volunteer clouds". Unlike data, the experiment software and its operating system dependencies cannot be easily split into small chunks. Deployment of experiment software on distributed grid sites is challenging since it consists of millions of small files and changes frequently. This thesis develops a systematic approach to distribute a homogeneous runtime environment to a heterogeneous and geographically distributed computing infrastructure. A uniform bootstrap environment is provided by a minimal virtual machine tailored to LHC applications. Based on a study of the characteristics of LHC experiment software, the thesis argues for the use of content-addressable storage and decentralized caching in order to distribute the experiment software. In order to utilize the technology at the required scale, new methods of pre-processing data into content-addressable storage are developed. A co-operative, decentralized memory cache is designed that is optimized for the high peer churn expected in future virtualized computing clusters. This is achieved using a combination of consistent hashing with global knowledge about the worker nodes' state. The methods have been implemented in the form of a file system for software and Conditions Data delivery. The file system has been widely adopted by the LHC community and the benefits of the presented methods have been demonstrated in practice.
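
    Content-addressable storage names each object by a cryptographic hash of its content, so identical files deduplicate naturally and any cached copy can be verified by re-hashing. The sketch below shows this basic idea in a few lines of Python; it is a generic illustration, not the file system developed in the thesis.

```python
# Minimal content-addressable store: objects are stored under the SHA-1 of their content,
# so duplicates collapse to one object and any copy can be verified by re-hashing.
# Illustrative only; not the implementation described in the thesis.
import hashlib
from pathlib import Path

STORE = Path("cas-store")

def put(data: bytes) -> str:
    digest = hashlib.sha1(data).hexdigest()
    path = STORE / digest[:2] / digest[2:]
    path.parent.mkdir(parents=True, exist_ok=True)
    if not path.exists():                 # deduplication: identical content written once
        path.write_bytes(data)
    return digest

def get(digest: str) -> bytes:
    data = (STORE / digest[:2] / digest[2:]).read_bytes()
    assert hashlib.sha1(data).hexdigest() == digest, "object corrupted"
    return data

h = put(b"experiment software release 17.0.4")
print(h, get(h))
```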

  13. Decentralized data storage and processing in the context of the LHC experiments at CERN

    International Nuclear Information System (INIS)

    Blomer, Jakob Johannes

    2012-01-01

    The computing facilities used to process data for the experiments at the Large Hadron Collider (LHC) at CERN are scattered around the world. The embarrassingly parallel workload allows for use of various computing resources, such as computer centers comprising the Worldwide LHC Computing Grid, commercial and institutional cloud resources, as well as individual home PCs in "volunteer clouds". Unlike data, the experiment software and its operating system dependencies cannot be easily split into small chunks. Deployment of experiment software on distributed grid sites is challenging since it consists of millions of small files and changes frequently. This thesis develops a systematic approach to distribute a homogeneous runtime environment to a heterogeneous and geographically distributed computing infrastructure. A uniform bootstrap environment is provided by a minimal virtual machine tailored to LHC applications. Based on a study of the characteristics of LHC experiment software, the thesis argues for the use of content-addressable storage and decentralized caching in order to distribute the experiment software. In order to utilize the technology at the required scale, new methods of pre-processing data into content-addressable storage are developed. A co-operative, decentralized memory cache is designed that is optimized for the high peer churn expected in future virtualized computing clusters. This is achieved using a combination of consistent hashing with global knowledge about the worker nodes' state. The methods have been implemented in the form of a file system for software and Conditions Data delivery. The file system has been widely adopted by the LHC community and the benefits of the presented methods have been demonstrated in practice.

  14. UO{sub 2} oxidation under dry storage conditions: From data gaps to research needs

    Energy Technology Data Exchange (ETDEWEB)

    Feria, F.; Herranz, L. E. [CIEMAT, Andalucia (Spain)

    2008-10-15

    Dry interim storage is becoming a major activity of today's fuel cycle. The potential contact between fuel rods that are not grossly damaged (i.e., rods containing tiny defects like pinhole leaks and hairline cracks) and an oxidizing atmosphere during cask water removal might lead to unacceptable consequences. One way to prevent this is to determine the time to propagation of a defect at given conditions. This paper compiles and critically reviews the existing database concerning the time-at-temperature profile of fuel rods containing tiny defects that are exposed to oxidizing atmospheres. This review has pointed out significant drawbacks and limitations that would hinder its reliable application to assess the potential for defect propagation of current LWR fuels to be loaded in dry storage casks. Those weaknesses stem essentially from data scarcity and lack of test representativeness. Based on this study, three main areas of work are recommended to fill the existing knowledge gaps: sound characterization of fuel rod responses in the low burnup range (<30 GWd/tU), extension of the database to the high burnups characteristic of currently discharged LWR fuels (<60 GWd/tU), and assessment of the availability (i.e., amount and nature) of oxidizing agents. The suggested work would result in a more complete and extensive database that would strongly support the potential use of 'time at temperature' curves.

  15. ROOT — A C++ framework for petabyte data storage, statistical analysis and visualization

    CERN Document Server

    Antcheva, I; Bellenot, B; Biskup, M; Brun, R; Buncic, N; Canal, Ph; Casadei, D; Couet, O; Fine, V; Franco, L; Ganis, G; Gheata, A; Gonzalez Maline, D; Goto, M; Iwaszkiewicz, J; Kreshuk, A; Marcos Segura, D; Maunder, R; Moneta, L; Naumann, A; Offermann, E; Onuchin, V; Panacek, S; Rademakers, F; Russo, P; Tadel, M

    2009-01-01

    ROOT is an object-oriented C++ framework conceived in the high-energy physics (HEP) community, designed for storing and analyzing petabytes of data in an efficient way. Any instance of a C++ class can be stored into a ROOT file in a machine-independent compressed binary format. In ROOT the TTree object container is optimized for statistical data analysis over very large data sets by using vertical data storage techniques. These containers can span a large number of files on local disks, the web, or a number of different shared file systems. In order to analyze this data, the user can choose from a wide set of mathematical and statistical functions, including linear algebra classes, numerical algorithms such as integration and minimization, and various methods for performing regression analysis (fitting). In particular, the RooFit package allows the user to perform complex data modeling and fitting while the RooStats library provides abstractions and implementations for advanced statistical tools. Multivariat...

  16. A Method of Signal Scrambling to Secure Data Storage for Healthcare Applications.

    Science.gov (United States)

    Bao, Shu-Di; Chen, Meng; Yang, Guang-Zhong

    2017-11-01

    A body sensor network that consists of wearable and/or implantable biosensors has been an important front-end for collecting personal health records. It is expected that the full integration of outside-hospital personal health information and hospital electronic health records will further promote preventative health services as well as global health. However, the integration and sharing of health information is bound to bring with it security and privacy issues. With the extensive development of healthcare applications, security and privacy issues are becoming increasingly important. This paper addresses the potential security risks of healthcare data in Internet-based applications and proposes a method of signal scrambling as an add-on security mechanism in the application layer for a variety of healthcare information, where a piece of tiny data is used to scramble healthcare records. The former is kept locally and the latter, along with security protection, is sent for cloud storage. The tiny data can be derived from a random number generator or even a piece of healthcare data, which makes the method more flexible. The computational complexity and security performance, in terms of theoretical and experimental analysis, have been investigated to demonstrate the efficiency and effectiveness of the proposed method. The proposed method is applicable to all kinds of data that require extra security protection within complex networks.
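
    One simple realization of the idea of scrambling a record with a small piece of locally kept data is a keystream XOR: a short seed stays with the user and only the scrambled record is sent to cloud storage. The sketch below is a generic illustration of that pattern, not the method proposed in the paper; a production system would use an authenticated cipher rather than a bare keystream.

```python
# Generic scrambling sketch (not the paper's algorithm): a small locally kept seed
# drives a keystream that scrambles the record before it is sent to cloud storage.
# A real deployment should use an authenticated cipher (e.g. AES-GCM) instead.
import hashlib

def keystream(seed: bytes, length: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def scramble(record: bytes, seed: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(record, keystream(seed, len(record))))

seed = b"\x07\x2a\x91\x4c"                  # tiny data kept on the local device
record = b"HR=72bpm;SpO2=98%;2017-11-01"    # hypothetical healthcare sample
cloud_copy = scramble(record, seed)         # only this copy is stored remotely
print(scramble(cloud_copy, seed) == record) # descrambling with the local seed
```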

  17. Determining water storage depletion within Iran by assimilating GRACE data into the W3RA hydrological model

    Science.gov (United States)

    Khaki, M.; Forootan, E.; Kuhn, M.; Awange, J.; van Dijk, A. I. J. M.; Schumacher, M.; Sharifi, M. A.

    2018-04-01

    Groundwater depletion, due to both unsustainable water use and a decrease in precipitation, has been reported in many parts of Iran. In order to analyze these changes during the recent decade, in this study, we assimilate Terrestrial Water Storage (TWS) data from the Gravity Recovery And Climate Experiment (GRACE) into the World-Wide Water Resources Assessment (W3RA) model. This assimilation improves model-derived water storage simulations by introducing missing trends and correcting the amplitude and phase of seasonal water storage variations. The Ensemble Square-Root Filter (EnSRF) technique is applied, which showed stable performance in propagating errors during the assimilation period (2002-2012). Our focus is on sub-surface water storage changes including groundwater and soil moisture variations within six major drainage divisions covering the whole of Iran: the eastern part (East), Caspian Sea, Centre, Sarakhs, Persian Gulf and Oman Sea, and Lake Urmia. Results indicate an average of -8.9 mm/year groundwater reduction within Iran during the period 2002 to 2012. A similar decrease is also observed in soil moisture storage especially after 2005. We further apply the canonical correlation analysis (CCA) technique to relate sub-surface water storage changes to climate (e.g., precipitation) and anthropogenic (e.g., farming) impacts. Results indicate an average correlation of 0.81 between rainfall and groundwater variations and also a large impact of anthropogenic activities (mainly irrigation) on Iran's water storage depletion.
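
    Canonical correlation analysis relates two multivariate data sets through paired linear combinations with maximal correlation. The sketch below applies scikit-learn's CCA to synthetic rainfall and groundwater series as a generic illustration; it does not use the study's data or reproduce its analysis.

```python
# Minimal CCA sketch on synthetic data (illustrative; not the study's rainfall/GWS series).
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(1)
n_months = 120
rain = rng.normal(size=(n_months, 3))                        # e.g. rainfall over three sub-basins
gws = 0.8 * rain + rng.normal(scale=0.5, size=rain.shape)    # groundwater responding to rainfall

cca = CCA(n_components=1)
rain_c, gws_c = cca.fit_transform(rain, gws)                 # paired canonical variates
r = np.corrcoef(rain_c[:, 0], gws_c[:, 0])[0, 1]
print(f"first canonical correlation: {r:.2f}")
```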

  18. New optical architecture for holographic data storage system compatible with Blu-ray Disc™ system

    Science.gov (United States)

    Shimada, Ken-ichi; Ide, Tatsuro; Shimano, Takeshi; Anderson, Ken; Curtis, Kevin

    2014-02-01

    A new optical architecture for a holographic data storage system that is compatible with the Blu-ray Disc™ (BD) system is proposed. In the architecture, both signal and reference beams pass through a single objective lens with numerical aperture (NA) 0.85 for realizing angularly multiplexed recording. The geometry of the architecture is closely compatible with the optical architecture of the BD system because the objective lens can be placed parallel to the holographic medium. Comparison of experimental results with theory verified the validity of the optical architecture and demonstrated that the conventional objective-lens motion technique of the BD system can be used for angularly multiplexed recording. A test-bed composed of a blue laser system and an objective lens with NA 0.85 was designed. The feasibility of compatibility with BD is examined using the designed test-bed.

  19. Cathodic Protection for Above Ground Storage Tank Bottom Using Data Acquisition

    Directory of Open Access Journals (Sweden)

    Naseer Abbood Issa Al Haboubi

    2015-07-01

    Full Text Available Computer-controlled impressed-current cathodic protection provides an effective solution to changes in environmental factors and long-term coating degradation. The achieved protection potential distribution and the current demand on the anode can be regulated to the protection criteria, to achieve effective protection of the system. In this paper, the cathodic protection of an above-ground steel storage tank bottom was investigated using an impressed-current system with potential control to manage variations in soil resistivity. A corrosion controller was implemented for the above-ground tank in LabVIEW, in which the tank's bottom-to-soil potential was manipulated to the desired set point (protection criterion 850 mV). National Instruments Data Acquisition (NI-DAQ) and PC controllers for the tank corrosion control system provide a quick response to reach steady-state conditions under any kind of disturbance.

  20. Using data storage tags to link otolith macrostructure in Baltic cod Gadus morhua with environmental conditions

    DEFF Research Database (Denmark)

    Hüssy, Karin; Nielsen, Birgitte; Mosegaard, Henrik

    2009-01-01

    We examined otolith opacity of Baltic cod in relation to environmental conditions in order to evaluate the formation mechanisms of seasonal patterns used in age determination. Adult fish were tagged with data storage tags (DSTs) and a permanent mark was induced in the otoliths by injection of a strontium chloride solution. Based on environmental conditions experienced, fish were classified into different behavioural types: non-reproducing 'non-spawner', and 'spawner' undertaking spawning migrations. Otolith opacity, an indicator of otolith and fish somatic growth and condition, was examined in relation to these environmental drivers. Temperature was the only environmental variable with a significant effect, overlaying a strong size-related effect. The temperature effect was not uniform across behavioural types and spawning periods. Opacity showed a negative correlation with temperature...

  1. MeV ion-beam analysis of optical data storage films

    Science.gov (United States)

    Leavitt, J. A.; Mcintyre, L. C., Jr.; Lin, Z.

    1993-01-01

    Our objectives are threefold: (1) to accurately characterize optical data storage films by MeV ion-beam analysis (IBA) for ODSC collaborators; (2) to develop new and/or improved analysis techniques; and (3) to expand the capabilities of the IBA facility itself. Using H-1(+), He-4(+), and N-15(++) ion beams in the 1.5 MeV to 10 MeV energy range from a 5.5 MV Van de Graaff accelerator, film thickness (in atoms/sq cm), stoichiometry, impurity concentration profiles, and crystalline structure were determined by Rutherford backscattering (RBS), high-energy backscattering, channeling, nuclear reaction analysis (NRA) and proton induced X-ray emission (PIXE). Most of these techniques are discussed in detail in the ODSC Annual Report (February 17, 1987), p. 74. The PIXE technique is briefly discussed in the ODSC Annual Report (March 15, 1991), p. 23.

  2. Weighty data: importance information influences estimated weight of digital information storage devices.

    Directory of Open Access Journals (Sweden)

    Iris eSchneider

    2015-01-01

    Full Text Available Previous work has suggested that perceived importance of an object influences estimates of its weight. Specifically, important books were estimated to be heavier than non-important books. However, the experimental set-up of these studies may have suffered from a potential confound and findings may be confined to books only. Addressing this, we investigate the effect of importance on weight estimates by examining whether the importance of information stored on a data storage device (USB-stick or portable hard drive) can alter weight estimates. Results show that people thinking a USB-stick holds important tax information (vs. expired vs. no information) estimate it to be heavier (Experiment 1) compared to people who do not. Similarly, people who are told a portable hard-drive holds personally relevant information (vs. irrelevant) also estimate the drive to be heavier (Experiments 2a and 2b). The current work shows that importance influences weight perceptions beyond specific objects.

  3. MAHA: A comprehensive system for the storage and visualization of subsoil data for seismic microzonation

    Science.gov (United States)

    Di Felice, P.; Spadoni, M.

    2013-04-01

    MAHA is a database-centred software system for the storage and visualization of subsoil data used for the production of seismic microzonation maps in Italy. The application was implemented using open source software in order to grant its maximum diffusion and customization. A conceptual model of the subsoil, jointly developed by the Italian National Research Council and the National Department of Civil Protection, inspired the structure of the underlying database, consisting of 15 tables, 3 of which of spatial nature to accommodate geo-referenced data associated to points, lines and polygons. A web-GIS interface acts as a bridge between the user and the database, drives the input of geo-referenced data and enables the users to formulate different types of spatial queries. A series of forms designed "ad hoc" and enriched with combo boxes provide guided procedures to maximize the fluency of data entry and to reduce the possibility of erroneous typing. One of these procedures helps to transform the descriptions of the geological units (granular materials), given in technical paper documents by using a conversational style, into standardized numeric codes. Summary reports, produced in the pdf format, can be generated through decoding and graphic display of the parameters previously entered in the database. MAHA was approved by the national commission for seismic microzonation established by the Italian Prime Minister and, in the next years, it is expected to significantly support the entire process of map production in the urban areas more exposed to seismic hazard.

  4. Probe Storage

    NARCIS (Netherlands)

    Gemelli, Marcellino; Abelmann, Leon; Engelen, Johannes Bernardus Charles; Khatib, M.G.; Koelmans, W.W.; Zaboronski, Olog; Campardo, Giovanni; Tiziani, Federico; Laculo, Massimo

    2011-01-01

    This chapter gives an overview of probe-based data storage research over the last three decades, encompassing all aspects of a probe recording system. Following the division found in all mechanically addressed storage systems, the different subsystems (media, read/write heads, positioning, data

  5. Intelligent Management System of Power Network Information Collection Under Big Data Storage

    Directory of Open Access Journals (Sweden)

    Qin Yingying

    2017-01-01

    Full Text Available With the development of the economy and society, big data storage in enterprise management has become a problem that cannot be ignored. How to better manage and optimize the allocation of tasks is an important factor in the sustainable development of an enterprise. Intelligent enterprise information management has now become a focus of management practice in the information age. It presents information to business managers in a more efficient, lower-cost, and global form. The system uses the SG-UAP development tools, which are based on the Eclipse development environment and run on the Windows operating system, with Oracle as the database platform and Tomcat as the network information service for the application server. The system uses an SOA service-oriented architecture, provides RESTful-style services, and uses HTTP(S) as the communication protocol and JSON as the data format. The system is divided into two parts, the front-end and the back-end, and provides functions such as user login, registration, password retrieval, internal personnel information management, and internal data display.

  6. Monitoring Groundwater Storage Changes in the Loess Plateau Using GRACE Satellite Gravity Data, Hydrological Models and Coal Mining Data

    Directory of Open Access Journals (Sweden)

    Xiaowei Xie

    2018-04-01

    Full Text Available Monitoring groundwater storage (GWS) changes is crucial to the rational utilization of groundwater and to ecological restoration in the Loess Plateau of China, one of the regions with the most severe ecological and environmental damage in the world. In this region, the mass loss caused by coal mining can reach billions of tons per year. For this reason, in this work, in addition to Gravity Recovery and Climate Experiment (GRACE) satellite gravity data and hydrological models, coal mining data were also used to monitor GWS variation in the Loess Plateau during the period 2005–2014. The GWS changes derived from different GRACE solutions, that is, the spherical harmonics (SH) solutions, mascon solutions, and Slepian solutions (the Slepian localization of the SH solutions), were compared with in situ GWS changes obtained from 136 groundwater observation wells, with the aim of acquiring the most robust GWS changes. The results showed that the GWS changes from the mascon solutions (mascon-GWS) match best with the in situ GWS changes, showing the highest correlation coefficient, the lowest root mean square error (RMSE) values and the closest annual trend. The mascon-GWS changes were therefore used for the spatial-temporal analysis of GWS changes. Based on these, the groundwater depletion rate of the Loess Plateau was −0.65 ± 0.07 cm/year from 2005–2014, with a more severe consumption rate occurring in its eastern region, reaching about −1.5 cm/year, several times greater than those of the other regions. Furthermore, precipitation and coal mining data were used to analyze the causes of the groundwater depletion: the results showed that seasonal changes in groundwater storage are closely related to rainfall, but the groundwater consumption is mainly due to human activities; coal mining in particular plays a major role in the serious groundwater consumption in the eastern region of the study area. Our results will help in
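
    As a minimal sketch of the water-balance separation described above, GWS anomalies can be obtained by removing model-based soil moisture and snow water equivalent anomalies from the GRACE terrestrial water storage anomalies, and a linear trend can then be fitted. All series and numbers below are synthetic placeholders, not the paper's data.

```python
# Minimal sketch: GWS anomaly = TWS anomaly (GRACE) - (soil moisture + snow)
# anomalies (hydrology models), followed by a least-squares trend in cm/year.
# Monthly basin-average series in cm of equivalent water height; synthetic data.
import numpy as np

def gws_anomaly(tws_cm, sm_cm, swe_cm):
    """Groundwater storage anomaly from TWS minus modelled surface stores."""
    return np.asarray(tws_cm) - (np.asarray(sm_cm) + np.asarray(swe_cm))

def linear_trend_cm_per_year(series_cm, dt_years=1.0 / 12.0):
    """Least-squares linear trend of a monthly series, in cm/year."""
    t = np.arange(len(series_cm)) * dt_years
    slope, _ = np.polyfit(t, series_cm, 1)
    return slope

# Example with synthetic numbers spanning ten years (120 months).
rng = np.random.default_rng(0)
tws = np.linspace(0.0, -6.5, 120) + rng.normal(0, 0.3, 120)
sm = rng.normal(0, 0.2, 120)
swe = np.zeros(120)
print(round(linear_trend_cm_per_year(gws_anomaly(tws, sm, swe)), 2), "cm/year")
```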

  7. A DTM MULTI-RESOLUTION COMPRESSED MODEL FOR EFFICIENT DATA STORAGE AND NETWORK TRANSFER

    Directory of Open Access Journals (Sweden)

    L. Biagi

    2012-08-01

    Full Text Available In recent years the technological evolution of terrestrial, aerial and satellite surveying has considerably increased measurement accuracy and, consequently, the quality of the derived information. At the same time, the ever smaller limitations on data storage devices, in terms of capacity and cost, have allowed the storage and processing of a larger number of instrumental observations. A significant example is the terrain height surveyed by LIDAR (LIght Detection And Ranging) technology, where several height measurements per square metre of land can be obtained. The availability of such a large quantity of observations is an essential requisite for an in-depth knowledge of the phenomena under study. At the same time, however, the most common Geographical Information Systems (GISs) show latency in visualizing and analyzing this kind of data. The problem becomes more evident in the case of Internet GIS. These systems are based on a very frequent flow of geographical information over the internet and, for this reason, the bandwidth of the network and the size of the data to be transmitted are two fundamental factors to be considered in order to guarantee the actual usability of these technologies. In this paper we focus our attention on digital terrain models (DTMs) and briefly analyse the problem of defining the minimal information necessary to store and transmit DTMs over a network, with a fixed tolerance, starting from a huge number of observations. We then propose an innovative compression approach for sparse observations by means of multi-resolution spline function approximation. The method is able to provide metrical accuracy at least comparable to that provided by the most common deterministic interpolation algorithms (inverse distance weighting, local polynomial, radial basis functions). At the same time it dramatically reduces the amount of information required for storing or for transmitting and rebuilding a
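
    To illustrate the general idea of spline-based compression of dense terrain heights (not the paper's multi-resolution algorithm), the sketch below fits a generic SciPy smoothing spline to synthetic scattered observations and compares the number of stored coefficients with the number of original points. Data, smoothing factor and grid are assumptions for illustration only.

```python
# Illustrative sketch: approximate scattered terrain heights by a smoothing
# bivariate spline so that only the spline coefficients need to be stored or
# transmitted, and heights can be rebuilt on demand. Generic SciPy stand-in,
# not the multi-resolution algorithm of the paper.
import numpy as np
from scipy.interpolate import SmoothBivariateSpline

rng = np.random.default_rng(0)
x = rng.uniform(0, 1000, 5000)           # easting of LIDAR returns (m)
y = rng.uniform(0, 1000, 5000)           # northing (m)
z = 50 + 0.02 * x + 5 * np.sin(y / 100)  # synthetic terrain heights (m)

# Fit a smoothing spline; its coefficient array is far smaller than the cloud.
spline = SmoothBivariateSpline(x, y, z, kx=3, ky=3, s=len(z))
residual = z - spline.ev(x, y)
print("max abs error (m):", float(np.max(np.abs(residual))))
print("stored coefficients:", spline.get_coeffs().size, "vs points:", z.size)
```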

  8. Overview of direct air free cooling and thermal energy storage potential energy savings in data centres

    International Nuclear Information System (INIS)

    Oró, Eduard; Depoorter, Victor; Pflugradt, Noah; Salom, Jaume

    2015-01-01

    In recent years the total energy demand of data centres has experienced a dramatic increase which is expected to continue. This is why the data centre industry and researchers are working on implementing energy efficiency measures and integrating renewable energy to overcome energy dependence and to reduce operational costs and CO2 emissions. The cooling system of these unique infrastructures can account for 40% of the total energy consumption. To reduce energy consumption, free cooling strategies are used more and more, but so far there has been little research on the potential of thermal energy storage (TES) solutions to match energy demand and energy availability. Hence, this work provides an overview of the potential of integrating a direct air free cooling strategy and TES systems into data centres located at different European locations. For each location, the benefit of using direct air free cooling is evaluated energetically and economically for a data centre of 1250 kW. The use of direct air free cooling is shown to be feasible; this does not hold for TES systems by themselves, but when TES is combined with an off-peak electricity tariff the operational cooling cost can be reduced drastically. - Highlights: • The total annual hours for direct air free cooling in data centres are calculated. • The potential of TES integration in data centres is evaluated. • The implementation of TES to store ambient-air cold is not recommended. • TES is feasible if combined with redundant chillers and off-peak electricity prices. • The cooling electricity cost is reduced by up to 51%, depending on the location
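
    A back-of-the-envelope sketch of the kind of evaluation described above is to count the hours in a year during which outdoor air alone can cool the data centre, given an allowable supply-air temperature. The temperature series and the threshold below are assumptions for illustration, not the paper's climate data.

```python
# Count annual direct-air free-cooling hours from an hourly outdoor
# temperature series and an assumed allowable supply-air temperature.
import numpy as np

hourly_outdoor_temp_c = 12 + 10 * np.sin(np.linspace(0, 2 * np.pi, 8760))
supply_air_limit_c = 18.0   # assumed allowable supply-air temperature (deg C)

free_cooling_hours = int(np.sum(hourly_outdoor_temp_c < supply_air_limit_c))
print(f"direct air free cooling possible for {free_cooling_hours} h/year "
      f"({100 * free_cooling_hours / 8760:.0f}% of the year)")
```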

  9. Handling the data management needs of high-throughput sequencing data: SpeedGene, a compression algorithm for the efficient storage of genetic data

    Science.gov (United States)

    2012-01-01

    Background As Next-Generation Sequencing data becomes available, existing hardware environments do not provide sufficient storage space and computational power to store and process the data due to its enormous size. This is, and will remain, a frequent problem encountered every day by researchers working with genetic data. Some options are available for compressing and storing such data, such as general-purpose compression software and the PBAT/PLINK binary format; however, these currently available methods either do not offer sufficient compression rates, or require a great amount of CPU time for decompression and loading every time the data is accessed. Results Here, we propose a novel and simple algorithm for storing such sequencing data. We show that the compression factor of the algorithm ranges from 16 to several hundred, which potentially allows SNP data of hundreds of gigabytes to be stored in hundreds of megabytes. We provide a C++ implementation of the algorithm, which supports direct loading and parallel loading of the compressed format without requiring extra time for decompression. By applying the algorithm to simulated and real datasets, we show that the algorithm gives a greater compression rate than the commonly used compression methods, and that the data-loading process takes less time. The C++ library also provides direct data-retrieving functions, which allow the compressed information to be easily accessed by other C++ programs. Conclusions The SpeedGene algorithm enables the storage and analysis of next generation sequencing data in current hardware environments, making system upgrades unnecessary. PMID:22591016
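
    SpeedGene's exact encoding is not reproduced here; the sketch below only illustrates the general idea of compact SNP storage by packing biallelic genotype codes (0, 1, 2 minor-allele counts) into two bits each, similar in spirit to the PLINK binary format mentioned in the abstract.

```python
# Pack biallelic genotype codes (0, 1, 2) into 2 bits per entry and unpack
# them again; a simple stand-in for the idea of compact SNP storage.
import numpy as np

def pack_genotypes(genotypes):
    """Pack an array of genotype codes (0, 1, 2) into 2 bits per entry."""
    g = np.asarray(genotypes, dtype=np.uint8)
    padded = np.concatenate([g, np.zeros((-len(g)) % 4, dtype=np.uint8)])
    quads = padded.reshape(-1, 4)
    return (quads[:, 0] | (quads[:, 1] << 2) |
            (quads[:, 2] << 4) | (quads[:, 3] << 6)).astype(np.uint8)

def unpack_genotypes(packed, n):
    """Inverse of pack_genotypes for the first n genotypes."""
    p = np.asarray(packed, dtype=np.uint8)
    quads = np.stack([(p >> s) & 0b11 for s in (0, 2, 4, 6)], axis=1)
    return quads.reshape(-1)[:n]

g = np.random.randint(0, 3, size=1_000_000).astype(np.uint8)
packed = pack_genotypes(g)
assert np.array_equal(unpack_genotypes(packed, len(g)), g)
print("compression vs one byte per genotype:", len(g) / len(packed))  # ~4x
```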

  10. An experiment in big data: storage, querying and visualisation of data taken from the Liverpool Telescope's wide field cameras

    Science.gov (United States)

    Barnsley, R. M.; Steele, Iain A.; Smith, R. J.; Mawson, Neil R.

    2014-07-01

    The Small Telescopes Installed at the Liverpool Telescope (STILT) project has been in operation since March 2009, collecting data with three wide field unfiltered cameras: SkycamA, SkycamT and SkycamZ. To process the data, a pipeline was developed to automate source extraction, catalogue cross-matching, photometric calibration and database storage. In this paper, modifications and further developments to this pipeline will be discussed, including a complete refactor of the pipeline's codebase into Python, migration of the back-end database technology from MySQL to PostgreSQL, and changing the catalogue used for source cross-matching from USNO-B1 to APASS. In addition to this, details will be given relating to the development of a preliminary front-end to the source extracted database which will allow a user to perform common queries such as cone searches and light curve comparisons of catalogue and non-catalogue matched objects. Some next steps and future ideas for the project will also be presented.

  11. Formalizing structured file services for the data storage and retrieval subsystem of the data management system for Spacestation Freedom

    Science.gov (United States)

    Jamsek, Damir A.

    1993-01-01

    A brief example of the use of formal methods techniques in the specification of a software system is presented. The report is part of a larger effort targeted at defining a formal methods pilot project for NASA. One possible application domain that may be used to demonstrate the effective use of formal methods techniques within the NASA environment is presented. It is not intended to provide a tutorial on either formal methods techniques or the application being addressed. It should, however, provide an indication that the application being considered is suitable for formal methods by showing how such a task may be started. The particular system being addressed is the Structured File Services (SFS), which is part of the Data Storage and Retrieval Subsystem (DSAR), which in turn is part of the Data Management System (DMS) onboard Spacestation Freedom. This is a software system currently under development for NASA. An informal mathematical development is presented. Section 3 contains the same development using Penelope (23), an Ada specification and verification system. The complete text of the English-version Software Requirements Specification (SRS) is reproduced in Appendix A.

  12. Report from SG 1.2: use of 3-D seismic data in exploration, production and underground storage

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2000-07-01

    The objective of this study was to investigate the experience gained from using 3D and 4D techniques in exploration, production and underground storage. The use of 3D seismic data is increasing and considerable progress in the application of such data has been achieved in recent years. 3D is now in extensive use in exploration, field and storage development planning and reservoir management. By using 4D (or time-lapse) seismic data from a given producing area, it is also possible to monitor gas movement as a function of time in a gas field or storage. This emerging technique is therefore very useful in reservoir management, in order to obtain increased recovery and higher production, and to reduce the risk of infill wells. These techniques can also be used for monitoring underground gas storage. The study gives recommendations on the use of 3D and 4D seismic in the gas industry. For this purpose, three specific questionnaires were proposed: the first dedicated to exploration, development and production of gas fields (Production questionnaire), the second dedicated to gas storages (Storage questionnaire) and the third dedicated to the servicing companies. The main results are: - The benefit from 3D is clear for both producing and storage operators in improving structural shape, fault pattern and reservoir knowledge. The method usually saves wells and improves gas volume management. - 4D seismic is an emerging technique with high potential benefits for producers. Research in 4D must focus on the integration of seismic methodology and interpretation of results with production measurements in reservoir models. (author)

  13. A Cloud Storage Platform in the Defense Context : Mobile Data Management With Unreliable Network Conditions

    NARCIS (Netherlands)

    Veen, J.S. van der; Bastiaans, M.; Jonge, M. de; Strijkers, R.J.

    2012-01-01

    This paper discusses a cloud storage platform in the defense context. The mobile and dismounted domains of defense organizations typically use devices that are light in storage, processing and communication capabilities. This means that it is difficult to store a lot of information on these devices

  14. A recursive Formulation of the Inversion of symmetric positive definite matrices in packed storage data format

    DEFF Research Database (Denmark)

    Andersen, Bjarne Stig; Gunnels, John A.; Gustavson, Fred

    2002-01-01

    A new Recursive Packed Inverse Calculation Algorithm for symmetric positive definite matrices has been developed. The new Recursive Inverse Calculation algorithm uses minimal storage, n(n+1)/2 words, and has nearly the same performance as the LAPACK full-storage algorithm using n^2 memory words...
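
    To illustrate the packed-storage idea referred to above, the sketch below holds a symmetric positive definite matrix as its lower triangle only, in n(n+1)/2 words instead of n^2, and verifies the round trip with a NumPy inverse on the expanded matrix. The paper's recursive algorithm itself is not reproduced.

```python
# Packed (lower-triangle, column-major) storage of a symmetric positive
# definite matrix, with a round-trip check against the full matrix.
import numpy as np

def pack_lower(a):
    """Column-major packed storage of the lower triangle of a symmetric matrix."""
    n = a.shape[0]
    return np.concatenate([a[j:, j] for j in range(n)])

def unpack_lower(ap, n):
    """Rebuild the full symmetric matrix from packed storage."""
    a = np.zeros((n, n))
    k = 0
    for j in range(n):
        a[j:, j] = ap[k:k + n - j]
        k += n - j
    return a + np.tril(a, -1).T

n = 5
b = np.random.rand(n, n)
spd = b @ b.T + n * np.eye(n)           # symmetric positive definite test matrix
ap = pack_lower(spd)                     # n(n+1)/2 = 15 numbers instead of 25
inv = np.linalg.inv(unpack_lower(ap, n))
assert np.allclose(unpack_lower(ap, n) @ inv, np.eye(n))
print("packed length:", ap.size, "full length:", spd.size)
```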

  15. Combined Statistical Analyses for Long-Term Stability Data with Multiple Storage Conditions: A Simulation Study

    NARCIS (Netherlands)

    Almalik, Osama; Nijhuis, Michiel B.; van den Heuvel, Edwin R.

    2014-01-01

    Shelf-life estimation usually requires that at least three registration batches are tested for stability at multiple storage conditions. The shelf-life estimates are often obtained by linear regression analysis per storage condition, an approach implicitly suggested by ICH guideline Q1E. A linear

  16. N-1-Alkylated Pyrimidine Films as a New Potential Optical Data Storage Medium

    DEFF Research Database (Denmark)

    Lohse, Brian; Hvilsted, Søren; Berg, Rolf Henrik

    2006-01-01

    storage. Their dimerization efficiency was compared, in solution, with uracil as a reference, and as films, to investigate the correlation between solution and film. Films of good quality displaying excellent thermal and optical stability can be fabricated. A significant optical contrast between...... grating storage are also demonstrated in the films. Writing and reading of the gray scale can be performed at the same wavelength....

  17. ROOT: A C++ framework for petabyte data storage, statistical analysis and visualization

    International Nuclear Information System (INIS)

    Antcheva, I.; Ballintijn, M.; Bellenot, B.; Biskup, M.; Brun, R.; Buncic, N.; Couet, O.; Franco, L.; Canal, Ph.; Casadei, D.; Fine, V.

    2009-01-01

    ROOT is an object-oriented C++ framework conceived in the high-energy physics (HEP) community, designed for storing and analyzing petabytes of data in an efficient way. Any instance of a C++ class can be stored into a ROOT file in a machine-independent compressed binary format. In ROOT the TTree object container is optimized for statistical data analysis over very large data sets by using vertical data storage techniques. These containers can span a large number of files on local disks, the web or a number of different shared file systems. In order to analyze this data, the user can choose from a wide set of mathematical and statistical functions, including linear algebra classes, numerical algorithms such as integration and minimization, and various methods for performing regression analysis (fitting). In particular, the RooFit package allows the user to perform complex data modeling and fitting, while the RooStats library provides abstractions and implementations for advanced statistical tools. Multivariate classification methods based on machine learning techniques are available via the TMVA package. A central piece of these analysis tools is the set of histogram classes, which provide binning of one- and multi-dimensional data. Results can be saved in high-quality graphical formats like PostScript and PDF or in bitmap formats like JPG or GIF. Results can also be stored as ROOT macros that allow a full recreation and rework of the graphics. Users typically create their analysis macros step by step, making use of the interactive C++ interpreter CINT, while running over small data samples. Once the development is finished, they can run these macros at full compiled speed over large data sets, using on-the-fly compilation, or by creating a stand-alone batch program. Finally, if processing farms are available, the user can reduce the execution time of intrinsically parallel tasks - e.g. data mining in HEP - by using PROOF, which will take care of optimally
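
    ROOT's own tutorials cover the storage side in depth; as a minimal orientation, the sketch below writes and reads a TTree through the PyROOT bindings, assuming a ROOT installation with Python support. File, tree and branch names are arbitrary, and only the data storage features described above are touched, not the statistics or graphics packages.

```python
# Minimal PyROOT sketch: store values in a TTree inside a compressed ROOT
# file and read them back. Assumes ROOT is installed with Python bindings.
from array import array
import ROOT

# --- write ---
f = ROOT.TFile("example.root", "RECREATE")
tree = ROOT.TTree("events", "toy event data")
energy = array("d", [0.0])
tree.Branch("energy", energy, "energy/D")
for i in range(1000):
    energy[0] = 0.1 * i
    tree.Fill()
tree.Write()
f.Close()

# --- read ---
f = ROOT.TFile("example.root")
tree = f.Get("events")
total = sum(entry.energy for entry in tree)
print("entries:", tree.GetEntries(), "sum of energy:", total)
f.Close()
```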

  18. Improving Large-scale Storage System Performance via Topology-aware and Balanced Data Placement

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Feiyi [ORNL; Oral, H Sarp [ORNL; Vazhkudai, Sudharshan S [ORNL

    2014-01-01

    With the advent of big data, the I/O subsystems of large-scale compute clusters are becoming a center of focus, with more applications putting greater demands on end-to-end I/O performance. These subsystems are often complex in design. They comprise multiple hardware and software layers to cope with the increasing capacity, capability and scalability requirements of data-intensive applications. The shared nature of storage resources and the intrinsic interactions across these layers make realizing user-level, end-to-end performance gains a great challenge. We propose a topology-aware resource load balancing strategy to improve per-application I/O performance. We demonstrate the effectiveness of our algorithm on an extreme-scale compute cluster, Titan, at the Oak Ridge Leadership Computing Facility (OLCF). Our experiments with both synthetic benchmarks and a real-world application show that, even under congestion, our proposed algorithm can improve large-scale application I/O performance significantly, resulting in both reduced application run times and higher-resolution simulation runs.

  19. Coherent scattering noise reduction method with wavelength diversity detection for holographic data storage system

    Science.gov (United States)

    Nakamura, Yusuke; Hoshizawa, Taku; Takashima, Yuzuru

    2017-09-01

    A new method, wavelength diversity detection (WDD), for improving signal quality is proposed and its effectiveness is numerically confirmed. We consider that WDD is especially effective for high-capacity systems having low hologram diffraction efficiencies. In such systems, the signal quality is primarily limited by coherent scattering noise; thus, effective improvement of the signal quality under a scattering-limited system is of great interest. WDD utilizes a new degree of freedom, the spectrum width, and scattering by molecules to improve the signal quality of the system. We found that WDD improves the quality by counterbalancing the degradation of the quality due to Bragg mismatch. With WDD, a higher-scattering-coefficient medium can improve the quality. The result provides an interesting insight into the requirements for material characteristics, especially for a large-M/# material. In general, a larger-M/# material contains more molecules; thus, the system is subject to more scattering, which actually improves the quality with WDD. We propose a pathway for a future holographic data storage system (HDSS) using WDD, which can record a larger amount of data than a conventional HDSS.

  20. Holographic storage of three-dimensional image and data using photopolymer and polymer dispersed liquid crystal films

    International Nuclear Information System (INIS)

    Gao Hong-Yue; Liu Pan; Zeng Chao; Yao Qiu-Xiang; Zheng Zhiqiang; Liu Jicheng; Zheng Huadong; Yu Ying-Jie; Zeng Zhen-Xiang; Sun Tao

    2016-01-01

    We present holographic storage of three-dimensional (3D) images and data in a photopolymer film without any applied electric field. Its absorption and diffraction efficiency are measured, and a reflective analog hologram of a real object and an image of digital information are recorded in the films. The photopolymer is compared with polymer dispersed liquid crystals as a holographic material. Although the holographic diffraction efficiency of the former is slightly lower than that of the latter, this work demonstrates that the photopolymer is more suitable for analog holograms and permanent big data storage because of its high definition and because no high-voltage electric field is needed. Our study therefore proposes a potential holographic storage material for large-size static 3D holographic displays, including analog hologram displays, digital hologram prints, and holographic disks. (special topic)

  1. Model-independent and fast determination of optical functions in storage rings via multiturn and closed-orbit data

    Directory of Open Access Journals (Sweden)

    Bernard Riemann

    2011-06-01

    Full Text Available Multiturn (or turn-by-turn) data acquisition has proven to be a new source of direct measurements for Twiss parameters in storage rings. On the other hand, closed-orbit measurements are a long-known tool for analyzing closed-orbit perturbations with conventional beam position monitor (BPM) systems and are necessarily available at every storage ring. This paper aims at combining the advantages of multiturn measurements and closed-orbit data. We show that only two multiturn BPMs and four correctors in one localized drift space in the storage ring (diagnostic drift) are sufficient for model-independent and absolute measurement of the β and φ functions at all BPMs, including the conventional ones, instead of requiring all BPMs to be equipped with multiturn electronics.

  2. Model-independent and fast determination of optical functions in storage rings via multiturn and closed-orbit data

    Science.gov (United States)

    Riemann, Bernard; Grete, Patrick; Weis, Thomas

    2011-06-01

    Multiturn (or turn-by-turn) data acquisition has proven to be a new source of direct measurements for Twiss parameters in storage rings. On the other hand, closed-orbit measurements are a long-known tool for analyzing closed-orbit perturbations with conventional beam position monitor (BPM) systems and are necessarily available at every storage ring. This paper aims at combining the advantages of multiturn measurements and closed-orbit data. We show that only two multiturn BPMs and four correctors in one localized drift space in the storage ring (diagnostic drift) are sufficient for model-independent and absolute measuring of β and φ functions at all BPMs, including the conventional ones, instead of requiring all BPMs being equipped with multiturn electronics.

  3. Technology for organization of the onboard system for processing and storage of ERS data for ultrasmall spacecraft

    Science.gov (United States)

    Strotov, Valery V.; Taganov, Alexander I.; Konkin, Yuriy V.; Kolesenkov, Aleksandr N.

    2017-10-01

    Processing and analysis of Earth remote sensing data on board an ultra-small spacecraft is a relevant task, given the significant energy expenditure of data transfer and the limited performance of onboard computers. This raises the issue of effective and reliable storage of the overall information flow from onboard data-collection systems, including Earth remote sensing data, in a specialized database. The paper considers the peculiarities of database management system operation with a multilevel memory structure. A format has been developed for storing data in the database; it describes the physical structure of the database and contains the parameters required for loading information. This structure reduces the memory occupied by the database because key values need not be stored separately. The paper presents the architecture of a relational database management system designed for embedding into the onboard software of an ultra-small spacecraft. A database for storing various information, including Earth remote sensing data, can be developed with this database management system for subsequent processing. The suggested architecture places low demands on the computing power and memory resources available on board an ultra-small spacecraft. Data integrity is ensured during input and modification of the structured information.

  4. Hydrological storage variations in a lake water balance, observed from multi-sensor satellite data and hydrological models.

    Science.gov (United States)

    Singh, Alka; Seitz, Florian; Schwatke, Christian; Guentner, Andreas

    2013-04-01

    Freshwater lakes and reservoirs account for 74.5% of continental water storage in surface water bodies, while only 1.8% resides in rivers. Lakes and reservoirs are a key component of the continental hydrological cycle, but in-situ monitoring networks are very limited, either because of the sparse spatial distribution of gauges or because of national data policy. Monitoring and predicting extreme events is very challenging in that case. In this study we demonstrate the use of optical remote sensing, satellite altimetry and the GRACE gravity field mission to monitor lake water storage variations in the Aral Sea. The Aral Sea is one of the most unfortunate examples of a large anthropogenic catastrophe: the fourth largest lake of the 1960s has been desiccated over more than 75% of its area due to the diversion of its primary rivers for irrigation purposes. Our study is focused on the time frame of the GRACE mission; therefore we consider changes from 2002 onwards. Continuous monthly time series of water masks from Landsat satellite data and of water levels from altimetry missions were derived. Monthly volumetric variations of the lake water storage were computed by intersecting a digital elevation model of the lake with the respective water mask and altimetry water level. With this approach we obtained the volume from two independent remote sensing methods, reducing the error in the estimated volume through least-squares adjustment. The resulting variations were then compared with the mass variability observed by GRACE. In addition, GRACE estimates of water storage variations were compared with simulation results of the WaterGAP Global Hydrology Model (WGHM). The different observations from all missions agree that the lake reached an absolute minimum in autumn 2009. A marked reversal of the negative trend occurred in 2010, but water storage in the lake decreased again afterwards. The results reveal that water storage variations in the Aral Sea are indeed the principal, but not the only, contributor to the GRACE signal of
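
    A minimal sketch of the volume computation described above: intersect a digital elevation model of the lake bed with a water mask and an altimetric water level, and sum the water column over the flooded pixels. The arrays, threshold and pixel size below are synthetic placeholders, not the study's data.

```python
# Monthly lake volume from a DEM of the lake bed, a water mask and an
# altimetric water level; synthetic inputs for illustration only.
import numpy as np

def lake_volume_km3(dem_m, water_mask, water_level_m, pixel_area_m2):
    """Water volume (km^3) for one month from DEM, mask and level."""
    depth = np.where(water_mask & (water_level_m > dem_m),
                     water_level_m - dem_m, 0.0)
    return float(depth.sum() * pixel_area_m2 / 1e9)

dem = np.random.uniform(20.0, 40.0, size=(500, 500))   # lake-bed elevation (m)
mask = dem < 35.0                                       # stand-in for a Landsat water mask
print(lake_volume_km3(dem, mask, 35.0, pixel_area_m2=30 * 30))
```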

  5. Development and evaluation of a low-cost and high-capacity DICOM image data storage system for research.

    Science.gov (United States)

    Yakami, Masahiro; Ishizu, Koichi; Kubo, Takeshi; Okada, Tomohisa; Togashi, Kaori

    2011-04-01

    Thin-slice CT data, useful for clinical diagnosis and research, is now widely available but is typically discarded in many institutions after a short period of time due to data storage capacity limitations. We designed and built a low-cost, high-capacity Digital Imaging and COmmunication in Medicine (DICOM) storage system able to store thin-slice image data for years, using off-the-shelf consumer hardware components such as a Macintosh computer, a Windows PC, and network-attached storage units. "Ordinary" hierarchical file systems, instead of a centralized data management system such as a relational database, were adopted to manage the patient DICOM files, arranging them in directories so that the DICOM files of each study can be accessed quickly and easily by following the directory tree in Windows Explorer via study date and patient ID. The software used for this system was the open-source OsiriX and additional programs we developed ourselves, both freely available via the Internet. The initial cost of this system was about $3,600 with an incremental storage cost of about $900 per terabyte (TB). The system has been running since 7 February 2008, with the stored data increasing at a rate of about 1.3 TB per month. The total data stored was 21.3 TB on 23 June 2009. The maintenance workload was found to be about 30 to 60 min once every 2 weeks. In conclusion, this newly developed DICOM storage system is useful for research due to its cost-effectiveness, enormous capacity, high scalability, sufficient reliability, and easy data access.
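
    The storage scheme above relies on an ordinary directory hierarchy (study date, then patient ID) rather than a database. Below is a hedged sketch of such a filing step using the open-source pydicom package; the paths and the exact layout are assumptions, not the authors' code.

```python
# File one DICOM object into <root>/<StudyDate>/<PatientID>/<SOPInstanceUID>.dcm
# using header fields read with pydicom. Paths and layout are illustrative.
from pathlib import Path
import shutil
import pydicom

def file_dicom(src_path, archive_root):
    """Copy one DICOM file into a study-date / patient-ID directory tree."""
    ds = pydicom.dcmread(src_path, stop_before_pixels=True)
    dest_dir = Path(archive_root) / str(ds.StudyDate) / str(ds.PatientID)
    dest_dir.mkdir(parents=True, exist_ok=True)
    dest = dest_dir / f"{ds.SOPInstanceUID}.dcm"
    shutil.copy2(src_path, dest)
    return dest

# Example: file every .dcm found in an incoming folder.
# for p in Path("/incoming").rglob("*.dcm"):
#     file_dicom(p, "/archive")
```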

  6. Design rules for phase-change materials in data storage applications

    Energy Technology Data Exchange (ETDEWEB)

    Lencer, Dominic; Salinga, Martin [I. Physikalisches Institut IA, RWTH Aachen University, 52056 Aachen (Germany); Wuttig, Matthias [I. Physikalisches Institut IA, RWTH Aachen University, 52056 Aachen (Germany); Juelich-Aachen Research Alliance, Section Fundamentals of Future Information Technology (JARA-FIT), 52056 Aachen (Germany)

    2011-05-10

    Phase-change materials can rapidly and reversibly be switched between an amorphous and a crystalline phase. Since both phases are characterized by very different optical and electrical properties, these materials can be employed for rewritable optical and electrical data storage. Hence, there are considerable efforts to identify suitable materials, and to optimize them with respect to specific applications. Design rules that can explain why the materials identified so far enable phase-change based devices would hence be very beneficial. This article describes materials that have been successfully employed and discusses common features regarding both typical structures and bonding mechanisms. It is shown that typical structural motifs and electronic properties can be found in the crystalline state that are indicative of resonant bonding, from which the employed contrast originates. The occurrence of resonance is linked to the composition, thus providing a design rule for phase-change materials. This understanding helps to unravel characteristic properties such as electrical and thermal conductivity, which are discussed in the subsequent section. Then, turning to the transition kinetics between the phases, the current understanding and modeling of the processes of amorphization and crystallization are discussed. Finally, present approaches for improved high-capacity optical discs and fast non-volatile electrical memories, which hold the potential to succeed present-day Flash memory, are presented. (Copyright 2011 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)

  7. Retention of intermediate polarization states in ferroelectric materials enabling memories for multi-bit data storage

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Dong; Asadi, Kamal; Blom, Paul W. M.; Leeuw, Dago M. de, E-mail: deleeuw@mpip-mainz.mpg.de [Max-Planck Institute for Polymer Research, Ackermannweg 10, 55128 Mainz (Germany); Katsouras, Ilias [Holst Centre, High Tech Campus 31, 5656AE Eindhoven (Netherlands); Groen, Wilhelm A. [Holst Centre, High Tech Campus 31, 5656AE Eindhoven (Netherlands); Faculty of Aerospace Engineering, Delft University of Technology, Kluyverweg 1 2629 HS, Delft (Netherlands)

    2016-06-06

    A homogeneous ferroelectric single crystal exhibits only two remanent polarization states that are stable over time, whereas intermediate, or unsaturated, polarization states are thermodynamically unstable. Commonly used ferroelectric materials, however, are inhomogeneous polycrystalline thin films or ceramics. To investigate the stability of intermediate polarization states, formed upon incomplete, or partial, switching, we have systematically studied their retention in capacitors comprising two classic ferroelectric materials, viz. random copolymer of vinylidene fluoride with trifluoroethylene, P(VDF-TrFE), and Pb(Zr,Ti)O{sub 3}. Each experiment started from a discharged and electrically depolarized ferroelectric capacitor. Voltage pulses were applied to set the given polarization states. The retention was measured as a function of time at various temperatures. The intermediate polarization states are stable over time, up to the Curie temperature. We argue that the remarkable stability originates from the coexistence of effectively independent domains, with different values of polarization and coercive field. A domain growth model is derived quantitatively describing deterministic switching between the intermediate polarization states. We show that by using well-defined voltage pulses, the polarization can be set to any arbitrary value, allowing arithmetic programming. The feasibility of arithmetic programming along with the inherent stability of intermediate polarization states makes ferroelectric materials ideal candidates for multibit data storage.

  8. Retention of intermediate polarization states in ferroelectric materials enabling memories for multi-bit data storage

    Science.gov (United States)

    Zhao, Dong; Katsouras, Ilias; Asadi, Kamal; Groen, Wilhelm A.; Blom, Paul W. M.; de Leeuw, Dago M.

    2016-06-01

    A homogeneous ferroelectric single crystal exhibits only two remanent polarization states that are stable over time, whereas intermediate, or unsaturated, polarization states are thermodynamically unstable. Commonly used ferroelectric materials, however, are inhomogeneous polycrystalline thin films or ceramics. To investigate the stability of intermediate polarization states, formed upon incomplete, or partial, switching, we have systematically studied their retention in capacitors comprising two classic ferroelectric materials, viz. random copolymer of vinylidene fluoride with trifluoroethylene, P(VDF-TrFE), and Pb(Zr,Ti)O3. Each experiment started from a discharged and electrically depolarized ferroelectric capacitor. Voltage pulses were applied to set the given polarization states. The retention was measured as a function of time at various temperatures. The intermediate polarization states are stable over time, up to the Curie temperature. We argue that the remarkable stability originates from the coexistence of effectively independent domains, with different values of polarization and coercive field. A domain growth model is derived quantitatively describing deterministic switching between the intermediate polarization states. We show that by using well-defined voltage pulses, the polarization can be set to any arbitrary value, allowing arithmetic programming. The feasibility of arithmetic programming along with the inherent stability of intermediate polarization states makes ferroelectric materials ideal candidates for multibit data storage.

  9. Design rules for phase-change materials in data storage applications.

    Science.gov (United States)

    Lencer, Dominic; Salinga, Martin; Wuttig, Matthias

    2011-05-10

    Phase-change materials can rapidly and reversibly be switched between an amorphous and a crystalline phase. Since both phases are characterized by very different optical and electrical properties, these materials can be employed for rewritable optical and electrical data storage. Hence, there are considerable efforts to identify suitable materials, and to optimize them with respect to specific applications. Design rules that can explain why the materials identified so far enable phase-change based devices would hence be very beneficial. This article describes materials that have been successfully employed and discusses common features regarding both typical structures and bonding mechanisms. It is shown that typical structural motifs and electronic properties can be found in the crystalline state that are indicative of resonant bonding, from which the employed contrast originates. The occurrence of resonance is linked to the composition, thus providing a design rule for phase-change materials. This understanding helps to unravel characteristic properties such as electrical and thermal conductivity which are discussed in the subsequent section. Then, turning to the transition kinetics between the phases, the current understanding and modeling of the processes of amorphization and crystallization are discussed. Finally, present approaches for improved high-capacity optical discs and fast non-volatile electrical memories, which hold the potential to succeed present-day Flash memory, are presented. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Thermoelectric PbTe thin film for superresolution optical data storage

    International Nuclear Information System (INIS)

    Lee, Hyun Seok; Cheong, Byung-ki; Lee, Taek Sung; Lee, Kyeong Seok; Kim, Won Mok; Lee, Jae Won; Cho, Sung Ho; Youl Huh, Joo

    2004-01-01

    To find practical use in ultrahigh-density optical data storage, the superresolution (SR) technique needs a material that can provide a high SR capability without sacrificing durability against repeated readout and writing. Thermoelectric materials appear to be promising candidates due to their capability of yielding phase-change-free thermo-optic changes. A feasibility study was carried out with PbTe for its large thermoelectric coefficient and high stability over a wide temperature range as a crystalline single phase. Under exposure to pulsed red light, the material was found to display positive, yet completely reversible, changes of optical transmittance regardless of laser power, fulfilling the basic requirements for SR readout and writing. The material was also shown to have a high endurance against repeated static laser heating of up to the 10^6-10^7 cycles tested. A read-only memory disk with a PbTe SR layer yielded a carrier-to-noise ratio of 47 dB at 3.5 mW for a 0.25 μm pit, below the optical resolution limit (∼0.27 μm) of the tester

  11. Binary codes storage and data encryption in substrates with single proton beam writing technology

    International Nuclear Information System (INIS)

    Zhang Jun; Zhan Furu; Hu Zhiwen; Chen Lianyun; Yu Zengliang

    2006-01-01

    It has been demonstrated that characters can be written by proton beams in various materials. In contributing to the rapid development of proton beam writing technology, we introduce a new method for binary code storage and data encryption by writing binary codes of characters (BCC) in substrates with single proton beam writing technology. In this study, two kinds of BCC (ASCII BCC and long bit encrypted BCC) were written in CR-39 by a 2.6 MeV single proton beam. Our results show that in comparison to directly writing character shapes, writing ASCII BCC turned out to be about six times faster and required about one fourth the area in substrates. The approach of writing long bit encrypted BCC by single proton beams supports preserving confidential information in substrates. Additionally, binary codes fabricated by MeV single proton beams in substrates are more robust than those formed by lasers, since MeV single proton beams can make much deeper pits in the substrates
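
    The gain reported above comes from writing each character as its ASCII bit pattern rather than as its glyph. The snippet below shows only that conversion; beam control is obviously outside its scope.

```python
# Convert text into ASCII binary codes of characters (BCC), 8 bits each.
def ascii_bcc(text):
    """Return the ASCII binary codes of the characters, 8 bits each."""
    return [format(ord(c), "08b") for c in text]

print(ascii_bcc("CR-39"))
# ['01000011', '01010010', '00101101', '00110011', '00111001']
```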

  12. ROOT — A C++ framework for petabyte data storage, statistical analysis and visualization

    Science.gov (United States)

    Antcheva, I.; Ballintijn, M.; Bellenot, B.; Biskup, M.; Brun, R.; Buncic, N.; Canal, Ph.; Casadei, D.; Couet, O.; Fine, V.; Franco, L.; Ganis, G.; Gheata, A.; Maline, D. Gonzalez; Goto, M.; Iwaszkiewicz, J.; Kreshuk, A.; Segura, D. Marcos; Maunder, R.; Moneta, L.; Naumann, A.; Offermann, E.; Onuchin, V.; Panacek, S.; Rademakers, F.; Russo, P.; Tadel, M.

    2011-06-01

    A new stable version ("production version") v5.28.00 of ROOT [1] has been published [2]. It features several major improvements in many areas, most noteworthy data storage performance as well as statistics and graphics features. Some of these improvements have already been predicted in the original publication Antcheva et al. (2009) [3]. This version will be maintained for at least 6 months; new minor revisions ("patch releases") will be published [4] to solve problems reported with this version. New version program summary:
    Program title: ROOT
    Catalogue identifier: AEFA_v2_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFA_v2_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: GNU Lesser Public License v.2.1
    No. of lines in distributed program, including test data, etc.: 2 934 693
    No. of bytes in distributed program, including test data, etc.: 1009
    Distribution format: tar.gz
    Programming language: C++
    Computer: Intel i386, Intel x86-64, Motorola PPC, Sun Sparc, HP PA-RISC
    Operating system: GNU/Linux, Windows XP/Vista/7, Mac OS X, FreeBSD, OpenBSD, Solaris, HP-UX, AIX
    Has the code been vectorized or parallelized?: Yes
    RAM: > 55 Mbytes
    Classification: 4, 9, 11.9, 14
    Catalogue identifier of previous version: AEFA_v1_0
    Journal reference of previous version: Comput. Phys. Commun. 180 (2009) 2499
    Does the new version supersede the previous version?: Yes
    Nature of problem: Storage, analysis and visualization of scientific data
    Solution method: Object store, wide range of analysis algorithms and visualization methods
    Reasons for new version: Added features and corrections of deficiencies
    Summary of revisions: The release notes at http://root.cern.ch/root/v528/Version528.news.html give a module-oriented overview of the changes in v5.28.00. Highlights include: File format: Reading of TTrees has been improved dramatically with respect to CPU time (30%) and notably with respect to disk space. Histograms: A

  13. Hydrogen storage as a hydride. Citations from the International Aerospace Abstracts data base

    Science.gov (United States)

    Zollars, G. F.

    1980-01-01

    These citations from the international literature concern the storage of hydrogen in various metal hydrides. Binary and intermetallic hydrides are considered. Specific alloys discussed include iron-titanium, lanthanum-nickel, magnesium-copper and magnesium-nickel, among others.

  14. Programs for data accumulation and storage from the multicrate CAMAC systems based on the M-6000 computer

    International Nuclear Information System (INIS)

    Antonichev, G.M.; Shilkin, I.P.; Bespalova, T.V.; Golutvin, I.A.; Maslov, V.V.; Nevskaya, N.A.

    1978-01-01

    Programs for data accumulation and storage from multicrate CAMAC systems, organized in parallel into a branch and connected to the M-6000 computer via the branch interface, are described. Program operation in the different modes of the CAMAC apparatus is also described. All the programs operate within the real-time disk operating system

  15. Analysis and comparison of NoSQL databases with an introduction to consistent references in big data storage systems

    Science.gov (United States)

    Dziedzic, Adam; Mulawka, Jan

    2014-11-01

    NoSQL is a new approach to data storage and manipulation. The aim of this paper is to gain more insight into NoSQL databases, as we are still in the early stages of understanding when and how to use them appropriately. In this submission descriptions of selected NoSQL databases are presented. Each of the databases is analysed with a primary focus on its data model, data access, architecture and practical usage in real applications. Furthermore, the NoSQL databases are compared with respect to data references: relational databases offer foreign keys, whereas NoSQL databases provide only limited reference support. An intermediate model between graph theory and relational algebra which can address the problem should be created. Finally, a proposal for a new approach to the problem of inconsistent references in Big Data storage systems is introduced.

  16. Requirements of data acquisition and analysis for condensed matter studies at the weapons neutron research/proton storage ring facility

    International Nuclear Information System (INIS)

    Johnson, M.W.; Goldstone, J.A.; Taylor, A.D.

    1982-11-01

    With the completion of the proton storage ring (PSR) in 1985, the subsequent increase in neutron flux, and the continuing improvement in neutron scattering instruments, a significant improvement in data acquisition and data analysis capabilities will be required. A brief account of the neutron source is given together with the associated neutron scattering instruments. Based on current technology and operating instruments, a projection for 1985 to 1990 of the neutron scattering instruments and their main parameters is given. From the expected data rates and the projected instruments, the size of data storage is estimated and the user requirements are developed. General requirements are outlined, with specific requirements in user hardware and software stated. A project time scale to complete the data acquisition and analysis system by 1985 is given

  17. Assimilation of Gridded GRACE Terrestrial Water Storage Estimates in the North American Land Data Assimilation System

    Science.gov (United States)

    Kumar, Sujay V.; Zaitchik, Benjamin F.; Peters-Lidard, Christa D.; Rodell, Matthew; Reichle, Rolf; Li, Bailing; Jasinski, Michael; Mocko, David; Getirana, Augusto; De Lannoy, Gabrielle

    2016-01-01

    The objective of the North American Land Data Assimilation System (NLDAS) is to provide best available estimates of near-surface meteorological conditions and soil hydrological status for the continental United States. To support the ongoing efforts to develop data assimilation (DA) capabilities for NLDAS, the results of Gravity Recovery and Climate Experiment (GRACE) DA implemented in a manner consistent with NLDAS development are presented. Following previous work, GRACE terrestrial water storage (TWS) anomaly estimates are assimilated into the NASA Catchment land surface model using an ensemble smoother. In contrast to many earlier GRACE DA studies, a gridded GRACE TWS product is assimilated, spatially distributed GRACE error estimates are accounted for, and the impact that GRACE scaling factors have on assimilation is evaluated. Comparisons with quality-controlled in situ observations indicate that GRACE DA has a positive impact on the simulation of unconfined groundwater variability across the majority of the eastern United States and on the simulation of surface and root zone soil moisture across the country. Smaller improvements are seen in the simulation of snow depth, and the impact of GRACE DA on simulated river discharge and evapotranspiration is regionally variable. The use of GRACE scaling factors during assimilation improved DA results in the western United States but led to small degradations in the eastern United States. The study also found comparable performance between the use of gridded and basin averaged GRACE observations in assimilation. Finally, the evaluations presented in the paper indicate that GRACE DA can be helpful in improving the representation of droughts.

  18. Tests of Cloud Computing and Storage System features for use in H1 Collaboration Data Preservation model

    International Nuclear Information System (INIS)

    Łobodziński, Bogdan

    2011-01-01

    Based on the currently developing strategy for data preservation and long-term analysis in HEP, tests of a possible future Cloud Computing setup based on the Eucalyptus Private Cloud platform and the petabyte-scale open-source storage system CEPH were performed for the H1 Collaboration. Improvements in computing power and the strong development of storage systems suggest that a single Cloud Computing resource supported at a given site will be sufficient for the analysis requirements beyond the end-date of the experiments. This work describes our test-bed architecture, which could be applied to fulfill the requirements of the physics program of H1 after the end date of the Collaboration. We discuss the reasons why we chose the Eucalyptus platform and the CEPH storage infrastructure, as well as our experience with installing and supporting these infrastructures. Using our first test results we examine performance characteristics, observed failure states, deficiencies, bottlenecks and scaling boundaries.

  19. Groundwater storage changes in the Tibetan Plateau and adjacent areas revealed from GRACE satellite gravity data

    Science.gov (United States)

    Xiang, Longwei; Wang, Hansheng; Steffen, Holger; Wu, Patrick; Jia, Lulu; Jiang, Liming; Shen, Qiang

    2016-09-01

    Understanding groundwater storage (GWS) changes is vital to the utilization and control of water resources in the Tibetan Plateau. However, well level observations are rare in this big area, and reliable hydrology models including GWS are not available. We use hydro-geodesy to quantitate GWS changes in the Tibetan Plateau and surroundings from 2003 to 2009 using a combined analysis of satellite gravity and satellite altimetry data, hydrology models as well as a model of glacial isostatic adjustment (GIA). Release-5 GRACE gravity data are jointly used in a mascon fitting method to estimate the terrestrial water storage (TWS) changes during the period, from which the hydrology contributions and the GIA effects are effectively deducted to give the estimates of GWS changes for 12 selected regions of interest. The hydrology contributions are carefully calculated from glaciers and lakes by ICESat-1 satellite altimetry data, permafrost degradation by an Active-Layer Depth (ALD) model, soil moisture and snow water equivalent by multiple hydrology models, and the GIA effects are calculated with the new ICE-6G_C (VM5a) model. Taking into account the measurement errors and the variability of the models, the uncertainties are rigorously estimated for the TWS changes, the hydrology contributions (including GWS changes) and the GIA effect. For the first time, we show explicitly separated GWS changes in the Tibetan Plateau and adjacent areas except for those to the south of the Himalayas. We find increasing trend rates for eight basins: + 2.46 ± 2.24 Gt/yr for the Jinsha River basin, + 1.77 ± 2.09 Gt/yr for the Nujiang-Lancangjiang Rivers Source Region, + 1.86 ± 1.69 Gt/yr for the Yangtze River Source Region, + 1.14 ± 1.39 Gt/yr for the Yellow River Source Region, + 1.52 ± 0.95 Gt/yr for the Qaidam basin, + 1.66 ± 1.52 Gt/yr for the central Qiangtang Nature Reserve, + 5.37 ± 2.17 Gt/yr for the Upper Indus basin and + 2.77 ± 0.99 Gt/yr for the Aksu River basin. All these

  20. Global models underestimate large decadal declining and rising water storage trends relative to GRACE satellite data

    Science.gov (United States)

    Scanlon, Bridget R.; Zhang, Zizhan; Save, Himanshu; Sun, Alexander Y.; van Beek, Ludovicus P. H.; Wiese, David N.; Reedy, Robert C.; Longuevergne, Laurent; Döll, Petra; Bierkens, Marc F. P.

    2018-01-01

    Assessing reliability of global models is critical because of increasing reliance on these models to address past and projected future climate and human stresses on global water resources. Here, we evaluate model reliability based on a comprehensive comparison of decadal trends (2002–2014) in land water storage from seven global models (WGHM, PCR-GLOBWB, GLDAS NOAH, MOSAIC, VIC, CLM, and CLSM) to trends from three Gravity Recovery and Climate Experiment (GRACE) satellite solutions in 186 river basins (∼60% of global land area). Medians of modeled basin water storage trends greatly underestimate GRACE-derived large decreasing (≤−0.5 km3/y) and increasing (≥0.5 km3/y) trends. Decreasing trends from GRACE are mostly related to human use (irrigation) and climate variations, whereas increasing trends reflect climate variations. For example, in the Amazon, GRACE estimates a large increasing trend of ∼43 km3/y, whereas most models estimate decreasing trends (−71 to 11 km3/y). Land water storage trends, summed over all basins, are positive for GRACE (∼71–82 km3/y) but negative for models (−450 to −12 km3/y), contributing opposing trends to global mean sea level change. Impacts of climate forcing on decadal land water storage trends exceed those of modeled human intervention by about a factor of 2. The model-GRACE comparison highlights potential areas of future model development, particularly simulated water storage. The inability of models to capture large decadal water storage trends based on GRACE indicates that model projections of climate and human-induced water storage changes may be underestimated. PMID:29358394

  1. Reproducibility of wrist home blood pressure measurement with position sensor and automatic data storage

    Directory of Open Access Journals (Sweden)

    Nickenig Georg

    2009-05-01

    Full Text Available Abstract Background Wrist blood pressure (BP) devices have physiological limits with regard to accuracy; therefore they have not been preferred for home BP monitoring. However, some wrist devices have been successfully validated using established validation protocols. Therefore this study assessed the reproducibility of wrist home BP measurement with position sensor and automatic data storage. Methods To compare the reproducibility of three different BP measurement methods: (1) office BP, (2) home BP (Omron wrist device HEM-637 IT with position sensor), (3) 24-hour ambulatory BP (24-h ABPM; ABPM-04, Meditech, Hungary), conventional sphygmomanometric office BP was measured on study days 1 and 7, 24-h ABPM on study days 7 and 14, and home BP between study days 1 and 7 and between study days 8 and 14 in 69 hypertensive and 28 normotensive subjects. The correlation coefficient of each BP measurement method with the echocardiographic left ventricular mass index was analyzed. The schedule of home readings was performed according to recently published European Society of Hypertension (ESH) guidelines. Results The reproducibility of home BP measurement, analyzed by the standard deviation as well as the squared differences of mean individual differences between the respective BP measurements, was significantly higher than the reproducibility of office BP (p Conclusion The short-term reproducibility of home BP measurement with the Omron HEM-637 IT wrist device was superior to the reproducibility of office BP and 24-h ABPM measurement. Furthermore, home BP with the wrist device showed similar correlations to target organ damage as recently reported for upper arm devices. Although wrist devices have to be used cautiously and with defined limitations, the use of validated devices with position sensor according to recently recommended measurement schedules might have the potential to be used for therapy monitoring.

  2. A price and performance comparison of three different storage architectures for data in cloud-based systems

    Science.gov (United States)

    Gallagher, J. H. R.; Jelenak, A.; Potter, N.; Fulker, D. W.; Habermann, T.

    2017-12-01

    Providing data services based on cloud computing technology that are equivalent to those developed for traditional computing and storage systems is critical for a successful migration to cloud-based architectures for data production, scientific analysis and storage. OPeNDAP Web-service capabilities (comprising the Data Access Protocol (DAP) specification plus open-source software for realizing DAP in servers and clients) are among the most widely deployed means for achieving data-as-a-service functionality in the Earth sciences. OPeNDAP services are especially common in traditional data center environments where servers offer access to datasets stored in (very large) file systems, and a preponderance of the source data for these services is stored in the Hierarchical Data Format Version 5 (HDF5). Three candidate architectures for serving NASA satellite Earth Science HDF5 data via Hyrax running on Amazon Web Services (AWS) were developed, and their performance was examined for a set of representative use cases. Performance was assessed in terms of both runtime and incurred cost. The three architectures differ in how HDF5 files are stored in the Amazon Simple Storage Service (S3) and how the Hyrax server (as an EC2 instance) retrieves their data. Results for both serial and parallel access to HDF5 data in S3 will be presented. While the study focused on HDF5 data, OPeNDAP and the Hyrax data server, the architectures are generic and the analysis can be extrapolated to many different data formats, web APIs, and data servers.
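
    The architectures compared in the paper differ in how the server pulls HDF5 bytes out of S3; one recurring design choice is whole-object download versus ranged GETs of only the needed byte spans. Below is a hedged illustration of the two access patterns with boto3; the bucket, key and byte range are placeholders, and this is not the Hyrax code.

```python
# Two S3 access patterns relevant to serving HDF5 data: download the whole
# object, or fetch only a byte range (e.g. one HDF5 chunk). Placeholders only.
import boto3

s3 = boto3.client("s3")
BUCKET, KEY = "example-bucket", "granule.h5"

# (a) whole-object download, then open locally with h5py or the data server.
s3.download_file(BUCKET, KEY, "/tmp/granule.h5")

# (b) ranged GET of a single byte span, avoiding the full transfer.
resp = s3.get_object(Bucket=BUCKET, Key=KEY, Range="bytes=4096-8191")
chunk = resp["Body"].read()
print(len(chunk), "bytes fetched")
```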

  3. Solar energy storage via liquid filled cans - Test data and analysis

    Science.gov (United States)

    Saha, H.

    1978-01-01

    This paper describes the design of a solar thermal storage test facility with water-filled metal cans as the heat storage medium and also presents some preliminary test results and analysis. This combination of solid and liquid media shows unique heat transfer and heat content characteristics and is well suited for use with solar air systems for space and hot-water heating. The trends of the test results acquired thus far are representative of the test-bed characteristics while operating in the various modes.

  4. Extracting Biological Meaning From Global Proteomic Data on Circulating-Blood Platelets: Effects of Diabetes and Storage Time

    Energy Technology Data Exchange (ETDEWEB)

    Miller, John H.; Suleiman, Atef; Daly, Don S.; Springer, David L.; Spinelli, Sherry L.; Blumberg, Neil; Phipps, Richard P.

    2008-11-25

    Transfusion of platelets into patients suffering from trauma and a variety of diseases is a common medical practice that involves millions of units per year. Partial activation of platelets can result in the release of bioactive proteins and lipid mediators that increase the risk of adverse post-transfusion effects. Type-2 diabetes and storage are two factors known to cause partial activation of platelets. A global proteomic study was undertaken to investigate these effects. In this paper we discuss the methods used to interpret these data in terms of biological processes affected by diabetes and storage. The main emphasis is on the processing of proteomic data for gene ontology enrichment analysis by techniques originally designed for microarray data.
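
    A hedged illustration of the kind of gene-ontology enrichment test mentioned above, borrowed from microarray analysis: a hypergeometric over-representation test. All of the counts below are invented, not values from the study.

```python
# Over-representation (hypergeometric) test for one GO term; counts are made up.
from scipy.stats import hypergeom

M = 4000   # proteins in the background (all quantified platelet proteins)
n = 120    # background proteins annotated with the GO term of interest
N = 250    # proteins affected by storage/diabetes in a given comparison
k = 22     # affected proteins carrying the GO annotation

# P(X >= k): chance of observing at least k annotated proteins at random.
p_value = hypergeom.sf(k - 1, M, n, N)
print(f"enrichment p-value: {p_value:.3g}")
```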

  5. Crossbar memory array of organic bistable rectifying diodes for nonvolatile data storage

    NARCIS (Netherlands)

    Asadi, Kamal; Li, Mengyuan; Stingelin, Natalie; Blom, Paul W. M.; de Leeuw, Dago M.

    2010-01-01

    Cross-talk in memories using resistive switches in a cross-bar geometry can be prevented by integration of a rectifying diode. We present a functional cross-bar memory array using a phase-separated blend of a ferroelectric and a semiconducting polymer as storage medium. Each intersection acts

  6. Discrete event simulation and the resultant data storage system response in the operational mission environment of Jupiter-Saturn /Voyager/ spacecraft

    Science.gov (United States)

    Mukhopadhyay, A. K.

    1978-01-01

    The Data Storage Subsystem Simulator (DSSSIM), which simulates (in ground software) the occurrence of discrete events in the Voyager mission, is described. Functional requirements for Data Storage Subsystem (DSS) simulation are discussed, and discrete event simulation/DSSSIM processing is covered. Four types of outputs associated with a typical DSSSIM run are presented, and DSSSIM limitations and constraints are outlined.
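
    A minimal sketch of the discrete-event idea behind a simulator of this kind: pending events are kept in a time-ordered queue and processed in order. The event types and times below are invented for illustration and are not taken from DSSSIM.

```python
# Toy event-list simulation of a tape-recorder-like storage subsystem.
import heapq

events = []  # (time_s, description)
for t, what in [(0.0, "record start"), (120.0, "record stop"),
                (300.0, "playback start"), (420.0, "playback stop")]:
    heapq.heappush(events, (t, what))

clock = 0.0
while events:
    clock, what = heapq.heappop(events)   # jump simulated time to the next event
    print(f"t={clock:7.1f} s  {what}")
```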

  7. Phase modulated high density collinear holographic data storage system with phase-retrieval reference beam locking and orthogonal reference encoding.

    Science.gov (United States)

    Liu, Jinpeng; Horimai, Hideyoshi; Lin, Xiao; Huang, Yong; Tan, Xiaodi

    2018-02-19

    A novel phase modulation method for holographic data storage with phase-retrieval reference beam locking is proposed and incorporated into an amplitude-encoding collinear holographic storage system. Unlike the conventional phase retrieval method, the proposed method locks the data page and the corresponding phase-retrieval interference beam together at the same location with a sequential recording process, which eliminates piezoelectric elements, phase shift arrays and extra interference beams, making the system more compact and phase retrieval easier. To evaluate our proposed phase modulation method, we recorded and then recovered data pages with multilevel phase modulation using two spatial light modulators experimentally. For 4-level, 8-level, and 16-level phase modulation, we achieved bit error rates (BER) of 0.3%, 1.5% and 6.6%, respectively. To further improve data storage density, an orthogonal reference encoding multiplexing method at the same position of the medium is also proposed and validated experimentally. We increased the code rate of the pure 3/16 amplitude encoding method from 0.5 up to 1.0 and 1.5 using 4-level and 8-level phase modulation, respectively.
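
    A hedged sketch of how a bit error rate such as the 0.3%, 1.5% and 6.6% figures above is computed: the recovered data page is compared bit-by-bit with the recorded one. The arrays here are random stand-ins, not experimental read-outs.

```python
# Simulated BER computation; the data and the 1.5% flip probability are invented.
import numpy as np

rng = np.random.default_rng(0)
recorded = rng.integers(0, 2, size=100_000)            # original data-page bits
flips = rng.random(recorded.size) < 0.015              # flip ~1.5% of the bits
recovered = recorded ^ flips.astype(recorded.dtype)    # simulated read-out

ber = np.mean(recorded != recovered)
print(f"BER = {ber:.3%}")
```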

  8. Reproducibility of wrist home blood pressure measurement with position sensor and automatic data storage

    Science.gov (United States)

    Uen, Sakir; Fimmers, Rolf; Brieger, Miriam; Nickenig, Georg; Mengden, Thomas

    2009-01-01

    Background Wrist blood pressure (BP) devices have physiological limits with regard to accuracy, and were therefore not preferred for home BP monitoring. However, some wrist devices have been successfully validated using established validation protocols. This study therefore assessed the reproducibility of wrist home BP measurement with position sensor and automatic data storage. Methods To compare the reproducibility of three different BP measurement methods: 1) office BP, 2) home BP (Omron wrist device HEM-637 IT with position sensor), 3) 24-hour ambulatory BP (24-h ABPM) (ABPM-04, Meditech, Hun), conventional sphygmomanometric office BP was measured on study days 1 and 7, 24-h ABPM on study days 7 and 14, and home BP between study days 1 and 7 and between study days 8 and 14 in 69 hypertensive and 28 normotensive subjects. The correlation coefficient of each BP measurement method with echocardiographic left ventricular mass index was analyzed. The schedule of home readings followed the recently published European Society of Hypertension (ESH) guidelines. Results The reproducibility of home BP measurement, analyzed by the standard deviation as well as the squared differences of mean individual differences between the respective BP measurements, was significantly higher than the reproducibility of office BP (p ABPM (p ABPM was not significantly different (p = 0.80 systolic BP, p = 0.1 diastolic BP). The correlation coefficient of 24-h ABPM (r = 0.52) with left ventricular mass index was significantly higher than with office BP (r = 0.31). The difference between 24-h ABPM and home BP (r = 0.46) was not significant. Conclusion The short-term reproducibility of home BP measurement with the Omron HEM-637 IT wrist device was superior to the reproducibility of office BP and 24-h ABPM measurement. Furthermore, home BP with the wrist device showed similar correlations to target organ damage as recently reported for upper arm devices. Although wrist devices have

  9. Oxidation of graphene ‘bow tie’ nanofuses for permanent, write-once-read-many data storage devices

    International Nuclear Information System (INIS)

    Pearson, A C; Jamieson, S; Davis, R C; Linford, M R; Lunt, B M

    2013-01-01

    We have fabricated nanoscale fuses from CVD graphene sheets with a ‘bow tie’ geometry for write-once-read-many data storage applications. The fuses are programmed using thermal oxidation driven by Joule heating. Fuses that were 250 nm wide with 2.5 μm between contact pads were programmed with average voltages and powers of 4.9 V and 2.1 mW, respectively. The required voltages and powers decrease with decreasing fuse sizes. Graphene shows extreme chemical and electronic stability; fuses require temperatures of about 400 °C for oxidation, indicating that they are excellent candidates for permanent data storage. To further demonstrate this stability, fuses were subjected to applied biases in excess of typical read voltages; stable currents were observed when a voltage of 10 V was applied to the devices in the off state and 1 V in the on state for 90 h each. (paper)

  10. Oxidation of graphene ‘bow tie’ nanofuses for permanent, write-once-read-many data storage devices

    Science.gov (United States)

    Pearson, A. C.; Jamieson, S.; Linford, M. R.; Lunt, B. M.; Davis, R. C.

    2013-04-01

    We have fabricated nanoscale fuses from CVD graphene sheets with a ‘bow tie’ geometry for write-once-read-many data storage applications. The fuses are programmed using thermal oxidation driven by Joule heating. Fuses that were 250 nm wide with 2.5 μm between contact pads were programmed with average voltages and powers of 4.9 V and 2.1 mW, respectively. The required voltages and powers decrease with decreasing fuse sizes. Graphene shows extreme chemical and electronic stability; fuses require temperatures of about 400 °C for oxidation, indicating that they are excellent candidates for permanent data storage. To further demonstrate this stability, fuses were subjected to applied biases in excess of typical read voltages; stable currents were observed when a voltage of 10 V was applied to the devices in the off state and 1 V in the on state for 90 h each.

  11. Energy storage

    International Nuclear Information System (INIS)

    Anon.

    1992-01-01

    This chapter discusses the role that energy storage may have in the energy future of the US. The topics discussed in the chapter include historical aspects of energy storage; thermal energy storage, including sensible heat, latent heat, thermochemical and seasonal heat storage; electricity storage, including batteries, pumped hydroelectric storage, compressed air energy storage and superconducting magnetic energy storage; and the production and combustion of hydrogen as an energy storage option.

  12. The collection, storage and use of equipment performance data for the safety and reliability assessment of nuclear power plants

    International Nuclear Information System (INIS)

    Fothergill, C.D.H.

    1975-01-01

    It has been characteristic of the Nuclear Industry that it should grow up in an atmosphere where reliability and operational safety considerations have been of vital importance. Consequently all aspects of Nuclear Power Reactor design, construction and operation (in the U.K.A.E.A.) are subjected to rigorous reliability assessments, beginning with the automatic protective devices and the safety shut-down systems. This has resulted in the setting up of large and small private data stores to support this upsurge of Safety and Reliability assessment work. Unfortunately, much of the information being stored and published falls short of the minimum requirements of Safety Assessors and Reliability Analysts who need to make use of it. That there is still an urgent need for more work to be done in the Reliability Data field is universally acknowledged. The characteristics which make up good quality reliability data must be defined and achievable minimum standards must be set for its identification, collection, storage and retrieval. To this end the United Kingdom Atomic Energy Authority have set up the Systems Reliability Service Data Bank. This includes a computerized storage facility comprised of two principal data stores: (i) Reliability Data Store, (ii) Event Data Store. The figures available in the Reliability Data Store range from those relating to the lifetimes of minute components to those obtained from the assessment of whole plants and complete assemblies. These data have been accumulated from many reliable sources both inside and outside the Nuclear Industry, including the transfer of 'live' data generated from the results of reliability surveillance exercises associated with Event Data collection. Computer techniques developed specifically for the Reliability Data Store enable further 'processing' of these data to be carried out. The Event Data Store consists of three discrete computerized data stores, each one providing the necessary storage, retrieval and

  13. VME data acquisition system. Interactive software for the acquisition, display and storage of one or two dimensional spectra

    International Nuclear Information System (INIS)

    Petremann, E.

    1989-01-01

    The development and construction of a complete data acquisition system for nuclear physics applications are described. The system is based on the VME bus and a 16/32-bit microprocessor. The data acquisition system enables the acquisition of line spectra involving one or two parameters and the simultaneous storage of events on magnetic tape. The data acquisition software, the display of experimental spectra and their saving to magnetic media are analyzed and described. Pascal and Assembler are used. The development of cards for the standard VME and electronic equipment interfaces is also covered [fr

  14. Computer program for storage of historical and routine safety data related to radiologically controlled facilities

    International Nuclear Information System (INIS)

    Marsh, D.A.; Hall, C.J.

    1984-01-01

    A method for tracking and quick retrieval of radiological status of radiation and industrial safety systems in an active or inactive facility has been developed. The system uses a mini computer, a graphics plotter, and mass storage devices. Software has been developed which allows input and storage of architectural details, radiological conditions such as exposure rates, current location of safety systems, and routine and historical information on exposure and contamination levels. A blue print size digitizer is used for input. The computer program retains facility floor plans in three dimensional arrays. The software accesses an eight pen color plotter for output. The plotter generates color plots of the floor plans and safety systems on 8 1/2 x 11 or 20 x 30 paper or on overhead transparencies for reports and presentations

  15. Performance Evaluation of Distributed Systems with Unbalanced Flows: An Analysis of the INFOPLEX Data Storage Hierarchy.

    Science.gov (United States)

    1984-07-01

    ... lower storage level. This is the basis for the mapping of the PIL3 read operation and workloads into a queueing network model.

  16. Specific storage and hydraulic conductivity tomography through the joint inversion of hydraulic heads and self-potential data

    Science.gov (United States)

    Ahmed, A. Soueid; Jardani, A.; Revil, A.; Dupont, J. P.

    2016-03-01

    Transient hydraulic tomography is used to image the heterogeneous hydraulic conductivity and specific storage fields of shallow aquifers using time series of hydraulic head data. Such an ill-posed and non-unique inverse problem can be regularized using the spatial geostatistical characteristics of the two fields. In addition to hydraulic head changes, the flow of water during pumping tests generates an electrical field of electrokinetic nature. These electrical field fluctuations can be passively recorded at the ground surface using a network of non-polarizing electrodes connected to a high impedance (> 10 MOhm) and sensitive (0.1 mV) voltmeter, a method known in geophysics as the self-potential method. We perform a joint inversion of the self-potential and hydraulic head data to image the hydraulic conductivity and specific storage fields. We work on a 3D synthetic confined aquifer and we use the adjoint state method to compute the sensitivities of the hydraulic parameters to the hydraulic head and self-potential data in both steady-state and transient conditions. The inverse problem is solved using the quasi-linear geostatistical algorithm framework of Kitanidis. When the number of piezometers is small, the record of the transient self-potential signals provides useful information to characterize the hydraulic conductivity and specific storage fields. These results show that the self-potential method reveals heterogeneities in some areas of the aquifer that could not be captured by the tomography based on the hydraulic heads alone. In our analysis, the improvements in the hydraulic conductivity and specific storage estimates were based on perfect knowledge of the electrical resistivity field. This implies that electrical resistivity will need to be jointly inverted with the hydraulic parameters in future studies and the impact of its uncertainty assessed with respect to the final tomograms of the hydraulic parameters.
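
    For orientation, a common way to write the regularized objective minimized in the quasi-linear geostatistical framework attributed to Kitanidis above is sketched below; the exact weighting and parameterization used by the authors are assumptions here, not taken from the record.

```latex
\[
J(\mathbf{s}) \;=\; \tfrac{1}{2}\bigl[\mathbf{d} - h(\mathbf{s})\bigr]^{\mathsf{T}}
\mathbf{R}^{-1}\bigl[\mathbf{d} - h(\mathbf{s})\bigr]
\;+\; \tfrac{1}{2}(\mathbf{s} - \mathbf{X}\boldsymbol{\beta})^{\mathsf{T}}
\mathbf{Q}^{-1}(\mathbf{s} - \mathbf{X}\boldsymbol{\beta})
\]
% d  : stacked hydraulic-head and self-potential data
% h  : coupled hydraulic/electrokinetic forward model
% R  : data-error covariance
% s  : log hydraulic conductivity and log specific storage fields
% Q  : geostatistical covariance of s;  X*beta : unknown trend
```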

  17. Use of information-retrieval languages in automated retrieval of experimental data from long-term storage

    Science.gov (United States)

    Khovanskiy, Y. D.; Kremneva, N. I.

    1975-01-01

    Problems and methods of automating information retrieval operations in a data bank used for long-term storage and retrieval of data from scientific experiments are discussed. Existing information retrieval languages are analyzed along with those being developed. The results of studies discussing the application of the descriptive 'Kristall' language used in the 'ASIOR' automated information retrieval system are presented. The development and use of a specialized language of the classification-descriptive type, using universal decimal classification indices as the main descriptors, is described.

  18. Operation of a Data Acquisition, Transfer, and Storage System for the Global Space-Weather Observation Network

    Directory of Open Access Journals (Sweden)

    T Nagatsuma

    2014-10-01

    Full Text Available A system to optimize the management of global space-weather observation networks has been developed by the National Institute of Information and Communications Technology (NICT). Named WONM (Wide-area Observation Network Monitoring), the system enables data acquisition, transfer, and storage through connection to the NICT Science Cloud, and has been supplied to observatories to support space-weather forecasting and research. This system provides easier management of data collection than our previously employed systems by means of autonomous system recovery, periodic state monitoring, and dynamic warning procedures. Operation of the WONM system is introduced in this report.

  19. Dynamic Prediction of Power Storage and Delivery by Data-Based Fractional Differential Models of a Lithium Iron Phosphate Battery

    Directory of Open Access Journals (Sweden)

    Yunfeng Jiang

    2016-07-01

    Full Text Available A fractional derivative system identification approach for modeling battery dynamics is presented in this paper, where fractional derivatives are applied to approximate the non-linear dynamic behavior of a battery system. The least squares-based state-variable filter (LSSVF) method commonly used in the identification of continuous-time models is extended to allow the estimation of fractional derivative coefficients and parameters of the battery models by monitoring a charge/discharge demand signal and a power storage/delivery signal. In particular, the model is composed of individual fractional differential models (FDMs), whose parameters can be estimated by a least-squares algorithm. Based on experimental data, it is illustrated how the fractional derivative model can be utilized to predict the dynamics of the energy storage and delivery of a lithium iron phosphate battery (LiFePO4) in real time. The results indicate that an FDM can accurately capture the dynamics of the energy storage and delivery of the battery over a large operating range of the battery. It is also shown that the fractional derivative model exhibits improved prediction performance compared to a standard integer-order derivative model, which is beneficial for a battery management system.
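
    A hedged sketch of the least-squares step only: once the filtered regressors built from the charge/discharge signal (and its fractional derivatives) are collected in a regressor matrix, the model coefficients follow from ordinary least squares. The data below are synthetic; this is not the authors' LSSVF implementation.

```python
# Ordinary least-squares estimation of model coefficients from synthetic data.
import numpy as np

rng = np.random.default_rng(1)
n = 500
Phi = rng.standard_normal((n, 3))          # columns: filtered regressor signals
theta_true = np.array([0.8, -0.25, 1.6])   # "true" coefficients to recover
y = Phi @ theta_true + 0.01 * rng.standard_normal(n)   # noisy output signal

theta_hat, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print("estimated coefficients:", np.round(theta_hat, 3))
```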

  20. How to characterize a potential site for CO2 storage with sparse data coverage - a Danish onshore site case

    International Nuclear Information System (INIS)

    Nielsen, Carsten Moller; Frykman, Peter; Dalhoff, Finn

    2015-01-01

    The paper demonstrates how a potential site for CO2 storage can be evaluated up to a sufficient level of characterization for compiling a storage permit application, even if the site is only sparsely explored. The focus of the paper is on a risk driven characterization procedure. In the initial state of a site characterization process with sparse data coverage, the regional geological and stratigraphic understanding of the area of interest can help strengthen a first model construction for predictive modeling. Static and dynamic modeling in combination with a comprehensive risk assessment can guide the different elements needed to be evaluated for fulfilling a permit application. Several essential parameters must be evaluated; the storage capacity for the site must be acceptable for the project life of the operation, the trap configuration must be efficient to secure long term containment, the injectivity must be sufficient to secure a longstanding stable operation and finally a satisfactory and operational measuring strategy must be designed. The characterization procedure is demonstrated for a deep onshore aquifer in the northern part of Denmark, the Vedsted site. The site is an anticlinal structural closure in an Upper Triassic - Lower Jurassic sandstone formation at 1800-1900 m depth. (authors)

  1. Design of a large remote seismic exploration data acquisition system, with the architecture of a distributed storage area network

    International Nuclear Information System (INIS)

    Cao, Ping; Song, Ke-zhu; Yang, Jun-feng; Ruan, Fu-ming

    2011-01-01

    Nowadays, seismic exploration data acquisition (DAQ) systems have been developed into remote forms with a large-scale coverage area. In this kind of application, some features must be mentioned. Firstly, there are many sensors which are placed remotely. Secondly, the total data throughput is high. Thirdly, optical fibres are not suitable everywhere because of cost control, harsh running environments, etc. Fourthly, expansibility and upgradability are a must for this kind of application. It is a challenge to design this kind of remote DAQ (rDAQ). Data transmission, clock synchronization, data storage, etc. must be considered carefully. A four-level hierarchical model of the rDAQ is proposed, in which the rDAQ is divided into four different functional levels. From this model, a simple and clear architecture based on a distributed storage area network is proposed. rDAQs with this architecture have the advantages of flexible configuration, expansibility and stability. This architecture can be applied to design and realize systems ranging from simple single-cable systems to large-scale exploration DAQs.

  2. Alternatives to relational databases in precision medicine: Comparison of NoSQL approaches for big data storage using supercomputers

    Science.gov (United States)

    Velazquez, Enrique Israel

    Improvements in medical and genomic technologies have dramatically increased the production of electronic data over the last decade. As a result, data management is rapidly becoming a major determinant, and urgent challenge, for the development of Precision Medicine. Although successful data management is achievable using Relational Database Management Systems (RDBMS), exponential data growth is a significant contributor to failure scenarios. Growing amounts of data can also be observed in other sectors, such as economics and business, which, together with the previous facts, suggests that alternate database approaches (NoSQL) may soon be required for efficient storage and management of big databases. However, this hypothesis has been difficult to test in the Precision Medicine field since alternate database architectures are complex to assess and means to integrate heterogeneous electronic health records (EHR) with dynamic genomic data are not easily available. In this dissertation, we present a novel set of experiments for identifying NoSQL database approaches that enable effective data storage and management in Precision Medicine using patients' clinical and genomic information from the cancer genome atlas (TCGA). The first experiment draws on performance and scalability from biologically meaningful queries with differing complexity and database sizes. The second experiment measures performance and scalability in database updates without schema changes. The third experiment assesses performance and scalability in database updates with schema modifications due to dynamic data. We have identified two NoSQL approaches, based on Cassandra and Redis, which seem to be the ideal database management systems for our precision medicine queries in terms of performance and scalability. We present NoSQL approaches and show how they can be used to manage clinical and genomic big data. Our research is relevant to public health since we are focusing on one of the main
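
    An illustrative sketch (not the dissertation's actual schema) of the key-value style of storage a Redis-based approach enables for patient-centric lookups. The keys, fields and values are invented, and a running Redis server is assumed.

```python
# Hypothetical patient-centric layout in Redis; requires a local Redis server.
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# One hash per patient for clinical attributes...
r.hset("patient:TCGA-0001", mapping={"diagnosis": "LUAD", "stage": "II"})
# ...and one set per patient listing observed somatic variants.
r.sadd("patient:TCGA-0001:variants", "KRAS_G12C", "TP53_R273H")

print(r.hgetall("patient:TCGA-0001"))
print(r.smembers("patient:TCGA-0001:variants"))
```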

  3. Controlled data storage for non-volatile memory cells embedded in nano magnetic logic

    Science.gov (United States)

    Riente, Fabrizio; Ziemys, Grazvydas; Mattersdorfer, Clemens; Boche, Silke; Turvani, Giovanna; Raberg, Wolfgang; Luber, Sebastian; Breitkreutz-v. Gamm, Stephan

    2017-05-01

    Among the beyond-CMOS technologies, perpendicular Nano Magnetic Logic (pNML) is a promising candidate due to its low power consumption, its non-volatility and its monolithic 3D integrability, which makes it possible to integrate memory and logic into the same device by exploiting the interaction of bi-stable nanomagnets with perpendicular magnetic anisotropy. Logic computation and signal synchronization are achieved by focus ion beam irradiation and by pinning domain walls in magnetic notches. However, in realistic circuits, the information storage and their read-out are crucial issues, often ignored in the exploration of beyond-CMOS devices. In this paper we address these issues by experimentally demonstrating a pNML memory element, whose read and write operations can be controlled by two independent pulsed currents. Our results prove the correct behavior of the proposed structure that enables high density memory embedded in the logic plane of 3D-integrated pNML circuits.

  4. Technology data for energy plants. Generation of electricity and district heating, energy storage and energy carrier generation and conversion

    Energy Technology Data Exchange (ETDEWEB)

    2012-05-15

    The Danish Energy Agency and Energinet.dk, the Danish electricity transmission and system operator, have at regular intervals published a catalogue of energy producing technologies. The previous edition was published in June 2010. This report presents the results of the most recent update. The primary objective of publishing a technology catalogue is to establish a uniform, commonly accepted and up-to-date basis for energy planning activities, such as future outlooks, evaluations of security of supply and environmental impacts, climate change evaluations, and technical and economic analyses, e.g. on the framework conditions for the development and deployment of certain classes of technologies. With this scope in mind, it has not been the intention to establish a comprehensive catalogue, including all main gasification technologies or all types of electric batteries. Only selected, representative, technologies are included, to enable generic comparisons of e.g. thermal gasification versus combustion of biomass and electricity storage in batteries versus hydro-pumped storage. It has finally been the intention to offer the catalogue for the international audience, as a contribution to similar initiatives aiming at forming a public and concerted knowledge base for international analyses and negotiations. A guiding principle for developing the catalogue has been to rely primarily on well-documented and public information, secondarily on invited expert advice. Since many experts are reluctant in estimating future quantitative performance data, the data tables are not complete, in the sense that most data tables show several blank spaces. This approach has been chosen in order to achieve data, which to some extent are equivalently reliable, rather than to risk a largely incoherent data set including unfounded guesstimates. The current update has been developed with an unbalanced focus, i.e. most attention to technologies which are most essential for current and short

  5. Dynamic Auditing Protocol for Efficient and Secure Data Storage in Cloud Computing

    OpenAIRE

    J. Noorul Ameen; J. Jamal Mohamed; N. Nilofer Begam

    2014-01-01

    In cloud computing, data are stored on cloud servers and retrieved by users (data consumers). However, there are some security challenges that call for independent auditing services to verify data integrity and safety in the cloud. Until now, numerous methods have been developed for remote integrity checking, which, however, only serve static archive data and cannot be applied to the auditing service if the data in the cloud are dynamic...

  6. Energy Storage.

    Science.gov (United States)

    Eaton, William W.

    Described are technological considerations affecting storage of energy, particularly electrical energy. The background and present status of energy storage by batteries, water storage, compressed air storage, flywheels, magnetic storage, hydrogen storage, and thermal storage are discussed followed by a review of development trends. Included are…

  7. Evidence from data storage tags for the presence of lunar and semilunar behavioral cycles in spawning Atlantic cod

    Science.gov (United States)

    Grabowski, Timothy B.; McAdam, Bruce J.; Thorsteinsson, Vilhjalmur; Marteinsdóttir, Gudrún

    2015-01-01

    Understanding the environmental processes determining the timing and success of reproduction is of critical importance to developing effective management strategies for marine fishes. Unfortunately it has proven difficult to comprehensively study the reproductive behavior of broadcast-spawning fishes. The use of electronic data storage tags (DSTs) has the potential to provide insights into the behavior of fishes. These tags allow for data collection over relatively large spatial and temporal scales that can be correlated to predicted environmental conditions and ultimately be used to refine predictions of year-class strength. In this paper we present data retrieved from DSTs demonstrating that events putatively identified as Atlantic cod spawning behavior are tied to a lunar cycle with a pronounced semi-lunar cycle within it. Peak activity occurs around the full and new moon with no evidence of a relationship with day/night cycles.

  8. Summary of treatment, storage, and disposal facility usage data collected from U.S. Department of Energy sites

    International Nuclear Information System (INIS)

    Jacobs, A.; Oswald, K.; Trump, C.

    1995-04-01

    This report presents an analysis for the US Department of Energy (DOE) to determine the level and extent of treatment, storage, and disposal facility (TSDF) assessment duplication. Commercial TSDFs are used as an integral part of the hazardous waste management process for those DOE sites that generate hazardous waste. Data regarding the DOE sites' usage have been extracted from three sets of data and analyzed in this report. The data are presented both qualitatively and quantitatively, as appropriate. This information provides the basis for further analysis of assessment duplication to be documented in issue papers as appropriate. Once the issues have been identified and adequately defined, corrective measures will be proposed and subsequently implemented

  9. The system for diagnostics and monitoring of the IBR-2 reactor state. Data acquisition, accumulation and storage of information

    International Nuclear Information System (INIS)

    Ermilov, V.G.; Ivanov, V.V.; Korolev, V.S.; Pepelyshev, Yu.N.; Semashko, S.V.; Tulaev, A.B.

    2000-01-01

    The architectural decisions for a distributed system developed for monitoring the conditions of the IBR-2 pulsed reactor are described. The system is intended for measurement of the basic reactor parameters, acquisition, storage and processing of information, monitoring of the current reactor state, and analysis of reactor parameters over a long operation period in both on-line and off-line modes. The system is built in a client-server architecture using the DBMS MS SQL Server 7.0. The basic hardware components of the system are measuring workstations and devices, processing and user workstations, and the central server. The software of the system consists of the measuring programs, data-flow dispatching services, client applications for data processing and visualization, and means for preparing data for subsequent presentation on the WWW. The basic results of the first system operation phase and prospects for its development are discussed. (author)

  10. Anamorphic and Local Characterization of a Holographic Data Storage System with a Liquid-Crystal on Silicon Microdisplay as Data Pager

    Directory of Open Access Journals (Sweden)

    Fco. Javier Martínez-Guardiola

    2018-06-01

    Full Text Available In this paper, we present a method to characterize a complete optical Holographic Data Storage System (HDSS), in which we identify the elements that limit the capacity to register and restore the information introduced by means of a Liquid Crystal on Silicon (LCoS) microdisplay as the data pager. In the literature, it has been shown that LCoS exhibits an anamorphic and frequency-dependent effect when periodic optical elements are addressed to LCoS microdisplays in diffractive optics applications. We tested whether this effect is still relevant in the application to HDSS, where non-periodic binary elements are applied, as is the case for binary data pages encoded by Binary Intensity Modulation (BIM). To test the limits in storage data density and in spatial bandwidth of the HDSS, we used anamorphic patterns with different resolutions. We analyzed the performance of the microdisplay in situ using figures of merit adapted to HDSS. A local characterization across the aperture of the system was also demonstrated with our proposed methodology, which results in an estimation of the illumination uniformity and the contrast generated by the LCoS. We show the extent of the increase in the Bit Error Rate (BER) when introducing a photopolymer as the recording material; thus, all the important elements in an HDSS are considered in the characterization methodology demonstrated in this paper.

  11. Controlled data storage for non-volatile memory cells embedded in nano magnetic logic

    Directory of Open Access Journals (Sweden)

    Fabrizio Riente

    2017-05-01

    Full Text Available Among the beyond-CMOS technologies, perpendicular Nano Magnetic Logic (pNML) is a promising candidate due to its low power consumption, its non-volatility and its monolithic 3D integrability, which makes it possible to integrate memory and logic into the same device by exploiting the interaction of bi-stable nanomagnets with perpendicular magnetic anisotropy. Logic computation and signal synchronization are achieved by focus ion beam irradiation and by pinning domain walls in magnetic notches. However, in realistic circuits, the information storage and their read-out are crucial issues, often ignored in the exploration of beyond-CMOS devices. In this paper we address these issues by experimentally demonstrating a pNML memory element, whose read and write operations can be controlled by two independent pulsed currents. Our results prove the correct behavior of the proposed structure that enables high density memory embedded in the logic plane of 3D-integrated pNML circuits.

  12. Spent fuel dry storage technology development: report of consolidated thermal data

    International Nuclear Information System (INIS)

    Lundberg, W.L.

    1980-09-01

    Experiments indicate that PWR fuel with decay heat levels in excess of 2 kW could be stored in isolated drywells in Nevada Test Site soil without exceeding the current fuel clad temperature limit (715 °F). The document also assesses the ability to thermally analyze near-surface drywells and above-ground storage casks and it identifies analysis development areas. It is concluded that the required analysis procedures, computer programs, etc., are already developed and available. Analysis uncertainties, however, still exist but they lie mainly in the numerical input area. Soil thermal conductivity, of primary importance in analysis, requires additional study to better understand the soil drying mechanism and effects of moisture. Work is also required to develop an internal canister subchannel model. In addition, the ability of the overall drywell thermal model to accommodate thermal interaction effects between adjacent drywells should be confirmed. In the experimental area, tests with two BWR spent fuel assemblies encapsulated in a single canister should be performed to establish the fuel clad and canister temperature relationship. This is needed to supplement similar experimental work which has already been completed with PWR fuel

  13. TRAQ I, a CAMAC system for multichannel data acquisition, storage and processing

    International Nuclear Information System (INIS)

    Broad, A.S.; Jordan, C.L.; Kojola, P.H.; Miller, M.

    1983-01-01

    Multichannel, high speed signal sources generate large amounts of data which cannot be handled in real time on the CAMAC dataway. TRAQ I is a modular CAMAC system designed to buffer and process data of this type. The system can acquire data from up to 256 sources (ADCs etc.) and store them in local memory (4 Mbytes). Many different signal sources can be controlled, working in either a histogramming or sequential mode. The system's data transfer bus is designed to accommodate other modules which can pre- or postprocess the data. Pre-processors can either intercept the data flow to memory for data compaction or passively monitor, looking for signal excursions, etc. Post-processors access memory to process and rewrite the data or transmit it to other devices

  14. Modelling of seismic reflection data for underground gas storage in the Pečarovci and Dankovci structures - Mura Depression

    Directory of Open Access Journals (Sweden)

    Andrej Gosar

    1995-12-01

    Full Text Available Two antiform structures in the Mura Depression were selected as the most promising in Slovenia for the construction of an underground gas storage facility in an aquifer. Seventeen reflection lines with a total length of 157 km were recorded, and three boreholes were drilled. Structural models corresponding to two different horizons (the pre-Tertiary basement and the Badenian-Sarmatian boundary) were constructed using the Sierra Mimic program. Evaluation of different velocity data (velocity analysis, sonic log, the down-hole method, and laboratory measurements on cores) was carried out in order to perform correct time-to-depth conversion and to establish lateral velocity variations. The porous rock in the Pečarovci structure is a 70 m thick layer of dolomite, occurring at a depth of 1900 m, whereas layers of marl, several hundred meters thick, represent the impermeable cap-rock. Due to faults, the Dankovci structure, at a depth of 1200 m, where the reservoir rocks consist of thin layers of conglomerate and sandstone, was proved to be less reliable. 1D synthetic seismograms were used to correlate the geological and seismic data at the borehole locations, especially at intervals with thin layers. The ray-tracing method on 2D models (the Sierra Quik package) was applied to confirm the lateral continuity of some horizons and to improve the interpretation of faults, which are the critical factor for gas storage.

  15. A data acquisition and storage system for the ion auxiliary propulsion system cyclic thruster test

    Science.gov (United States)

    Hamley, John A.

    1989-01-01

    A nine-track tape drive interfaced to a standard personal computer was used to transport data from a remote test site to the NASA Lewis mainframe computer for analysis. The Cyclic Ground Test of the Ion Auxiliary Propulsion System (IAPS), which successfully achieved its goal of 2557 cycles and 7057 hr of thrusting beam-on time, generated several megabytes of test data over many months of continuous testing. A flight-like controller and power supply were used to control the thruster and acquire data. Thruster data was converted to RS232 format and transmitted to a personal computer, which stored the raw digital data on the nine-track tape. The tape format was such that with minor modifications, mainframe flight data analysis software could be used to analyze the Cyclic Ground Test data. The personal computer also converted the digital data to engineering units and displayed real time thruster parameters. Hardcopy data was printed at a rate dependent on thruster operating conditions. The tape drive provided a convenient means to transport the data to the mainframe for analysis, and avoided a development effort for new data analysis software for the Cyclic test. This paper describes the data system, interfacing and software requirements.

  16. Organizational Security Threats Related to Portable Data Storage Devices: Qualitative Exploratory Inquiry

    Science.gov (United States)

    Cooper, Paul K.

    2017-01-01

    There has been a significant growth of portable devices capable of storing both personal data and sensitive organizational data. The growth of these portable devices has led to an increased threat of cyber-criminal activity. The purpose of this study was to gain a better understanding of security threats to the data assets of organizations…

  17. Effective representation and storage of mass spectrometry-based proteomic data sets for the scientific community

    DEFF Research Database (Denmark)

    Olsen, Jesper V; Mann, Matthias

    2011-01-01

    Mass spectrometry-based proteomics has emerged as a technology of choice for global analysis of cell signaling networks. However, reporting and sharing of MS data are often haphazard, limiting the usefulness of proteomics to the signaling community. We argue that raw data should always be provided...... mechanisms for community-wide sharing of these data....

  18. Data demonstrating the influence of the latent storage efficiency on the dynamic thermal characteristics of a PCM layer

    Directory of Open Access Journals (Sweden)

    D. Mazzeo

    2017-06-01

    Full Text Available Dynamic thermal characteristics, for each month of the year, of PCM layers with different melting temperatures and thermophysical properties, in a steady periodic regime, were determined (Mazzeo et al., 2017 [1]). The layer is subjected to climatic conditions characterizing two locations, one with a continental climate and the second one with a Mediterranean climate. This data article provides detailed numerical data, as a function of the latent storage efficiency, including the monthly average daily values of the latent energy fraction, of the decrement factors of the temperature, heat flux and energy, and of the time lags of the maximum and minimum peaks of the temperature and heat flux.

  19. Alternatives to relational database: comparison of NoSQL and XML approaches for clinical data storage.

    Science.gov (United States)

    Lee, Ken Ka-Yin; Tang, Wai-Choi; Choi, Kup-Sze

    2013-04-01

    Clinical data are dynamic in nature, often arranged hierarchically and stored as free text and numbers. Effective management of clinical data and the transformation of the data into structured format for data analysis are therefore challenging issues in electronic health records development. Despite the popularity of relational databases, the scalability of the NoSQL database model and the document-centric data structure of XML databases appear to be promising features for effective clinical data management. In this paper, three database approaches--NoSQL, XML-enabled and native XML--are investigated to evaluate their suitability for structured clinical data. The database query performance is reported, together with our experience in developing the databases. The results show that NoSQL database is the best choice for query speed, whereas XML databases are advantageous in terms of scalability, flexibility and extensibility, which are essential to cope with the characteristics of clinical data. While NoSQL and XML technologies are relatively new compared to the conventional relational database, both of them demonstrate potential to become a key database technology for clinical data management as the technology further advances. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  20. Towards Blockchain-based Auditable Storage and Sharing of IoT Data

    OpenAIRE

    Shafagh, Hossein; Burkhalter, Lukas; Hithnawi, Anwar; Duquennoy, Simon

    2017-01-01

    Today the cloud plays a central role in storing, processing, and distributing data. Despite contributing to the rapid development of IoT applications, the current IoT cloud-centric architecture has led to a myriad of isolated data silos that hinders the full potential of holistic data-driven analytics within the IoT. In this paper, we present a blockchain-based design for the IoT that brings distributed access control and data management. We depart from the current trust model that delega...

  1. Persistent storage of non-event data in the CMS databases

    International Nuclear Information System (INIS)

    De Gruttola, M; Di Guida, S; Innocente, V; Schlatter, D; Futyan, D; Glege, F; Paolucci, P; Govi, G; Picca, P; Pierro, A; Xie, Z

    2010-01-01

    In the CMS experiment, the non-event data needed to set up the detector, or produced by it, and needed to calibrate its physical responses are stored in ORACLE databases. The large amount of data to be stored, the number of clients involved and the performance requirements make the database system an essential service for the experiment to run. This note describes the CMS condition database architecture, the data flow and PopCon, the tool built in order to populate the offline databases. Finally, the first experience obtained during the 2008 and 2009 cosmic data taking is presented.

  2. BE (fuel element)/ZL (interim storage facility) module. Constituents of the fuel BE data base for BE documentation with respect to the disposal planning and the support of the BE container storage administration

    International Nuclear Information System (INIS)

    Hoffmann, V.; Deutsch, S.; Busch, V.; Braun, A.

    2012-01-01

    The securing of spent fuel element disposal from German nuclear power plants is the main task of GNS. This includes container supply as well as disposal analysis and planning. GNS therefore operates a database covering all fuel elements used in Germany and all fuel element containers in interim storage facilities. With specific program modules, the database supports optimized repository planning for all spent fuel elements from German NPPs and supplies the data required for future final disposal. The database has two functional modules: the BE (fuel element) module and the ZL (interim storage) module. The contribution presents the data structure of the modules and details of the database operation.

  3. Towards Regional, Error-Bounded Landscape Carbon Storage Estimates for Data-Deficient Areas of the World

    DEFF Research Database (Denmark)

    Willcock, Simon; Phillips, Oliver L.; Platts, Philip J.

    2012-01-01

    estimates for the same study area extracted from five published Africa-wide or global studies show a mean carbon storage value of ∼50% of that reported using our regional values, with four of the five studies reporting lower carbon storage values. This suggests that carbon storage may have been...

  4. Texas flexible pavements and overlays : year 1 report, test sections, data collection, analyses, and data storage system.

    Science.gov (United States)

    2012-06-01

    This five-year project was initiated to collect materials and pavement performance data on a minimum of 100 highway test sections around the State of Texas, incorporating both flexible pavements and overlays. Besides being used to calibrate and valid...

  5. Data on the changes of the mussels' metabolic profile under different cold storage conditions

    DEFF Research Database (Denmark)

    Aru, Violetta; Pisano, Maria Barbara; Savorani, Francesco

    2016-01-01

    galloprovincialis. This data article provides information on the average distribution of the microbial loads in mussels' specimens and on the acquisition, processing, and multivariate analysis of the 1H NMR spectra from the hydrosoluble phase of stored mussels. This data article is referred to the research article...

  6. Analysis of stationary fuel cell dynamic ramping capabilities and ultra capacitor energy storage using high resolution demand data

    Science.gov (United States)

    Meacham, James R.; Jabbari, Faryar; Brouwer, Jacob; Mauzey, Josh L.; Samuelsen, G. Scott

    Current high temperature fuel cell (HTFC) systems used for stationary power applications (in the 200-300 kW size range) have very limited dynamic load following capability or are simply base load devices. Considering the economics of existing electric utility rate structures, there is little incentive to increase HTFC ramping capability beyond 1 kW s⁻¹ (0.4% s⁻¹). However, in order to ease concerns about grid instabilities from utility companies and increase market adoption, HTFC systems will have to increase their ramping abilities, and will likely have to incorporate electrical energy storage (EES). Because batteries have low power densities and limited lifetimes in highly cyclic applications, ultra capacitors may be the EES medium of choice. The current analyses show that, because ultra capacitors have a very low energy storage density, their integration with HTFC systems may not be feasible unless the fuel cell has a ramp rate approaching 10 kW s⁻¹ (4% s⁻¹) when using a worst-case design analysis. This requirement for fast dynamic load response characteristics can be reduced to 1 kW s⁻¹ by utilizing high resolution demand data to properly size ultra capacitor systems and through demand management techniques that reduce load volatility.
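
    A back-of-envelope illustration of the ramp-rate versus storage-size trade-off described above: for a step load increase dP, a fuel cell ramping at r kW/s takes dP/r seconds to catch up, and the storage must cover the triangular energy deficit dP²/(2r). The 100 kW step is an invented example, not a figure from the study.

```python
# Energy an ultracapacitor bank must supply while the fuel cell ramps up.
def storage_energy_kj(step_kw: float, ramp_kw_per_s: float) -> float:
    catch_up_s = step_kw / ramp_kw_per_s        # time for the fuel cell to catch up
    return 0.5 * step_kw * catch_up_s           # kW * s = kJ (triangular deficit)

for ramp in (1.0, 10.0):                        # kW/s, the two ramp rates discussed
    e_kj = storage_energy_kj(100.0, ramp)       # hypothetical 100 kW load step
    print(f"ramp {ramp:4.1f} kW/s -> {e_kj:7.1f} kJ ({e_kj / 3600:.2f} kWh)")
```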

  7. Using Enhanced Grace Water Storage Data to Improve Drought Detection by the U.S. and North American Drought Monitors

    Science.gov (United States)

    Houborg, Rasmus; Rodell, Matthew; Lawrimore, Jay; Li, Bailing; Reichle, Rolf; Heim, Richard; Rosencrans, Matthew; Tinker, Rich; Famiglietti, James S.; Svoboda, Mark

    2011-01-01

    NASA's Gravity Recovery and Climate Experiment (GRACE) satellites measure time variations of the Earth's gravity field, enabling reliable detection of spatio-temporal variations in total terrestrial water storage (TWS), including groundwater. The U.S. and North American Drought Monitors rely heavily on precipitation indices and do not currently incorporate systematic observations of deep soil moisture and groundwater storage conditions. Thus GRACE has great potential to improve the Drought Monitors by filling this observational gap. GRACE TWS data were assimilated into the Catchment Land Surface Model using an ensemble Kalman smoother, enabling spatial and temporal downscaling and vertical decomposition into soil moisture and groundwater components. The Drought Monitors combine several short- and long-term drought indicators expressed in percentiles as a reference to their historical frequency of occurrence. To be consistent, we generated a climatology of estimated soil moisture and groundwater based on a 60-year Catchment model simulation, which was used to convert seven years of GRACE-assimilated fields into drought indicator percentiles. At this stage we provide a preliminary evaluation of the GRACE-assimilated moisture and indicator fields.
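
    A minimal sketch of the percentile step described above: a current storage estimate is ranked against a long model climatology for the same location and calendar month. The values are synthetic, not GRACE or Catchment output.

```python
# Convert one month's storage estimate into a percentile of a 60-year climatology.
import numpy as np

rng = np.random.default_rng(42)
climatology = rng.normal(loc=300.0, scale=40.0, size=60)  # e.g. 60 Julys of groundwater (mm)
current = 215.0                                           # this month's assimilated estimate (mm)

percentile = 100.0 * np.mean(climatology < current)
print(f"current value sits at the {percentile:.0f}th percentile of the climatology")
```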

  8. POSIX and Object Distributed Storage Systems Performance Comparison Studies With Real-Life Scenarios in an Experimental Data Taking Context Leveraging OpenStack Swift & Ceph

    Science.gov (United States)

    Poat, M. D.; Lauret, J.; Betts, W.

    2015-12-01

    The STAR online computing infrastructure has become an intensive, dynamic system used for first-hand data collection and analysis, resulting in a dense collection of data output. As we have transitioned to our current state, inefficient, limited storage systems have become an impediment to fast feedback for online shift crews. A centrally accessible, scalable and redundant distributed storage system has therefore become a necessity in this environment. OpenStack Swift Object Storage and Ceph Object Storage are two promising technologies, as community use and development have led to success elsewhere. In this contribution, OpenStack Swift and Ceph have been put to the test with single and parallel I/O tests, emulating real-world scenarios for data processing and workflows. The Ceph file system storage, offering a POSIX-compliant file system mounted similarly to an NFS share, was of particular interest as it aligned with our requirements and was retained as our solution. I/O performance tests were run against the Ceph POSIX file system and presented surprising results indicating true potential for fast I/O and reliability. STAR's online compute farm has historically been used for job submission and first-hand data analysis. Reusing the online compute farm for both job submission and hosting a storage cluster will be an efficient use of the current infrastructure.
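
    A hedged sketch of the kind of parallel POSIX I/O test this record describes: several worker processes each write a file to a mounted file system (for example a CephFS mount) and the aggregate throughput is reported. The mount path and sizes are placeholders, not the STAR test configuration.

```python
# Simple aggregate-write-throughput test against a POSIX mount point.
import os
import time
from multiprocessing import Pool

MOUNT = "/mnt/cephfs/iotest"   # hypothetical CephFS (or any POSIX) mount point
FILE_MB = 256                  # size written by each worker
WORKERS = 8

def write_one(idx: int) -> int:
    chunk = os.urandom(1024 * 1024)                   # 1 MiB of incompressible data
    path = os.path.join(MOUNT, f"worker_{idx}.bin")
    with open(path, "wb") as f:
        for _ in range(FILE_MB):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())                          # make sure data reaches storage
    return FILE_MB

if __name__ == "__main__":
    os.makedirs(MOUNT, exist_ok=True)
    start = time.time()
    with Pool(WORKERS) as pool:
        written_mb = sum(pool.map(write_one, range(WORKERS)))
    elapsed = time.time() - start
    print(f"{written_mb} MiB in {elapsed:.1f} s -> {written_mb / elapsed:.1f} MiB/s aggregate")
```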

  9. Microbial Diagnostic Array Workstation (MDAW): a web server for diagnostic array data storage, sharing and analysis

    Directory of Open Access Journals (Sweden)

    Chang Yung-Fu

    2008-09-01

    Full Text Available Abstract Background Microarrays are becoming a very popular tool for microbial detection and diagnostics. Although these diagnostic arrays are much simpler when compared to the traditional transcriptome arrays, due to the high throughput nature of the arrays, the data analysis requirements still form a bottleneck for the widespread use of these diagnostic arrays. Hence we developed a new online data sharing and analysis environment customised for diagnostic arrays. Methods Microbial Diagnostic Array Workstation (MDAW) is a database-driven application with the database designed in MS Access and the front end designed in ASP.NET. Conclusion MDAW is a new resource that is customised for the data analysis requirements of microbial diagnostic arrays.

  10. Putting all that (HEP-) data to work - a REAL implementation of an unlimited computing and storage architecture

    International Nuclear Information System (INIS)

    Ernst, Michael

    1996-01-01

    Since computing in HEP left the Mainframe-Path, many institutions demonstrated a successful migration to workstation-based computing, especially for applications requiring a high CPU-to-I/O ratio. However, the difficulties and the complexity start beyond just providing CPU-Cycles. Critical applications, requiring either sequential access to large amounts of data or to many small sets out of a multi 10-Terabyte Data Repository, need technical approaches we have not had so far. Though we felt that we were hardly able to follow technology evolving in the various fields, we recently had to realize that even politics overtook technical evolution, at least in the areas mentioned above. The USA is making peace with Russia. DEC is talking to IBM, SGI communicating with HP. All these things became true, and though, unfortunately, the Cold War lasted 50 years, and, in a relative sense, we were afraid that 50 years seemed to be how long any self-respecting high performance computer (or a set of workstations) had to wait for data from its Server, fortunately, we are now facing a similar progress of friendliness, harmony and balance in the formerly problematic (computing) areas. Buzzwords, mentioned many thousand times in talks describing today's and future requirements, including Functionality, Reliability, Scalability, Modularity and Portability, are not just phrases, wishes and dreams any longer. At DESY, we are in the process of demonstrating an architecture that takes those five issues equally into consideration, including Heterogeneous Computing Platforms with ultimate file system approaches, Heterogeneous Mass Storage Devices and an Open Distributed Hierarchical Mass Storage Management System. This contribution will provide an overview of how far we have got and what the next steps will be. (author)

  11. Intelligent Data Storage and Retrieval for Design Optimisation – an Overview

    Directory of Open Access Journals (Sweden)

    C. Peebles

    2005-01-01

    Full Text Available This paper documents the findings of a literature review conducted by the Sir Lawrence Wackett Centre for Aerospace Design Technology at RMIT University. The review investigates aspects of a proposed system for intelligent design optimisation. Such a system would be capable of efficiently storing (and compressing if required) a range of types of design data into an intelligent database. This database would be accessed by the system during subsequent design processes, allowing relevant design data to be searched for re-use in later designs, so that the system becomes increasingly efficient at reducing design time as the database grows in size. Extensive research has been performed, covering both theoretical aspects of the project and practical examples of current similar systems. This research covers the areas of database systems, database queries, representation and compression of design data, geometric representation and heuristic methods for design applications.

  12. Proteomics data exchange and storage: the need for common standards and public repositories.

    Science.gov (United States)

    Jiménez, Rafael C; Vizcaíno, Juan Antonio

    2013-01-01

    Both the existence of data standards and public databases or repositories have been key factors behind the development of the existing "omics" approaches. In this book chapter we first review the main existing mass spectrometry (MS)-based proteomics resources: PRIDE, PeptideAtlas, GPMDB, and Tranche. Second, we report on the current status of the different proteomics data standards developed by the Proteomics Standards Initiative (PSI): the formats mzML, mzIdentML, mzQuantML, TraML, and PSI-MI XML are then reviewed. Finally, we present an easy way to query and access MS proteomics data in the PRIDE database, as a representative of the existing repositories, using the workflow management system (WMS) tool Taverna. Two different publicly available workflows are explained and described.

  13. Two-photon polarization data storage in bacteriorhodopsin films and its potential use in security applications

    Energy Technology Data Exchange (ETDEWEB)

    Imhof, Martin; Hampp, Norbert, E-mail: hampp@staff.uni-marburg.de [Department of Chemistry, Material Sciences Center, University of Marburg, Hans-Meerwein-Str., D-35032 Marburg (Germany); Rhinow, Daniel [Max-Planck-Institute of Biophysics, Max-von-Laue-Straße 3, D-60438 Frankfurt (Germany)

    2014-02-24

    Bacteriorhodopsin (BR) films allow write-once-read-many recording of polarization data by a two-photon-absorption (TPA) process. The optical changes in BR films induced by the TPA recording were measured and the Müller matrix of a BR film was determined. A potential application of BR films in security technology is shown. Polarization data can be retrieved angle-selectively with a high signal-to-noise ratio. The BR film not only carries optical information but also serves as a linear polarizer. As a result, polarization features recorded in BR films can be retrieved using merely the polarized light from a mobile phone display.

  14. Development and application of magnetic magnesium for data storage in gentelligent products

    International Nuclear Information System (INIS)

    Wu, K.-H.; Gastan, E.; Rodman, M.; Behrens, B.-A.; Bach, Fr.-W.; Gatzen, H.H.

    2010-01-01

    A new concept aims at developing genetically intelligent ('gentelligent') components, which bequeath production or application data to their next generation. For such an approach, it is desirable to store the respective information on the component itself. This is accomplished by using the component's surface to store magnetic data. This way, the component itself can be used as its own information carrier throughout the manufacturing process and later throughout the working cycle. The chosen approach is to develop a magnetic magnesium (Mg), integrate it in an appropriate component, and subject the component to recording experiments.

  15. A modular system of codes for discharge data storage used at Tokamak TJ-1

    International Nuclear Information System (INIS)

    Guasp, J.

    1983-01-01

    The code system is able to sample the discharge data at an 83 kHz rate using an LPA-11K controller and AD-11K converters and to store them in structured files. The file details are transparent to the retrieval and plotting programs. Graphs are available on both a Tektronix 4010-1 display and a Versatec 1200 plotter. The system allows data storage and retrieval on PDP-11/34 and UNIVAC-1100/8 computers as well as on magnetic tape. (author)

  16. GeoSearch: a new virtual globe application for the submission, storage, and sharing of point-based ecological data

    Science.gov (United States)

    Cardille, J. A.; Gonzales, R.; Parrott, L.; Bai, J.

    2009-12-01

    How should researchers store and share data? For most of history, scientists with results and data to share have been largely limited to books and journal articles. In recent decades, the advent of personal computers and shared data formats has made it feasible, though often cumbersome, to transfer data between individuals or among small groups. Meanwhile, the use of automatic samplers, simulation models, and other data-production techniques has increased greatly. The result is that there is more and more data to store, and a greater expectation that it will be available at the click of a button. In 10 or 20 years, will we still send emails to each other to learn about what data exist? The development of and widespread familiarity with virtual globes like Google Earth and NASA WorldWind has created the potential, in just the last few years, to revolutionize the way we share data, search for and search through data, and understand the relationship between individual projects in research networks, where sharing and dissemination of knowledge is encouraged. For the last two years, we have been building the GeoSearch application, a cutting-edge online resource for the storage, sharing, search, and retrieval of data produced by research networks. Linking NASA's WorldWind globe platform, the data browsing toolkit prefuse, and SQL databases, GeoSearch's version 1.0 enables flexible searches and novel geovisualizations of large amounts of related scientific data. These data may be submitted to the database by individual researchers and processed by GeoSearch's data parser. Ultimately, data from research groups gathered in a research network would be shared among users via the platform. Access is not limited to the scientists themselves; administrators can determine which data can be presented publicly and which require group membership. Under the auspices of Canada's Sustainable Forestry Management Network of Excellence, we have created a moderate-sized database

  17. System for secure storage

    NARCIS (Netherlands)

    2005-01-01

    A system (100) comprising read means (112) for reading content data and control logic data from a storage medium (101), the control logic data being uniquely linked to the storage medium (101), processing means (113-117), for processing the content data and feeding the processed content data to an

  18. FPGA based data-flow injection module at 10 Gbit/s reading data from network exported storage and using standard protocols

    International Nuclear Information System (INIS)

    Lemouzy, B; Garnier, J-C; Neufeld, N

    2011-01-01

    The goal of the LHCb readout upgrade is to accelerate the DAQ to 40 MHz. Such a DAQ system will certainly employ 10 Gigabit or similar technologies and might also need new networking protocols such as a customized, light-weight TCP or more specialized protocols. A test module is being implemented to be integrated in the existing LHCb infrastructure. It is a multiple 10-Gigabit traffic generator, driven by a Stratix IV FPGA, and flexible enough to generate LHCb's raw data packets. Traffic data are either internally generated or read from external storage via the network. We have implemented the light-weight industry-standard protocol ATA over Ethernet (AoE), and we present an outlook on using a file system on these network-exported disk drives.

  19. LHCb: FPGA based data-flow injection module at 10 Gbit/s reading data from network exported storage and using standard protocols

    CERN Multimedia

    Lemouzy, B; Garnier, J-C

    2010-01-01

    The goal of the LHCb readout upgrade is to speed up the DAQ to 40 MHz. Such a DAQ system will certainly employ 10 Gigabit or similar technologies and might also need new networking protocols such as a customized, light-weight TCP or more specialised protocols. A test module is being implemented which integrates into the existing LHCb infrastructure. It is a multiple 10-Gigabit traffic generator, driven by a Stratix IV FPGA, which is flexible enough to either generate LHCb's raw data packets internally or read them from external storage via the network. For reading the data we have implemented the light-weight industry-standard protocol ATA over Ethernet (AoE), and we present an outlook on using a filesystem on these network-exported disk drives.

  20. A data storage, retrieval and analysis system for endocrine research. [for Skylab

    Science.gov (United States)

    Newton, L. E.; Johnston, D. A.

    1975-01-01

    This retrieval system builds, updates, retrieves, and performs basic statistical analyses on blood, urine, and diet parameters for the M071 and M073 Skylab and Apollo experiments. This system permits data entry from cards to build an indexed sequential file. Programs are easily modified for specialized analyses.

  1. SECURITY ANALYSIS OF ONE SOLUTION FOR SECURE PRIVATE DATA STORAGE IN A CLOUD

    Directory of Open Access Journals (Sweden)

    Ludmila Klimentievna Babenko

    2016-03-01

    Full Text Available The paper analyzes the security of one recently proposed secure cloud database architecture. We present an attack on it that ties the security of the whole solution to the security of the particular encryption schemes used in it. We show that this architecture is vulnerable and, consequently, that the solution is unviable.

  2. SECURITY ANALYSIS OF ONE SOLUTION FOR SECURE PRIVATE DATA STORAGE IN A CLOUD

    OpenAIRE

    Ludmila Klimentievna Babenko; Alina Viktorovna Trepacheva

    2016-01-01

    The paper analyzes the security of one recently proposed secure cloud database architecture. We present an attack on it that ties the security of the whole solution to the security of the particular encryption schemes used in it. We show that this architecture is vulnerable and, consequently, that the solution is unviable.

  3. magHD: a new approach to multi-dimensional data storage, analysis, display and exploitation

    International Nuclear Information System (INIS)

    Angleraud, Christophe

    2014-01-01

    The ever-increasing amount of data and processing capability, following the well-known Moore's law, is challenging the way scientists and engineers currently exploit large datasets. Scientific visualization tools, although quite powerful, are often too generic and provide abstract views of phenomena, thus preventing cross-discipline fertilization. On the other hand, Geographic Information Systems allow nice and visually appealing maps to be built, but these often become confusing as more layers are added. Moreover, the introduction of time as a fourth analysis dimension, to allow analysis of time-dependent phenomena such as meteorological or climate models, is encouraging real-time data exploration techniques in which spatial-temporal points of interest are detected through the integration of moving images by the human brain. Magellium has been involved in high-performance image-processing chains for satellite image processing as well as scientific signal analysis and geographic information management since its creation in 2003. We believe that recent work on big data, GPU and peer-to-peer collaborative processing can enable a new breakthrough in data analysis and display that will serve many new applications in collaborative scientific computing, environment mapping and understanding. The magHD (for Magellium Hyper-Dimension) project aims at developing software solutions that bring highly interactive tools for the analysis and exploration of complex datasets to commodity hardware, targeting small- to medium-scale clusters with expansion capabilities towards large cloud-based clusters.

  4. [TRU waste storage, technical data and calculations electropolishing, October 21, 1977--April 1978

    Energy Technology Data Exchange (ETDEWEB)

    Allen, R. P.

    1977-12-31

    This document contains copies of three reports on electropolishing. Electropolishing is a key step in the processing of solid wastes. It is the design basis for decontaminating alpha, as well as beta-gamma, waste metals in spite of incomplete data on the process and associated equipment.

  5. compendiumdb: an R package for retrieval and storage of functional genomics data

    NARCIS (Netherlands)

    Nandal, Umesh K.; van Kampen, Antoine H. C.; Moerland, Perry D.

    2016-01-01

    Currently, the Gene Expression Omnibus (GEO) contains public data of over 1 million samples from more than 40 000 microarray-based functional genomics experiments. This provides a rich source of information for novel biological discoveries. However, unlocking this potential often requires retrieving

  6. Corporate environmental information system data storage development and management (Environmental Information System

    Directory of Open Access Journals (Sweden)

    Lyazat Naizabayeva

    2017-12-01

    Full Text Available In this article a software implementation of environmental monitoring is developed and presented, which is responsible for receiving, storing, processing and analysing data. For logical database design, Computer-Aided Software Engineering (CASE) technology, the AllFusion ERwin Data Modeler, was selected. The corporate database was developed using the Oracle database management system. The database contains a set of objects which store all the primary and additional service information, as well as a set of software modules implementing the business logic. The developed information system makes it possible to find optimal solutions for cleaning and remediating contaminated areas. The created databases on the areas to be remediated offer advantages such as the analysis of remediation carried out using plants.

  7. Decentralized Data Storage and Processing in the Context of the LHC Experiments at CERN

    CERN Document Server

    Blomer, Jakob; Fuhrmann, Thomas

    The computing facilities used to process data for the experiments at the Large Hadron Collider (LHC) at CERN are scattered around the world. The embarrassingly parallel workload allows for use of various computing resources, such as computer centers comprising the Worldwide LHC Computing Grid, commercial and institutional cloud resources, as well as individual home PCs in “volunteer clouds”. Unlike data, the experiment software and its operating system dependencies cannot be easily split into small chunks. Deployment of experiment software on distributed grid sites is challenging since it consists of millions of small files and changes frequently. This thesis develops a systematic approach to distribute a homogeneous runtime environment to a heterogeneous and geographically distributed computing infrastructure. A uniform bootstrap environment is provided by a minimal virtual machine tailored to LHC applications. Based on a study of the characteristics of LHC experiment software, the thesis argues for the ...

  8. Charting a Security Landscape in the Clouds: Data Protection and Collaboration in Cloud Storage

    Science.gov (United States)

    2016-07-01

    strength of specific cryptographic primitives used, such as the Advanced Encryption Standard (AES); protection of keys and key materials beyond the protocol... Using AES with a 256-bit key instead of a 128-bit key, for example, is not a particularly insightful observation. Rather, this...

  9. Simplified diagnostic coding sheet for computerized data storage and analysis in ophthalmology.

    Science.gov (United States)

    Tauber, J; Lahav, M

    1987-11-01

    A review of currently available diagnostic coding systems revealed that most are either too abbreviated or too detailed. We have compiled a simplified diagnostic coding sheet based on the International Classification of Diseases (ICD-9), which is both complete and easy to use in a general practice. The information is transferred to a computer, which uses the relevant ICD-9 diagnoses as a database, and can be retrieved later for display of patients' problems or analysis of clinical data.

  10. Analyst Performance Measures. Volume 1: Persistent Surveillance Data Processing, Storage and Retrieval

    Science.gov (United States)

    2011-09-01

    solutions to address these important challenges. The Air Force is seeking innovative architectures to process and store massive data sets in a flexible... Google Earth, the Video LAN Client (VLC) media player, and the Environmental Systems Research Institute Corporation's (ESRI) ArcGIS product... Earth, Quantum GIS, VLC Media Player, NASA WorldWind, ESRI ArcGIS and many others. Open source GIS and media visualization software can also be

  11. Collection, storage and management of high-water marks data: praxis and recommendations

    Directory of Open Access Journals (Sweden)

    Piotte Olivier

    2016-01-01

    Full Text Available High-water marks data, in its most general definition, is a precious source of information for the many stakeholders involved in risk culture, inundation mapping, river, estuarine or coastal studies, etc. Although there have already been many initiatives to collect and exploit existing data, as well as to collect new marks after flood events, a lack of harmonization and coordination remains. The French flood forecasting services, together with Cerema, decided to provide technical and organizational solutions in order to set up a collaborative approach to managing high-water marks data. On the one hand, a methodological handbook has been produced, giving recommendations for post-flood field investigations. It comes with a dedicated PDA tool and PC desktop software. On the other hand, a national repository has been built, making it possible to gather the large range of information usually needed. This repository is combined with a collaborative web platform, which aims to be a means of public access to the available information, a working tool for technical users, and a front door for contributing to the inventory. The last step of this approach is the setting up of an organizational blueprint including all the stakeholders directly or indirectly involved in high-water marks knowledge.

  12. compendiumdb: an R package for retrieval and storage of functional genomics data.

    Science.gov (United States)

    Nandal, Umesh K; van Kampen, Antoine H C; Moerland, Perry D

    2016-09-15

    Currently, the Gene Expression Omnibus (GEO) contains public data of over 1 million samples from more than 40 000 microarray-based functional genomics experiments. This provides a rich source of information for novel biological discoveries. However, unlocking this potential often requires retrieving and storing a large number of expression profiles from a wide range of different studies and platforms. The compendiumdb R package provides an environment for downloading functional genomics data from GEO, parsing the information into a local or remote database and interacting with the database using dedicated R functions, thus enabling seamless integration with other tools available in R/Bioconductor. The compendiumdb package is written in R, MySQL and Perl. Source code and binaries are available from CRAN (http://cran.r-project.org/web/packages/compendiumdb/) for all major platforms (Linux, MS Windows and OS X) under the GPLv3 license. Contact: p.d.moerland@amc.uva.nl. Supplementary data are available at Bioinformatics online.

  13. Preliminary studies of tunnel interface response modeling using test data from underground storage facilities.

    Energy Technology Data Exchange (ETDEWEB)

    Sobolik, Steven Ronald; Bartel, Lewis Clark

    2010-11-01

    In attempting to detect and map out underground facilities, whether they be large-scale hardened deeply-buried targets (HDBTs) or small-scale tunnels for clandestine border or perimeter crossing, seismic imaging using reflections from the tunnel interface has been seen as one of the better ways to both detect and delineate tunnels from the surface. The large seismic impedance contrast at the tunnel/rock boundary should provide a strong, distinguishable seismic response, but in practice such strong indicators are often lacking. One explanation for the lack of a good seismic reflection at such a strong-contrast boundary is that the damage caused by the tunneling itself creates a zone of altered seismic properties that significantly changes the nature of this boundary. This report examines existing geomechanical data that define the extent of the excavation damage zone around underground tunnels, and its potential impact on rock properties such as P-wave and S-wave velocities. The data presented in this report are associated with sites used for the development of underground repositories for the disposal of radioactive waste; these sites have been excavated in volcanic tuff (Yucca Mountain) and granite (HRL in Sweden, URL in Canada). Using the data from Yucca Mountain, a numerical simulation effort was undertaken to evaluate the effects of the damage zone on seismic responses. Calculations were performed using the parallelized version of the time-domain finite-difference seismic wave propagation code developed in the Geophysics Department at Sandia National Laboratories. From these numerical simulations, the damage zone does not have a significant effect on the tunnel response, for either a purely elastic or an anelastic case. However, what was discovered is that the largest responses are not true reflections, but rather reradiated Stoneley waves generated at the air/earth interface of the tunnel. Because of this, data processed in the usual way may not
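
    A minimal illustration of the time-domain finite-difference idea referred to above (not the Sandia code, which is parallelized and multi-dimensional): a 1D second-order acoustic scheme in Python, with an arbitrary low-velocity zone standing in for the excavation damage zone. All grid sizes, velocities and source parameters are illustrative assumptions.

```python
import numpy as np

def fd_wave_1d(nx=400, nt=900, dx=5.0, dt=5e-4, v0=2000.0):
    """Propagate a 1D acoustic wavefield with a second-order explicit
    finite-difference scheme (all values are illustrative assumptions)."""
    v = np.full(nx, v0)                  # background P-wave velocity (m/s)
    v[200:210] *= 0.3                    # hypothetical low-velocity damage zone
    c2 = (v * dt / dx) ** 2              # squared Courant number per cell
    p_prev = np.zeros(nx)                # wavefield at time step n-1
    p_curr = np.zeros(nx)                # wavefield at time step n
    src, f0, t0 = 50, 25.0, 0.04         # source index, peak frequency, delay
    for n in range(nt):
        lap = np.zeros(nx)
        lap[1:-1] = p_curr[2:] - 2.0 * p_curr[1:-1] + p_curr[:-2]
        p_next = 2.0 * p_curr - p_prev + c2 * lap
        arg = (np.pi * f0 * (n * dt - t0)) ** 2
        p_next[src] += (1.0 - 2.0 * arg) * np.exp(-arg)   # Ricker wavelet source
        p_prev, p_curr = p_curr, p_next
    return p_curr

print("max |p| after propagation:", np.abs(fd_wave_1d()).max())
```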

  14. Nuclear data processing, analysis, transformation and storage with Pade-approximants

    International Nuclear Information System (INIS)

    Badikov, S.A.; Gay, E.V.; Guseynov, M.A.; Rabotnov, N.S.

    1992-01-01

    A method is described to generate rational approximants of high order with applications to neutron data handling. The problems considered are: the approximation of neutron cross-sections in the resonance region, producing the parameters for Adler-Adler type formulae; calculation of the resulting rational approximants' errors, given in analytical form, allowing the error to be computed at any energy point inside the interval of approximation; calculation of the correlation coefficient of error values at two arbitrary points, provided that experimental errors are independent and normally distributed; a method of simultaneous generation of a few rational approximants with an identical set of poles; functionals other than LSM; and two-dimensional approximation. (orig.)
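
    The rational (Padé-type) approximation step itself can be sketched with SciPy's `pade` helper, which builds a rational function from Taylor-series coefficients. The example below approximates exp(x) rather than a neutron cross-section, purely to illustrate the mechanics; it is not the authors' Adler-Adler fitting procedure.

```python
from math import factorial
import numpy as np
from scipy.interpolate import pade

# Taylor coefficients of exp(x) about x = 0: 1/k!
coeffs = [1.0 / factorial(k) for k in range(8)]

# Build a rational approximant p(x)/q(x) with denominator of degree 3
# (the numerator degree follows from the number of coefficients supplied).
p, q = pade(coeffs, 3)          # p, q are numpy.poly1d objects

x = np.linspace(0.0, 2.0, 5)
max_err = np.max(np.abs(p(x) / q(x) - np.exp(x)))
print(f"max |error| on [0, 2]: {max_err:.2e}")
```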

  15. Solving data-at-rest for the storage and retrieval of files in ad hoc networks

    Science.gov (United States)

    Knobler, Ron; Scheffel, Peter; Williams, Jonathan; Gaj, Kris; Kaps, Jens-Peter

    2013-05-01

    Based on current trends for both military and commercial applications, the use of mobile devices (e.g. smartphones and tablets) is greatly increasing. Several military applications consist of secure peer-to-peer file sharing without a centralized authority. For these military applications, if one or more of these mobile devices is lost or compromised, sensitive files can be compromised by adversaries, since COTS devices and operating systems are used. Complete system files cannot be stored on a device, since after compromising a device an adversary can attack the data at rest and eventually obtain the original file. Also, after a device is compromised, the existing peer-to-peer system devices must still be able to access all system files. McQ has teamed with the Cryptographic Engineering Research Group at George Mason University to develop a custom distributed file sharing system that provides a complete solution to the data-at-rest problem for resource-constrained embedded systems and mobile devices. This innovative approach scales very well to a large number of network devices, without a single point of failure. We have implemented the approach on representative mobile devices and have developed an extensive system simulator to benchmark expected system performance based on detailed modeling of the network/radio characteristics, CONOPS, and secure distributed file system functionality. The simulator is highly customizable for the purpose of determining expected system performance for other network topologies and CONOPS.
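
    The record does not spell out the cryptographic construction used. One textbook way to keep a complete file off any single device is secret splitting; the sketch below shows the simplest n-of-n XOR variant in Python, purely as an illustration of the idea (a fielded system of the kind described would more likely use a threshold scheme such as Shamir's secret sharing, so that files remain recoverable after a device is lost).

```python
import os
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split(secret: bytes, n: int = 3) -> list:
    """n-of-n XOR splitting: any subset of fewer than n shares reveals
    nothing about the file, but all n shares are needed to rebuild it."""
    shares = [os.urandom(len(secret)) for _ in range(n - 1)]
    shares.append(reduce(xor_bytes, shares, secret))
    return shares

def combine(shares: list) -> bytes:
    return reduce(xor_bytes, shares)

if __name__ == "__main__":
    shares = split(b"sensitive mission file", n=3)
    assert combine(shares) == b"sensitive mission file"
    print("reconstructed OK from", len(shares), "shares")
```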

  16. Quantum Data Locking for Secure Communication against an Eavesdropper with Time-Limited Storage

    Directory of Open Access Journals (Sweden)

    Cosmo Lupo

    2015-05-01

    Full Text Available Quantum cryptography allows for unconditionally secure communication against an eavesdropper endowed with unlimited computational power and perfect technologies, who is only constrained by the laws of physics. We review recent results showing that, under the assumption that the eavesdropper can store quantum information only for a limited time, it is possible to enhance the performance of quantum key distribution in both a quantitative and qualitative fashion. We consider quantum data locking as a cryptographic primitive and discuss secure communication and key distribution protocols. For the case of a lossy optical channel, this yields the theoretical possibility of generating secret key at a constant rate of 1 bit per mode at arbitrarily long communication distances.

  17. Real-Time Transmission and Storage of Video, Audio, and Health Data in Emergency and Home Care Situations

    Directory of Open Access Journals (Sweden)

    Riccardo Stagnaro

    2007-01-01

    Full Text Available The increase in the availability of bandwidth for wireless links, network integration, and the computational power of fixed and mobile platforms at affordable costs allows nowadays for the handling of audio and video data of a quality suitable for medical applications. These information streams can support both continuous monitoring and emergency situations. According to this scenario, the authors have developed and implemented the mobile communication system described in this paper. The system is based on the ITU-T H.323 multimedia terminal recommendation, suitable for real-time data/video/audio and telemedical applications. The video and audio codecs, respectively H.264 and G.723.1, were implemented and optimized in order to obtain high performance on the system's target processors. Offline media streaming storage and retrieval functionalities were supported by integrating a relational database in the hospital central system. The system is based on low-cost consumer technologies such as general packet radio service (GPRS) and wireless local area network (WLAN or WiFi) for low-band data/video transmission. Implementation and testing were carried out for medical emergency and telemedicine applications. In this paper, the emergency case study is described.

  18. Developing a File System Structure to Solve Healthy Big Data Storage and Archiving Problems Using a Distributed File System

    Directory of Open Access Journals (Sweden)

    Atilla Ergüzen

    2018-06-01

    Full Text Available Recently, the use of the internet has become widespread, increasing the use of mobile phones, tablets, computers, Internet of Things (IoT) devices and other digital sources. In the health sector, with the help of new-generation digital medical equipment, this digital world has also tended to grow in an unpredictable way, in that it holds nearly 10% of global data and continues to grow beyond what the other sectors hold. This progress has greatly enlarged the amount of produced data, which cannot be handled with conventional methods. In this work, an efficient model for the storage of medical images using a distributed file system structure has been developed. With this work, a robust, available, scalable, and serverless solution structure has been produced, especially for storing large amounts of data in the medical field. Furthermore, the security level of the system is high through the use of static Internet Protocol (IP) addresses, user credentials, and synchronously encrypted file contents. One of the most important key features of the system is high performance and easy scalability. In this way, the system can work with fewer hardware elements and be more robust than others that use name-node architecture. According to the test results, the performance of the designed system is 97% better than a Not Only Structured Query Language (NoSQL) system, 80% better than a relational database management system (RDBMS), and 74% better than an operating system (OS).
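
    The record mentions encrypted file contents but not the cipher or key-handling details; the sketch below merely illustrates symmetric encryption of a file before it is handed to storage nodes, using the `cryptography` package's Fernet construction. Key management is deliberately omitted and the function names are placeholders, not the system's actual API.

```python
from cryptography.fernet import Fernet

# For illustration only: in a real system the key would come from a
# key-management service, not be generated in process memory.
key = Fernet.generate_key()
cipher = Fernet(key)

def store_encrypted(path_in: str, path_out: str) -> None:
    """Encrypt a file (e.g. a medical image) before writing it to storage."""
    with open(path_in, "rb") as f:
        token = cipher.encrypt(f.read())
    with open(path_out, "wb") as f:
        f.write(token)

def load_decrypted(path_enc: str) -> bytes:
    """Read an encrypted blob back from storage and decrypt it."""
    with open(path_enc, "rb") as f:
        return cipher.decrypt(f.read())
```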

  19. Inferring Large-Scale Terrestrial Water Storage Through GRACE and GPS Data Fusion in Cloud Computing Environments

    Science.gov (United States)

    Rude, C. M.; Li, J. D.; Gowanlock, M.; Herring, T.; Pankratius, V.

    2016-12-01

    Surface subsidence due to depletion of groundwater can lead to permanent compaction of aquifers and damaged infrastructure. However, studies of such effects on a large scale are challenging and compute intensive because they involve fusing a variety of data sets beyond direct measurements from groundwater wells, such as gravity change measurements from the Gravity Recovery and Climate Experiment (GRACE) or surface displacements measured by GPS receivers. Our work therefore leverages Amazon cloud computing to enable these types of analyses spanning the entire continental US. Changes in groundwater storage are inferred from surface displacements measured by GPS receivers stationed throughout the country. Receivers located on bedrock are anti-correlated with changes in water levels from elastic deformation due to loading, while stations on aquifers correlate with groundwater changes due to poroelastic expansion and compaction. Correlating linearly detrended equivalent water thickness measurements from GRACE with linearly detrended and Kalman filtered vertical displacements of GPS stations located throughout the United States helps compensate for the spatial and temporal limitations of GRACE. Our results show that the majority of GPS stations are negatively correlated with GRACE in a statistically relevant way, as most GPS stations are located on bedrock in order to provide stable reference locations and measure geophysical processes such as tectonic deformations. Additionally, stations located on the Central Valley California aquifer show statistically significant positive correlations. Through the identification of positive and negative correlations, deformation phenomena can be classified as loading or poroelastic expansion due to changes in groundwater. This method facilitates further studies of terrestrial water storage on a global scale. This work is supported by NASA AIST-NNX15AG84G (PI: V. Pankratius) and Amazon.
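
    The detrend-and-correlate analysis described above can be sketched in a few lines with SciPy; the two series below are synthetic stand-ins for a GRACE equivalent-water-thickness series and one GPS station's vertical displacements, not real data.

```python
import numpy as np
from scipy.signal import detrend
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
months = np.arange(120)                       # ten years of monthly samples

# Synthetic placeholders: a seasonal water-storage signal plus a linear trend,
# and a GPS vertical series that responds with the opposite sign (loading).
seasonal = np.sin(2 * np.pi * months / 12.0)
grace_ewt = 5.0 * seasonal + 0.05 * months + rng.normal(0, 0.5, months.size)
gps_up = -2.0 * seasonal + 0.02 * months + rng.normal(0, 0.3, months.size)

# Remove the linear trends, then correlate the residual signals.
r, p_value = pearsonr(detrend(grace_ewt), detrend(gps_up))
print(f"r = {r:.2f}, p = {p_value:.1e}")      # negative r suggests a bedrock/loading response
```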

  20. Integrating Enhanced Grace Terrestrial Water Storage Data Into the U.S. and North American Drought Monitors

    Science.gov (United States)

    Housborg, Rasmus; Rodell, Matthew

    2010-01-01

    NASA's Gravity Recovery and Climate Experiment (GRACE) satellites measure time variations of the Earth's gravity field, enabling reliable detection of spatio-temporal variations in total terrestrial water storage (TWS), including groundwater. The U.S. and North American Drought Monitors are two of the premier drought monitoring products available to decision-makers for assessing and minimizing drought impacts, but they rely heavily on precipitation indices and do not currently incorporate systematic observations of deep soil moisture and groundwater storage conditions. Thus GRACE has great potential to improve the Drought Monitors by filling this observational gap. Horizontal, vertical and temporal disaggregation of the coarse-resolution GRACE TWS data has been accomplished by assimilating GRACE TWS anomalies into the Catchment Land Surface Model using an ensemble Kalman smoother. The Drought Monitors combine several short-term and long-term drought indices and indicators expressed as percentiles with reference to their historical frequency of occurrence for the location and time of year in question. To be consistent, we are in the process of generating a climatology of estimated soil moisture and groundwater based on a 60-year Catchment model simulation, which will subsequently be used to convert seven years of GRACE-assimilated fields into soil moisture and groundwater percentiles for systematic incorporation into the objective blends that constitute the Drought Monitor baselines. At this stage we provide a preliminary evaluation of GRACE-assimilated Catchment model output against independent datasets, including soil moisture observations from Aqua AMSR-E and groundwater level observations from the U.S. Geological Survey's Groundwater Climate Response Network.
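
    Converting a storage estimate into a percentile of its historical frequency of occurrence, as the Drought Monitor blends require, is a simple ranking against a climatology; a minimal sketch follows, with made-up numbers standing in for the 60-year model climatology.

```python
import numpy as np
from scipy.stats import percentileofscore

def storage_percentile(current_value, climatology):
    """Rank a current soil moisture or groundwater value against the model
    climatology for the same grid cell and time of year."""
    return percentileofscore(climatology, current_value)

# Hypothetical: 60 simulated July groundwater storage values (mm) for one cell
rng = np.random.default_rng(1)
july_climatology = rng.normal(loc=250.0, scale=40.0, size=60)

print(f"{storage_percentile(185.0, july_climatology):.0f}th percentile")  # low value -> drought signal
```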

  1. A SHORT HISTORY CSISRS - AT THE CUTTING EDGE OF NUCLEAR DATA INFORMATION STORAGE AND RETRIEVAL SYSTEMS AND ITS RELATIONSHIP TO CINDA, EXFOR AND ENDF.

    Energy Technology Data Exchange (ETDEWEB)

    HOLDEN, N.E.

    2005-12-01

    A short history of CSISRS, pronounced "scissors" and standing for the Cross Section Information Storage and Retrieval System, is given. The relationship of CSISRS to CINDA, to the neutron nuclear data four-centers, to EXFOR and to ENDF, the evaluated neutron nuclear data file, is briefly explained.

  2. ReWritable Data Storage on DVD by Using Phase Change Technology

    Science.gov (United States)

    Kleine, H.; Martin, F.; Kapeller, M.; Cord, B.; Ebinger, H.

    It is expected that in the next few years the VHS cassette will be replaced by rewritable Digital Versatile Discs (DVD) for home video recording. At this moment three different standards (DVD+RW, DVD-RW and DVD-RAM) exist, of which DVD+RW is expected to dominate the market in Europe and the United States. The disc holds 4.7 GB of computer data, which is equivalent to several hours of high-quality video content. At the heart of the disc is a thin-film layer stack with a special phase change recording layer. By proper laser irradiation the disc can be overwritten up to 1000 times without noticeable quality loss. A shelf lifetime of 20-50 years is anticipated. With these characteristics the disc is well suited for consumer applications. The present article illuminates how a process engineer can control the disc recording sensitivity, the recording speed and the number of overwriting cycles by the design of the thin-film layer stack.

  3. VALORA: data base system for storage significant information used in the behavior modelling in the biosphere

    International Nuclear Information System (INIS)

    Valdes R, M.; Aguero P, A.; Perez S, D.; Cancio P, D.

    2006-01-01

    Nuclear and radioactive facilities can emit effluents containing radionuclides to the environment, where they are dispersed and/or accumulate in the atmosphere, on the terrestrial surface and in surface waters. As part of radiological impact assessments, qualitative and quantitative analyses must be carried out. In many cases the real values of the parameters used in the modelling are not available, nor is it possible to measure them; to carry out the evaluation, an extensive search of the literature on the possible values of each parameter, under conditions similar to the object of study, is therefore needed, and this work can be extensive. This work describes the characteristics of the VALORA database system, developed to organize and automate significant information appearing in different sources (scientific or technical literature) on the parameters used in modelling the behavior of pollutants in the environment and the values assigned to these parameters in the evaluation of potential radiological impact; VALORA allows the consultation and selection of the characteristic parametric data of the different situations and processes required by the implemented calculation model. The VALORA software is a component of a group of computer tools whose objective is to help solve dispersion and pollutant transfer models. (Author)

  4. Optimal micro-mirror tilt angle and sync mark design for digital micro-mirror device based collinear holographic data storage system.

    Science.gov (United States)

    Liu, Jinpeng; Horimai, Hideyoshi; Lin, Xiao; Liu, Jinyan; Huang, Yong; Tan, Xiaodi

    2017-06-01

    The collinear holographic data storage system (CHDSS) is a very promising storage system due to its large storage capacity and high transfer rates in the era of big data. The digital micro-mirror device (DMD), used as a spatial light modulator, is the key device of the CHDSS due to its high speed, high precision, and broadband working range. To improve the system stability and performance, an optimal micro-mirror tilt angle was theoretically calculated and experimentally confirmed by analyzing the relationship between the tilt angle of the micro-mirrors on the DMD and the power profiles of the diffraction patterns of the DMD at the Fourier plane. In addition, we proposed a novel chessboard sync mark design in the data page to reduce the system bit error rate under conditions of reduced aperture (required to decrease noise) and median exposure amount. It will provide practical guidance for future DMD-based CHDSS development.

  5. Synthesis and Screening of Phase Change Chalcogenide Thin Film Materials for Data Storage.

    Science.gov (United States)

    Guerin, Samuel; Hayden, Brian; Hewak, Daniel W; Vian, Chris

    2017-07-10

    A combinatorial synthetic methodology based on evaporation sources under an ultrahigh vacuum has been used to directly synthesize compositional-gradient thin film libraries of the amorphous phases of GeSbTe alloys at room temperature over a wide compositional range. An optical screen is described that allows rapid parallel mapping of the amorphous-to-crystalline phase transition temperature and the optical contrast associated with the phase change on such libraries. The results are shown to be consistent with the literature for compositions where published data are available along the Sb2Te3-GeTe tie line. The results reveal a minimum in the crystallization temperature along the Sb2Te3-Ge2Te3 tie line, and the method is able to resolve subsequent cubic-to-hexagonal phase transitions in the GST crystalline phase. HT-XRD has been used to map the phases at sequentially higher temperatures, and the results are reconciled with the literature and trends in crystallization temperatures. The results clearly delineate compositions that crystallize to pure GST phases and those that cocrystallize Te. High-throughput measurement of the resistivity of the amorphous and crystalline phases has allowed the compositional and structural correlation of the resistivity contrast associated with the amorphous-to-crystalline transition, which ranges from 5 to 8 orders of magnitude for the compositions investigated. The results are discussed in terms of the compromises in the selection of these materials for phase change memory applications and the potential for further exploration through more detailed secondary screening of doped GST or similar classes of phase change materials designed for the demands of future memory devices.

  6. A method for simultaneous linear optics and coupling correction for storage rings with turn-by-turn beam position monitor data

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Xi [Brookhaven National Laboratory, Upton, Long Island, NY 11973 (United States); Huang, Xiaobiao, E-mail: xiahuang@slac.stanford.edu [SLAC National Accelerator Laboratory, 2575 Sand Hill Road, Menlo Park, CA 94025 (United States)

    2016-08-21

    We propose a method to simultaneously correct linear optics errors and linear coupling for storage rings using turn-by-turn (TbT) beam position monitor (BPM) data. The independent component analysis (ICA) method is used to isolate the betatron normal modes from the measured TbT BPM data. The betatron amplitudes and phase advances of the projections of the normal modes on the horizontal and vertical planes are then extracted, which, combined with dispersion measurement, are used to fit the lattice model. The fitting results are used for lattice correction. The method has been successfully demonstrated on the NSLS-II storage ring.
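
    The ICA step, isolating betatron normal modes from turn-by-turn BPM readings, can be sketched with scikit-learn's FastICA on synthetic data; the tunes, number of BPMs and mixing below are invented for illustration and this is not the NSLS-II analysis code.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(2)
turns = np.arange(1024)

# Two synthetic "betatron" modes with different tunes
modes = np.vstack([np.cos(2 * np.pi * 0.22 * turns),
                   np.cos(2 * np.pi * 0.26 * turns + 0.7)])

# Each of 30 hypothetical BPMs observes a different linear mixture plus noise
mixing = rng.normal(size=(30, 2))
bpm_data = mixing @ modes + 0.05 * rng.normal(size=(30, turns.size))

# Unmix: columns of `sources` approximate the normal-mode turn-by-turn signals,
# from which amplitudes and phase advances per BPM could then be extracted.
ica = FastICA(n_components=2, random_state=0)
sources = ica.fit_transform(bpm_data.T)        # shape (n_turns, 2)
print(sources.shape)
```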

  7. A method for simultaneous linear optics and coupling correction for storage rings with turn-by-turn beam position monitor data

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Xi [Brookhaven National Lab. (BNL), Upton, NY (United States); Huang, Xiaobiao [SLAC National Accelerator Lab., Menlo Park, CA (United States)

    2016-08-01

    We propose a method to simultaneously correct linear optics errors and linear coupling for storage rings using turn-by-turn (TbT) beam position monitor (BPM) data. The independent component analysis (ICA) method is used to isolate the betatron normal modes from the measured TbT BPM data. The betatron amplitudes and phase advances of the projections of the normal modes on the horizontal and vertical planes are then extracted, which, combined with dispersion measurement, are used to fit the lattice model. Furthermore, the fitting results are used for lattice correction. Our method has been successfully demonstrated on the NSLS-II storage ring.

  8. Providing the Persistent Data Storage in a Software Engineering Environment Using Java/CORBA and a DBMS

    Science.gov (United States)

    Dhaliwal, Swarn S.

    1997-01-01

    An investigation was undertaken to build the software foundation for the WHERE (Web-based Hyper-text Environment for Requirements Engineering) project. The TCM (Toolkit for Conceptual Modeling) was chosen as the foundation software for the WHERE project, which aims to provide an environment for facilitating collaboration among geographically distributed people involved in the Requirements Engineering process. The TCM is a collection of diagram and table editors and has been implemented in the C++ programming language. The C++ implementation of the TCM was translated into Java in order to allow the editors to be used for building various functionality of the WHERE project; the WHERE project intends to use the Web as its communication backbone. One of the limitations of the translated software (TcmJava), which militated against its use in the WHERE project, was the persistent data management mechanism it inherited from the original TCM, which was designed for standalone applications. Before TcmJava editors could be used as part of the multi-user, geographically distributed applications of the WHERE project, a persistent storage mechanism had to be built which would allow data communication over the Internet, using the capabilities of the Web. An approach involving features of Java, CORBA (Common Object Request Broker Architecture), the Web, a middleware layer (Java Relational Binding (JRB)), and a database server was used to build the persistent data management infrastructure for the WHERE project. The developed infrastructure allows a TcmJava editor to be downloaded and run from a network host by using a JDK 1.1 (Java Developer's Kit)-compatible Web browser. The aforementioned editor establishes a connection with a server by using the ORB (Object Request Broker) software and stores/retrieves data in/from the server. The server consists of a CORBA object or objects, depending upon whether the data is to be made persistent on a single server or multiple servers. The CORBA

  9. Enhanced density of optical data storage using near-field concept: fabrication and test of nanometric aperture array

    International Nuclear Information System (INIS)

    Cha, J.; Park, J. H.; Kim, Myong R.; Jhe, W.

    1999-01-01

    We have tried to enhance the density of near-field optical memory and to improve the recording/readout speed. Current optical memory has limitations in both density and speed. This barrier, due to its far-field nature, can be overcome by the use of the near field. The optical data storage density can be increased by reducing the size of the nanometric aperture where the near field is obtained. To fabricate the aperture with precise dimensions, we applied the orientation-dependent (anisotropic) etching property of crystalline Si often employed in the field of MEMS. In this way we fabricated a 10 x 10 aperture array. This array will also be an indispensable part of speeding up recording, and it shows the possibility of a multi-tracking pickup in phase-change-type memory. This aperture array is expected to write bit marks about 100 nm in size. We will show the recent results obtained. (author)

  10. Rapid screening for lipid storage disorders using biochemical markers Expert center data and review of the literature

    NARCIS (Netherlands)

    Voorink-Moret, M.; Goorden, S. M. I.; van Kuilenburg, A. B. P.; Wijburg, F. A.; Ghauharali-van der Vlugt, J. M. M.; Beers-Stet, F. S.; Zoetekouw, A.; Kulik, W.; Hollak, C. E. M.; Vaz, F. M.

    2018-01-01

    Background: In patients suspected of a lipid storage disorder (sphingolipidoses, lipidoses), confirmation of the diagnosis relies predominantly on the measurement of specific enzymatic activities and genetic studies. New UPLC-MS/MS methods have been developed to measure lysosphingolipids and

  11. Effect of non-ideal characteristics of an adder on the efficiency of data storage during scintillation radiometric testing with the use of pulse radiations

    International Nuclear Information System (INIS)

    Nedavnij, O.I.

    1983-01-01

    Problems of the statistical summation of electric signals during scintillation radiometric testing using pulsed sources (betatrons and X-ray apparatus) have been considered. By calculation and experiment it is shown that the non-ideal nature of the adder, caused by energy consumption in the process of summation, hampers information storage to a greater degree than differences in the amplitudes of summed signals with similar statistical weights. A new algorithm for television introscope operation, permitting an increase in the efficiency of data storage, is suggested.

  12. Supporting data and calculations for the NNWSI [Nevada Nuclear Waste Storage Investigations] project information management system concepts evaluation report

    International Nuclear Information System (INIS)

    1986-12-01

    This report presents the supporting data and calculations that provided the basis for the NNWSI Project Information Management System Concepts Evaluation Report. Project documentation estimates for numbers of documents and pages are presented for all nine Project participants. These estimates cover the time period from 1980 to 1990. In addition, the report presents a calculational method for estimating document and page volumes beyond the year 1990. Electronic character code and bit-mapped image storage requirements associated with the page volumes are also shown and the calculational method described. Six conceptual system approaches capable of satisfying NNWSI Project requirements are defined and described. These approaches include: fully centralized microfilm system based on computer-assisted retrieval (CAR) (Approach 1), partially distributed microfilm system based on CAR retrieval (Approach 2), fully distributed microfilm system based on CAR retrieval (Approach 3), fully centralized optical disk system based on electronic image and full-text retrieval (Approach 4), partially distributed optical disk system based on electronic image and full-text retrieval (Approach 5), and fully distributed optical disk system based on electronic image and full-text retrieval (Approach 6). All assumptions associated with these approaches are given. Data sheets in an appendix describe the capital equipment and labor components that were used as the basis of the cost evaluation. Definitions of two cost scenarios cover: (1) processing of all documents and pages and (2) processing of 10% of the total documents and 30% of the total pages. Capital equipment, labor, and summary cost tables for the years from 1987 through 1991 are presented for both scenarios. The report also describes a case for starting system operations in 1988 instead of 1987 and complete cost tables for the 1988 start-up case are given. 1 ref
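
    The character-code versus bit-mapped image storage estimates referred to above reduce to simple arithmetic once page volumes and per-page sizes are fixed; the figures below are placeholders for illustration only and are not the report's actual estimates.

```python
# Placeholder inputs, not values from the NNWSI report
pages = 2_000_000
ascii_bytes_per_page = 3_000      # ~3 KB of character-coded text per page
image_bytes_per_page = 75_000     # ~75 KB per compressed bit-mapped page image

ascii_total_gb = pages * ascii_bytes_per_page / 1e9
image_total_gb = pages * image_bytes_per_page / 1e9
print(f"character-coded storage: {ascii_total_gb:.1f} GB")
print(f"bit-mapped image storage: {image_total_gb:.1f} GB")
```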

  13. Tritium storage

    International Nuclear Information System (INIS)

    Hircq, B.

    1990-01-01

    This document is a synthesis on tritium storage. After indicating the main particularities of tritium storage, storage in gaseous and solid form is examined before choices are established as a function of the main criteria. Finally, tritium storage is discussed with regard to the tritium devices associated with fusion reactors and with regard to smaller devices [fr

  14. High-density optical data storage based on grey level recording in photobleaching polymers using two-photon excitation under ultrashort pulse and continuous wave illumination

    International Nuclear Information System (INIS)

    Ganic, D.; Day, D.; Gu, M.

    1999-01-01

    Full text: Two-photon excitation has been employed in three-dimensional optical data storage by many researchers in an attempt to increase the storage density of a given material. The probability of two-photon excitation is proportional to the squared intensity of the incident light; this effect produces excitation only within a small region of the focal spot. Another advantage of two-photon excitation is the use of infrared illumination, which results in reduced scattering and enables the recording of layers at great depth in a thick material. The storage density thus obtained using multi-layered bit optical recording can be as high as Tbit/cm3. To increase this storage density even further, grey-level recording can be employed. This method utilises variable exposure times of a laser beam focused into a photobleaching sample. As a result, the bleached area possesses a certain pixel value which depends upon the exposure time; this can increase the storage density many times depending upon the number of grey levels used. Our experiment shows that it is possible to attain grey-level recording using both ultrashort pulsed and continuous-wave illumination. Although continuous-wave illumination requires an average power approximately 2 orders of magnitude higher than that for ultrashort pulsed illumination, it is a preferred method of recording due to its relatively low system cost and compactness. Copyright (1999) Australian Optical Society

  15. Improving groundwater storage and soil moisture estimates by assimilating GRACE, SMOS, and SMAP data into CABLE using ensemble Kalman batch smoother and particle batch smoother frameworks

    Science.gov (United States)

    Han, S. C.; Tangdamrongsub, N.; Yeo, I. Y.; Dong, J.

    2017-12-01

    Soil moisture and groundwater storage are important information for a comprehensive understanding of the climate system and an accurate assessment of regional/global water resources. It is possible to derive water storage from land surface models, but the outputs are commonly biased by inaccurate forcing data, inefficacious model physics, and improper model parameter calibration. To mitigate the model uncertainty, observations (e.g., from remote sensing as well as ground in-situ data) are often integrated into the models via data assimilation (DA). This study aims to improve the estimation of soil moisture and groundwater storage by simultaneously assimilating satellite observations from the Gravity Recovery And Climate Experiment (GRACE), the Soil Moisture Ocean Salinity (SMOS) mission, and the Soil Moisture Active Passive (SMAP) mission into the Community Atmosphere Biosphere Land Exchange (CABLE) land surface model using the ensemble Kalman batch smoother (EnBS) and particle batch smoother (PBS) frameworks. The uncertainty of the GRACE observations is obtained rigorously from the full error variance-covariance matrix of the GRACE data product. This method demonstrates that the use of a realistic representation of GRACE uncertainty, which is spatially correlated in nature, leads to a higher accuracy of water storage computation. Additionally, the comparison between EnBS and PBS results is discussed to understand each filter's performance, limitations, and suitability. The joint DA is demonstrated in the Goulburn catchment, South-East Australia, where diverse ground observations (surface soil moisture, root-zone soil moisture, and groundwater level) are available for evaluation of our DA results. Preliminary results show that both smoothers provide significant improvement of surface soil moisture and groundwater storage estimates. Importantly, our developed DA scheme disaggregates the catchment-scale GRACE information into finer vertical and spatial scales (25 km). We present an
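
    The heart of an ensemble Kalman analysis (shown here as a single-time update rather than the batch smoother used in the study) can be sketched in a few lines of NumPy; the state dimension, observation operator and error levels are illustrative assumptions, not the CABLE/GRACE configuration.

```python
import numpy as np

def enkf_update(ensemble, obs, H, obs_cov, seed=3):
    """One perturbed-observation ensemble Kalman analysis step.
    ensemble : (n_state, n_members) prior ensemble
    obs      : (n_obs,) observation vector (e.g. a TWS anomaly)
    H        : (n_obs, n_state) observation operator
    obs_cov  : (n_obs, n_obs) observation error covariance
    """
    n_members = ensemble.shape[1]
    anomalies = ensemble - ensemble.mean(axis=1, keepdims=True)
    state_cov = anomalies @ anomalies.T / (n_members - 1)
    gain = state_cov @ H.T @ np.linalg.inv(H @ state_cov @ H.T + obs_cov)
    rng = np.random.default_rng(seed)
    perturbed = obs[:, None] + rng.multivariate_normal(
        np.zeros(len(obs)), obs_cov, size=n_members).T
    return ensemble + gain @ (perturbed - H @ ensemble)

# Tiny example: 3 state variables (two soil layers + groundwater), 1 column-total observation
prior = np.random.default_rng(4).normal(size=(3, 20))
H = np.ones((1, 3))
posterior = enkf_update(prior, np.array([0.5]), H, np.array([[0.1]]))
print(posterior.shape)
```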

  16. Monitoring climate and man-made induced variations in terrestrial water storage (TWS) across Africa using GRACE data

    Science.gov (United States)

    Ahmed, M. E.; Sultan, M.; Wahr, J. M.; Yan, E.; Bonin, J. A.; Chouinard, K.

    2012-12-01

    It is common practice for researchers engaged in research related to climate change to examine the temporal variations in relevant climatic parameters (e.g., temperature, precipitation) and to extract and examine drought indices reproduced from one or more such parameters. Drought indices (meteorological, agricultural and hydrological) define departures from normal conditions and are used as proxies for monitoring water availability. Many of these indices exclude significant controlling factor(s), do not work well in specific settings and regions, and often require long (≥50 yr) calibration time periods and substantial meteorological data, limiting their application in areas lacking adequate observational networks. Additional uncertainties are introduced by the models used in computing model-dependent indices. Aside from these uncertainties, none of these indices measure the variability in terrestrial water storage (TWS), a term that refers to the total vertically integrated water content in an area regardless of the reservoir in which it resides. Inter-annual trends in TWS were extracted from monthly Gravity Recovery and Climate Experiment (GRACE) data acquired (04/2002 to 08/2011) over Africa and correlated (in a GIS environment) with relevant temporal remote sensing, geologic, hydrologic, climatic, and topographic datasets. Findings include the following: (1) large sectors of Africa are undergoing statistically significant variations (+36 mm/yr to -16 mm/yr) due to natural and man-made causes; (2) warming of the tropical Atlantic ocean apparently intensified Atlantic monsoons and increased precipitation and TWS over western and central Africa's coastal plains, proximal mountainous source areas, and inland areas as far as central Chad; (3) warming in the central Indian Ocean decreased precipitation and TWS over eastern and southern Africa; (4) the high frequency of negative phases of the North Atlantic Oscillation (NAO) increased precipitation and TWS over

  17. Mass storage for microprocessor farms

    International Nuclear Information System (INIS)

    Areti, H.

    1990-01-01

    Experiments in high energy physics require high density and high speed mass storage. Mass storage is needed for data logging during the online data acquisition, data retrieval and storage during the event reconstruction and data manipulation during the physics analysis. This paper examines the storage and speed requirements at the first two stages of the experiments and suggests a possible starting point to deal with the problem. 3 refs., 3 figs

  18. An investigation of used electronics return flows: A data-driven approach to capture and predict consumers storage and utilization behavior

    International Nuclear Information System (INIS)

    Sabbaghi, Mostafa; Esmaeilian, Behzad; Raihanian Mashhadi, Ardeshir; Behdad, Sara; Cade, Willie

    2015-01-01

    Highlights: • We analyzed a data set of HDDs returned back to an e-waste collection site. • We studied factors that affect the storage behavior. • Consumer type, brand and size are among factors which affect the storage behavior. • Commercial consumers have stored computers more than household consumers. • Machine learning models were used to predict the storage behavior. - Abstract: Consumers often have a tendency to store their used, old or non-functional electronics for a period of time before they discard them and return them to the waste stream. This behavior increases the obsolescence rate of used, still-functional products, reducing the profitability that could result from End-of-Use (EOU) treatments such as reuse, upgrade, and refurbishment. These types of behaviors are influenced by several product- and consumer-related factors such as consumers' traits and lifestyles, technology evolution, product design features, product market value, and pro-environmental stimuli. A better understanding of different groups of consumers, their utilization and storage behavior, and the connection of these behaviors with product design features helps Original Equipment Manufacturers (OEMs) and the recycling and recovery industry to better overcome the challenges resulting from the undesirable storage of used products. This paper aims at providing an insightful statistical analysis of the dynamic nature of Electronic Waste (e-waste) by studying the effects of design characteristics, brand and consumer type on electronics usage time and end-of-use time-in-storage. A database consisting of 10,063 Hard Disk Drives (HDDs) from used personal computers returned to a remanufacturing facility located in Chicago, IL, USA during 2011–2013 has been selected as the basis for this study. The results show that commercial consumers have stored computers longer than household consumers regardless of brand and capacity factors. Moreover, a heterogeneous storage behavior is

  19. An investigation of used electronics return flows: A data-driven approach to capture and predict consumers storage and utilization behavior

    Energy Technology Data Exchange (ETDEWEB)

    Sabbaghi, Mostafa, E-mail: mostafas@buffalo.edu [Industrial and Systems Engineering Department, State University of New York, University at Buffalo, 437 Bell Hall, Buffalo, NY (United States); Esmaeilian, Behzad, E-mail: b.esmaeilian@neu.edu [Healthcare Systems Engineering Institute, Northeastern University, Boston, MA 02115 (United States); Raihanian Mashhadi, Ardeshir, E-mail: ardeshir@buffalo.edu [Mechanical and Aerospace Engineering, State University of New York, University at Buffalo, 437 Bell Hall, Buffalo, NY (United States); Behdad, Sara, E-mail: sarabehd@buffalo.edu [Industrial and Systems Engineering Department, State University of New York, University at Buffalo, 437 Bell Hall, Buffalo, NY (United States); Mechanical and Aerospace Engineering, State University of New York, University at Buffalo, 437 Bell Hall, Buffalo, NY (United States); Cade, Willie, E-mail: willie@pcrr.com [PC Rebuilder and Recyclers, 4734 W Chicago Ave, Chicago, IL 60651-3322 (United States)

    2015-02-15

    Highlights: • We analyzed a data set of HDDs returned to an e-waste collection site. • We studied factors that affect storage behavior. • Consumer type, brand and size are among the factors that affect storage behavior. • Commercial consumers stored computers longer than household consumers. • Machine learning models were used to predict storage behavior. - Abstract: Consumers often tend to store their used, old or non-functional electronics for a period of time before discarding them and returning them to the waste stream. This behavior increases the obsolescence rate of used, still-functional products, reducing the profitability that could result from End-of-Use (EOU) treatments such as reuse, upgrade, and refurbishment. These behaviors are influenced by several product- and consumer-related factors, such as consumers’ traits and lifestyles, technology evolution, product design features, product market value, and pro-environmental stimuli. A better understanding of different groups of consumers, their utilization and storage behavior, and the connection of these behaviors with product design features helps Original Equipment Manufacturers (OEMs) and the recycling and recovery industry overcome the challenges resulting from the undesirable storage of used products. This paper provides a statistical analysis of the dynamic nature of Electronic Waste (e-waste) by studying the effects of design characteristics, brand, and consumer type on electronics usage time and end-of-use time-in-storage. A database of 10,063 Hard Disk Drives (HDD) from used personal computers returned to a remanufacturing facility located in Chicago, IL, USA during 2011–2013 was selected as the basis for this study. The results show that commercial consumers stored computers longer than household consumers, regardless of brand and capacity. Moreover, a heterogeneous storage behavior is

  20. Using Emergent and Internal Catchment Data to Elucidate the Influence of Landscape Structure and Storage State on Hydrologic Response in a Piedmont Watershed

    Science.gov (United States)

    Putnam, S. M.; Harman, C. J.

    2017-12-01

    Many studies have sought to unravel the influence of landscape structure and catchment state on the quantity and composition of water at the catchment outlet. These studies run into issues of equifinality, where multiple conceptualizations of flow pathways or storage states cannot be distinguished on the basis of the quantity and composition of water alone. Here we aim to parse out the influence of landscape structure, flow pathways, and storage on both the observed catchment hydrograph and chemograph, using hydrometric and water isotope data collected from multiple locations within Pond Branch, a 37-hectare Piedmont catchment of the eastern US. These data are used to infer the quantity and age distribution of water stored and released by individual hydrogeomorphic units, and by the catchment as a whole, in order to test hypotheses relating landscape structure, flow pathways, and catchment storage to the hydrograph and chemograph. Initial hypotheses relating internal catchment properties or processes to the hydrograph or chemograph are formed at the catchment scale. Data from Pond Branch include spring and catchment discharge measurements, well water levels, and soil moisture, as well as three years of high-frequency precipitation and surface-water stable water isotope data. The catchment hydrograph is deconstructed using hydrograph separation, and the quantity of water associated with each time-scale of response is compared to the quantity of discharge that could be produced from hillslope and riparian hydrogeomorphic units. Storage is estimated for each hydrogeomorphic unit as well as the vadose zone, in order to construct a continuous time series of total storage broken down by landscape unit. Rank StorAge Selection (rSAS) functions are parameterized for each hydrogeomorphic unit as well as the catchment as a whole, and the relative importance of changing proportions of discharge from each unit as well as storage in controlling the variability in the catchment
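
    The record states that the catchment hydrograph is deconstructed using hydrograph separation but does not say which method. The sketch below illustrates one widely used option, the Lyne-Hollick recursive digital filter, assuming a single forward pass; the filter parameter, units, and synthetic discharge series are illustrative and not taken from the study.

```python
# One common hydrograph-separation technique (Lyne-Hollick recursive digital
# filter), shown only as an illustration; the record does not state which
# separation method the authors applied. Parameter and data are illustrative.
import numpy as np

def lyne_hollick_baseflow(q, alpha=0.925):
    """Split a discharge series q into (baseflow, quickflow), single forward pass."""
    q = np.asarray(q, dtype=float)
    quick = np.zeros_like(q)
    for i in range(1, q.size):
        f = alpha * quick[i - 1] + 0.5 * (1 + alpha) * (q[i] - q[i - 1])
        # Constrain quickflow so that 0 <= baseflow <= total discharge.
        quick[i] = min(max(f, 0.0), q[i])
    return q - quick, quick

# Synthetic storm hydrograph (L/s) receding toward a dry-weather baseflow.
q = 5.0 + 20.0 * np.exp(-0.3 * np.arange(48))
baseflow, quickflow = lyne_hollick_baseflow(q)
print("baseflow fraction of total discharge:", baseflow.sum() / q.sum())
```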