WorldWideScience

Sample records for data files

  1. Decay data file based on the ENSDF file

    Energy Technology Data Exchange (ETDEWEB)

Katakura, J. [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment]

    1997-03-01

A decay data file in the JENDL (Japanese Evaluated Nuclear Data Library) format, based on the ENSDF (Evaluated Nuclear Structure Data File), was produced as a tentative version of one of the JENDL special purpose files. Problems in using the ENSDF file as the primary data source for the JENDL decay data file are presented. (author)

  2. Fast probabilistic file fingerprinting for big data.

    Science.gov (United States)

    Tretyakov, Konstantin; Laur, Sven; Smant, Geert; Vilo, Jaak; Prins, Pjotr

    2013-01-01

Biological data acquisition is raising new challenges, both in data analysis and handling. Not only is it proving hard to analyze the data at the rate it is generated today, but simply reading and transferring data files can be prohibitively slow due to their size. This primarily concerns logistics within and between data centers, but is also important for workstation users in the analysis phase. Common usage patterns, such as comparing and transferring files, are proving computationally expensive and are tying down shared resources. We present an efficient method for calculating file uniqueness for large scientific data files that takes less computational effort than existing techniques. This method, called Probabilistic Fast File Fingerprinting (PFFF), exploits the variation present in biological data and computes file fingerprints by sampling randomly from the file instead of reading it in full. Consequently, it has a flat performance characteristic, correlated with data variation rather than file size. We demonstrate that probabilistic fingerprinting can be as reliable as existing hashing techniques, with provably negligible risk of collisions. We measure the performance of the algorithm on a number of data storage and access technologies, identifying its strengths as well as limitations. Probabilistic fingerprinting may significantly reduce the use of computational resources when comparing very large files. Utilisation of probabilistic fingerprinting techniques can increase the speed of common file-related workflows, both in the data center and for workbench analysis. The implementation of the algorithm is available as an open-source tool named pfff, as a command-line tool as well as a C library. The tool can be downloaded from http://biit.cs.ut.ee/pfff.
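    The following is a minimal Python sketch of the sampling idea described above, not the published pfff implementation: the hash choice, chunk size, sample count, and fixed seed are illustrative assumptions, and all parties comparing fingerprints must agree on them.

        import hashlib
        import os
        import random

        def sampled_fingerprint(path, samples=64, chunk=16, seed=0):
            """Hash a few chunks read at pseudo-random offsets, so the
            cost stays flat instead of growing with file size."""
            size = os.path.getsize(path)
            h = hashlib.sha1(str(size).encode())  # file size is part of the fingerprint
            rng = random.Random(seed)             # fixed seed: all parties sample the same offsets
            with open(path, "rb") as f:
                for _ in range(samples):
                    f.seek(rng.randrange(max(size - chunk, 1)))
                    h.update(f.read(chunk))
            return h.hexdigest()

    As the abstract argues, collisions stay negligible only because biological data varies throughout the file; two files that differ only in a few unsampled bytes would collide.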

  3. Evaluated nuclear-data file for niobium

    International Nuclear Information System (INIS)

    Smith, A.B.; Smith, D.L.; Howerton, R.J.

    1985-03-01

A comprehensive evaluated nuclear-data file for elemental niobium is provided in the ENDF/B format. This file, extending over the energy range 10⁻¹¹ to 20 MeV, is suitable for comprehensive neutronic calculations, particularly those dealing with fusion-energy systems. It also provides dosimetry information. Attention is given to the internal consistency of the file, energy balance, and the quantitative specification of uncertainties. Comparisons are made with experimental data and previous evaluated files. The results of integral tests are described and remaining outstanding problem areas are cited. 107 refs

  4. Adding Data Management Services to Parallel File Systems

    Energy Technology Data Exchange (ETDEWEB)

Brandt, Scott [Univ. of California, Santa Cruz, CA (United States)]

    2015-03-04

The objective of this project, called DAMASC for “Data Management in Scientific Computing”, is to coalesce data management with parallel file system management to present a declarative interface to scientists for managing, querying, and analyzing extremely large data sets efficiently and predictably. Managing extremely large data sets is a key challenge of exascale computing. The overhead, energy, and cost of moving massive volumes of data demand designs where computation is close to storage. In current architectures, compute/analysis clusters access data in a physically separate parallel file system and largely leave it to the scientist to reduce data movement. Over the past decades the high-end computing community has adopted middleware with multiple layers of abstractions and specialized file formats such as NetCDF-4 and HDF5. These abstractions provide a limited set of high-level data processing functions, but have inherent functionality and performance limitations: middleware that provides access to the highly structured contents of scientific data files stored in the (unstructured) file systems can only optimize to the extent that file system interfaces permit; the highly structured formats of these files often impede native file system performance optimizations. We are developing Damasc, an enhanced high-performance file system with native rich data management services. Damasc will enable efficient queries and updates over files stored in their native byte-stream format while retaining the inherent performance of file system data storage via declarative queries and updates over views of underlying files. Damasc has four key benefits for the development of data-intensive scientific code: (1) applications can use important data-management services, such as declarative queries, views, and provenance tracking, that are currently available only within database systems; (2) the use of these services becomes easier, as they are provided within a familiar file

  5. JNDC FP decay data file

    International Nuclear Information System (INIS)

    Yamamoto, Tohru; Akiyama, Masatsugu

    1981-02-01

The decay data file for fission product nuclides (FP DECAY DATA FILE) has been prepared for summation calculation of the decay heat of fission products. The average energies released in β- and γ-transitions have been calculated with the computer code PROFP. The calculated results and necessary information have been arranged in tabular form, together with the estimated results for 470 nuclides for which decay data are not available experimentally. (author)

  6. Titanium-II: an evaluated nuclear data file

    International Nuclear Information System (INIS)

    Philis, C.; Howerton, R.; Smith, A.B.

    1977-06-01

    A comprehensive evaluated nuclear data file for elemental titanium is outlined including definition of the data base, the evaluation procedures and judgments, and the final evaluated results. The file describes all significant neutron-induced reactions with elemental titanium and the associated photon-production processes to incident neutron energies of 20.0 MeV. In addition, isotopic-reaction files, consistent with the elemental file, are separately defined for those processes which are important to applied considerations of material-damage and neutron-dosimetry. The file is formulated in the ENDF format. This report formally documents the evaluation and, together with the numerical file, is submitted for consideration as a part of the ENDF/B-V evaluated file system. 20 figures, 9 tables

  7. Method for data compression by associating complex numbers with files of data values

    Science.gov (United States)

    Feo, John Thomas; Hanks, David Carlton; Kraay, Thomas Arthur

    1998-02-10

    A method for compressing data for storage or transmission. Given a complex polynomial and a value assigned to each root, a root generated data file (RGDF) is created, one entry at a time. Each entry is mapped to a point in a complex plane. An iterative root finding technique is used to map the coordinates of the point to the coordinates of one of the roots of the polynomial. The value associated with that root is assigned to the entry. An equational data compression (EDC) method reverses this procedure. Given a target data file, the EDC method uses a search algorithm to calculate a set of m complex numbers and a value map that will generate the target data file. The error between a simple target data file and generated data file is typically less than 10%. Data files can be transmitted or stored without loss by transmitting the m complex numbers, their associated values, and an error file whose size is at most one-tenth of the size of the input data file.
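    As a rough illustration of the decompression direction described above, the sketch below (an assumption-laden toy in Python, not the patented method) maps each complex point to one of the roots of z³ - 1 by Newton iteration and emits the value attached to that root.

        import numpy as np

        roots = np.array([1.0, -0.5 + 0.866j, -0.5 - 0.866j])  # roots of z**3 - 1
        values = {0: 7, 1: 42, 2: 99}                           # hypothetical value map

        def root_index(z, iters=50):
            """Newton iteration for p(z) = z**3 - 1; returns the index of
            the root in whose basin of attraction the point z lies."""
            for _ in range(iters):
                z = z - (z**3 - 1) / (3 * z**2)
            return int(np.argmin(abs(roots - z)))

        # Each file entry corresponds to a point in the complex plane;
        # its reconstructed value is the value assigned to its root.
        points = [0.9 + 0.1j, -0.4 + 0.7j, -0.4 - 0.7j]
        print([values[root_index(z)] for z in points])          # -> [7, 42, 99]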

  8. Analyzing Log Files using Data-Mining

    Directory of Open Access Journals (Sweden)

    Marius Mihut

    2008-01-01

Information systems (i.e. servers, applications and communication devices) create a large amount of monitoring data that are saved as log files. For analyzing them, a data-mining approach is helpful. This article presents the steps which are necessary for creating an ‘analyzing instrument’, based on an open source software called Waikato Environment for Knowledge Analysis (Weka) [1]. For exemplification, a system log file created by a Windows-based operating system is used as input file.
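    Weka ingests data in its ARFF format, so the first step of such an 'analyzing instrument' is typically a log-to-ARFF conversion. Below is a small Python sketch under an assumed, simplified log-line layout (date, time, level, source); a real Windows system log would need its own parser.

        import re

        LINE = re.compile(r"^(\S+)\s+(\S+)\s+(INFO|WARN|ERROR)\s+(\S+)")

        def log_to_arff(log_path, arff_path):
            """Convert matching log lines into a Weka-readable ARFF file."""
            with open(log_path) as f:
                rows = [m.groups() for m in (LINE.match(line) for line in f) if m]
            with open(arff_path, "w") as out:
                out.write("@relation syslog\n")
                out.write("@attribute date string\n")
                out.write("@attribute time string\n")
                out.write("@attribute level {INFO,WARN,ERROR}\n")
                out.write("@attribute source string\n")
                out.write("@data\n")
                for date, time, level, source in rows:
                    out.write(f"'{date}','{time}',{level},'{source}'\n")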

  9. Evaluated nuclear data file of Th-232

    International Nuclear Information System (INIS)

    Meadows, J.; Poenitz, W.; Smith, A.; Smith, D.; Whalen, J.; Howerton, R.

    1977-09-01

An evaluated nuclear data file for thorium is described. The file extends over the energy range 0.049 MeV (i.e., the inelastic-scattering threshold) to 20.0 MeV and is formulated within the framework of the ENDF system. The input data base, the evaluation procedures and judgments, and ancillary experiments carried out in conjunction with the evaluation are outlined. The file includes: neutron total cross sections, neutron scattering processes, neutron radiative capture cross sections, fission cross sections, (n;2n) and (n;3n) processes, fission properties (e.g., nu-bar and delayed neutron emission) and photon production processes. Regions of uncertainty are pointed out, particularly where new measured results would be of value. The file is extended to thermal energies using previously reported resonance evaluations, thereby providing a complete file for neutronic calculations. Integral data tests indicated that the file was suitable for neutronic calculations in the MeV range

  10. Benchmark comparisons of evaluated nuclear data files

    International Nuclear Information System (INIS)

    Resler, D.A.; Howerton, R.J.; White, R.M.

    1994-05-01

With the availability and maturity of several evaluated nuclear data files, it is timely to compare the results of integral tests with calculations using these different files. We discuss here our progress in making integral benchmark tests of the following nuclear data files: ENDL-94, ENDF/B-V and -VI, JENDL-3, JEF-2, and BROND-2. The methods used to process these evaluated libraries in a consistent way into applications files for use in Monte Carlo calculations are presented. Using these libraries, we are calculating and comparing to experiment k-eff for 68 fast critical assemblies of ²³³,²³⁵U and ²³⁹Pu with reflectors of various materials and thicknesses

  11. DMFS: A Data Migration File System for NetBSD

    Science.gov (United States)

    Studenmund, William

    2000-01-01

    I have recently developed DMFS, a Data Migration File System, for NetBSD. This file system provides kernel support for the data migration system being developed by my research group at NASA/Ames. The file system utilizes an underlying file store to provide the file backing, and coordinates user and system access to the files. It stores its internal metadata in a flat file, which resides on a separate file system. This paper will first describe our data migration system to provide a context for DMFS, then it will describe DMFS. It also will describe the changes to NetBSD needed to make DMFS work. Then it will give an overview of the file archival and restoration procedures, and describe how some typical user actions are modified by DMFS. Lastly, the paper will present simple performance measurements which indicate that there is little performance loss due to the use of the DMFS layer.

  12. An evaluated neutronic data file for elemental zirconium

    International Nuclear Information System (INIS)

    Smith, A.B.; Chiba, S.

    1994-09-01

    A comprehensive evaluated neutronic data file for elemental zirconium is derived and presented in the ENDF/B-VI formats. The derivation is based upon measured microscopic nuclear data, augmented by model calculations as necessary. The primary objective is a quality contemporary file suitable for fission-reactor development extending from conventional thermal to fast and innovative systems. This new file is a significant improvement over previously available evaluated zirconium files, in part, as a consequence of extensive new experimental measurements reported elsewhere

  13. Status of the evaluated nuclear structure data file

    International Nuclear Information System (INIS)

    Martin, M.J.

    1991-01-01

The structure, organization, and contents of the Evaluated Nuclear Structure Data File (ENSDF) are discussed in this paper. This file contains a summary of the state of experimental nuclear structure data for all nuclides as determined from consideration of measurements reported worldwide in the literature. Special emphasis is given to the data evaluation procedures, the consistency checks, and the quality control utilized at the input stage and to the retrieval capabilities of the system at the output stage. Recent enhancements of the on-line interaction with the file contents are addressed, as well as procedural changes that will improve the currency of the file

  14. A basic evaluated neutronic data file for elemental scandium

    International Nuclear Information System (INIS)

    Smith, A.B.; Meadows, J.W.; Howerton, R.J.

    1992-01-01

    This report documents an evaluated neutronic data file for elemental scandium, presented in the ENDF/B-VI format. This file should provide basic nuclear data essential for neutronic calculations involving elemental scandium. No equivalent file was previously available

  15. Extracting the Data From the LCM vk4 Formatted Output File

    Energy Technology Data Exchange (ETDEWEB)

    Wendelberger, James G. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2018-01-29

These are slides about extracting the data from the LCM vk4 formatted output file. The following is covered: vk4 file produced by Keyence VK Software, custom analysis, no off-the-shelf way to read the file, reading the binary data in a vk4 file, various offsets in decimal lines, finding the height image data, directly in MATLAB, binary output beginning of height image data, color image information, color image binary data, color image decimal and binary data, MATLAB code to read vk4 file (choose a file, read the file, compute offsets, read optical image, laser optical image, read and compute laser intensity image, read height image, timing, display height image, display laser intensity image, display RGB laser optical images, display RGB optical images, display beginning data and save images to workspace, gamma correction subroutine), reading intensity from the vk4 file, linear in the low range, linear in the high range, gamma correction for vk4 files, computing the gamma intensity correction, observations.
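    The slides read the vk4 binary in MATLAB by seeking to known byte offsets. A Python equivalent of that pattern is sketched below; the actual vk4 offsets and field widths are not reproduced here, so the layout used is a placeholder.

        import struct

        def read_u32(f, offset):
            """Read one little-endian unsigned 32-bit integer at a byte offset."""
            f.seek(offset)
            return struct.unpack("<I", f.read(4))[0]

        with open("scan.vk4", "rb") as f:
            # Placeholder layout: a header followed by a table of u32 offsets.
            height_offset = read_u32(f, 12)           # offset of the height image block
            width = read_u32(f, height_offset)
            height = read_u32(f, height_offset + 4)
            f.seek(height_offset + 8)
            pixels = struct.unpack(f"<{width * height}I", f.read(4 * width * height))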

16. COMBINING THE VERNAM CIPHER AND END OF FILE ALGORITHMS FOR DATA SECURITY

    Directory of Open Access Journals (Sweden)

    Christy Atika Sari

    2014-10-01

Because the cryptographic and steganographic methods serve a similar purpose in securing data, this paper combines the Vernam cipher, one of the popular algorithms in cryptography, with the End Of File (EOF) method of steganography. The Vernam cipher is able to conceal data because encryption and decryption use the same key, derived from an XOR operation between the plaintext bits and the key bits. EOF, in turn, is known as a development of the Least Significant Bit (LSB) method; it can be used to embed data of whatever size is required. In this study an original file in .mp3 format and a spoofing file in .pdf format were used; the resulting stego file was successfully extracted back into the original file and the spoofing file. The size of a file after the embedding process equals the size of the file before embedding plus the size of the data inserted into it. Keywords: Vernam Cipher, End Of File, Cryptography, Steganography.
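    A minimal Python sketch of the combination described above: a repeating-key XOR (Vernam-style) cipher plus EOF-method embedding, i.e., appending the ciphertext after the end of the cover file. The marker byte string is a hypothetical delimiter, and a true one-time pad would require a key as long as the payload.

        import itertools

        def vernam(data: bytes, key: bytes) -> bytes:
            """XOR each data byte with the key stream; the same call
            both encrypts and decrypts."""
            return bytes(b ^ k for b, k in zip(data, itertools.cycle(key)))

        MARKER = b"--STEGO--"  # hypothetical delimiter for the appended payload

        def embed(cover_path, payload: bytes, key: bytes, out_path):
            with open(cover_path, "rb") as f:
                cover = f.read()
            with open(out_path, "wb") as f:  # EOF method: write after the cover data
                f.write(cover + MARKER + vernam(payload, key))

        def extract(stego_path, key: bytes) -> bytes:
            with open(stego_path, "rb") as f:
                data = f.read()
            return vernam(data.split(MARKER, 1)[1], key)

    Note how the stego file size equals the cover size plus the embedded data (and marker) size, matching the paper's observation.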

  17. ENSDF: The evaluated nuclear structure data file

    International Nuclear Information System (INIS)

    Martin, M.J.

    1986-01-01

The structure, organization, and contents of the Evaluated Nuclear Structure Data File, ENSDF, will be discussed. This file summarizes the state of experimental nuclear structure data for all nuclei as determined from consideration of measurements reported worldwide. Special emphasis will be given to the data evaluation procedures and consistency checks utilized at the input stage and to the retrieval capabilities of the system at the output stage

  18. An evaluated neutronic data file for elemental cobalt

    Energy Technology Data Exchange (ETDEWEB)

    Guenther, P.; Lawson, R.; Meadows, J.; Sugimoto, M.; Smith, A.; Smith, D.; Howerton, R.

    1988-08-01

A comprehensive evaluated neutronic data file for elemental cobalt is described. The experimental data base, the calculational methods, the evaluation techniques and judgments, and the physical content are outlined. The file contains: neutron total and scattering cross sections and associated properties, (n,2n) and (n,3n) processes, neutron radiative capture processes, charged-particle-emission processes, and photon-production processes. The file extends from 10⁻⁵ eV to 20 MeV, and is presented in the ENDF/B-VI format. Detailed attention is given to the uncertainties and correlations associated with the prominent neutron-induced processes. The numerical contents of the file have been transmitted to the National Nuclear Data Center, Brookhaven National Laboratory. 143 refs., 16 figs., 5 tabs.

  19. Sandia Data Archive (SDA) file specifications

    Energy Technology Data Exchange (ETDEWEB)

    Dolan, Daniel H. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ao, Tommy [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-02-01

The Sandia Data Archive (SDA) format is a specific implementation of the HDF5 (Hierarchical Data Format version 5) standard. The format was developed for storing data in a universally accessible manner. SDA files may contain one or more data records, each associated with a distinct text label. Primitive records provide basic data storage, while compound records support more elaborate grouping. External records allow text/binary files to be carried inside an archive and later recovered. This report documents version 1.0 of the SDA standard. The information provided here is sufficient for reading from and writing to an archive. Although the format was originally designed for use in MATLAB, broader use is encouraged.
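    Since SDA is an HDF5 layout, an archive can be inspected or written with any HDF5 binding. The sketch below uses Python's h5py; the group and attribute names are illustrative assumptions, not quotations from the SDA 1.0 specification.

        import h5py
        import numpy as np

        # Write one primitive record under a distinct text label.
        with h5py.File("archive.sda", "w") as f:
            f.attrs["FileFormat"] = "SDA"          # illustrative root attributes
            f.attrs["FormatVersion"] = "1.0"
            rec = f.create_group("voltage trace")  # the record's text label
            rec.attrs["RecordType"] = "primitive"
            rec.create_dataset("data", data=np.linspace(0.0, 1.0, 5))

        # Read the record back by its label.
        with h5py.File("archive.sda", "r") as f:
            print(f["voltage trace/data"][...])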

  20. DataNet: A flexible metadata overlay over file resources

    CERN Multimedia

    CERN. Geneva

    2014-01-01

Managing and sharing data stored in files is a challenge due to the data volumes produced by various scientific experiments [1]. While solutions such as Globus Online [2] focus on file transfer and synchronization, in this work we propose an additional layer of metadata over file resources which helps to categorize and structure the data, as well as to make it efficient to integrate with web-based research gateways. A basic concept of the proposed solution [3] is a data model consisting of entities built from primitive types such as numbers and texts, and also from files and relationships among different entities. This allows for building complex data structure definitions and mixing metadata and file data into a single model tailored for a given scientific field. A data model becomes actionable after being deployed as a data repository, which is done automatically by the proposed framework by using one of the available PaaS (platform-as-a-service) platforms, and is exposed to the world as a REST service, which...

  1. EVALUATED NUCLEAR STRUCTURE DATA FILE. A MANUAL FOR PREPARATION OF DATA SETS

    International Nuclear Information System (INIS)

    TULI, J.K.

    2001-01-01

    This manual describes the organization and structure of the Evaluated Nuclear Structure Data File (ENSDF). This computer-based file is maintained by the National Nuclear Data Center (NNDC) at Brookhaven National Laboratory for the international Nuclear Structure and Decay Data Network. For every mass number (presently, A ≤ 293), the Evaluated Nuclear Structure Data File (ENSDF) contains evaluated structure information. For masses A ≥ 44, this information is published in the Nuclear Data Sheets; for A < 44, ENSDF is based on compilations published in the journal Nuclear Physics. The information in ENSDF is updated by mass chain or by nuclide with a varying cycle time dependent on the availability of new information

  2. Evaluated nuclear data file ENDF/B-VI

    International Nuclear Information System (INIS)

    Dunford, C.L.

    1991-01-01

    For the past 25 years, the United States Department of Energy has sponsored a cooperative program among its laboratories, contractors and university research programs to produce an evaluated nuclear data library which would be application independent and universally accepted. The product of this cooperative activity is the ENDF/B evaluated nuclear data file. After approximately eight years of development, a new version of the data file, ENDF/B-VI has been released. The essential features of this evaluated data library are described in this paper. 7 refs

  3. Distributed PACS using distributed file system with hierarchical meta data servers.

    Science.gov (United States)

    Hiroyasu, Tomoyuki; Minamitani, Yoshiyuki; Miki, Mitsunori; Yokouchi, Hisatake; Yoshimi, Masato

    2012-01-01

In this research, we propose a new distributed PACS (Picture Archiving and Communication Systems) that can integrate the several PACSs that exist in individual medical institutions. A conventional PACS manages DICOM files in a single database. In the proposed system, by contrast, a DICOM file is separated into meta data and image data, which are stored individually. With this mechanism, since the entire file does not always have to be accessed, operations such as finding files, changing titles, and so on can be performed at high speed. At the same time, as a distributed file system is utilized, access to image files also achieves high speed and high fault tolerance. The proposed system has a further significant point: the simplicity of integrating several PACSs. In the proposed system, only the meta data servers need to be integrated to construct an integrated system. The system also scales file access with the number and size of files. On the other hand, because the meta data server is centralized, it is the weak point of the system. To overcome this defect, hierarchical meta data servers are introduced. With this mechanism, not only is fault tolerance increased, but the scalability of file access is also increased. To evaluate the proposed system, a prototype was implemented using Gfarm, and its file search times were compared with those of NFS.
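    The split the record describes, meta data to one server and image data to another, can be illustrated with pydicom; the output paths stand in for the meta data server and the distributed file system, which are assumptions of this sketch.

        import pydicom

        ds = pydicom.dcmread("study.dcm")  # one DICOM object from a modality
        pixels = ds.PixelData             # raw image data -> distributed file system
        del ds.PixelData                  # what remains is the meta data
        ds.save_as("study_meta.dcm")      # meta data -> meta data server
        with open("study_pixels.raw", "wb") as f:
            f.write(pixels)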

  4. Nuclear plant fire incident data file

    International Nuclear Information System (INIS)

    Sideris, A.G.; Hockenbury, R.W.; Yeater, M.L.; Vesely, W.E.

    1979-01-01

    A computerized nuclear plant fire incident data file was developed by American Nuclear Insurers and was further analyzed by Rensselaer Polytechnic Institute with technical and monetary support provided by the Nuclear Regulatory Commission. Data on 214 fires that occurred at nuclear facilities have been entered in the file. A computer program has been developed to sort the fire incidents according to various parameters. The parametric sorts that are presented in this article are significant since they are the most comprehensive statistics presently available on fires that have occurred at nuclear facilities

  5. JENDL special purpose data files and related nuclear data

    International Nuclear Information System (INIS)

    Iijima, Shungo

    1989-01-01

The objectives of the JENDL Special Purpose Data Files under development are the applications of nuclear data to the evaluation of the fuel cycle, nuclear activation, and radiation damage. The planned files consist of nine types of data, viz., the actinide cross sections, the decay data, the activation cross sections, the (α,n) cross sections, the photo-reaction cross sections, the dosimetry cross sections, the gas production cross sections, the primary knock-on atom spectra and KERMA factors, and the data for standards. The status of the compilation and the evaluation of these data is briefly reviewed. In particular, the features of the data required for the evaluation of the activation cross sections, (α,n) cross sections, photo-reaction cross sections, and PKA data are discussed in some detail. The need for a realistic definition of the scope of the work is emphasized. (author)

  6. Analyzing data files in SWAN

    CERN Document Server

    Gajam, Niharika

    2016-01-01

Traditionally, analyzing data happens via batch processing and interactive work on the terminal. The project aims to provide another way of analyzing data files: a cloud-based approach. It aims to provide a productive and interactive environment through the combination of the FCC and SWAN software.

  7. DATA Act File C Award Financial - Social Security

    Data.gov (United States)

    Social Security Administration — The DATA Act Information Model Schema Reporting Submission Specification File C. File C includes the agency award information from the financial accounting system at...

  8. Development of EDFSRS: evaluated data files storage and retrieval system

    International Nuclear Information System (INIS)

    Hasegawa, Akira

    1985-07-01

EDFSRS: Evaluated Data Files Storage and Retrieval System has been developed as a complete service system for the evaluated nuclear data files compiled in the three major formats: ENDF/B, UKNDL and KEDAK. The system is intended to give data base administrators efficient loading and maintenance of evaluated nuclear data files, and to give their users efficient and reliable retrievals. It can give users all of the information available in these three major formats. The system consists of more than fifteen independent programs and some 150 megabytes of data files and index files (the data base) of the loaded data. In addition, it is designed to be operated in the on-line TSS (Time Sharing System) mode, so that users can get any information from their desk-top terminals. This report is prepared as a reference manual of the EDFSRS. (author)

  9. The file of evaluated decay data in ENDF/B

    International Nuclear Information System (INIS)

    Reich, C.W.

    1991-01-01

One important application of nuclear decay data is the Evaluated Nuclear Data File/B (ENDF/B), the base of evaluated nuclear data used in reactor research and technology activities within the United States. The decay data in the Activation File (158 nuclides) and the Actinide File (108 nuclides) represent the current status of this information well. In particular, the half-lives and gamma and alpha emission probabilities of the actinide nuclides, quantities that are so important for many applications, represent a significant improvement over those in ENDF/B-V because of the inclusion of data produced by an International Atomic Energy Agency Coordinated Research Program. The Fission Product File contains experimental decay data on ∼510 nuclides, which is essentially all for which a meaningful number of data are available. For the first time, delayed-neutron spectra for the precursor nuclides are included. Some hint of problems in the fission product data base is provided by the gamma decay heat following a burst irradiation of ²³⁹Pu

  10. Processing and validation of intermediate energy evaluated data files

    International Nuclear Information System (INIS)

    2000-01-01

    Current accelerator-driven and other intermediate energy technologies require accurate nuclear data to model the performance of the target/blanket assembly, neutron production, activation, heating and damage. In a previous WPEC subgroup, SG13 on intermediate energy nuclear data, various aspects of intermediate energy data, such as nuclear data needs, experiments, model calculations and file formatting issues were investigated and categorized to come to a joint evaluation effort. The successor of SG13, SG14 on the processing and validation of intermediate energy evaluated data files, goes one step further. The nuclear data files that have been created with the aforementioned information need to be processed and validated in order to be applicable in realistic intermediate energy simulations. We emphasize that the work of SG14 excludes the 0-20 MeV data part of the neutron evaluations, which is supposed to be covered elsewhere. This final report contains the following sections: section 2: a survey of the data files above 20 MeV that have been considered for validation in SG14; section 3: a summary of the review of the 150 MeV intermediate energy data files for ENDF/B-VI and, more briefly, the other libraries; section 4: validation of the data library against an integral experiment with MCNPX; section 5: conclusions. (author)

  11. Data File Standard for Flow Cytometry, version FCS 3.1.

    Science.gov (United States)

    Spidlen, Josef; Moore, Wayne; Parks, David; Goldberg, Michael; Bray, Chris; Bierre, Pierre; Gorombey, Peter; Hyun, Bill; Hubbard, Mark; Lange, Simon; Lefebvre, Ray; Leif, Robert; Novo, David; Ostruszka, Leo; Treister, Adam; Wood, James; Murphy, Robert F; Roederer, Mario; Sudar, Damir; Zigon, Robert; Brinkman, Ryan R

    2010-01-01

The flow cytometry data file standard provides the specifications needed to completely describe flow cytometry data sets within the confines of the file containing the experimental data. In 1984, the first Flow Cytometry Standard format for data files was adopted as FCS 1.0. This standard was modified in 1990 as FCS 2.0 and again in 1997 as FCS 3.0. We report here on the next generation flow cytometry standard data file format. FCS 3.1 is a minor revision based on suggested improvements from the community. The unchanged goal of the standard is to provide a uniform file format that allows files created by one type of acquisition hardware and software to be analyzed by any other type. The FCS 3.1 standard retains the basic FCS file structure and most features of previous versions of the standard. Changes included in FCS 3.1 address potential ambiguities in the previous versions and provide a more robust standard. The major changes include simplified support for international characters and improved support for storing compensation. The major additions are support for preferred display scale, a standardized way of capturing the sample volume, information about originality of the data file, and support for plate and well identification in high throughput, plate based experiments. Please see the normative version of the FCS 3.1 specification in Supporting Information for this manuscript (or at http://www.isac-net.org/ in the Current standards section) for a complete list of changes.
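    The fixed file structure the standard retains makes the header easy to parse: bytes 0-5 hold the version string, and bytes 10-25 hold ASCII offsets to the TEXT segment of delimited keyword/value pairs. A short Python sketch of that layout follows (a reading aid under those assumptions, not a validated FCS parser).

        def read_fcs_keywords(path):
            """Parse the HEADER and TEXT segments of an FCS 3.x file."""
            with open(path, "rb") as f:
                header = f.read(58)
                version = header[0:6].decode("ascii")  # e.g. 'FCS3.1'
                text_begin = int(header[10:18].decode().strip())
                text_end = int(header[18:26].decode().strip())
                f.seek(text_begin)
                text = f.read(text_end - text_begin + 1).decode("ascii", "replace")
            delim = text[0]                            # first TEXT byte is the delimiter
            fields = text[1:].split(delim)
            keywords = dict(zip(fields[::2], fields[1::2]))  # e.g. '$TOT', '$PAR'
            return version, keywords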

  12. Reactor fuel performance data file, 1985 edition

    International Nuclear Information System (INIS)

    Harayama, Yasuo; Fujita, Misao; Watanabe, Kohji.

    1986-07-01

In safety evaluation and integrity studies of reactor fuel, data on fuel performance are the most basic materials. The Fuel Reliability Laboratory No.1 has obtained fuel performance data by joining in some international programs to study the safety and integrity of fuel. Those data have only been used for studies in the above two fields. However, if the data are rearranged and compiled in an easily usable form, they can be utilized in other fields of study. A 'data file' on fuel performance is therefore being compiled by adding data from the open literature to those obtained in international programs. The present report is prepared on the basis of the data file compiled by March 1986. (author)

  13. Development of data file system for cardiovascular nuclear medicine

    International Nuclear Information System (INIS)

    Hayashida, Kohei; Nishimura, Tsunehiko; Uehara, Toshiisa; Nisawa, Yoshifumi.

    1985-01-01

A computer-assisted filing system for storing and processing data from cardiac pool scintigraphy and myocardial scintigraphy has been developed. Individual patient data are stored with his (her) identification number (ID) on floppy discs successively, in order of receiving scintigraphy. Data for 900 patients can be stored per floppy disc. Scintigraphic findings can be output in a uniform file format and used as a reporting format. Output or retrieval of filed individual patient data is possible by examination, disease code, or ID. This system is expected to be useful for prospective studies in patients with cardiovascular diseases. (Namekawa, K.)

  14. An evaluated neutronic data file for bismuth

    International Nuclear Information System (INIS)

    Guenther, P.T.; Lawson, R.D.; Meadows, J.W.; Smith, A.B.; Smith, D.L.; Sugimoto, M.; Howerton, R.J.

    1989-11-01

A comprehensive evaluated neutronic data file for bismuth, extending from 10⁻⁵ eV to 20.0 MeV, is described. The experimental database, the application of the theoretical models, and the evaluation rationale are outlined. Attention is given to uncertainty specification, and comparisons are made with the prior ENDF/B-V evaluation. The corresponding numerical file, in ENDF/B-VI format, has been transmitted to the National Nuclear Data Center, Brookhaven National Laboratory. 106 refs., 10 figs., 6 tabs

  15. Distributing File-Based Data to Remote Sites Within the BABAR Collaboration

    International Nuclear Information System (INIS)

    Gowdy, Stephen J.

    2002-01-01

BABAR [1] uses two formats for its data: Objectivity database and ROOT [2] files. This poster concerns the distribution of the latter--for Objectivity data see [3]. The BABAR analysis data is stored in ROOT files--one per physics run and analysis selection channel--maintained in a large directory tree. Currently BABAR has more than 4.5 TBytes in 200,000 ROOT files. This data is (mostly) produced at SLAC, but is required for analysis at universities and research centers throughout the US and Europe. Two basic problems confront us when we seek to import bulk data from SLAC to an institute's local storage via the network. We must determine which files must be imported (depending on the local site requirements and which files have already been imported), and we must make the optimum use of the network when transferring the data. Basic ftp-like tools (ftp, scp, etc.) do not attempt to solve the first problem. More sophisticated tools like rsync [4], the widely-used mirror/synchronization program, compare local and remote file systems, checking for changes (based on file date, size and, if desired, an elaborate checksum) in order to only copy new or modified files. However rsync allows for only limited file selection. Also when, as in BABAR, an extremely large directory structure must be scanned, rsync can take several hours just to determine which files need to be copied. Although rsync (and scp) provides on-the-fly compression, it does not allow us to optimize the network transfer by using multiple streams, adjusting the TCP window size, or separating encrypted authentication from unencrypted data channels

  16. Distributing file-based data to remote sites within the BABAR collaboration

    International Nuclear Information System (INIS)

    Adye, T.; Dorigo, A.; Forti, A.; Leonardi, E.

    2001-01-01

BABAR uses two formats for its data: Objectivity database and ROOT files. This poster concerns the distribution of the latter--for Objectivity data see the companion contribution. The BABAR analysis data is stored in ROOT files--one per physics run and analysis selection channel--maintained in a large directory tree. Currently BABAR has more than 4.5 TBytes in 200,000 ROOT files. This data is (mostly) produced at SLAC, but is required for analysis at universities and research centres throughout the US and Europe. Two basic problems confront us when we seek to import bulk data from SLAC to an institute's local storage via the network. We must determine which files must be imported (depending on the local site requirements and which files have already been imported), and we must make the optimum use of the network when transferring the data. Basic ftp-like tools (ftp, scp, etc.) do not attempt to solve the first problem. More sophisticated tools like rsync, the widely-used mirror/synchronisation program, compare local and remote file systems, checking for changes (based on file date, size and, if desired, an elaborate checksum) in order to only copy new or modified files. However rsync allows for only limited file selection. Also when, as in BABAR, an extremely large directory structure must be scanned, rsync can take several hours just to determine which files need to be copied. Although rsync (and scp) provides on-the-fly compression, it does not allow us to optimise the network transfer by using multiple streams, adjusting the TCP window size, or separating encrypted authentication from unencrypted data channels

  17. Nuclear structure data file. A manual for preparation of data sets

    International Nuclear Information System (INIS)

    Ewbank, W.B.; Schmorak, M.R.; Bertrand, F.E.; Feliciano, M.; Horen, D.J.

    1975-06-01

    The Nuclear Data Project at ORNL is building a computer-based file of nuclear structure data, which is intended for use by both basic and applied users. For every nucleus, the Nuclear Structure Data File contains evaluated nuclear structure information. This manual describes a standard input format for nuclear structure data. The format is sufficiently structured that bulk data can be entered efficiently. At the same time, the structure is open-ended and can accommodate most measured or deduced quantities that yield nuclear structure information. Computer programs have been developed at the Data Project to perform consistency checking and routine calculations. Programs are also used for preparing level scheme drawings. (U.S.)

  18. Data Conversion Tool For Tobii Pro Glasses 2 Live Data Files

    DEFF Research Database (Denmark)

    Wulff-Jensen, Andreas

    2017-01-01

    The data gathered through the Tobii Pro Glasses 2 is saved in a .json file called livedata.json. This format is convenient for the Tobii analysis software, but for any other analysis software packages it can be rather troublesome as the software packages do not know how to interpret the .json file...
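    A conversion sketch in Python, assuming that livedata.json holds one JSON object per line, with gaze points carrying 'ts', 'gp' and 's' keys; treat those key names and the status convention as assumptions about the Glasses 2 stream format.

        import csv
        import json

        def livedata_to_csv(json_path, csv_path):
            """Flatten line-delimited gaze samples into a CSV table."""
            with open(json_path) as src, open(csv_path, "w", newline="") as dst:
                writer = csv.writer(dst)
                writer.writerow(["timestamp", "gaze_x", "gaze_y"])
                for line in src:
                    sample = json.loads(line)
                    if "gp" in sample and sample.get("s") == 0:  # assume 0 = valid
                        writer.writerow([sample["ts"], sample["gp"][0], sample["gp"][1]])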

  19. Supplemental Security Income Public-Use Microdata File, 2001 Data

    Data.gov (United States)

    Social Security Administration — The SSI Public-Use Microdata File contains an extract of data fields from SSA's Supplemental Security Record file and consists of a 5 percent random, representative...

  20. Nuclear decay data files of the Dosimetry Research Group

    International Nuclear Information System (INIS)

    Eckerman, K.F.; Westfall, R.J.; Ryman, J.C.; Cristy, M.

    1993-12-01

    This report documents the nuclear decay data files used by the Dosimetry Research Group at Oak Ridge National Laboratory and the utility DEXRAX which provides access to the files. The files are accessed, by nuclide, to extract information on the intensities and energies of the radiations associated with spontaneous nuclear transformation of the radionuclides. In addition, beta spectral data are available for all beta-emitting nuclides. Two collections of nuclear decay data are discussed. The larger collection contains data for 838 radionuclides, which includes the 825 radionuclides assembled during the preparation of Publications 30 and 38 of the International Commission on Radiological Protection (ICRP) and 13 additional nuclides evaluated in preparing a monograph for the Medical Internal Radiation Dose (MIRD) Committee of the Society of Nuclear Medicine. The second collection is composed of data from the MIRD monograph and contains information for 242 radionuclides. Abridged tabulations of these data have been published by the ICRP in Publication 38 and by the Society of Nuclear Medicine in a monograph entitled ''MIRD: Radionuclide Data and Decay Schemes.'' The beta spectral data reported here have not been published by either organization. Electronic copies of the files and the utility, along with this report, are available from the Radiation Shielding Information Center at Oak Ridge National Laboratory

  1. Activation cross section data file, (1)

    International Nuclear Information System (INIS)

    Yamamuro, Nobuhiro; Iijima, Shungo.

    1989-09-01

To evaluate radioisotope production due to neutron irradiation in fission or fusion reactors, data for the activation cross sections must be provided. The plan is to file more than 2000 activation cross sections in the final version. In the current year, the neutron cross sections for 14 elements from Ni to W have been calculated and evaluated in the energy range 10⁻⁵ to 20 MeV. The calculations with the simplified-input nuclear cross section calculation system SINCROS are described, and another method of evaluation which is consistent with JENDL-3 is also mentioned. The results of the cross section calculations are in good agreement with experimental data, and they were stored in files 8, 9 and 10 of the ENDF/B format. (author)

  2. FORTRAN data files transference from VAX/VMS to ALPHA/UNIX

    International Nuclear Information System (INIS)

    Sanchez, E.; Milligen, B.Ph. van

    1997-01-01

Several tools have been developed to access the TJ-I and TJ-IU databases, which currently reside on VAX/VMS servers, from the TJ-II Data Acquisition System DEC ALPHA 8400 server. The TJ-I/TJ-IU databases are not homogeneous and contain several types of data files, namely SADE, CAMAC and FORTRAN unformatted files. The tools presented in this report allow one to transfer CAMAC files and the FORTRAN unformatted files defined herein from a VAX/VMS server, for data manipulation on the ALPHA/Digital UNIX server. (Author) 5 refs

  3. Kepler Data Validation Time Series File: Description of File Format and Content

    Science.gov (United States)

    Mullally, Susan E.

    2016-01-01

The Kepler space mission searches its time series data for periodic, transit-like signatures. The ephemerides of these events, called Threshold Crossing Events (TCEs), are reported in the TCE tables at the NASA Exoplanet Archive (NExScI). Those TCEs are then further evaluated to create planet candidates and populate the Kepler Objects of Interest (KOI) table, also hosted at the Exoplanet Archive. The search, evaluation and export of TCEs is performed by two pipeline modules, TPS (Transit Planet Search) and DV (Data Validation). TPS searches for the strongest, believable signal and then sends that information to DV to fit a transit model, compute various statistics, and remove the transit events so that the light curve can be searched for other TCEs. More on how this search is done and on the creation of the TCE table can be found in Tenenbaum et al. (2012), Seader et al. (2015), and Jenkins (2002). For each star with at least one TCE, the pipeline exports a file that contains the light curves used by TPS and DV to find and evaluate the TCE(s). This document describes the content of these DV time series files, and this introduction provides a bit of context for how the data in these files are used by the pipeline.
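    The exported files are FITS files, so they can be opened with standard FITS tooling. A minimal sketch with astropy follows; the file name and the column names (TIME, LC_INIT) are assumptions for illustration and should be checked against the file format document itself.

        from astropy.io import fits

        with fits.open("kplr001234567-20160128150956_dvt.fits") as hdul:
            hdul.info()            # list the extensions, one per TCE for this star
            tce = hdul[1].data     # light curve used to find/evaluate the first TCE
            time, flux = tce["TIME"], tce["LC_INIT"]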

  4. Establishment of data base files of thermodynamic data developed by OECD/NEA. Pt. 1. Thermodynamic data of Np and Pu

    International Nuclear Information System (INIS)

    Yoshida, Yasushi; Sasamoto, Hiroshi

    2004-01-01

Thermodynamic data bases for compounds and complexes of actinides and fission products, specialized to the modeling requirements for safety assessments of radioactive waste disposal systems, are being developed by the NEA TDB project of the OECD/NEA. In this project, the data bases for compounds and complexes of Np and Pu were published in 2001. Using these published Np and Pu data, JNC established data base files usable by geochemical calculation codes; the procedure for establishing these files and their contents are described in this report. The data base files were prepared in the formats of the major geochemical codes PHREEQE, PHREEQC, EQ3/6 and Geochemist's Workbench. Additionally, data in the thermodynamic data base files already published by JNC were modified; this procedure and the revised data bases are shown in the appendix of this report. (author)

  5. Fast probabilistic file fingerprinting for big data

    NARCIS (Netherlands)

    Tretjakov, K.; Laur, S.; Smant, G.; Vilo, J.; Prins, J.C.P.

    2013-01-01

Background: Biological data acquisition is raising new challenges, both in data analysis and handling. Not only is it proving hard to analyze the data at the rate it is generated today, but simply reading and transferring data files can be prohibitively slow due to their size. This primarily concerns logistics within and between data centers, but is also important for workstation users in the analysis phase.

  6. Recalling ISX shot data files from the off-line archive

    International Nuclear Information System (INIS)

    Stanton, J.S.

    1981-02-01

This document describes a set of computer programs designed to allow access to ISX shot data files stored on off-line disk packs. The programs accept user requests for data files and build a queue of these requests. When an operator is available to mount the necessary disk packs, the system copies the requested files to an on-line disk area. The programs run on the Fusion Energy Division's DECsystem-10 computer. The request queue is implemented under the System 1022 data base management system. The support programs are coded in MACRO-10 and FORTRAN-10

  7. Data vaults: a database welcome to scientific file repositories

    NARCIS (Netherlands)

    Ivanova, M.; Kargın, Y.; Kersten, M.; Manegold, S.; Zhang, Y.; Datcu, M.; Espinoza Molina, D.

    2013-01-01

Efficient management and exploration of high-volume scientific file repositories have become pivotal for advancement in science. We propose to demonstrate the Data Vault, an extension of the database system architecture that transparently opens scientific file repositories for efficient in-database processing.

  8. The evaluated nuclear structure data file: Philosophy, content, and uses

    International Nuclear Information System (INIS)

    Burrows, T.W.

    1990-01-01

The Evaluated Nuclear Structure Data File (ENSDF) is maintained by the National Nuclear Data Center (NNDC) on behalf of the international Nuclear Structure and Decay Data Network sponsored by the International Atomic Energy Agency, Vienna. Data for A=5 to 44 are extracted from the evaluations published in Nuclear Physics; for A≥45 the file is used to produce the Nuclear Data Sheets. The philosophy and methodology of ENSDF evaluations are outlined, along with the file contents of relevance to radionuclide metrologists; the services available at various nuclear data centers and the NNDC on-line capabilities are also discussed. Application codes have been developed for use with ENSDF, and the program RADLST is used as an example. The interaction of ENSDF evaluation with other evaluations is also discussed. (orig.)

  9. The version control service for the ATLAS data acquisition configuration files

    International Nuclear Information System (INIS)

    Soloviev, Igor

    2012-01-01

The ATLAS experiment at the LHC in Geneva uses a complex and highly distributed Trigger and Data Acquisition system, involving a very large number of computing nodes and custom modules. The configuration of the system is specified by schema and data in more than 1000 XML files, with various experts responsible for updating the files associated with their components. Maintaining an error-free and consistent set of XML files proved a major challenge. Therefore a special service was implemented: to validate any modifications; to check the authorization of anyone trying to modify a file; to record who had made changes, plus when and why; and to provide tools to compare different versions of files and to go back to earlier versions if required. This paper provides details of the implementation and exploitation experience, which may be interesting for other applications using many human-readable files maintained by different people, where consistency of the files and traceability of modifications are key requirements.

  10. JENDL FP decay data file 2000 and the beta-decay theory

    International Nuclear Information System (INIS)

    Yoshida, Tadashi; Katakura, Jun Ichi; Tachibana, Takahiro

    2002-01-01

    JENDL FP Decay Data File 2000 has been developed as one of the special purpose files of the Japanese Evaluated Nuclear Data Library (JENDL), which constitutes a versatile nuclear data basis for science and technology. In the format of ENDF-6 this file includes the decay data for 1087 unstable fission product (FP) nuclides and 142 stable nuclides as their daughters. The primary purpose of this file is to use in the summation calculation of FP decay heat, which plays a critical role in nuclear safety analysis; the loss-of-coolant accident analysis of reactors, for example. The data for a given nuclide are its decay modes, the Q value, the branching ratios, the average energies released in the form of beta- and gamma-rays per decay, and their spectral data. The primary source of the decay data adopted here is the ENSDF (Evaluated Nuclear Structure Data File). The data in ENSDF, however, cover only the measured values. The data of the short-lived nuclides, which are essential for the decay heat calculations at short cooling times, are often fully lacking or incomplete even if they exist. This is mainly because of their short half-life nature. For such nuclides a theoretical model calculation is applied in order to fill the gaps between the true and the experimentally known decay schemes. In practice we have to predict the average decay energies and the spectral data for a lot of short-lived FPs by use of beta-decay theories. Thus the beta-decay theory plays a very important role in generating the FP decay data file

  11. The version control service for ATLAS data acquisition configuration files

    CERN Document Server

    Soloviev, Igor; The ATLAS collaboration

    2012-01-01

To configure a data-taking session, the ATLAS systems and detectors store more than 160 MBytes of data acquisition related configuration information in OKS XML files [1]. The total number of files exceeds 1300 and they are updated by many system experts. In the past, from time to time after such updates, we experienced problems caused by XML syntax errors or an inconsistent state of the files from the point of view of the overall ATLAS configuration. It was not always possible to know who made a modification causing problems or how to go back to a previous version of the modified file. A few years ago a special service addressing these issues was implemented and deployed on ATLAS Point-1. It excludes direct write access to XML files stored in a central database repository. Instead, for an update the files are copied into a user repository, validated after modifications and committed using a version control system. The system's callback updates the central repository. Also, it keeps track of all modifications pro...
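    The workflow can be reduced to a generic Python sketch, validate first, then commit with attribution, using plain XML well-formedness checking and git; this is an assumption-level illustration, since the real service is OKS-specific and checks far more than syntax.

        import subprocess
        import sys
        import xml.etree.ElementTree as ET

        def validated_commit(path, author, reason):
            """Refuse to commit an XML file that does not parse; otherwise
            record who changed it and why in the version control history."""
            try:
                ET.parse(path)                    # syntax check before accepting
            except ET.ParseError as err:
                sys.exit(f"rejected {path}: {err}")
            subprocess.run(["git", "add", path], check=True)
            subprocess.run(["git", "commit", f"--author={author}",
                            "-m", f"{path}: {reason}"], check=True)

        # validated_commit("partition.data.xml", "Expert <expert@cern.ch>", "raise buffer size")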

  12. Access to DIII-D data located in multiple files and multiple locations

    International Nuclear Information System (INIS)

    McHarg, B.B. Jr.

    1993-10-01

    The General Atomics DIII-D tokamak fusion experiment is now collecting over 80 MB of data per discharge once every 10 min, and that quantity is expected to double within the next year. The size of the data files, even in compressed format, is becoming increasingly difficult to handle. Data is also being acquired now on a variety of UNIX systems as well as MicroVAX and MODCOMP computer systems. The existing computers collect all the data into a single shot file, and this data collection is taking an ever increasing amount of time as the total quantity of data increases. Data is not available to experimenters until it has been collected into the shot file, which is in conflict with the substantial need for data examination on a timely basis between shots. The experimenters are also spread over many different types of computer systems (possibly located at other sites). To improve data availability and handling, software has been developed to allow individual computer systems to create their own shot files locally. The data interface routine PTDATA that is used to access DIII-D data has been modified so that a user's code on any computer can access data from any computer where that data might be located. This data access is transparent to the user. Breaking up the shot file into separate files in multiple locations also impacts software used for data archiving, data management, and data restoration

  13. Distributed Data Management and Distributed File Systems

    CERN Document Server

    Girone, Maria

    2015-01-01

The LHC program has been successful in part due to the globally distributed computing resources used for collecting, serving, processing, and analyzing the large LHC datasets. The introduction of distributed computing early in the LHC program spawned the development of new technologies and techniques to synchronize information and data between physically separated computing centers. Two of the most challenging services are the distributed file systems and the distributed data management systems. In this paper I will discuss how we have evolved from local site services to more globally independent services in the areas of distributed file systems and data management and how these capabilities may continue to evolve into the future. I will address the design choices, the motivations, and the future evolution of the computing systems used for High Energy Physics.

  14. Skyshine analysis using various nuclear data files

    International Nuclear Information System (INIS)

    Zharkov, V.P.; Dikareva, O.F.; Kartashev, I.A.; Kiselev, A.N.; Nomura, Y.; Tsubosaka, A.

    2000-01-01

The calculations of the spatial distributions of dose rate for neutrons and secondary photons, thermal neutron fluxes, and space-energy distributions of neutrons and photons near the air-ground interface were performed with the MCNP and DORT codes. Different nuclear data files were used (ENDF/B-IV, ENDF/B-VI, FENDL-2, JENDL-3.2). Either the standard pointwise libraries (MCNP) or special libraries prepared by the NJOY code from ENDF/B and other files were used. The multigroup coupled neutron and photon cross section libraries prepared for the DORT code had CASK-40 group energy structures. The libraries contain pointwise or multigroup cross section data for all elements included in the atmosphere and ground composition. The validation of the calculated results was performed using the experimental data obtained for the series of measurements at the RA reactor. (author)

  15. Status of transactinium nuclear data in the Evaluated Nuclear Structure Data File

    International Nuclear Information System (INIS)

    Ewbank, W.B.

    1979-01-01

The organization and program of the Nuclear Data Project are described. An Evaluated Nuclear Structure Data File (ENSDF) was designed to contain most of the data of nuclear structure physics. ENSDF includes adopted level information for all 1950 known nuclei, and detailed data for approximately 1500 decay schemes. File organization, management, and retrieval are reviewed. An international network of data evaluation centers has been organized to provide for a four-year cycle of ENSDF revisions. Standard retrieval and display programs can prepare various tables of specific data, which can serve as a good first approximation to a complete up-to-date compilation. Appendixes list, for A > 206, nuclear levels with lifetimes ≥ 1 s, strong γ rays from radioisotopes (ordered by nuclide and energy), and strong α particle emissions (similarly ordered). 8 figures

  16. Data Analysis & Statistical Methods for Command File Errors

    Science.gov (United States)

    Meshkat, Leila; Waggoner, Bruce; Bryant, Larry

    2014-01-01

    This paper explains current work on modeling for managing the risk of command file errors. It is focused on analyzing actual data from a JPL spaceflight mission to build models for evaluating and predicting error rates as a function of several key variables. We constructed a rich dataset by considering the number of errors and the number of files radiated, including the number of commands and blocks in each file, as well as subjective estimates of workload and operational novelty. We assessed these data using different curve-fitting and distribution-fitting techniques, such as multiple regression analysis and maximum likelihood estimation, to see how much of the variability in the error rates can be explained by these variables. We also used goodness-of-fit tests and principal component analysis to further assess the data. Finally, we constructed a model of expected error rates based on what these statistics bore out as the critical drivers of the error rate. This model allows project management to evaluate the error rate against a theoretically expected rate as well as to anticipate future error rates.
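
    The regression approach described above can be sketched with a Poisson model in which the number of files radiated acts as the exposure. The data below are invented for illustration; only the modeling pattern reflects the paper:

        import numpy as np
        import statsmodels.api as sm

        # Hypothetical per-period counts: errors, files radiated, workload score.
        files_radiated = np.array([120, 95, 140, 80, 110, 150])
        workload = np.array([3.2, 2.1, 4.0, 1.8, 2.9, 4.5])
        errors = np.array([2, 1, 4, 0, 2, 5])

        # Poisson GLM fitted by maximum likelihood; log(files radiated) enters
        # as an offset so the model predicts a rate per file radiated.
        X = sm.add_constant(workload)
        model = sm.GLM(errors, X, family=sm.families.Poisson(),
                       offset=np.log(files_radiated))
        result = model.fit()
        print(result.summary())  # coefficient on workload = error-rate driver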

  17. Skyshine analysis using various nuclear data files

    Energy Technology Data Exchange (ETDEWEB)

    Zharkov, V.P.; Dikareva, O.F.; Kartashev, I.A.; Kiselev, A.N. [Research and Development Inst. of Power Engineering, Moscow (Russian Federation); Nomura, Y.; Tsubosaka, A. [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan)

    2000-03-01

    The calculations of the spatial distributions of dose rate for neutrons and secondary photons, of thermal neutron fluxes, and of the space-energy distributions of neutrons and photons near the air-ground interface were performed with the MCNP and DORT codes. Different nuclear data files were used (ENDF/B-IV, ENDF/B-VI, FENDL-2, JENDL-3.2). Either the standard pointwise libraries (MCNP) or special libraries prepared with the NJOY code from the ENDF/B and other files were used. The multigroup coupled neutron-photon cross-section libraries prepared for the DORT code had the CASK 40-group energy structure. The libraries contain pointwise or multigroup cross-section data for all elements included in the atmosphere and ground compositions. The validation of the calculated results was performed using the experimental data obtained for the series of measurements at the RA reactor. (author)

  18. Identifiable Data Files - Medicare Provider Analysis and ...

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Medicare Provider Analysis and Review (MEDPAR) File contains data from claims for services provided to beneficiaries admitted to Medicare certified inpatient...

  19. NJOY99, Data Processing System of Evaluated Nuclear Data Files ENDF Format

    International Nuclear Information System (INIS)

    2000-01-01

    1 - Description of program or function: The NJOY nuclear data processing system is a modular computer code used for converting evaluated nuclear data in the ENDF format into libraries useful for applications calculations. Because the Evaluated Nuclear Data File (ENDF) format is used all around the world (e.g., ENDF/B-VI in the US, JEF-2.2 in Europe, JENDL-3.2 in Japan, BROND-2.2 in Russia), NJOY gives its users access to a wide variety of the most up-to-date nuclear data. NJOY provides comprehensive capabilities for processing evaluated data, and it can serve applications ranging from continuous-energy Monte Carlo (MCNP), through deterministic transport codes (DANT, ANISN, DORT), to reactor lattice codes (WIMS, EPRI). NJOY handles a wide variety of nuclear effects, including resonances, Doppler broadening, heating (KERMA), radiation damage, thermal scattering (even cold moderators), gas production, neutrons and charged particles, photo-atomic interactions, self-shielding, probability tables, photon production, and high-energy interactions (to 150 MeV). Output can include printed listings, special library files for applications, and PostScript graphics (plus color). More information on NJOY is available from the developer's home page at http://t2.lanl.gov/tour/tourbus.html. Follow the Tourbus section of the Tour area to find notes from the ICTP lectures held at Trieste in March 2000 on the ENDF format and on the NJOY code. NJOY contains the following modules: NJOY directs the flow of data through the other modules and contains a library of common functions and subroutines used by the other modules. RECONR reconstructs pointwise (energy-dependent) cross sections from ENDF resonance parameters and interpolation schemes. BROADR Doppler-broadens and thins pointwise cross sections. UNRESR computes effective self-shielded pointwise cross sections in the unresolved energy range. HEATR generates pointwise heat production cross sections (KERMA coefficients) and radiation damage production cross sections.
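
    As a concrete illustration, the sketch below writes a minimal input deck driving the RECONR module. The MAT number, tape units, and tolerance are placeholders following the general card structure in the NJOY documentation, not a vetted production deck:

        import textwrap

        # Assemble a minimal RECONR deck: reconstruct pointwise cross sections
        # from resonance parameters on an ENDF tape (unit 20) into a PENDF
        # tape (unit 21). Text after each '/' is an ignored comment.
        deck = textwrap.dedent("""\
            reconr
            20 21 /  input ENDF tape, output PENDF tape
            'illustrative pendf tape label' /
            9237 1 /  MAT number (e.g. U-238) and one descriptive card
            0.001 /  reconstruction tolerance
            'illustrative descriptive card' /
            0 /  end the material loop
            stop
            """)

        with open("njoy.inp", "w") as f:
            f.write(deck)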

  20. Library of files of evaluated neutron data

    International Nuclear Information System (INIS)

    Blokhin, A.I.; Ignatyuk, A.V.; Koshcheev, V.N.; Kuz'minov, B.D.; Manokhin, V.N.; Manturov, G.N.; Nikolaev, M.N.

    1988-01-01

    The development of a library of evaluated neutron data files, recommended by the GKAE Nuclear Data Commission as the basis for improving the constants systems used in neutron engineering calculations, is reported. A short description of the library's contents is given and the status of the library is indicated

  1. Silvabase: A flexible data file management system

    Science.gov (United States)

    Lambing, Steven J.; Reynolds, Sandra J.

    1991-01-01

    The need for a more flexible and efficient data file management system for mission planning in the Mission Operations Laboratory (EO) at MSFC has spawned the development of Silvabase. Silvabase is a new data file structure based on a B+ tree data structure. This data organization allows efficient forward and backward sequential reads, random searches, and appends to existing data. It also provides random insertions and deletions with reasonable efficiency, uses storage space well without sacrificing speed, and performs these functions on large volumes of data. Mission planners required that some data be keyed and manipulated in ways not found in a commercial product. Mission planning software is currently being converted to use Silvabase in the Spacelab and Space Station Mission Planning Systems. Silvabase runs on Digital Equipment Corporation's popular VAX/VMS computers and is written in VAX Fortran. Silvabase has unique features involving time histories and intervals, such as in operations research. Because of its flexibility and unique capabilities, Silvabase could be used in almost any government or commercial application that requires efficient reads, searches, and appends in medium to large amounts of almost any kind of data.
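
    The access patterns named above (keyed random search, forward and backward sequential reads, appends) can be sketched with a sorted in-memory index standing in for the on-disk B+ tree; the time-valued keys are hypothetical:

        import bisect

        class KeyedFile:
            """Toy keyed store illustrating Silvabase-style access patterns."""

            def __init__(self):
                self.keys, self.records = [], []

            def insert(self, key, record):
                # Appends at the end when keys arrive in order; otherwise an
                # ordered insertion, as for mission-time keyed data.
                i = bisect.bisect_right(self.keys, key)
                self.keys.insert(i, key)
                self.records.insert(i, record)

            def search(self, key):
                # Random search over the sorted index, O(log n).
                i = bisect.bisect_left(self.keys, key)
                if i < len(self.keys) and self.keys[i] == key:
                    return self.records[i]
                return None

            def scan(self, reverse=False):
                # Forward or backward sequential read.
                idx = range(len(self.keys))
                for i in (reversed(idx) if reverse else idx):
                    yield self.keys[i], self.records[i]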

  2. Photon and decay data libraries for ORIGEN2 code based on JENDL FP decay data file 2000

    CERN Document Server

    Katakura, J I

    2002-01-01

    Photon and decay data libraries for the ORIGEN2 code have been updated using the JENDL FP Decay Data File 2000 (JENDL/FPD-00). As for the decay data, half-lives, branching ratios and recoverable energy values have been replaced with those of the JENDL/FPD-00 file. The data of the photon library have also been replaced with those of the JENDL/FPD-00 file, in which the photon data of nuclides without measured data are calculated with a theoretical method. With the updated photon library, the photon spectrum at short times after a fission event can now be calculated.

  3. Identifiable Data Files - Health Outcomes Survey (HOS)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Medicare Health Outcomes Survey (HOS) identifiable data files are comprised of the entire national sample for a given 2-year cohort (including both respondents...

  4. PHOBINS: an index file of photon production cross section data and its utility code system

    International Nuclear Information System (INIS)

    Hasegawa, Akira; Koyama, Kinji; Ido, Masaru; Hotta, Masakazu; Miyasaka, Shun-ichi

    1978-08-01

    The code system PHOBINS, developed for referencing photon production cross sections, is described in detail. The system is intended to track the present status of photon production data and to present the information on available data. It consists of four utility routines, CREA, UP-DT, REF and BACK, and data files. These utility routines are used for making an index file of the photon production cross sections, updating the index file, searching the index file and producing a back-up file of the index file. The index file of the photon production cross sections employs a data base approach for efficient data management: economical storage, ease of updating and efficient retrieval. The present report is a reference manual for PHOBINS. (author)

  5. Data Vaults: a Database Welcome to Scientific File Repositories

    NARCIS (Netherlands)

    M.G. Ivanova (Milena); Y. Kargin (Yagiz); M.L. Kersten (Martin); S. Manegold (Stefan); Y. Zhang (Ying); M. Datcu (Mihai); D. Espinoza Molina

    2013-01-01

    Efficient management and exploration of high-volume scientific file repositories have become pivotal for advancement in science. We propose to demonstrate the Data Vault, an extension of the database system architecture that transparently opens scientific file repositories for efficient

  6. DATA Act File B Object Class and Program Activity - Social Security

    Data.gov (United States)

    Social Security Administration — The DATA Act Information Model Schema Reporting Submission Specification File B. File B includes the agency object class and program activity detail obligation and...

  7. GEODOC: the GRID document file, record structure and data element description

    Energy Technology Data Exchange (ETDEWEB)

    Trippe, T.; White, V.; Henderson, F.; Phillips, S.

    1975-11-06

    The purpose of this report is to describe the information structure of the GEODOC file. GEODOC is a computer-based file which contains the descriptive cataloging and indexing information for all documents processed by the National Geothermal Information Resource Group. This file (along with other GRID files) is managed by DBMS, the Berkeley Data Base Management System. Input for the system is prepared using the IRATE Text Editing System with its extended (12-bit) character set, or punched cards.

  8. Penyembunyian Data pada File Video Menggunakan Metode LSB dan DCT

    Directory of Open Access Journals (Sweden)

    Mahmuddin Yunus

    2014-01-01

    Full Text Available Hiding data in video files is known as video steganography. Well-known steganography methods include the Least Significant Bit (LSB) and the Discrete Cosine Transform (DCT) methods. In this research, data were hidden in video files using the LSB method, the DCT method, and a combined LSB-DCT method, and the quality of the video files after insertion was measured with the Mean Square Error (MSE) and the Peak Signal-to-Noise Ratio (PSNR). The experiments were conducted varying the video file size, the size of the inserted secret file, and the video resolution. The results show success rates of 38% for the LSB method, 90% for the DCT method, and 64% for the combined LSB-DCT method. The MSE of the DCT method was the lowest, and the combined LSB-DCT method gave a lower MSE than the LSB method; correspondingly, the PSNR of the DCT method was the highest, and the combined LSB-DCT method gave a higher PSNR than the LSB method. Keywords: Steganography, Video, Least Significant Bit (LSB), Discrete Cosine Transform (DCT), Mean Square Error (MSE), Peak Signal-to-Noise Ratio (PSNR)
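
    The LSB part of the scheme is simple enough to sketch directly: each payload bit replaces the least significant bit of one 8-bit sample of a frame. The random frame below stands in for real video data:

        import numpy as np

        def embed_lsb(frame: np.ndarray, payload: bytes) -> np.ndarray:
            bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
            flat = frame.reshape(-1).copy()
            if bits.size > flat.size:
                raise ValueError("payload too large for this frame")
            flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits  # rewrite LSBs
            return flat.reshape(frame.shape)

        def extract_lsb(frame: np.ndarray, n_bytes: int) -> bytes:
            bits = frame.reshape(-1)[:n_bytes * 8] & 1
            return np.packbits(bits).tobytes()

        frame = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
        stego = embed_lsb(frame, b"hi")
        assert extract_lsb(stego, 2) == b"hi"

    Because only the lowest bit of each sample changes, the per-sample distortion is at most one count; robustness to re-encoding, however, is poor, which is one motivation for DCT-domain embedding.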

  9. Data and code files for co-occurrence modeling project

    Data.gov (United States)

    U.S. Environmental Protection Agency — Files included are original data inputs on stream fishes (fish_data_OEPA_2012.csv), water chemistry (OEPA_WATER_2012.csv), geographic data (NHD_Plus_StreamCat);...

  10. Benchmark test of evaluated nuclear data files for fast reactor neutronics application

    International Nuclear Information System (INIS)

    Chiba, Go; Hazama, Taira; Iwai, Takehiko; Numata, Kazuyuki

    2007-07-01

    A benchmark test of the latest evaluated nuclear data files, JENDL-3.3, JEFF-3.1 and ENDF/B-VII.0, has been carried out for fast reactor neutronics applications. For this benchmark test, experimental data obtained at fast critical assemblies and fast power reactors are utilized. In addition to comparing the numerical solutions with the experimental data, we have extracted, by means of sensitivity analyses, several cross sections for which the differences between the three nuclear data files significantly affect the numerical solutions. This benchmark test concludes that ENDF/B-VII.0 predicts the neutronics characteristics of fast neutron systems better than the other nuclear data files. (author)

  11. Data_files_Reyes_EHP_phthalates

    Data.gov (United States)

    U.S. Environmental Protection Agency — The dataset contains three files in comma-separated values (.csv) format. “Reyes_EHP_Phthalates_US_metabolites.csv” contains information about the National Health and...

  12. Transmission of the environmental radiation data files on the internet

    International Nuclear Information System (INIS)

    Yamaguchi, Yoshiaki; Saito, Tadashi; Yamamoto, Takayoshi; Matsumoto, Atsushi; Kyoh, Bunkei

    1999-01-01

    Recently, any text or data file has become transportable over the Internet with a personal computer. For continuous-type environmental monitors, however, the choice of monitoring points is restricted by the need to lay cable, because a private circuit is generally used. This is the reason why we have developed an environmental monitoring system that can transmit radiation data files over the Internet. A 3''φ x 3'' NaI(Tl) detector and a Thermo-Hydrometer are installed in the monitoring post of this system, and the data files of those detectors are transmitted from a personal computer at the monitoring point to the Radioisotope Research Center of Osaka University. Environmental monitoring data from remote places have easily been obtained thanks to the data transmission through the Internet. Moreover, the system brings a higher precision of the environmental monitoring data because it includes the energy information of the γ-rays. If it is possible to maintain the monitors at remote places, this system could execute continuous environmental monitoring over a wide area. (author)

  13. Transmission of the environmental radiation data files on the internet

    Energy Technology Data Exchange (ETDEWEB)

    Yamaguchi, Yoshiaki; Saito, Tadashi; Yamamoto, Takayoshi [Osaka Univ., Suita (Japan). Radioisotope Research Center; Matsumoto, Atsushi; Kyoh, Bunkei

    1999-01-01

    Recently, any text or data file has become transportable over the Internet with a personal computer. For continuous-type environmental monitors, however, the choice of monitoring points is restricted by the need to lay cable, because a private circuit is generally used. This is the reason why we have developed an environmental monitoring system that can transmit radiation data files over the Internet. A 3''φ x 3'' NaI(Tl) detector and a Thermo-Hydrometer are installed in the monitoring post of this system, and the data files of those detectors are transmitted from a personal computer at the monitoring point to the Radioisotope Research Center of Osaka University. Environmental monitoring data from remote places have easily been obtained thanks to the data transmission through the Internet. Moreover, the system brings a higher precision of the environmental monitoring data because it includes the energy information of the γ-rays. If it is possible to maintain the monitors at remote places, this system could execute continuous environmental monitoring over a wide area. (author)

  14. Status of transactinium nuclear data in the evaluated nuclear structure data file

    International Nuclear Information System (INIS)

    Ewbank, W.B.

    1980-01-01

    The structure and organization of the Evaluated Nuclear Structure Data File (ENSDF), which serves as the source data base for the production of drawings and tables for the "Nuclear Data Sheets" journal, are described. The updating and output features of ENSDF are described with emphasis on the nuclear structure and decay data of the transactinium isotopes. (author)

  15. 75 FR 24718 - Guidance for Industry on Documenting Statistical Analysis Programs and Data Files; Availability

    Science.gov (United States)

    2010-05-05

    ...] Guidance for Industry on Documenting Statistical Analysis Programs and Data Files; Availability AGENCY... documenting statistical analyses and data files submitted to the Center for Veterinary Medicine (CVM) for the... on Documenting Statistical Analysis Programs and Data Files; Availability'' giving interested persons...

  16. Use of DBMS-10 for storage and retrieval of evaluated nuclear data files

    International Nuclear Information System (INIS)

    Dunford, C.L.

    1977-01-01

    The use of a data base management system (DBMS) for storage of, and retrieval from, the many scientific data bases maintained by the National Nuclear Data Center is currently being investigated. It would appear that a commercially available DBMS package would save the Center considerable money and manpower when adding new data files to the library and in the long-term maintenance of current data files. Current DBMS technology and experience with an internal DBMS system suggest an inherent inefficiency in processing large data networks where significant portions are accessed in a sequential manner. Such a file is the Evaluated Nuclear Data File (ENDF/B), which contains many large data tables, each one normally accessed in a sequential manner. After gaining some experience and success in small applications of the commercially available DBMS package, DBMS-10, on the Center's DECsystem-10 computer, it was decided to select a large data base as a test case before making a final decision on the implementation of DBMS-10 for all data bases. The obvious approach is to utilize the DBMS to index a random-access file. In this way one is able to increase the storage and retrieval efficiency at the one-time cost of additional programming effort. 2 figures
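
    The indexing approach described can be sketched as follows: a small key-to-offset index (standing in for the DBMS) locates each large table in a plain random-access file, so the bulk data stays outside the data base. File names and keys are illustrative:

        # Write several data tables into one random-access file, recording
        # each table's starting byte offset in an index.
        def write_tables(path, tables):
            index = {}
            with open(path, "wb") as f:
                for key, payload in tables.items():
                    index[key] = f.tell()
                    f.write(len(payload).to_bytes(8, "big"))
                    f.write(payload)
            return index

        # Random access via the index, then a purely sequential read.
        def read_table(path, index, key):
            with open(path, "rb") as f:
                f.seek(index[key])
                n = int.from_bytes(f.read(8), "big")
                return f.read(n)

        idx = write_tables("endf.dat", {"U-235/xs": b"\x01\x02", "Fe-56/xs": b"\x03\x04"})
        assert read_table("endf.dat", idx, "Fe-56/xs") == b"\x03\x04"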

  17. Use of DBMS-10 for storage and retrieval of evaluated nuclear data files

    International Nuclear Information System (INIS)

    Dunford, C.L.

    1978-01-01

    The use of a data base management system (DBMS) for storage of, and retrieval from, the many scientific data bases maintained by the National Nuclear Data Center is currently being investigated. It would appear that a commercially available DBMS package would save the Center considerable money and manpower when adding new data files to our library and in the long-term maintenance of our current data files. Current DBMS technology and experience with our internal DBMS system suggests an inherent inefficiency in processing large data networks where significant portions are accessed in a sequential manner. Such a file is the Evaluated Nuclear Data File (ENDF/B) which contains many large data tables, each one normally accessed in a sequential manner. After gaining some experience and success in small applications of the commercially available DBMS package, DBMS-10, on the Center's DECsystem-10 computer, it was decided to select one of our large data bases as a test case before making a final decision on the implementation of DBMS-10 for all our data bases. The obvious approach is to utilize the DBMS to index a random access file. In this way one is able to increase the storage and retrieval efficiency at the one-time cost of additional programming effort

  18. Data formats and procedures for the Evaluated Nuclear Data File, ENDF

    International Nuclear Information System (INIS)

    Garber, D.; Dunford, C.; Pearlstein, S.

    1975-10-01

    This report describes the philosophy of the Evaluated Nuclear Data File (ENDF) and the data formats and procedures that have been developed for it. The ENDF system was designed for the storage and retrieval of the evaluated nuclear data that are required for neutronics, photonics and decay heat calculations. This system is composed of several parts that include a series of data processing codes and neutron and photon cross section nuclear structure libraries

  19. Data formats and procedures for the Evaluated Nuclear Data File, ENDF

    Energy Technology Data Exchange (ETDEWEB)

    Garber, D.; Dunford, C.; Pearlstein, S.

    1975-10-01

    This report describes the philosophy of the Evaluated Nuclear Data File (ENDF) and the data formats and procedures that have been developed for it. The ENDF system was designed for the storage and retrieval of the evaluated nuclear data that are required for neutronics, photonics and decay heat calculations. This system is composed of several parts that include a series of data processing codes and neutron and photon cross section nuclear structure libraries.

  20. Data Analysis of Minima Total Cross-sections of Nitrogen-14 on JENDL-3.2Nuclear Data File

    International Nuclear Information System (INIS)

    Suwoto; Pandiangan, Tumpal; Ferhat-Aziz

    2000-01-01

    The integral tests of neutron cross sections for shielding materials such as nitrogen-14 contained in the JENDL-3.2 file have been performed. The analysis for nitrogen-14 was based on Maerker's ORNL broomstick experiment at ORNL, USA. For comparison, analyses with the JENDL-3.1, ENDF/B-IV, ENDF/B-VI and JEF-2.2 files have also been carried out. The overall calculation results using the JENDL-3.2 evaluation showed good agreement with the experimental data, as did those with the ENDF/B-VI evaluation. In particular, the JENDL-3.2 evaluation gave better results than the JENDL-3.1 evaluation and ENDF/B-IV. It was concluded that the total cross sections of nitrogen-14 contained in the JENDL-3.2 file are in very good agreement with the experimental results, although the total cross section in the energy range between 0.5 MeV and 0.9 MeV in JENDL-3.2 was slightly small (about 4% lower) and the minima of the total cross sections were deeper. (author)

  1. Evaluated Nuclear Structure Data File (ENSDF)

    International Nuclear Information System (INIS)

    Bhat, M.R.

    1991-01-01

    The Evaluated Nuclear Structure Data File (ENSDF), is maintained by the National Nuclear Data Center (NNDC) on behalf of the international Nuclear Structure and Decay Data (NSDD) network organized under the auspices of the International Atomic Energy Agency. ENSDF provides evaluated experimental nuclear structure and decay data for basic and applied research. The activities of the NSDD network, the publication of the evaluations, and their use in different applications are described. Since 1986, the ENSDF and related numeric and bibliographic data bases have been made available for on-line access. The current status of these data bases, and future plans to improve the on-line access to their contents are discussed. 8 refs., 4 tabs

  2. ENDF-UTILITY-CODES, codes to check and standardize data in the Evaluated Nuclear Data File (ENDF)

    International Nuclear Information System (INIS)

    Dunford, Charles L.

    2007-01-01

    1 - Description of program or function: The ENDF Utility Codes include 9 codes to check and standardize data in the Evaluated Nuclear Data File (ENDF). Four programs of this release (GETMAT, LISTEF, PLOTEF and SETMDC) have not been maintained since release 6.13. The suite of ENDF utility codes includes: - CHECKR (version 7.01) is a program for checking that an evaluated data file conforms to the ENDF format. - FIZCON (version 7.02) is a program for checking that an evaluated data file has valid data and conforms to recommended procedures. - GETMAT (version 6.13) is designed to retrieve one or more materials from an ENDF formatted data file. The output will contain only the selected materials. - INTER (version 7.01) calculates thermal cross sections, g-factors, resonance integrals, fission-spectrum-averaged cross sections and 14.0 MeV (or other energy) cross sections for major reactions in an ENDF-6 or ENDF-5 format data file. - LISTEF (version 6.13) is designed to produce summary and annotated listings of a data file in either ENDF-6 or ENDF-5 format. - PLOTEF (version 6.13) is designed to produce graphical displays of a data file in either ENDF-5 or ENDF-6 format. The form of graphical output depends on the graphical devices available at the installation where this code is used. - PSYCHE (version 7.02) is a program for checking the physics content of an evaluated data file. It can recognize the difference between the ENDF-5 and ENDF-6 formats and performs its tests accordingly. - SETMDC (version 6.13) is a utility program that converts the source decks of programs for different computing environments (DOS, UNIX, LINUX, VMS, Windows). - STANEF (version 7.01) performs bookkeeping operations on a data file containing one or more material evaluations in ENDF format. Version 7.02 of the ENDF Utility Codes corrects all bugs reported to NNDC as of April 1, 2005 and supersedes all previous releases. Three codes, CHECKR, STANEF and INTER, were actually ported from the 7.01 release

  3. Hierarchical remote data possession checking method based on massive cloud files

    Directory of Open Access Journals (Sweden)

    Ma Haifeng

    2017-06-01

    Full Text Available Cloud storage service enables users to migrate their data and applications to the cloud, which saves local data maintenance and brings great convenience to the users. But in cloud storage, the storage servers may not be fully trustworthy. How to verify the integrity of cloud data with lower overhead for users has become an increasingly pressing problem. Many remote data integrity protection methods have been proposed, but these methods authenticate cloud files one by one when verifying multiple files, so the computation and communication overhead remain high. Aiming at this problem, a hierarchical remote data possession checking (H-RDPC) method is proposed, which can provide efficient and secure remote data integrity protection and can support dynamic data operations. This paper gives the algorithm description, security analysis, and false-negative rate analysis of H-RDPC. The security analysis and experimental performance evaluation show that the proposed H-RDPC is efficient and reliable in verifying massive cloud files, and it has a 32–81% improvement in performance compared with RDPC.
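
    The hierarchical idea can be sketched with nested hashes: block digests roll up into a per-file digest and then into one batch digest, so a set of files is screened with a single comparison before drilling down. This is an illustration of the principle, not the paper's actual protocol:

        import hashlib

        def file_digest(blocks):
            h = hashlib.sha256()
            for block in blocks:
                h.update(hashlib.sha256(block).digest())  # per-block digest
            return h.digest()

        def batch_digest(file_digests):
            h = hashlib.sha256()
            for d in sorted(file_digests):  # order-independent roll-up
                h.update(d)
            return h.digest()

        files = {"a.dat": [b"block1", b"block2"], "b.dat": [b"block3"]}
        stored = batch_digest(file_digest(b) for b in files.values())
        # One top-level comparison screens the whole batch on later checks:
        assert batch_digest(file_digest(b) for b in files.values()) == stored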

  4. Overview of the contents of ENDF/B-VI [Evaluated Nuclear Data File

    International Nuclear Information System (INIS)

    Dunford, C.L.; Pearlstein, S.

    1989-01-01

    The sixth release of the Evaluated Nuclear Data File (ENDF/B-VI) is now being prepared for general distribution. This data file serves as the primary source of nuclear data for nuclear applications in the United States and Canada and in many other countries of the world. The data library is maintained and distributed by the National Nuclear Data Center at Brookhaven National Laboratory from evaluations provided by members of the Cross Section Evaluation Working Group (CSEWG). Unlike its predecessor, ENDF/B-V, this file will be available to all requesters without restrictions. Compared to ENDF/B-V, released more than 11 yr ago, the ENDF/B-VI data library contains significant improvements for both fission and fusion reactor design. Future work will continue with limited staffing and foreign cooperation to provide the data needed for future nuclear applications

  5. Air and Soil Data Files from Sumas Study

    Data.gov (United States)

    U.S. Environmental Protection Agency — The data are summarized in the manuscript, but users may wish to apply them from these files. This dataset is associated with the following publication: Wroble, J.,...

  6. An information retrieval system for research file data

    Science.gov (United States)

    Joan E. Lengel; John W. Koning

    1978-01-01

    Research file data have been successfully retrieved at the Forest Products Laboratory through a high-speed cross-referencing system involving the computer program FAMULUS as modified by the Madison Academic Computing Center at the University of Wisconsin. The method of data input, transfer to computer storage, system utilization, and effectiveness are discussed.

  7. Guidebook for the ENDF/B-V nuclear data files

    International Nuclear Information System (INIS)

    Magurno, B.A.; Kinsey, R.R.; Scheffel, F.M.

    1982-07-01

    The National Nuclear Data Center (NNDC) has provided the Electric Power Research Institute (EPRI) with a convenient reference/guidebook to nuclear data derived from the Evaluated Nuclear Data File, Version V (ENDF/B-V). The main part of the edition consists of plots of the major cross sections for each of the General Purpose Nuclides. These plots are reconstructed from the resonance parameters and background cross sections given in the library. The resolution and display format have been selected to show general trends in the data. Following the section for individual nuclides, an intercomparison of cross section ratios (plots of η and α values) is provided for the major fissile nuclei. The final section contains a table of nuclide properties derived from the data files. Included are thermal (2200 m/s and Maxwellian-averaged) cross sections, g-factors, infinitely dilute resonance integrals and fission spectrum averages

  8. An UI Layout Files Analyzer for Test Data Generation

    Directory of Open Access Journals (Sweden)

    Paul POCATILU

    2014-01-01

    Full Text Available Prevention actions (trainings, audits) and inspections (tests, validations, code reviews) are crucial factors in achieving a high quality level for any software application, simply because low investment in this area leads to significant expenses for the corrective actions needed to fix defects. Mobile application testing involves the use of various tools and scenarios; an important process is test data generation. This paper proposes a test data generator (TDG) system for mobile applications using several sources of test data, and it focuses on the UI layout files analyzer module. The proposed architecture aims to reduce time-to-market for mobile applications. The focus is on test data generators based on the source code, user interface layout files (using markup languages like XML or XAML) and application specifications. In order to assure a common interface for test data generators, an XML- or JSON-based language called Data Specification Language (DSL) is proposed.
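
    A minimal sketch of a layout analyzer of this kind, assuming an Android-style XML layout; the tags, attributes, and JSON output are illustrative and not the DSL proposed in the paper:

        import json
        import xml.etree.ElementTree as ET

        layout = """
        <LinearLayout>
          <EditText id="username" maxLength="16"/>
          <EditText id="age" inputType="number"/>
        </LinearLayout>
        """

        def generate_test_data(xml_text):
            # Derive boundary-value test cases from each input widget.
            cases = []
            for node in ET.fromstring(xml_text).iter("EditText"):
                field = node.get("id")
                if node.get("inputType") == "number":
                    cases.append({"field": field, "values": ["0", "-1", "999999"]})
                else:
                    n = int(node.get("maxLength", "255"))
                    cases.append({"field": field,
                                  "values": ["", "a" * n, "a" * (n + 1)]})
            return json.dumps(cases, indent=2)

        print(generate_test_data(layout))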

  9. Data management in large-scale collaborative toxicity studies: how to file experimental data for automated statistical analysis.

    Science.gov (United States)

    Stanzel, Sven; Weimer, Marc; Kopp-Schneider, Annette

    2013-06-01

    High-throughput screening approaches are carried out for the toxicity assessment of large numbers of chemical compounds. In such large-scale in vitro toxicity studies, several hundred or thousand concentration-response experiments are conducted. The automated evaluation of concentration-response data using statistical analysis scripts saves time and yields more consistent results than data analysis performed with menu-driven statistical software. Automated statistical analysis requires that concentration-response data be available in a standardised data format across all compounds. To obtain consistent data formats, a standardised data management workflow must be established, including guidelines for data storage, data handling and data extraction. In this paper two procedures for data management within large-scale toxicological projects are proposed. Both procedures are based on Microsoft Excel files as the researcher's primary data format and use a computer programme to automate the handling of data files. The first procedure assumes that data collection has not yet started, whereas the second procedure can be used when data files already exist. Successful implementation of the two approaches in the European project ACuteTox is illustrated. Copyright © 2012 Elsevier Ltd. All rights reserved.

  10. An attempt for revision of JNDC FP decay data file

    International Nuclear Information System (INIS)

    Katakura, Jun-ichi; Matsumoto, Zyun-itiro; Akiyama, Masatsugu; Yoshida, Tadashi; Nakasima, Ryozo.

    1984-06-01

    Some improvements of the JNDC FP Decay Data File are attempted by reexamining the decay schemes of several nuclides, since slight discrepancies are seen in a detailed comparison of decay powers. As a result, it is found that the average beta and gamma energies should be modified for 88Rb and 143La among the nuclides reexamined in the present study. The JNDC file with 88Rb and 143La modified gives better agreement with experiments in most cases than the original JNDC file for cooling times longer than a few thousand seconds. However, the discrepancy for cooling times from a few hundred to about 1500 seconds still remains. (author)

  11. National RCRA Hazardous Waste Biennial Report Data Files

    Science.gov (United States)

    The United States Environmental Protection Agency (EPA), in cooperation with the States, biennially collects information regarding the generation, management, and final disposition of hazardous wastes regulated under the Resource Conservation and Recovery Act of 1976 (RCRA), as amended. Collection, validation and verification of the Biennial Report (BR) data are the responsibility of the RCRA authorized states and EPA regions. EPA does not modify the data reported by the states or regions. Any questions regarding the information reported for a RCRA handler should be directed to the state agency or region responsible for the BR data collection. BR data are collected every other year (odd-numbered years) and submitted in the following year. The BR data are used to support regulatory activities and provide basic statistics and trends of hazardous waste generation and management. BR data are available to the public through three mechanisms. 1. The RCRAInfo website includes data collected from 2001 to present-day (https://rcrainfo.epa.gov/rcrainfoweb/action/main-menu/view). Users of the RCRAInfo website can run queries and output reports for different data collection years at this site. All BR data collected from 2001 to present-day are stored in RCRAInfo, and are accessible through this website. 2. An FTP site allows users to access BR data files collected from 1999 - present day (ftp://ftp.epa.gov/rcrainfodata/). Zip files are available for download directly from this

  12. Establishment of data base files of thermodynamic data developed by OECD/NEA. Pt. 2. Thermodynamic data of Tc, U, Np, Pu and Am with auxiliary species

    International Nuclear Information System (INIS)

    Yoshida, Yasushi; Shibata, Masahiro

    2005-03-01

    Thermodynamic data bases for compounds and complexes of actinides and fission products with auxiliary species, specialized to the modeling requirements for safety assessment of radioactive waste disposal systems, are being developed by the NEA TDB project of the OECD/NEA. In this project, the relevant data bases for compounds and complexes of U, Am, Tc, Np and Pu with auxiliary species were updated and published in 2003. JNC established data base files usable by geochemical calculation codes from these updated data. The procedure for establishing the files and their contents are described in this report. The data base files were prepared in the formats of the major geochemical codes PHREEQE, PHREEQC, EQ3/6 and Geochemist's Workbench. In addition, the thermodynamic data base files already published by JNC were modified; this procedure and the revised data bases are shown in the appendix of this report. (author)

  13. Report on the achievements in the Sunshine Project in fiscal 1986. Surveys on coal type selection and surveys on coal types (Data file); 1986 nendo tanshu sentei chosa tanshu chosa seika hokokusho. Data file

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1987-03-01

    This data file concerns coal types for liquefaction and accompanies the report on the achievements in the surveys on coal type selection and on coal types (JN0040843). Items of information collected and submitted to date for liquefaction tests were filed, such as coal occurrence and production, various analyses, and test values. The file consists of two parts: a test-sample information file on coal occurrence, production and coal mines, and an analysis-and-test file accommodating the results of the different analyses and tests. However, the test-sample information files (1) through (6) have not been organized with respect to such items as test samples and sample collection, geography, geology, ground beds, coal beds, coal mines, development and transportation. The analysis-and-test file contains (7) industrial analyses, (8) element analyses, (9) ash composition, (10) solubility of ash, (11) structure analysis, (12) liquefaction characteristics (standard version), (13) analysis of liquefaction-produced gas, (14) distillation characteristics of liquefaction-produced oil, (15) liquefaction characteristics (simplified version), (16) analysis of liquefaction-produced gas (simplified version), and (17) distillation characteristics of liquefaction-produced oil (simplified version). However, the information related to liquefaction tests using a tubing reactor in (15) through (17) has not been organized. (NEDO)

  14. ENDF/B-IV fission-product files: summary of major nuclide data

    International Nuclear Information System (INIS)

    England, T.R.; Schenter, R.E.

    1975-09-01

    The major fission-product parameters (σ_th, RI, τ_1/2, Ē_β, Ē_γ, Ē_α, decay and (n,γ) branching, Q, and AWR) abstracted from the ENDF/B-IV files for 824 nuclides are summarized. These data are most often requested by users concerned with reactor design, reactor safety, dose, and other sundry studies. The few known file errors are corrected to date. Tabular data are listed by increasing mass number

  15. Nuclide identifier and grat data reader application for ORIGEN output file

    International Nuclear Information System (INIS)

    Arif Isnaeni

    2011-01-01

    ORIGEN is a one-group depletion and radioactive decay computer code developed at the Oak Ridge National Laboratory (ORNL). ORIGEN uses a one-group neutronics calculation to provide various nuclear material characteristics (the buildup, decay and processing of radioactive materials). The ORIGEN output is a text-based file containing only numbers, in the form of grouped nuclide, nuclide identifier and grat data. This application was created to facilitate the collection of nuclide identifier and grat data; it also has functions to acquire mass number data and to calculate the mass (grams) of each nuclide. Output from this application can be used as input for computer codes for neutronic calculations such as MCNP. (author)
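
    A minimal sketch of pulling nuclide identifiers and gram quantities out of ORIGEN-style text output; the sample lines and layout are illustrative, as real ORIGEN tables differ in detail:

        import re

        sample = """\
        U 235   1.234E+03
        U 238   9.876E+05
        PU239   4.321E+02
        """

        row = re.compile(r"^\s*([A-Z]{1,2})\s*(\d{1,3})\s+([\d.]+E[+-]\d+)")

        def parse_grams(text):
            # Collect {nuclide identifier: grams} from matching table rows.
            inventory = {}
            for line in text.splitlines():
                m = row.match(line)
                if m:
                    element, mass_number, grams = m.groups()
                    inventory[f"{element}-{mass_number}"] = float(grams)
            return inventory

        print(parse_grams(sample))
        # {'U-235': 1234.0, 'U-238': 987600.0, 'PU-239': 432.1}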

  16. JENDL special purpose file

    International Nuclear Information System (INIS)

    Nakagawa, Tsuneo

    1995-01-01

    In JENDL-3.2, the data on all reactions having significant cross sections over the neutron energy range from 0.01 meV to 20 MeV are given for 340 nuclides. Its range of application extends widely, covering neutron engineering and shielding for fast reactors, thermal neutron reactors and nuclear fusion reactors: it is a general purpose data file. In contrast, a file in which only the data required for a specific application field are collected is called a special purpose file. The file for dosimetry is a typical special purpose file. The Nuclear Data Center of the Japan Atomic Energy Research Institute is preparing ten kinds of JENDL special purpose files. The files, for which the working groups of the Sigma Committee are responsible, are listed. As to the format of the files, the ENDF format is used, as for JENDL-3.2. The dosimetry file, activation cross section file, (α,n) reaction data file, fusion file, actinoid file, high energy data file, photonuclear data file, PKA/KERMA file, gas production cross section file and decay data file are described with regard to their contents, the course of their development and their verification. The dosimetry file and the gas production cross section file have already been completed; for the others, the expected time of completion is shown. When these files are completed, they will be opened to the public. (K.I.)

  17. Old Age, Survivors, and Disability Insurance (OASDI) Public-Use Microdata File, 2001 Data

    Data.gov (United States)

    Social Security Administration — The OASDI Public-Use Microdata File contains an extract of data fields from SSA's Master Beneficiary Record file and consists of a 1 percent random, representative...

  18. Managing Variant Calling Files the Big Data Way: Using HDFS and Apache Parquet

    NARCIS (Netherlands)

    Boufea, Aikaterini; Finkers, H.J.; Kaauwen, van M.P.W.; Kramer, M.R.; Athanasiadis, I.N.

    2017-01-01

    Big Data has been seen as a remedy for the efficient management of the ever-increasing genomic data. In this paper, we investigate the use of Apache Spark to store and process Variant Calling Files (VCF) on a Hadoop cluster. We demonstrate Tomatula, a software tool for converting VCF files to Apache Parquet

  19. Storage of sparse files using parallel log-structured file system

    Science.gov (United States)

    Bent, John M.; Faibish, Sorin; Grider, Gary; Torres, Aaron

    2017-11-07

    A sparse file is stored without holes by storing a data portion of the sparse file using a parallel log-structured file system; and generating an index entry for the data portion, the index entry comprising a logical offset, physical offset and length of the data portion. The holes can be restored to the sparse file upon a reading of the sparse file. The data portion can be stored at a logical end of the sparse file. Additional storage efficiency can optionally be achieved by (i) detecting a write pattern for a plurality of the data portions and generating a single patterned index entry for the plurality of the patterned data portions; and/or (ii) storing the patterned index entries for a plurality of the sparse files in a single directory, wherein each entry in the single directory comprises an identifier of a corresponding sparse file.
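
    The index-entry idea can be sketched directly: each written portion records (logical offset, physical offset, length) while the data goes to the logical end of a log, and holes come back as zeros on read. This illustrates the principle rather than the PLFS implementation:

        log = bytearray()   # log-structured store: data portions only, no holes
        index = []          # entries of (logical_offset, physical_offset, length)

        def write_portion(logical_offset, data):
            index.append((logical_offset, len(log), len(data)))
            log.extend(data)            # data lands at the logical end of the log

        def read_back(total_length):
            buf = bytearray(total_length)   # holes default to zero bytes
            for logical, physical, length in index:
                buf[logical:logical + length] = log[physical:physical + length]
            return bytes(buf)

        write_portion(0, b"header")
        write_portion(4096, b"tail")        # leaves a hole between the portions
        data = read_back(4100)
        assert data[:6] == b"header" and data[4096:] == b"tail"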

  20. ChemEngine: harvesting 3D chemical structures of supplementary data from PDF files.

    Science.gov (United States)

    Karthikeyan, Muthukumarasamy; Vyas, Renu

    2016-01-01

    Digital access to chemical journals has resulted in a vast array of molecular information that is now available in supplementary material files in PDF format. However, extracting this molecular information, generally from a PDF document, is a daunting task. Here we present an approach to harvest 3D molecular data from the supporting information of scientific research articles that are normally available from publishers' resources. In order to demonstrate the feasibility of extracting truly computable molecules from PDF file formats in a fast and efficient manner, we have developed a Java-based application, namely ChemEngine. This program recognizes textual patterns from the supplementary data and generates standard molecular structure data (bond matrix, atomic coordinates) that can be subjected to a multitude of computational processes automatically. The methodology has been demonstrated via several case studies on different formats of coordinate data stored in supplementary information files, wherein ChemEngine selectively harvested the atomic coordinates and interpreted them as molecules with high accuracy. The reusability of the extracted molecular coordinate data was demonstrated by computing Single Point Energies that were in close agreement with the original computed data provided with the articles. It is envisaged that the methodology will enable large-scale conversion of molecular information from supplementary files available in the PDF format into a collection of ready-to-compute molecular data, to create an automated workflow for advanced computational processes. Software along with source codes and instructions is available at https://sourceforge.net/projects/chemengine/files/?source=navbar.
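
    The text-mining step can be sketched with a regular expression over coordinate-like lines (ChemEngine itself is Java; this Python sketch and its sample text are illustrative):

        import re

        text = """
        Optimized geometry (Angstrom):
        C    0.000000   0.000000   0.000000
        O    1.208000   0.000000   0.000000
        H   -0.540000   0.940000   0.000000
        """

        line = re.compile(
            r"^\s*([A-Z][a-z]?)\s+(-?\d+\.\d+)\s+(-?\d+\.\d+)\s+(-?\d+\.\d+)\s*$",
            re.MULTILINE)

        atoms = [(m.group(1), float(m.group(2)), float(m.group(3)), float(m.group(4)))
                 for m in line.finditer(text)]

        # Emit a standard XYZ block, ready for downstream computation.
        print(len(atoms))
        print("harvested from supplementary text")
        for symbol, x, y, z in atoms:
            print(f"{symbol:2s} {x:12.6f} {y:12.6f} {z:12.6f}")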

  1. Data Qualification Report For: Thermodynamic Data File, DATA0.YMP.R0 For Geochemical Code, EQ3/6?

    International Nuclear Information System (INIS)

    P.L. Cloke

    2000-09-01

    The objective of this work is to evaluate the adequacy of chemical thermodynamic data provided by Lawrence Livermore National Laboratory (LLNL) as Data0.ymp.R0A in response to an input request submitted under AP-3.14Q. This request specified that the chemical thermodynamic data available in the file Data0.com.R2 be updated, improved, and augmented for use in the geochemical modeling used in Process Model Reports (PMRs) for Engineered Barrier Systems, Waste Form, Waste Package, Unsaturated Zone, and Near Field Environment, as well as for Performance Assessment. The data are qualified in the temperature range 0 to 100°C. Several Data Tracking Numbers (DTNs), associated with Analysis/Model Reports (AMRs) addressing various aspects of the post-closure chemical behavior of the waste package and the Engineered Barrier System that rely on EQ3/6 outputs to which these data are used as input, are classified as Principal Factor affecting. This qualification activity was accomplished in accordance with AP-SIII.2Q using the Technical Assessment method. A development plan, TDP-EBS-MD-000044, was prepared in accordance with AP-2.13Q and approved by the Responsible Manager. In addition, a Process Control Evaluation was performed in accordance with AP-SV.1Q. The qualification method, selected in accordance with AP-SIII.2Q, was Technical Assessment. The rationale for this approach is that the data in file Data0.com.R2 are considered handbook data and therefore do not themselves require qualification; only the changes to Data0.com.R2 required qualification. A new file has been produced which contains the database Data0.ymp.R0, which is recommended for qualification as a result of this action. Data0.ymp.R0 will supersede Data0.com.R2 for all Yucca Mountain Project (YMP) activities

  2. Processing of evaluated neutron data files in ENDF format on personal computers

    International Nuclear Information System (INIS)

    Vertes, P.

    1991-11-01

    A computer code package - FDMXPC - has been developed for processing evaluated data files in ENDF format. The earlier version of this package is supplemented with modules performing calculations using Reich-Moore and Adler-Adler resonance parameters. The processing of evaluated neutron data files by personal computers requires special programming considerations outlined in this report. The scope of the FDMXPC program system is demonstrated by means of numerical examples. (author). 5 refs, 4 figs, 4 tabs

  3. The structure and extent of data files for research management and planning

    International Nuclear Information System (INIS)

    Jankowski, L.

    1981-01-01

    The paper is concerned with the structure and extent of the data files which are necessary for the efficient planning and management of a research institute. An analysis is made of the interrelations between decision-making and the amount, content and structure of information, including the consequences to be drawn for planning an in-house data bank for an institute. Special emphasis is placed on the type and structure of the data files, the interrelations of the individual data with each other, the frequency of access, and the necessity of involving the individual agencies and services providing research guidance. (author)

  4. EQPT, a data file preprocessor for the EQ3/6 software package: User's guide and related documentation (Version 7.0)

    International Nuclear Information System (INIS)

    Daveler, S.A.; Wolery, T.J.

    1992-01-01

    EQPT is a data file preprocessor for the EQ3/6 software package. EQ3/6 currently contains five primary data files, called data0 files. These files comprise alternative data sets containing both standard-state and activity-coefficient-related data. Three (com, sup, and nea) support the use of the Davies or B-dot equations for the activity coefficients; the other two (hmw and pit) support the use of Pitzer's (1973, 1975) equations. The temperature range of the thermodynamic data on these data files varies from 25°C only to 0-300°C. The principal modeling codes in EQ3/6, EQ3NR and EQ6, do not read a data0 file, however. Instead, these codes read an unformatted equivalent called a data1 file. EQPT writes a data1 file, using the corresponding data0 file as input. In processing a data0 file, EQPT checks the data for common errors, such as unbalanced reactions. It also conducts two kinds of data transformation. Interpolating polynomials are fit to data which are input on temperature grids; the coefficients of these polynomials are then written on the data1 file in place of the original temperature grids. A second transformation pertains only to data files tied to Pitzer's equations. The commonly reported observable Pitzer coefficient parameters are mapped into a set of primitive parameters by means of a set of conventional relations. These primitive-form parameters are then written onto the data1 file in place of their observable counterparts; usage of the primitive-form parameters makes it easier to evaluate Pitzer's equations in EQ3NR and EQ6. EQPT and the other codes in the EQ3/6 package are written in FORTRAN 77 and have been developed to run under the UNIX operating system on computers ranging from workstations to supercomputers
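
    The first transformation can be sketched with a polynomial fit over a temperature grid; the grid and log K values below are invented for illustration:

        import numpy as np

        # Property values tabulated on a temperature grid (degrees C)...
        temps_c = np.array([0.0, 25.0, 60.0, 100.0, 150.0, 200.0, 250.0, 300.0])
        log_k = np.array([14.94, 13.99, 13.02, 12.26, 11.64, 11.28, 11.19, 11.41])

        # ...are replaced by the coefficients of an interpolating polynomial,
        # which is what the modeling codes then evaluate instead of the grid.
        coeffs = np.polyfit(temps_c, log_k, deg=4)

        def log_k_at(t_c):
            return np.polyval(coeffs, t_c)

        print(round(float(log_k_at(37.0)), 3))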

  5. The self-describing data sets file protocol and Toolkit

    International Nuclear Information System (INIS)

    Borland, M.; Emery, L.

    1995-01-01

    The Self-Describing Data Sets (SDDS) file protocol continues to be used extensively in commissioning the Advanced Photon Source (APS) accelerator complex. The SDDS protocol has proved useful primarily due to the existence of the SDDS Toolkit, a growing set of about 60 generic command-line programs that read and/or write SDDS files. The SDDS Toolkit is also used extensively for simulation postprocessing, giving physicists a single environment for experiment and simulation. With the Toolkit, new SDDS data are displayed and subjected to complex processing without developing new programs. Data from EPICS, lab instruments, simulation, and other sources are easily integrated. Because the SDDS tools are command-line based, data processing scripts are readily written using the user's preferred shell language. Since users work within a UNIX shell rather than an application-specific shell or GUI, they may add SDDS-compliant programs and scripts to their personal toolkits without restriction or complication. The SDDS Toolkit has been run under UNIX on SunOS 4, HP-UX, and Linux. Application of SDDS to accelerator operation is being pursued using Tcl/Tk to provide a GUI

  6. Parallel checksumming of data chunks of a shared data object using a log-structured file system

    Science.gov (United States)

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2016-09-06

    Checksum values are generated and used to verify the data integrity. A client executing in a parallel computing system stores a data chunk to a shared data object on a storage node in the parallel computing system. The client determines a checksum value for the data chunk; and provides the checksum value with the data chunk to the storage node that stores the shared object. The data chunk can be stored on the storage node with the corresponding checksum value as part of the shared object. The storage node may be part of a Parallel Log-Structured File System (PLFS), and the client may comprise, for example, a Log-Structured File System client on a compute node or burst buffer. The checksum value can be evaluated when the data chunk is read from the storage node to verify the integrity of the data that is read.
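
    A minimal sketch of the client-side checksumming described above, with the storage node faked by a dictionary and CRC32 standing in for whatever checksum the real system uses:

        import zlib

        storage_node = {}   # object_id -> list of (chunk, checksum)

        def client_write(object_id, chunk: bytes):
            checksum = zlib.crc32(chunk)          # computed by the client
            storage_node.setdefault(object_id, []).append((chunk, checksum))

        def client_read(object_id):
            for chunk, stored in storage_node[object_id]:
                if zlib.crc32(chunk) != stored:   # verified when data is read
                    raise IOError("data corruption detected")
                yield chunk

        client_write("shared.dat", b"chunk from compute node 0")
        client_write("shared.dat", b"chunk from compute node 1")
        print(b"|".join(client_read("shared.dat")))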

  7. The version control service for ATLAS data acquisition configuration filesDAQ ; configuration ; OKS ; XML

    CERN Document Server

    Soloviev, Igor; The ATLAS collaboration

    2012-01-01

    To configure a data-taking session, the ATLAS systems and detectors store more than 160 MB of data-acquisition-related configuration information in OKS XML files. The total number of files exceeds 1300 and they are updated by many system experts. In the past, from time to time after such updates, we experienced problems caused by XML syntax errors or an inconsistent state of the files from the point of view of the overall ATLAS configuration. It was not always possible to know who made the modification causing a problem or how to go back to a previous version of the modified file. A few years ago a special service addressing these issues was implemented and deployed on ATLAS Point-1. It excludes direct write access to XML files stored in a central database repository. Instead, for an update the files are copied into a user repository, validated after modification and committed using a version control system. The system's callback updates the central repository. Also, it keeps track of all modifications providi...

  8. Coexistence of graph-oriented and relational data file organisations in a data bank system

    International Nuclear Information System (INIS)

    Engel, K.D.

    1980-01-01

    It is shown that hierarchical and relational data bank structures can coexist in a common data bank system within computer networks. This coexistence model, first established by NIJSSEN, regards the graph-theoretic CODASYL approach and Codd's relational model as graph-oriented and table-oriented data file organisations, respectively, presented to the user through a common logical structure of the data bank. (WB)

  9. Toolsets for Airborne Data (TAD): Improving Machine Readability for ICARTT Data Files

    Science.gov (United States)

    Northup, E. A.; Early, A. B.; Beach, A. L., III; Kusterer, J.; Quam, B.; Wang, D.; Chen, G.

    2015-12-01

    NASA has conducted airborne tropospheric chemistry studies for about three decades. These field campaigns have generated a great wealth of observations, including a wide range of trace gas and aerosol properties. The ASDC Toolsets for Airborne Data (TAD) is designed to meet the user community's needs for manipulating aircraft data for scientific research on climate change and air quality. TAD makes use of aircraft data stored in the International Consortium for Atmospheric Research on Transport and Transformation (ICARTT) file format. ICARTT has been the NASA standard since 2010 and is widely used by NOAA, NSF, and international partners (DLR, FAAM). Its level of acceptance is due in part to its being generally self-describing for researchers, i.e., it provides the data descriptions necessary for proper research use. Despite this, there are a number of issues with the current ICARTT format, especially concerning machine readability. To overcome these issues, the TAD team has developed an "idealized" file format. This format is ASCII and is sufficiently machine readable to sustain the TAD system; however, it is not fully compatible with the current ICARTT format. The process of mapping ICARTT metadata to the idealized format, the format specifics, and the actual conversion process will be discussed. The goal of this presentation is to demonstrate an example of how to improve the machine readability of ASCII data format protocols.

  10. Contents of GPS Data Files

    Energy Technology Data Exchange (ETDEWEB)

    Sullivan, John P. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Carver, Matthew Robert [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Norman, Benjamin [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-12-09

    There are no very detailed descriptions of most of these instruments in the literature – we will attempt to fix that problem in the future. The BDD instruments are described in [1]. One of the dosimeter instruments on CXD boxes is described in [2]. These documents (or web links to them) and a few others are in this directory tree. The cross calibration of the CXD electron data with RBSP is described in [3]. Each row in the data file contains the data from one time bin from a CXD or BDD instrument along with a variety of parameters derived from the data. Time steps are commandable but 4 minutes is a typical setting. These instruments are on many (but not all) GPS satellites which are currently in operation. The data come from either BDD instruments on GPS Block IIR satellites (SVN41 and 48), or else CXD-IIR instruments on GPS Block IIR and IIR-M satellites (SVN53-61) or CXD-IIF instruments on GPS block IIF satellites (SVN62-73). The CXD-IIR instruments on block IIR and IIR(M) satellites use the same design.

  11. Creating Customized Data Files in E-Prime: A Practical Tutorial

    Directory of Open Access Journals (Sweden)

    İyilikci, Osman

    2018-02-01

    There are software packages that simplify experiment generation by taking advantage of a graphical user interface. Typically, such packages create output files that are not in an appropriate format to be analyzed directly with a statistical package. For this reason, researchers must complete several time-consuming steps and use additional software to prepare data for statistical analysis. The present paper suggests a particular E-Basic technique that saves time in the data analysis process and is applicable to a wide range of experiments that measure reaction time and response accuracy. The technique demonstrated here makes it possible to create a customized, ready-to-analyze data file automatically while running an experiment designed in the E-Prime environment.

  12. pTSC: Data file editing for the Tokamak Simulation Code

    International Nuclear Information System (INIS)

    Meiss, J.D.

    1987-09-01

    The code pTSC is an editor for the data files needed to run the Princeton Tokamak Simulation Code (TSC). pTSC utilizes the Macintosh interface to create a graphical environment for entering the data. As most of the data needed to run TSC consist of conductor positions, the graphical interface is especially appropriate.

  13. Requirements for an evaluated nuclear data file for accelerator-based transmutation

    International Nuclear Information System (INIS)

    Koning, A.J.

    1993-06-01

    The importance of intermediate-energy nuclear data files as part of a global calculation scheme for accelerator-based transmutation of radioactive waste systems (for instance with an accelerator-driven subcritical reactor) is discussed. A proposal for three intermediate-energy data libraries for incident neutrons and protons is presented: - a data library from 0 to about 100 MeV (first priority), - a reference data library from 20 to 1500 MeV, - an activation/transmutation library from 0 to about 100 MeV. Furthermore, the proposed ENDF-6 structure of each library is given. The data needs for accelerator-based transmutation are translated in terms of the aforementioned intermediate-energy data libraries. This could be a starting point for an "International Evaluated Nuclear Data File for Transmutation". This library could also be of interest for other applications in science and technology. Finally, some conclusions and recommendations concerning future evaluation work are given. (orig.)

  14. SLIB77, Source Library Data Compression and File Maintenance System

    International Nuclear Information System (INIS)

    Lunsford, A.

    1989-01-01

    Description of program or function: SLIB77 is a source librarian program designed to maintain FORTRAN source code in a compressed form on magnetic disk. The program was prepared to meet program maintenance requirements for ongoing program development and continual improvement of very large programs involving many programmers from a number of different organizations. SLIB77 automatically maintains in one file the source of the current program as well as all previous modifications. Although written originally for FORTRAN programs, SLIB77 is suitable for use with data files, text files, operating systems, and other programming languages, such as Ada, C and COBOL. It can handle libraries with records of up to 160 characters. Records are grouped into DECKS and assigned deck names by the user. SLIB77 assigns a number to each record in each DECK. Records can be deleted or restored singly or as a group within each deck. Modification records are grouped and assigned modification identification names by the user. The program assigns numbers to each new record within the deck. The program has two modes of execution, BATCH and EDIT. The BATCH mode is controlled by an input file and is used to make changes permanent and create new library files. The EDIT mode is controlled by interactive terminal input, and a built-in line editor is used for modification of single decks. Transferring a library from one computer system to another is accomplished using a Portable Library File created by SLIB77 in a BATCH run.

  15. Data Qualification Report For: Thermodynamic Data File, DATA0.YMP.R0 For Geochemical Code, EQ3/6 

    Energy Technology Data Exchange (ETDEWEB)

    P.L. Cloke

    2001-10-16

    The objective of this work is to evaluate the adequacy of chemical thermodynamic data provided by Lawrence Livermore National Laboratory (LLNL) as Data0.ymp.R0A in response to an input request submitted under AP-3.14Q. This request specified that the chemical thermodynamic data available in the file Data0.com.R2 be updated, improved, and augmented for use in geochemical modeling in Process Model Reports (PMRs) for Engineered Barrier Systems, Waste Form, Waste Package, Unsaturated Zone, and Near Field Environment, as well as for Performance Assessment. The data are qualified in the temperature range 0 to 100°C. Several Data Tracking Numbers (DTNs), associated with Analysis/Model Reports (AMRs) addressing various aspects of the post-closure chemical behavior of the waste package and the Engineered Barrier System, rely on EQ3/6 outputs to which these data serve as input and are affected by them as a Principal Factor. This qualification activity was accomplished in accordance with AP-SIII.2Q using the Technical Assessment method. A development plan, TDP-EBS-MD-000044, was prepared in accordance with AP-2.13Q and approved by the Responsible Manager. In addition, a Process Control Evaluation was performed in accordance with AP-SV.1Q. The rationale for the Technical Assessment approach is that the data in file Data0.com.R2 are considered Handbook data and therefore do not themselves require qualification; only changes to Data0.com.R2 required qualification. A new file has been produced which contains the database Data0.ymp.R0, which is recommended for qualification as a result of this action. Data0.ymp.R0 will supersede Data0.com.R2 for all Yucca Mountain Project (YMP) activities.

  16. Database Objects vs Files: Evaluation of alternative strategies for managing large remote sensing data

    Science.gov (United States)

    Baru, Chaitan; Nandigam, Viswanath; Krishnan, Sriram

    2010-05-01

    Increasingly, the geoscience user community expects modern IT capabilities to be available in service of their research and education activities, including the ability to easily access and process large remote sensing datasets via online portals such as GEON (www.geongrid.org) and OpenTopography (opentopography.org). However, serving such datasets via online data portals presents a number of challenges. In this talk, we will evaluate the pros and cons of alternative storage strategies for management and processing of such datasets using binary large object implementations (BLOBs) in database systems versus implementation in Hadoop files using the Hadoop Distributed File System (HDFS). The storage and I/O requirements for providing online access to large datasets dictate the need for declustering data across multiple disks, for capacity as well as bandwidth and response time performance. This requires partitioning larger files into a set of smaller files, and is accompanied by the concomitant requirement for managing large numbers of files. Storing these sub-files as BLOBs in a shared-nothing database implemented across a cluster provides the advantage that all the distributed storage management is done by the DBMS. Furthermore, subsetting and processing routines can be implemented as user-defined functions (UDFs) on these BLOBs and would run in parallel across the set of nodes in the cluster. On the other hand, there are both storage overheads and constraints, and software licensing dependencies created by such an implementation. Another approach is to store the files in an external filesystem with pointers to them from within database tables. The filesystem may be a regular UNIX filesystem, a parallel filesystem, or HDFS. In the HDFS case, HDFS would provide the file management capability, while the subsetting and processing routines would be implemented as Hadoop programs using the MapReduce model. Hadoop and its related software libraries are freely available
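
    The two strategies can be contrasted in miniature, as below. SQLite stands in for a shared-nothing DBMS cluster purely for brevity, and the table and path names are invented.

        # Toy contrast of the two storage strategies discussed above; every
        # name is illustrative, and a production system would use a
        # distributed DBMS or HDFS rather than SQLite.
        import sqlite3

        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE tiles_blob (id INTEGER PRIMARY KEY, data BLOB)")
        db.execute("CREATE TABLE tiles_ptr  (id INTEGER PRIMARY KEY, path TEXT)")

        chunk = b"\x00" * 1024  # one sub-file of a partitioned dataset
        db.execute("INSERT INTO tiles_blob VALUES (?, ?)", (1, chunk))
        db.execute("INSERT INTO tiles_ptr  VALUES (?, ?)",
                   (1, "/hdfs/tiles/000001.bin"))  # filesystem manages the bytes
        db.commit()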

  17. NASIS data base management system: IBM 360 TSS implementation. Volume 6: NASIS message file

    Science.gov (United States)

    1973-01-01

    The message file for the NASA Aerospace Safety Information System (NASIS) is discussed. The message file contains all the message and term explanations for the system. The data contained in the file can be broken down into three separate sections: (1) global terms, (2) local terms, and (3) system messages. The various terms are defined and their use within the system is explained.

  18. Development of Indian cross section data files for Th-232 and U-233 and integral validation studies

    International Nuclear Information System (INIS)

    Ganesan, S.

    1988-01-01

    This paper presents an overview of the tasks performed towards the development of Indian cross section data files for Th-232 and U-233. Discrepancies in various neutron induced reaction cross sections in the available evaluated data files have been obtained by processing the basic data into multigroup form and intercomparison of the latter. Interesting results of integral validation studies for capture, fission and (n,2n) cross sections for Th-232, obtained by analyses of selected integral measurements, are presented. In the resonance range, energy regions where significant differences in the calculated self-shielding factors for Th-232 occur have been identified by a comparison of self-shielded multigroup cross sections derived from two recent evaluated data files, viz., ENDF/B-V (Rev.2) and JENDL-2, for several dilutions and temperatures. For U-233, the three different basic data files ENDF/B-IV, JENDL-2 and ENDL-84 were intercompared. Interesting observations on the predictive capability of these files for the criticality of the spherical metal U-233 system are given. The current status of the Indian data file is presented. (author) 62 ref

  19. FEDGROUP - A program system for producing group constants from evaluated nuclear data of files disseminated by IAEA

    International Nuclear Information System (INIS)

    Vertes, P.

    1976-06-01

    A program system for calculating group constants from several evaluated nuclear data files has been developed. These files are distributed by the Nuclear Data Section of the IAEA. Our program system, FEDGROUP, has certain advantages over well-known similar codes: 1. it requires only a medium-sized computer (roughly 20,000 words of memory or more), 2. it is easily adaptable to any type of computer, 3. it is flexible with respect to the input evaluated nuclear data file and the output group constant file. Nowadays, FEDGROUP calculates practically all types of group constants needed for reactor physics calculations by using the most frequent representations of evaluated data. (author)

  20. Establishment of data base files of thermodynamic data developed by OECD/NEA. Part 4. Addition of thermodynamic data for iron, tin and thorium

    International Nuclear Information System (INIS)

    Yoshida, Yasushi; Kitamura, Akira

    2014-12-01

    Thermodynamic data for compounds and complexes of elements with auxiliary species, specialized to the modeling requirements of safety assessments for radioactive waste disposal systems, have been developed by the Thermochemical Data Base (TDB) project of the Nuclear Energy Agency in the Organization for Economic Co-operation and Development (OECD/NEA). Recently, thermodynamic data for aqueous complexes, solids and gases of thorium, tin and iron (Part 1) were published in 2008, 2012 and 2013, respectively. These thermodynamic data have been selected on the basis of the NEA’s guidelines, which describe peer review and data selection, extrapolation to zero ionic strength, assignment of uncertainty, and temperature correction; the selected data are therefore considered to be reliable. The reliability of the selected thermodynamic data of the TDB developed by the Japan Atomic Energy Agency (JAEA-TDB) has been confirmed by comparison with the data selected by the NEA. For this comparison, text files of the selected data for some geochemical calculation programs are required. In the present report, database files for the NEA’s TDB, with the addition of the selected data for iron, tin and thorium to the previous files, have been established for use with PHREEQC, Geochemist’s Workbench and EQ3/6. In addition, as an example of quality confirmation, the dominant species in the iron TDB were compared in an Eh-pH diagram and the differences between JAEA-TDB and NEA-TDB were shown. The database files established in the present study will be available at the Website of the thermodynamic, sorption and diffusion database in JAEA (http://migrationdb.jaea.go.jp/). A CD-ROM is attached as an appendix. (J.P.N.)

  1. The Evaluated Nuclear Structure Data File (ENSDF). Its philosophy, content and uses

    International Nuclear Information System (INIS)

    Burrows, T.W.

    1989-04-01

    The Evaluated Nuclear Structure Data File (ENSDF) is maintained by the National Nuclear Data Center (NNDC) on behalf of the international Nuclear Structure and Decay Data Network sponsored by the International Atomic Energy Agency, Vienna. For A≥44 the file is used to produce the Nuclear Data Sheets. Data for A=5 to 44 are extracted from the evaluations published in Nuclear Physics. The contents of ENSDF are briefly described, as are the philosophy and methodology of ENSDF evaluations. Also discussed are the services available at various nuclear data centers and the on-line services of the NNDC. Application codes developed for use with ENSDF are described, with the program RADLST used as an example. The interaction of ENSDF evaluations with other evaluations is also discussed. (author). 23 refs, 3 tabs

  2. Views of CMS Event Data Objects, Files, Collections, Virtual Data Products

    CERN Document Server

    Holtman, Koen

    2001-01-01

    The CMS data grid system will store many types of data maintained by the CMS collaboration. An important type of data is the event data, which is defined in this note as all data that directly represents simulated, raw, or reconstructed CMS physics events. Many views on this data will exist simultaneously. To a CMS physics code implementer this data will appear as C++ objects, to a tape robot operator the data will appear as files. This note identifies different views that can exist, describes each of them, and interrelates them by placing them into a vertical stack. This particular stack integrates several existing architectural structures, and is therefore a plausible basis for further prototyping and architectural work. This document is intended as a contribution to, and as common (terminological) reference material for, the CMS architectural efforts and for the Grid projects PPDG, GriPhyN, and the EU DataGrid.

  3. Student Achievement Study, 1970-1974. The IEA Six-Subject Data Bank [machine-readable data file].

    Science.gov (United States)

    International Association for the Evaluation of Educational Achievement, Stockholm (Sweden).

    The "Student Achievement Study" machine-readable data files (MRDF) (also referred to as the "IEA Six-Subject Survey") are the result of an international data collection effort during 1970-1974 by 21 designated National Centers, which had agreed to cooperate. The countries involved were: Australia, Belgium, Chile, England-Wales,…

  4. PC Graphic file programing

    International Nuclear Information System (INIS)

    Yang, Jin Seok

    1993-04-01

    This book gives a description of basic graphic knowledge and an understanding and realization of graphic file formats. The first part deals with graphic data, the storage and compression of graphic data, and programming topics such as assembly, the stack, compiling and linking of programs, and practice and debugging. The next part covers graphic file formats such as the MacPaint file, GEM/IMG file, PCX file, GIF file, and TIFF file, consideration of hardware like mono screen drivers and high-speed color screen drivers, the basic concept of dithering, and conversion between formats.

  5. Status and evaluation methods of JENDL fusion file and JENDL PKA/KERMA file

    International Nuclear Information System (INIS)

    Chiba, S.; Fukahori, T.; Shibata, K.; Yu Baosheng; Kosako, K.

    1997-01-01

    The status of evaluated nuclear data in the JENDL fusion file and PKA/KERMA file is presented. The JENDL fusion file was prepared in order to improve the quality of the JENDL-3.1 data, especially on the double-differential cross sections (DDXs) of secondary neutrons and gamma-ray production cross sections, and to provide DDXs of secondary charged particles (p, d, t, 3He and α-particle) for the calculation of PKA and KERMA factors. The JENDL fusion file contains evaluated data of 26 elements ranging from Li to Bi. The data in the JENDL fusion file reproduce the measured data on neutron and charged-particle DDXs and also on gamma-ray production cross sections. Recoil spectra in the PKA/KERMA file were calculated from the secondary neutron and charged-particle DDXs contained in the fusion file with two-body reaction kinematics. The data in the JENDL fusion file and PKA/KERMA file were compiled in ENDF-6 format with an MF=6 option to store the DDX data. (orig.)

  6. Design and creation of a direct access nuclear data file

    International Nuclear Information System (INIS)

    Charpentier, P.

    1981-06-01

    General considerations on the structure of instructions and files are reviewed. The design, organization and mode of use of the different files (instruction file, index files, inverted files) and the automatic analysis and inquiry programs are examined.

  7. Software Library for Bruker TopSpin NMR Data Files

    Energy Technology Data Exchange (ETDEWEB)

    2016-10-14

    A software library for parsing and manipulating frequency-domain data files that have been processed using the Bruker TopSpin NMR software package. In the context of NMR, the term "processed" indicates that the end-user of the Bruker TopSpin NMR software package has (a) Fourier transformed the raw, time-domain data (the Free Induction Decay) into the frequency-domain and (b) has extracted the list of NMR peaks.

  8. Using NJOY to Create MCNP ACE Files and Visualize Nuclear Data

    Energy Technology Data Exchange (ETDEWEB)

    Kahler, Albert Comstock [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-10-14

    We provide lecture materials that describe the input requirements to create various MCNP ACE files (Fast, Thermal, Dosimetry, Photo-nuclear and Photo-atomic) with the NJOY Nuclear Data Processing code system. Input instructions to visualize nuclear data with NJOY are also provided.

  9. Simulation of Thermal Neutron Transport Processes Directly from the Evaluated Nuclear Data Files

    Science.gov (United States)

    Androsenko, P. A.; Malkov, M. R.

    The main idea of the method proposed in this paper is to extract the required information for Monte-Carlo calculations directly from nuclear data files. The method being developed allows direct use of the data obtained from the libraries and seems to be the most accurate technique. Direct simulation of neutron scattering in the thermal energy range using File 7 of the ENDF-6 format within the code system BRAND has been achieved. The simulation algorithms have been verified using the χ² criterion.

  10. Migrant Student Record Transfer System (MSRTS) [machine-readable data file].

    Science.gov (United States)

    Arkansas State Dept. of Education, Little Rock. General Education Div.

    The Migrant Student Record Transfer System (MSRTS) machine-readable data file (MRDF) is a collection of education and health data on more than 750,000 migrant children in grades K-12 in the United States (except Hawaii), the District of Columbia, and the outlying territories of Puerto Rico and the Mariana and Marshall Islands. The active file…

  11. The design and analysis of salmonid tagging studies in the Columbia Basin. Volume 10: Instructional guide to using program CaptHist to create SURPH files for survival analysis using PTAGIS data files

    International Nuclear Information System (INIS)

    Westhagen, P.; Skalski, J.

    1997-12-01

    The SURPH program is a valuable tool for estimating survivals and capture probabilities of fish outmigrations on the Snake and Columbia Rivers. Using special data files, SURPH computes reach-to-reach statistics for any release group passing a system of detection sites. Because the data must be recorded for individual fish, PIT tag data is best suited for use as input. However, PIT tag data as available from PTAGIS comes in a form that is not ready for use as SURPH input. SURPH requires a capture history for each fish. A capture history consists of a series of fields, one for each detection site, that has a code for whether the fish was detected and returned to the river, detected and removed, or not detected. For the PTAGIS data to be usable by SURPH, it must be pre-processed. The data must be condensed down to one line per fish, with the relevant detection information from the PTAGIS file represented compactly on each line. In addition, the PTAGIS data file coil information must be passed through a series of logic algorithms to determine whether or not a fish was returned to the river after detection. Program CaptHist was developed to properly pre-process the PTAGIS data files for input to program SURPH. This utility takes PTAGIS data files as input and creates a SURPH data file as well as other output, including travel time records, detection date records, and a data error file. CaptHist allows a user to download PTAGIS files and easily process the data for use with SURPH
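
    The core condensation step, one capture-history line per fish, might be sketched as follows. The site list, detection codes and record layout are invented for illustration; the real CaptHist additionally applies the coil-based logic described above to decide whether a detected fish was returned to the river.

        # Hypothetical condensation of detection records into one
        # capture-history line per fish; sites, codes and layout are assumed.
        from collections import defaultdict

        SITES = ["LGR", "LGS", "LMN"]  # hypothetical detection sites, in river order

        def capture_histories(detections):
            """detections: iterable of (tag_id, site, removed) tuples."""
            seen = defaultdict(dict)
            for tag, site, removed in detections:
                seen[tag][site] = "2" if removed else "1"  # 2 = detected and removed
            for tag in sorted(seen):
                yield tag + " " + " ".join(seen[tag].get(s, "0") for s in SITES)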

  12. NASIS data base management system - IBM 360/370 OS MVT implementation. 6: NASIS message file

    Science.gov (United States)

    1973-01-01

    The message file for the NASA Aerospace Safety Information System (NASIS) is discussed. The message file contains all the message and term explanations for the system. The data contained in the file can be broken down into three separate sections: (1) global terms, (2) local terms, and (3) system messages. The various terms are defined and their use within the system is explained.

  13. High School and Beyond Transcripts Survey (1982). Data File User's Manual. Contractor Report.

    Science.gov (United States)

    Jones, Calvin; And Others

    This data file user's manual documents the procedures used to collect and process high school transcripts for a large sample of the younger cohort (1980 sophomores) in the High School and Beyond survey. The manual provides the user with the technical assistance needed to use the computer file and also discusses the following: (1) sample design for…

  14. ARM Data File Standards Version: 1.0

    Energy Technology Data Exchange (ETDEWEB)

    Kehoe, Kenneth [University of Oklahoma; Beus, Sherman [Pacific Northwest National Laboratory; Cialella, Alice [Brookhaven National Laboratory; Collis, Scott [Argonne National Laboratory; Ermold, Brian [Pacific Northwest National Laboratory; Perez, Robin [State University of New York, Albany; Shamblin, Stefanie [Oak Ridge National Laboratory; Sivaraman, Chitra [Pacific Northwest National Laboratory; Jensen, Mike [Brookhaven National Laboratory; McCord, Raymond [Oak Ridge National Laboratory; McCoy, Renata [Sandia National Laboratories; Moore, Sean [Alliant Techsystems, Inc.; Monroe, Justin [University of Oklahoma; Perkins, Brad [Los Alamos National Laboratory; Shippert, Tim [Pacific Northwest National Laboratory

    2014-04-01

    The Atmospheric Radiation Measurement (ARM) Climate Research Facility performs routine in situ and remote-sensing observations to provide a detailed and accurate description of the Earth atmosphere in diverse climate regimes. The result is a diverse set of data containing observational and derived data, currently accumulating at a rate of 30 TB of data and 150,000 different files per month (http://www.archive.arm.gov/stats/storage2.html). Continuing the current processing while scaling to even larger sizes is extremely important to the ARM Facility and requires consistent metadata and data standards. The standards described in this document will enable development of automated analysis and discovery tools for the ever-growing volumes of data. They will also enable consistent analysis of the multiyear data, allow for development of automated monitoring and data health status tools, and facilitate development of future capabilities for delivering data on demand that can be tailored explicitly to user needs. This analysis ability will only be possible if the data follow a minimum set of standards. This document proposes a hierarchy that includes required and recommended standards.

  15. EQPT, a data file preprocessor for the EQ3/6 software package: User's guide and related documentation (Version 7.0); Part 2

    Energy Technology Data Exchange (ETDEWEB)

    Daveler, S.A.; Wolery, T.J.

    1992-12-17

    EQPT is a data file preprocessor for the EQ3/6 software package. EQ3/6 currently contains five primary data files, called data0 files. These files comprise alternative data sets. These data files contain both standard-state and activity-coefficient-related data. Three (com, sup, and nea) support the use of the Davies or B-dot equations for the activity coefficients; the other two (hmw and pit) support the use of Pitzer's (1973, 1975) equations. The temperature range of the thermodynamic data on these data files varies from 25°C only to 0-300°C. The principal modeling codes in EQ3/6, EQ3NR and EQ6, do not read a data0 file, however. Instead, these codes read an unformatted equivalent called a data1 file. EQPT writes a data1 file, using the corresponding data0 file as input. In processing a data0 file, EQPT checks the data for common errors, such as unbalanced reactions. It also conducts two kinds of data transformation. Interpolating polynomials are fit to data which are input on temperature grids. The coefficients of these polynomials are then written on the data1 file in place of the original temperature grids. A second transformation pertains only to data files tied to Pitzer's equations. The commonly reported observable Pitzer coefficient parameters are mapped into a set of primitive parameters by means of a set of conventional relations. These primitive-form parameters are then written onto the data1 file in place of their observable counterparts. Usage of the primitive-form parameters makes it easier to evaluate Pitzer's equations in EQ3NR and EQ6. EQPT and the other codes in the EQ3/6 package are written in FORTRAN 77 and have been developed to run under the UNIX operating system on computers ranging from workstations to supercomputers.
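
    The first transformation amounts to replacing a temperature grid by the coefficients of a fitted interpolating polynomial. A small numpy sketch with invented grid values follows; the polynomial degree and fitting details of EQPT itself may differ.

        # Illustrative replacement of a temperature grid by polynomial
        # coefficients, in the spirit of EQPT; the numbers are made up.
        import numpy as np

        temps = np.array([0.0, 25.0, 60.0, 100.0, 150.0, 200.0, 250.0, 300.0])
        log_k = np.array([14.94, 13.99, 13.02, 12.26, 11.64, 11.28, 11.17, 11.30])

        coeffs = np.polyfit(temps, log_k, deg=4)  # stored in place of the grid
        log_k_at_37 = np.polyval(coeffs, 37.0)    # codes later evaluate the polynomial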

  16. Procedures manual for the Evaluated Nuclear Structure Data File

    International Nuclear Information System (INIS)

    Bhat, M.R.

    1987-10-01

    This manual is a collection of various notes, memoranda and instructions on procedures for the evaluation of data in the Evaluated Nuclear Structure Data File (ENSDF). They were distributed at different times over the past few years to the evaluators of nuclear structure data, and some of them were not readily available. Hence, they have been collected in this manual for ease of reference by the evaluators of the international Nuclear Structure and Decay Data (NSDD) network who contribute mass-chains to the ENSDF. Some new articles were written specifically for this manual and others are revisions of earlier versions

  17. JENDL Dosimetry File

    International Nuclear Information System (INIS)

    Nakazawa, Masaharu; Iguchi, Tetsuo; Kobayashi, Katsuhei; Iwasaki, Shin; Sakurai, Kiyoshi; Ikeda, Yujiro; Nakagawa, Tsuneo.

    1992-03-01

    The JENDL Dosimetry File based on JENDL-3 was compiled, and integral tests of the cross section data were performed, by the Dosimetry Integral Test Working Group of the Japanese Nuclear Data Committee. Data stored in the JENDL Dosimetry File are the cross sections and their covariance data for 61 reactions. The cross sections were mainly taken from JENDL-3 and the covariances from IRDF-85. For some reactions, data were adopted from other evaluated data files. The data are given in the neutron energy region below 20 MeV in both point-wise and group-wise files in the ENDF-5 format. In order to confirm the reliability of the data, several integral tests were carried out: comparison with the data in IRDF-85 and with average cross sections measured in fission neutron fields, fast reactor spectra, DT neutron fields and Li(d,n) neutron fields. As a result, it has been found that the JENDL Dosimetry File gives better results than IRDF-85, but there are some problems to be improved in the future. The contents of the JENDL Dosimetry File and the results of the integral tests are described in this report. All of the dosimetry cross sections are shown in graphical form. (author) 76 refs

  18. JENDL Dosimetry File

    Energy Technology Data Exchange (ETDEWEB)

    Nakazawa, Masaharu; Iguchi, Tetsuo [Tokyo Univ. (Japan). Faculty of Engineering; Kobayashi, Katsuhei [Kyoto Univ., Kumatori, Osaka (Japan). Research Reactor Inst.; Iwasaki, Shin [Tohoku Univ., Sendai (Japan). Faculty of Engineering; Sakurai, Kiyoshi; Ikeda, Yujiro; Nakagawa, Tsuneo [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1992-03-15

    The JENDL Dosimetry File based on JENDL-3 was compiled, and integral tests of the cross section data were performed, by the Dosimetry Integral Test Working Group of the Japanese Nuclear Data Committee. Data stored in the JENDL Dosimetry File are the cross sections and their covariance data for 61 reactions. The cross sections were mainly taken from JENDL-3 and the covariances from IRDF-85. For some reactions, data were adopted from other evaluated data files. The data are given in the neutron energy region below 20 MeV in both point-wise and group-wise files in the ENDF-5 format. In order to confirm the reliability of the data, several integral tests were carried out: comparison with the data in IRDF-85 and with average cross sections measured in fission neutron fields, fast reactor spectra, DT neutron fields and Li(d,n) neutron fields. As a result, it has been found that the JENDL Dosimetry File gives better results than IRDF-85, but there are some problems to be improved in the future. The contents of the JENDL Dosimetry File and the results of the integral tests are described in this report. All of the dosimetry cross sections are shown in graphical form.

  19. Long term file migration. Part I: file reference patterns

    International Nuclear Information System (INIS)

    Smith, A.J.

    1978-08-01

    In most large computer installations, files are moved between on-line disk and mass storage (tape, integrated mass storage device) either automatically by the system or specifically at the direction of the user. This is the first of two papers which study the selection of algorithms for the automatic migration of files between mass storage and disk. The use of the text editor data sets at the Stanford Linear Accelerator Center (SLAC) computer installation is examined through the analysis of thirteen months of file reference data. Most files are used very few times. Of those that are used sufficiently frequently that their reference patterns may be examined, about a third show declining rates of reference during their lifetime; of the remainder, very few (about 5%) show correlated interreference intervals, and interreference intervals (in days) appear to be more skewed than would occur with the Bernoulli process. Thus, about two-thirds of all sufficiently active files appear to be referenced as a renewal process with a skewed interreference distribution. A large number of other file reference statistics (file lifetimes, interreference distributions, moments, means, number of uses/file, file sizes, file rates of reference, etc.) are computed and presented. The results are applied in the following paper to the development and comparative evaluation of file migration algorithms. 17 figures, 13 tables

  20. Interoperability format translation and transformation between IFC architectural design file and simulation file formats

    Science.gov (United States)

    Chao, Tian-Jy; Kim, Younghun

    2015-01-06

    Automatically translating a building architecture file format (Industry Foundation Class) to a simulation file, in one aspect, may extract data and metadata used by a target simulation tool from a building architecture file. Interoperability data objects may be created and the extracted data is stored in the interoperability data objects. A model translation procedure may be prepared to identify a mapping from a Model View Definition to a translation and transformation function. The extracted data may be transformed using the data stored in the interoperability data objects, an input Model View Definition template, and the translation and transformation function to convert the extracted data to correct geometric values needed for a target simulation file format used by the target simulation tool. The simulation file in the target simulation file format may be generated.

  1. Total cross-sections assessment of neutron reaction with stainless steel SUS-310 contained in various nuclear data files

    International Nuclear Information System (INIS)

    Suwoto

    2002-01-01

    Integral testing of the neutron cross-sections for stainless steel SUS-310 contained in various nuclear data files has been performed. The shielding benchmark calculations for stainless steel SUS-310 were analysed through calculation of the ORNL Broomstick Experiment performed by R.E. Maerker at ORNL, USA (1). Assessments with the JENDL-3.1, JENDL-3.2, ENDF/B-IV and ENDF/B-VI nuclear data files and with data from GEEL have also been carried out. The overall calculation results for SUS-310 show good agreement with the experimental data, although underestimated results appear below 3 MeV for all nuclear data files. These underestimation tendencies are clearly caused by the presence of iron, which makes up more than half of the stainless steel compound. The total neutron cross-sections of iron contained in the various nuclear data files are relatively lower in that energy range

  2. Solving data-at-rest for the storage and retrieval of files in ad hoc networks

    Science.gov (United States)

    Knobler, Ron; Scheffel, Peter; Williams, Jonathan; Gaj, Kris; Kaps, Jens-Peter

    2013-05-01

    Based on current trends for both military and commercial applications, the use of mobile devices (e.g. smartphones and tablets) is greatly increasing. Several military applications consist of secure peer-to-peer file sharing without a centralized authority. For these military applications, if one or more of these mobile devices are lost or compromised, sensitive files can be compromised by adversaries, since COTS devices and operating systems are used. Complete system files cannot be stored on a device, since after compromising a device, an adversary can attack the data at rest, and eventually obtain the original file. Also, after a device is compromised, the existing peer-to-peer system devices must still be able to access all system files. McQ has teamed with the Cryptographic Engineering Research Group at George Mason University to develop a custom distributed file sharing system to provide a complete solution to the data-at-rest problem for resource constrained embedded systems and mobile devices. This innovative approach scales very well to a large number of network devices, without a single point of failure. We have implemented the approach on representative mobile devices as well as developed an extensive system simulator to benchmark expected system performance based on detailed modeling of the network/radio characteristics, CONOPS, and secure distributed file system functionality. The simulator is highly customizable for the purpose of determining expected system performance for other network topologies and CONOPS.
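
    The abstract does not disclose the scheme itself, so the sketch below shows a generic 2-of-2 XOR secret splitting instead: it illustrates the basic idea that no single captured device holds enough to reconstruct a file, whereas the McQ/GMU design is more elaborate and scales to many network nodes.

        # Generic 2-of-2 XOR secret splitting -- NOT the McQ/GMU scheme, just
        # an illustration of keeping no complete file on any single device.
        import os

        def split(data: bytes):
            share1 = os.urandom(len(data))
            share2 = bytes(a ^ b for a, b in zip(data, share1))
            return share1, share2  # store each share on a different device

        def combine(share1: bytes, share2: bytes) -> bytes:
            return bytes(a ^ b for a, b in zip(share1, share2))

        assert combine(*split(b"sensitive file")) == b"sensitive file"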

  3. Taming Log Files from Game/Simulation-Based Assessments: Data Models and Data Analysis Tools. Research Report. ETS RR-16-10

    Science.gov (United States)

    Hao, Jiangang; Smith, Lawrence; Mislevy, Robert; von Davier, Alina; Bauer, Malcolm

    2016-01-01

    Extracting information efficiently from game/simulation-based assessment (G/SBA) logs requires two things: a well-structured log file and a set of analysis methods. In this report, we propose a generic data model specified as an extensible markup language (XML) schema for the log files of G/SBAs. We also propose a set of analysis methods for…

  4. Cut-and-Paste file-systems : integrating simulators and file systems

    NARCIS (Netherlands)

    Bosch, H.G.P.; Mullender, Sape J.

    1996-01-01

    We have implemented an integrated and configurable file system called the PFS and a trace-driven file-system simulator called Patsy. Patsy is used for off-line analysis of file-system algorithms, PFS is used for on-line file-system data storage. Algorithms are first analyzed in Patsy and when we are

  5. Efficient analysis and extraction of MS/MS result data from Mascot™ result files

    Directory of Open Access Journals (Sweden)

    Sickmann Albert

    2005-12-01

    Background: Mascot™ is a commonly used protein identification program for MS as well as for tandem MS data. When analyzing huge shotgun proteomics datasets with Mascot™'s native tools, the limits of computing resources are easily reached. Up to now no application has been available as open source that is capable of converting the full content of Mascot™ result files from the original MIME format into a database-compatible tabular format, allowing direct import into database management systems and efficient handling of huge datasets analyzed by Mascot™. Results: A program called mres2x is presented, which reads Mascot™ result files, analyzes them and extracts either selected or all information in order to store it in a single file or multiple files in formats which are easier to handle downstream of Mascot™. It generates different output formats. The output of mres2x in tab format is especially designed for direct high-performance import into relational database management systems using native tools of these systems. Having the data available in database management systems allows complex queries and extensive analysis. In addition, the original peak lists can be extracted in DTA format suitable for protein identification using the Sequest™ program, and the Mascot™ files can be split, preserving the original data format. During conversion, several consistency checks are performed. mres2x is designed to provide high-throughput processing combined with the possibility to be driven by other computer programs. The source code including supplement material and precompiled binaries is available via http://www.protein-ms.de and http://sourceforge.net/projects/protms/. Conclusion: The database upload allows regrouping of the MS/MS results using a database management system and complex analysis queries using SQL without the need to run new Mascot™ searches when changing grouping parameters.
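
    Mascot™ result files are MIME multipart documents, which is why conversion is needed before database import. The sketch below pulls the named sections out of such a file with Python's standard email parser; it is an illustration of the container format, not the mres2x implementation, and the name parameter on each part is an assumption about the file at hand.

        # Sketch of splitting a MIME-formatted result file into its named
        # sections; not the mres2x implementation.
        from email import message_from_binary_file

        def mime_sections(path: str) -> dict:
            with open(path, "rb") as fh:
                msg = message_from_binary_file(fh)
            return {part.get_param("name", header="content-type"):
                    part.get_payload(decode=True)
                    for part in msg.walk() if not part.is_multipart()}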

  6. ARM Data File Standards Version 1.2

    Energy Technology Data Exchange (ETDEWEB)

    Palanisamy, Giri [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-05-01

    The U.S. Department of Energy (DOE)’s Atmospheric Radiation Measurement (ARM) Climate Research Facility performs routine in situ and remote-sensing observations to provide a detailed and accurate description of the Earth atmosphere in diverse climate regimes. The result is a huge archive of diverse data sets containing observational and derived data, currently accumulating at a rate of 30 terabytes (TB) of data and 150,000 different files per month (http://www.archive.arm.gov/stats/). Continuing the current processing while scaling this to even larger sizes is extremely important to the ARM Facility and requires consistent metadata and data standards. The standards described in this document will enable development of automated analysis and discovery tools for the ever-growing data volumes. They will enable consistent analysis of the multiyear data, allow for development of automated monitoring and data health status tools, and allow for future capabilities of delivering data on demand, tailored explicitly to user needs. This analysis ability will only be possible if the data follow a minimum set of standards. This document proposes a hierarchy of required and recommended standards.

  7. National Household Education Surveys of 2003. Data File User's Manual, Volume II: Parent and Family Involvement in Education Survey. NCES 2004-102

    Science.gov (United States)

    Hagedorn, Mary; Montaquila, Jill; Vaden-Kiernan, Nancy; Kim, Kwang; Roth, Shelley Brock; Chapman, Christopher

    2004-01-01

    This manual provides documentation and guidance for users of the public-use data file for PFI-NHES: 2003. This volume contains a description of the content and organization of the data file, including useful information regarding questionnaire items and the various derived variables found on the file. Appended are the public-use data file layout,…

  8. Visual system of recovering and combination of information for ENDF (Evaluated Nuclear Data File) format libraries

    International Nuclear Information System (INIS)

    Ferreira, Claudia A.S. Velloso; Corcuera, Raquel A. Paviotti

    1997-01-01

    This report presents a data information retrieval and merger system for ENDF (Evaluated Nuclear Data File) format libraries, which can be run on personal computers under the Windows™ environment. The input is the name of an ENDF/B library, which can be chosen in a proper window. The system has a display function which allows the user to visualize the reaction data of a specific nuclide and to produce a printed copy of these data. The system allows the user to retrieve and/or combine evaluated data to create a single file of data in ENDF format from a number of different files, each of which is in the ENDF format. The user can also create a mini-library from an ENDF/B library. This interactive and easy-to-handle system is a useful tool for Nuclear Data Centers and is also of interest to nuclear and reactor physics researchers. (author)

  9. Transfer of numeric ASCII data files between Apple and IBM personal computers.

    Science.gov (United States)

    Allan, R W; Bermejo, R; Houben, D

    1986-01-01

    Listings for programs designed to transfer numeric ASCII data files between Apple and IBM personal computers are provided with accompanying descriptions of how the software operates. Details of the hardware used are also given. The programs may be easily adapted for transferring data between other microcomputers.

  10. A Metadata-Rich File System

    Energy Technology Data Exchange (ETDEWEB)

    Ames, S; Gokhale, M B; Maltzahn, C

    2009-01-07

    Despite continual improvements in the performance and reliability of large scale file systems, the management of file system metadata has changed little in the past decade. The mismatch between the size and complexity of large scale data stores and their ability to organize and query their metadata has led to a de facto standard in which raw data is stored in traditional file systems, while related, application-specific metadata is stored in relational databases. This separation of data and metadata requires considerable effort to maintain consistency and can result in complex, slow, and inflexible system operation. To address these problems, we have developed the Quasar File System (QFS), a metadata-rich file system in which files, metadata, and file relationships are all first class objects. In contrast to hierarchical file systems and relational databases, QFS defines a graph data model composed of files and their relationships. QFS includes Quasar, an XPATH-extended query language for searching the file system. Results from our QFS prototype show the effectiveness of this approach. Compared to the de facto standard, the QFS prototype shows superior ingest performance and comparable query performance on user metadata-intensive operations and superior performance on normal file metadata operations.

  11. Odysseus/DFS: Integration of DBMS and Distributed File System for Transaction Processing of Big Data

    OpenAIRE

    Kim, Jun-Sung; Whang, Kyu-Young; Kwon, Hyuk-Yoon; Song, Il-Yeol

    2014-01-01

    The relational DBMS (RDBMS) has been widely used since it supports various high-level functionalities such as SQL, schemas, indexes, and transactions that do not exist in the O/S file system. But the recent advent of big data technology facilitates the development of new systems that sacrifice DBMS functionality in order to efficiently manage large-scale data. These so-called NoSQL systems use a distributed file system, which supports scalability and reliability. They support scalability of the...

  12. Evaluated nuclear data file libraries use in nuclear-physical calculations

    International Nuclear Information System (INIS)

    Gritsaj, O.O.; Kalach, N.Yi.; Kal'chenko, O.Yi.; Kolotij, V.V.; Vlasov, M.F.

    1994-01-01

    The necessity of using updated nuclear data is established for modeling calculations of neutron experiments, for the preparation of suitable data for reactor calculations, and for other applications in which account of the detailed energy structure of cross sections is required. The scheme of a system to coordinate the work of collecting and preparing evaluated nuclear data on an international scale is presented. The main updated and recommended nuclear data libraries and associated computer programs are reviewed. Total neutron cross sections for 28 energy groups, calculated on the basis of the evaluated nuclear data file for the natural mixture of iron isotopes (BROND-2, 1991), have been compared with BNAB-78 data. (author). 7 refs., 1 tab., 4 figs

  13. Provider of Services File

    Data.gov (United States)

    U.S. Department of Health & Human Services — The POS file consists of two data files, one for CLIA labs and one for 18 other provider types. The file names are CLIA and OTHER. If downloading the file, note it...

  14. A data compression algorithm for nuclear spectrum files

    International Nuclear Information System (INIS)

    Mika, J.F.; Martin, L.J.; Johnston, P.N.

    1990-01-01

    The total space occupied by computer files of spectra generated in nuclear spectroscopy systems can lead to problems of storage and transmission time. An algorithm is presented which significantly reduces the space required to store nuclear spectra, without loss of any information content. Testing indicates that spectrum files can be routinely compressed by a factor of 5. (orig.)
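
    The abstract does not spell out the algorithm, so the sketch below shows one generic lossless approach in the same spirit: delta-encoding adjacent channel counts and storing the differences as variable-length integers, which exploits the smoothness of typical spectra without discarding information.

        # Lossless delta + varint coding of spectrum channel counts -- an
        # illustration of the general idea, not the published algorithm.
        def zigzag(n: int) -> int:
            return (n << 1) ^ (n >> 63)  # map signed deltas (64-bit range) to unsigned

        def compress(counts):
            out = bytearray()
            prev = 0
            for c in counts:
                v = zigzag(c - prev)
                prev = c
                while v >= 0x80:         # 7 bits per byte, high bit = continuation
                    out.append((v & 0x7F) | 0x80)
                    v >>= 7
                out.append(v)
            return bytes(out)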

  15. File Type Identification of File Fragments using Longest Common Subsequence (LCS)

    Science.gov (United States)

    Rahmat, R. F.; Nicholas, F.; Purnamawati, S.; Sitompul, O. S.

    2017-01-01

    A computer forensic analyst is a person in charge of investigation and evidence tracking. In certain cases, a file needed as digital evidence has been deleted. It is difficult to reconstruct such a file, because it often loses its header and cannot be identified while being restored. Therefore, a method is required for identifying the file type of file fragments. In this research, we propose a Longest Common Subsequence approach that consists of three steps, namely training, testing and validation, to identify the file type from file fragments. From all testing results we can conclude that our proposed method works well and achieves 92.91% accuracy in identifying the file type of file fragments for three data types.
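
    For reference, the similarity measure named in the title can be computed with the standard dynamic program below; how LCS scores are mapped to a file-type decision in the training/testing/validation pipeline is described only at a high level in the abstract.

        # Standard dynamic-programming LCS length with a rolling array; the
        # classifier built on top of it is not detailed in the abstract.
        def lcs_len(a: bytes, b: bytes) -> int:
            dp = [0] * (len(b) + 1)
            for x in a:
                prev = 0                      # dp[i-1][j-1]
                for j, y in enumerate(b, 1):
                    cur = dp[j]               # dp[i-1][j]
                    dp[j] = prev + 1 if x == y else max(dp[j], dp[j - 1])
                    prev = cur
            return dp[len(b)]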

  16. ENDF-6 File 30: Data covariances obtained from parameter covariances and sensitivities

    International Nuclear Information System (INIS)

    Muir, D.W.

    1989-01-01

    File 30 is provided as a means of describing the covariances of tabulated cross sections, multiplicities, and energy-angle distributions that result from propagating the covariances of a set of underlying parameters (for example, the input parameters of a nuclear-model code), using an evaluator-supplied set of parameter covariances and sensitivities. Whenever nuclear data are evaluated primarily through the application of nuclear models, the covariances of the resulting data can be described very adequately, and compactly, by specifying the covariance matrix for the underlying nuclear parameters, along with a set of sensitivity coefficients giving the rate of change of each nuclear datum of interest with respect to each of the model parameters. Although motivated primarily by these applications of nuclear theory, use of File 30 is not restricted to any one particular evaluation methodology. It can be used to describe data covariances of any origin, so long as they can be formally separated into a set of parameters with specified covariances and a set of data sensitivities
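
    In matrix form this is the familiar sandwich rule: if P is the covariance matrix of the parameters and S the matrix of sensitivities (rows indexed by data, columns by parameters), the covariance of the data is C = S P S^T. A small numpy illustration with invented numbers:

        # Sandwich-rule propagation C = S P S^T of parameter covariances to
        # data covariances; the numbers are invented for illustration.
        import numpy as np

        P = np.array([[0.04, 0.01],   # covariance of two model parameters
                      [0.01, 0.09]])
        S = np.array([[1.2, 0.3],     # d(datum_i)/d(parameter_k)
                      [0.5, 2.0],
                      [0.0, 1.1]])
        C = S @ P @ S.T               # covariance of the three tabulated data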

  17. Protecting your files on the DFS file system

    CERN Multimedia

    Computer Security Team

    2011-01-01

    The Windows Distributed File System (DFS) hosts user directories for all NICE users plus many more data. Files can be accessed from anywhere, via a dedicated web portal (http://cern.ch/dfs). Due to the ease of access to DFS within CERN, it is of utmost importance to properly protect access to sensitive data. As the use of DFS access control mechanisms is not obvious to all users, passwords, certificates or sensitive files might get exposed. At least this happened in the past to the Andrew File System (AFS, the Linux equivalent to DFS) and led to bad publicity due to a journalist accessing supposedly "private" AFS folders (SonntagsZeitung 2009/11/08). This problem does not only affect the individual user but also has a bad impact on CERN's reputation when it comes to IT security. Therefore, all departments and LHC experiments agreed recently to apply more stringent protections to all DFS user folders. The goal of this data protection policy is to assist users in pro...

  18. First Use of LHC Run 3 Conditions Database Infrastructure for Auxiliary Data Files in ATLAS

    CERN Document Server

    Aperio Bella, Ludovica; The ATLAS collaboration

    2016-01-01

    Processing of the large amount of data produced by the ATLAS experiment requires fast and reliable access to what we call Auxiliary Data Files (ADF). These files, produced by Combined Performance, Trigger and Physics groups, contain conditions, calibrations, and other derived data used by the ATLAS software. In ATLAS this data has, thus far for historical reasons, been collected and accessed outside the ATLAS Conditions Database infrastructure and related software. This, along with the fact that ADF data is effectively read by the software as binary objects, makes this class of data ideal for testing the proposed Run 3 Conditions data infrastructure now in development. This paper will describe this implementation as well as the lessons learned in exploring and refining the new infrastructure with the potential for deployment during Run 2.

  19. First use of LHC Run 3 Conditions Database infrastructure for auxiliary data files in ATLAS

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00081940; The ATLAS collaboration; Barberis, Dario; Gallas, Elizabeth; Rybkin, Grigori; Rinaldi, Lorenzo; Aperio Bella, Ludovica; Buttinger, William

    2017-01-01

    Processing of the large amount of data produced by the ATLAS experiment requires fast and reliable access to what we call Auxiliary Data Files (ADF). These files, produced by Combined Performance, Trigger and Physics groups, contain conditions, calibrations, and other derived data used by the ATLAS software. In ATLAS this data has, thus far for historical reasons, been collected and accessed outside the ATLAS Conditions Database infrastructure and related software. For this reason, along with the fact that ADF are effectively read by the software as binary objects, this class of data appears ideal for testing the proposed Run 3 conditions data infrastructure now in development. This paper describes this implementation as well as the lessons learned in exploring and refining the new infrastructure with the potential for deployment during Run 2.

  20. INDXENDF: A PC code for indexing nuclear data files in ENDF-6 format

    International Nuclear Information System (INIS)

    Silva, O.O. de; Corcuera, R.P.; Ferreira, P.A.; Moraes Cunha, M. de.

    1992-01-01

    The PC code INDXENDF, which creates visual or printed indexes of nuclear data files in ENDF-6 format, is available from the IAEA Nuclear Data Section on a PC diskette, free of charge upon request. The present document describes the features of this code. (author). 11 refs, 9 figs

  1. School Survey on Crime and Safety (SSOCS) 2000 Public-Use Data Files, User's Manual, and Detailed Data Documentation. [CD-ROM].

    Science.gov (United States)

    National Center for Education Statistics (ED), Washington, DC.

    This CD-ROM contains the raw, public-use data from the 2000 School Survey on Crime and Safety (SSOCS) along with a User's Manual and Detailed Data Documentation. The data are provided in SAS, SPSS, STATA, and ASCII formats. The User's Manual and the Detailed Data Documentation are provided as .pdf files. (Author)

  2. Pengembangan Algoritma Fast Inversion dalam Membentuk Inverted File untuk Text Retrieval dengan Data Skala Besar

    Directory of Open Access Journals (Sweden)

    Derwin Suhartono

    2012-06-01

    The rapid development of information systems generates new needs for indexing and retrieval of various kinds of media. The need for documents in the form of multimedia is currently increasing, so the need to store and retrieve them has become a primary problem. The most commonly used multimedia type is text, widely seen as the main option in search engines like Yahoo, Google and others. Essentially, users want not only search results but also a more efficient process. For the purposes of indexing and retrieval, an inverted file is used to provide faster results. However, a problem arises when an inverted file must be built over a large amount of data. This study describes an algorithm called Fast Inversion, developed from the basic inverted-file construction method, to address the needs related to large data volumes.
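
    For context, the structure being built maps each term to the list of documents containing it; Fast Inversion improves how this construction scales. A minimal sketch of the baseline method:

        # Baseline inverted-file construction; the paper's Fast Inversion
        # algorithm improves on this basic method for large collections.
        from collections import defaultdict

        def build_inverted_file(docs: dict) -> dict:
            index = defaultdict(set)
            for doc_id, text in docs.items():
                for term in text.lower().split():
                    index[term].add(doc_id)
            return {term: sorted(ids) for term, ids in index.items()}

        index = build_inverted_file({1: "data file indexing", 2: "inverted file"})
        # index["file"] == [1, 2]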

  3. The Polls-Review: Inaccurate Age and Sex Data in the Census Pums Files: Evidence and Implications.

    Science.gov (United States)

    Alexander, J Trent; Davern, Michael; Stevenson, Betsey

    2010-01-01

    We discover and document errors in public-use microdata samples ("PUMS files") of the 2000 Census, the 2003-2006 American Community Survey, and the 2004-2009 Current Population Survey. For women and men age 65 and older, age- and sex-specific population estimates generated from the PUMS files differ by as much as 15 percent from counts in published data tables. Moreover, an analysis of labor-force participation and marriage rates suggests the PUMS samples are not representative of the population at individual ages for those age 65 and over. PUMS files substantially underestimate labor-force participation of those near retirement age and overestimate labor-force participation rates of those at older ages. These problems were an unintentional byproduct of the misapplication of a newer generation of disclosure-avoidance procedures carried out on the data. The resulting errors in the public-use data could significantly impact studies of people age 65 and older, particularly analyses of variables that are expected to change by age.

  4. Reconstruction of point cross-section from ENDF data file for Monte Carlo applications

    International Nuclear Information System (INIS)

    Kumawat, H.; Saxena, A.; Carminati, F.; )

    2016-12-01

    Monte Carlo neutron transport codes are among the best tools to simulate complex systems like fission and fusion reactors, Accelerator Driven Sub-critical systems, radioactivity management of spent fuel and waste, optimization and characterization of neutron detectors, optimization of Boron Neutron Capture Therapy, imaging, etc. The neutron cross sections and secondary-particle emission properties are the main input parameters of such codes. The fission, capture and elastic scattering cross sections have complex resonating structures. The Evaluated Nuclear Data File (ENDF) contains these cross sections and secondary parameters. We report the development of a reconstruction procedure to generate point cross sections and probabilities from the ENDF data file. The cross sections are compared with the values obtained from PREPRO and, in some cases, NJOY codes. The results are in good agreement. (author)
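
    Resonance reconstruction itself is an involved calculation, but the final step implied by the record, evaluating a pointwise cross section on a reconstructed energy grid, is plain interpolation. A minimal Python sketch under the ENDF linear-linear interpolation law (INT=2), with a purely illustrative grid:

        import bisect

        def sigma_at(E, energies, xs):
            """Linear-linear interpolation (ENDF law INT=2) on an ascending pointwise grid."""
            i = bisect.bisect_right(energies, E) - 1
            i = max(0, min(i, len(energies) - 2))
            E0, E1 = energies[i], energies[i + 1]
            s0, s1 = xs[i], xs[i + 1]
            return s0 + (s1 - s0) * (E - E0) / (E1 - E0)

        # toy grid (eV vs barns), purely illustrative
        print(sigma_at(1.5e3, [1e3, 2e3, 3e3], [10.0, 8.0, 7.5]))   # -> 9.0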

  5. ENDF-102 DATA FORMATS AND PROCEDURES FOR THE EVALUATED NUCLEAR DATA FILE ENDF-6

    International Nuclear Information System (INIS)

    MCLANE, V.

    2001-01-01

    The Evaluated Nuclear Data File (ENDF) formats and libraries are decided by the Cross Section Evaluation Working Group (CSEWG), a cooperative effort of national laboratories, industry, and universities in the U.S. and Canada, and are maintained by the National Nuclear Data Center (NNDC). Earlier versions of the ENDF format provided representations for neutron cross sections and distributions, photon production from neutron reactions, a limited amount of charged-particle production from neutron reactions, photo-atomic interaction data, thermal neutron scattering data, and radionuclide production and decay data (including fission products). Version 6 (ENDF-6) allows higher incident energies, adds more complete descriptions of the distributions of emitted particles, and provides for incident charged particles and photonuclear data by partitioning the ENDF library into sub-libraries. Decay data, fission product yield data, thermal scattering data, and photo-atomic data have also been formally placed in sub-libraries. In addition, this rewrite represents an extensive update to the Version V manual

  6. ENDF-102 DATA FORMATS AND PROCEDURES FOR THE EVALUATED NUCLEAR DATA FILE ENDF-6.

    Energy Technology Data Exchange (ETDEWEB)

    MCLANE,V.

    2001-05-15

    The Evaluated Nuclear Data File (ENDF) formats and libraries are decided by the Cross Section Evaluation Working Group (CSEWG), a cooperative effort of national laboratories, industry, and universities in the U.S. and Canada, and are maintained by the National Nuclear Data Center (NNDC). Earlier versions of the ENDF format provided representations for neutron cross sections and distributions, photon production from neutron reactions, a limited amount of charged-particle production from neutron reactions, photo-atomic interaction data, thermal neutron scattering data, and radionuclide production and decay data (including fission products). Version 6 (ENDF-6) allows higher incident energies, adds more complete descriptions of the distributions of emitted particles, and provides for incident charged particles and photonuclear data by partitioning the ENDF library into sub-libraries. Decay data, fission product yield data, thermal scattering data, and photo-atomic data have also been formally placed in sub-libraries. In addition, this rewrite represents an extensive update to the Version V manual.

  7. Application of the Levenshtein Distance Metric for the Construction of Longitudinal Data Files

    Science.gov (United States)

    Doran, Harold C.; van Wamelen, Paul B.

    2010-01-01

    The analysis of longitudinal data in education is becoming more prevalent given the nature of testing systems constructed for the No Child Left Behind Act (NCLB). However, constructing the longitudinal data files remains a significant challenge. Students move into new schools, but in many cases the unique identifiers (ID) that should remain constant…
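
    For reference, the metric named in the title is the classic edit distance: the minimum number of single-character insertions, deletions, and substitutions needed to turn one string into another. A minimal dynamic-programming sketch in Python (the example IDs are invented):

        def levenshtein(a, b):
            """Two-row dynamic-programming edit distance."""
            prev = list(range(len(b) + 1))
            for i, ca in enumerate(a, 1):
                cur = [i]
                for j, cb in enumerate(b, 1):
                    cur.append(min(prev[j] + 1,                  # deletion
                                   cur[j - 1] + 1,               # insertion
                                   prev[j - 1] + (ca != cb)))    # substitution
                prev = cur
            return prev[-1]

        print(levenshtein("JON1234", "JOHN1234"))   # -> 1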

  8. PDB Editor: a user-friendly Java-based Protein Data Bank file editor with a GUI.

    Science.gov (United States)

    Lee, Jonas; Kim, Sung Hou

    2009-04-01

    The Protein Data Bank file format is the format most widely used by protein crystallographers and biologists to disseminate and manipulate protein structures. Despite this, there are few user-friendly software packages available to efficiently edit and extract raw information from PDB files. This limitation often leads to many protein crystallographers wasting significant time manually editing PDB files. PDB Editor, written in Java Swing GUI, allows the user to selectively search, select, extract and edit information in parallel. Furthermore, the program is a stand-alone application written in Java which frees users from the hassles associated with platform/operating system-dependent installation and usage. PDB Editor can be downloaded from http://sourceforge.net/projects/pdbeditorjl/.

  9. Consistency between data from the ENDF/B-V dosimetry file and corresponding experimental data for some fast neutron reference spectra

    International Nuclear Information System (INIS)

    Nolthenius, H.J.; Zijp, W.L.

    1981-11-01

    Results are given of a study on the consistency between 'integral' and 'differential' cross-section data for four benchmark neutron spectra and 36 neutron reactions of importance for reactor neutron metrology. The energy-dependent cross-section data and their uncertainty data are obtained from the ENDF/B-V dosimetry file. The reactions have been considered with respect to the following quantities: 1. the precision of the averaged cross sections for a specified spectrum; 2. the discrepancy between the measured and the calculated average cross-section values; 3. the consistency between the measured and calculated average cross-section values, described by the χ²-parameter. It was possible to take into account the available cross-section covariance information present in the ENDF/B-V dosimetry file. Covariance information on the benchmark flux density spectra was not taken into account in this study
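
    The quantities being compared are the standard ones of reactor dosimetry. For reference (these are textbook definitions, not formulas reproduced from the report), the spectrum-averaged cross section and a consistency parameter of the kind described are

        \bar{\sigma} = \frac{\int \sigma(E)\,\varphi(E)\,dE}{\int \varphi(E)\,dE},
        \qquad
        \chi^2 = \frac{(\bar{\sigma}_{\mathrm{meas}} - \bar{\sigma}_{\mathrm{calc}})^2}{(\Delta\bar{\sigma}_{\mathrm{meas}})^2 + (\Delta\bar{\sigma}_{\mathrm{calc}})^2},

    where \varphi(E) is the benchmark flux density spectrum and \Delta denotes the respective uncertainties.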

  10. KENO2MCNP, Version 5L, Conversion of Input Data between KENOV.a and MCNP File Formats

    International Nuclear Information System (INIS)

    2008-01-01

    1 - Description of program or function: The KENO2MCNP program was written to convert KENO V.a input files to MCNP format. This program currently only works with KENO V.a geometries and will not work with geometries that contain more than a single array. A C++ graphical user interface was created and linked to Fortran routines from KENO V.a that read the material library and to Fortran routines from the MCNP Visual Editor that generate the MCNP input file. Either SCALE 5.0 or SCALE 5.1 cross-section files will work with this release. 2 - Methods: The C++ binary executable reads the KENO V.a input file, the KENO V.a material library and the SCALE data libraries. When an input file is read in, the input is stored in memory. The converter then loads the different sections of the input file into memory, including parameters, composition, geometry information, array information and starting information. Many of the KENO V.a materials represent compositions that must be read from the KENO V.a material library. KENO2MCNP includes the KENO V.a Fortran routines used to read this material file for creating the MCNP materials. Once the file has been read in, the user must select 'Convert' to convert the file from KENO V.a to MCNP. This generates the MCNP input file along with an output window that lists the KENO V.a composition information for the materials contained in the KENO V.a input file. The program can be run interactively by clicking on the executable or in batch mode from the command prompt. 3 - Restrictions on the complexity of the problem: Not all KENO V.a input files are supported. Only one array is allowed in the input file. Some of the more complex material descriptions also may not be converted

  11. Development of a utility system for nuclear reaction data file: WinNRDF

    International Nuclear Information System (INIS)

    Aoyama, Shigeyoshi; Ohbayasi, Yosihide; Masui, Hiroshi; Chiba, Masaki; Kato, Kiyoshi; Ohnishi, Akira

    2000-01-01

    A utility system, WinNRDF, was developed for the charged-particle nuclear reaction data of NRDF (Nuclear Reaction Data File) on the Windows interface. With this system, we can search the experimental data of a charged-particle nuclear reaction in NRDF more easily than with the old retrieval systems on the mainframe, and can also view the experimental data graphically on a GUI (Graphical User Interface). We adopted a mechanism for making a new index of keywords to put the time-dependent properties of the NRDF database to practical use. (author)

  12. The method to set up file-6 in neutron data library of light nuclei below 20 MeV

    International Nuclear Information System (INIS)

    Zhang Jingshang; Han Yinlu

    2001-01-01

    So far there is no File 6 (double-differential cross-section data, DDX) for light nuclei in the main evaluated neutron nuclear data libraries in the world. Therefore, a proper description of the double-differential cross sections of all kinds of outgoing particles from neutron-induced light-nucleus reactions below 20 MeV is necessary. The motivation for this work is to introduce a way to set up File 6 in the neutron data library

  13. Mixed-Media File Systems

    NARCIS (Netherlands)

    Bosch, H.G.P.

    1999-01-01

    This thesis addresses the problem of implementing mixed-media storage systems. In this work a mixed-media file system is defined to be a system that stores both conventional (best-effort) file data and real-time continuous-media data. Continuous-media data is usually bulky, and servers storing and

  14. Xbox one file system data storage: A forensic analysis

    OpenAIRE

    Gravel, Caitlin Elizabeth

    2015-01-01

    The purpose of this research was to answer the question, how does the file system of the Xbox One store data on its hard disk? This question is the main focus of the exploratory research and results sought. The research is focused on digital forensic investigators and experts. An out of the box Xbox One gaming console was used in the research. Three test cases were created as viable scenarios an investigator could come across in a search and seizure of evidence. The three test cases were then...

  15. Protecting your files on the AFS file system

    CERN Multimedia

    2011-01-01

    The Andrew File System is a world-wide distributed file system linking hundreds of universities and organizations, including CERN. Files can be accessed from anywhere, via dedicated AFS client programs or via web interfaces that export the file contents on the web. Due to the ease of access to AFS it is of utmost importance to properly protect access to sensitive data in AFS. As the use of AFS access control mechanisms is not obvious to all users, passwords, private SSH keys or certificates have been exposed in the past. In one specific instance, this also led to bad publicity due to a journalist accessing supposedly "private" AFS folders (SonntagsZeitung 2009/11/08). This problem does not only affect the individual user but also has a bad impact on CERN's reputation when it comes to IT security. Therefore, all departments and LHC experiments agreed in April 2010 to apply more stringent folder protections to all AFS user folders. The goal of this data protection policy is to assist users in...

  16. A file of reference data for multiple-element neutron activation analysis

    International Nuclear Information System (INIS)

    Kabina, L.P.; Kondurov, I.A.; Shesterneva, I.M.

    1983-12-01

    Data needed for planning neutron activation analysis experiments and processing their results are given. The decay schemes of radioactive nuclei formed in irradiation with thermal neutrons during the (n,γ) reaction taken from the international ENSDF file are used for calculating the activities of nuclei and for drawing up an optimum table for identifying gamma lines in the spectra measured. (author)

  17. Instructions for preparation of data entry sheets for Licensee Event Report (LER) file. Revision 1. Instruction manual

    International Nuclear Information System (INIS)

    1977-07-01

    The manual provides instructions for the preparation of data entry sheets for the licensee event report (LER) file. It is a revision to an interim manual published in October 1974 in 00E-SS-001. The LER file is a computer-based data bank of information using the data entry sheets as input. These data entry sheets contain pertinent information in regard to those occurrences required to be reported to the NRC. The computer-based data bank provides a centralized source of data that may be used for qualitative assessment of the nature and extent of off-normal events in the nuclear industry and as an index of source information to which users may refer for more detail

  18. The crystallographic information file (CIF): A new standard archive file for crystallography

    International Nuclear Information System (INIS)

    Hall, S.R.; Allen, F.H.; Brown, I.D.

    1991-01-01

    The specification of a new standard Crystallographic Information File (CIF) is described. Its development is based on the Self-Defining Text Archive and Retrieval (STAR) procedure. The CIF is a general, flexible and easily extensible free-format archive file; it is human and machine readable and can be edited by a simple editor. The CIF is designed for the electronic transmission of crystallographic data between individual laboratories, journals and databases: it has been adopted by the International Union of Crystallography as the recommended medium for this purpose. The file consists of data names and data items, together with a loop facility for repeated items. The data names, constructed hierarchically so as to form data categories, are self-descriptive within a 32-character limit. The sorted list of data names, together with their precise definitions, constitutes the CIF dictionary (core version 1991). The CIF core dictionary is presented in full and covers the fundamental and most commonly used data items relevant to crystal structure analysis. The dictionary is also available as an electronic file suitable for CIF computer applications. Future extensions to the dictionary will include data items used in more specialized areas of crystallography. (orig.)
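
    To give the flavor of the format, here is a minimal illustrative CIF fragment (the values are invented; the data names are taken from the core dictionary) showing scalar data items and the loop facility for repeated items:

        data_example
        _cell_length_a                    10.234
        _cell_length_b                     7.121
        _symmetry_space_group_name_H-M    'P 21 21 21'
        loop_
        _atom_site_label
        _atom_site_fract_x
        _atom_site_fract_y
        _atom_site_fract_z
        C1  0.1234  0.5678  0.9012
        O1  0.2345  0.6789  0.0123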

  19. Ground-Based Global Navigation Satellite System (GNSS) GPS Broadcast Ephemeris Data (daily files) from NASA CDDIS

    Data.gov (United States)

    National Aeronautics and Space Administration — This dataset consists of ground-based Global Navigation Satellite System (GNSS) GPS Broadcast Ephemeris Data (daily files) from the NASA Crustal Dynamics Data...

  20. ENDF/B-V 7 Standards Data File (EN5-ST Library)

    International Nuclear Information System (INIS)

    DayDay, N.; Lemmel, H.D.

    1980-10-01

    This document summarizes the contents and documentation of the ENDF/B-V 7 Standards Data File (EN5-ST Library) released in September 1979. The library contains complete evaluations for all significant neutron reactions in the energy range 10⁻⁵ eV to 20 MeV for H-1, He-3, Li-6, B-10, C-12, Au-197 and U-235 isotopes. The entire library or selective retrievals from it can be obtained free of charge from the IAEA Nuclear Data Section. (author)

  1. Zebra: A striped network file system

    Science.gov (United States)

    Hartman, John H.; Ousterhout, John K.

    1992-01-01

    The design of Zebra, a striped network file system, is presented. Zebra applies ideas from log-structured file system (LFS) and RAID research to network file systems, resulting in a network file system that has scalable performance, uses its servers efficiently even when its applications are using small files, and provides high availability. Zebra stripes file data across multiple servers, so that the file transfer rate is not limited by the performance of a single server. High availability is achieved by maintaining parity information for the file system. If a server fails, its contents can be reconstructed using the contents of the remaining servers and the parity information. Zebra differs from existing striped file systems in the way it stripes file data: Zebra does not stripe on a per-file basis; instead it stripes the stream of bytes written by each client. Clients write to the servers in units called stripe fragments, which are analogous to segments in an LFS. Stripe fragments contain file blocks that were written recently, without regard to which file they belong. This method of striping has numerous advantages over per-file striping, including increased server efficiency, efficient parity computation, and elimination of parity updates.
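
    The parity scheme works like RAID parity: the fragments of a stripe are XORed together, and any single lost fragment equals the XOR of the surviving fragments with the parity. A minimal Python sketch assuming equal-length fragments (Zebra's real fragment and log management is far more elaborate):

        def parity(fragments):
            """Byte-wise XOR parity over equal-length stripe fragments."""
            out = bytearray(len(fragments[0]))
            for frag in fragments:
                for i, b in enumerate(frag):
                    out[i] ^= b
            return bytes(out)

        def reconstruct(surviving, par):
            """Rebuild the one missing fragment from the survivors plus parity."""
            return parity(surviving + [par])

        f1, f2, f3 = b"aaaa", b"bbbb", b"cccc"
        p = parity([f1, f2, f3])
        assert reconstruct([f1, f3], p) == f2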

  2. Ground-Based Global Navigation Satellite System Mixed Broadcast Ephemeris Data (sub-hourly files) from NASA CDDIS

    Data.gov (United States)

    National Aeronautics and Space Administration — This dataset consists of ground-based Global Navigation Satellite System (GNSS) Mixed Broadcast Ephemeris Data (sub-hourly files) from the NASA Crustal Dynamics Data...

  3. What Happens When Persons Leave Welfare: Data from the SIPP Panel File.

    Science.gov (United States)

    Lamas, Enrique; McNeil, John

    This document reports on a study of the likelihood of individuals participating in the Federal food stamp program and the Medicaid program and the likelihood of exiting those programs. Data were analyzed from the first panel file of the Survey of Income and Program Participation (SIPP). Special problems with representativeness and measurement…

  4. Status of data testing of ENDF/B-V reactor dosimetry file

    International Nuclear Information System (INIS)

    Magurno, B.A.

    1979-01-01

    The ENDF/B-V Reactor Dosimetry File was released August 1979, and Phase II data testing started. The results presented here are from Brookhaven National Laboratory only, and are considered preliminary. The tests include calculated spectrum-averaged cross sections using the ²³⁵U fission spectrum (Watt), the ²⁵²Cf spontaneous fission spectrum (Watt and Maxwellian), and the Coupled Fast Reactor Measurement Facility (CFRMF) spectrum. 6 tables

  5. Huygens file service and storage architecture

    NARCIS (Netherlands)

    Bosch, H.G.P.; Mullender, Sape J.; Stabell-Kulo, Tage; Stabell-Kulo, Tage

    1993-01-01

    The Huygens file server is a high-performance file server which is able to deliver multi-media data in a timely manner while also providing clients with ordinary “Unix” like file I/O. The file server integrates client machines, file servers and tertiary storage servers in the same storage

  6. Huygens File Service and Storage Architecture

    NARCIS (Netherlands)

    Bosch, H.G.P.; Mullender, Sape J.; Stabell-Kulo, Tage; Stabell-Kulo, Tage

    1993-01-01

    The Huygens file server is a high-performance file server which is able to deliver multi-media data in a timely manner while also providing clients with ordinary “Unix” like file I/O. The file server integrates client machines, file servers and tertiary storage servers in the same storage

  7. NURE [National Uranium Resource Evaluation] HSSR [Hydrogeochemical and Stream Sediment Reconnaissance] Introduction to Data Files, United States: Volume 1

    International Nuclear Information System (INIS)

    1985-01-01

    One product of the Hydrogeochemical and Stream Sediment Reconnaissance (HSSR) program, a component of the National Uranium Resource Evaluation (NURE), is a database of interest to scientists and professionals in the academic, business, industrial, and governmental communities. This database contains individual records for water and sediment samples taken during the reconnaissance survey of the entire United States, excluding Hawaii. The purpose of this report is to describe the NURE HSSR data by highlighting its key characteristics and providing user guides to the data. A companion report, ''A Technical History of the NURE HSSR Program,'' summarizes those aspects of the HSSR Program which are likely to be important in helping users understand the database. Each record on the database contains varying information on general field or site characteristics and analytical results for elemental concentrations in the sample; the database is potentially valuable for describing the geochemistry of specified locations and addressing issues or questions in other areas such as water quality, geoexploration, and hydrologic studies. This report is organized in twelve volumes. This first volume presents a brief history of the NURE HSSR program, a description of the data files produced by ISP, a Users' Dictionary for the Analysis File and graphs showing the distribution of elemental concentrations for sediments at the US level. Volumes 2 through 12 are comprised of Data Summary Tables displaying the percentile distribution of the elemental concentrations on the file. Volume 2 contains data for the individual states. Volumes 3 through 12 contain data for the 1° x 2° quadrangles, organized into eleven regional files; the data for the two regional files for Alaska (North and South) are bound together as Volume 12

  8. Parallel file system performances in fusion data storage

    International Nuclear Information System (INIS)

    Iannone, F.; Podda, S.; Bracco, G.; Manduchi, G.; Maslennikov, A.; Migliori, S.; Wolkersdorfer, K.

    2012-01-01

    High I/O flow rates, up to 10 GB/s, are required in large fusion Tokamak experiments like ITER where hundreds of nodes store simultaneously large amounts of data acquired during the plasma discharges. Typical network topologies such as linear arrays (systolic), rings, meshes (2-D arrays), tori (3-D arrays), trees, butterfly, hypercube in combination with high speed data transports like Infiniband or 10G-Ethernet, are the main areas in which the effort to overcome the so-called parallel I/O bottlenecks is most focused. The high I/O flow rates were modelled in an emulated testbed based on the parallel file systems such as Lustre and GPFS, commonly used in High Performance Computing. The test runs on High Performance Computing–For Fusion (8640 cores) and ENEA CRESCO (3392 cores) supercomputers. Message Passing Interface based applications were developed to emulate parallel I/O on Lustre and GPFS using data archival and access solutions like MDSPLUS and Universal Access Layer. These methods of data storage organization are widely diffused in nuclear fusion experiments and are being developed within the EFDA Integrated Tokamak Modelling – Task Force; the authors tried to evaluate their behaviour in a realistic emulation setup.

  9. Parallel file system performances in fusion data storage

    Energy Technology Data Exchange (ETDEWEB)

    Iannone, F., E-mail: francesco.iannone@enea.it [Associazione EURATOM-ENEA sulla Fusione, C.R.ENEA Frascati, via E.Fermi, 45 - 00044 Frascati, Rome (Italy); Podda, S.; Bracco, G. [ENEA Information Communication Tecnologies, Lungotevere Thaon di Revel, 76 - 00196 Rome (Italy); Manduchi, G. [Associazione EURATOM-ENEA sulla Fusione, Consorzio RFX, Corso Stati Uniti, 4 - 35127 Padua (Italy); Maslennikov, A. [CASPUR Inter-University Consortium for the Application of Super-Computing for Research, via dei Tizii, 6b - 00185 Rome (Italy); Migliori, S. [ENEA Information Communication Tecnologies, Lungotevere Thaon di Revel, 76 - 00196 Rome (Italy); Wolkersdorfer, K. [Juelich Supercomputing Centre-FZJ, D-52425 Juelich (Germany)

    2012-12-15

    High I/O flow rates, up to 10 GB/s, are required in large fusion Tokamak experiments like ITER where hundreds of nodes store simultaneously large amounts of data acquired during the plasma discharges. Typical network topologies such as linear arrays (systolic), rings, meshes (2-D arrays), tori (3-D arrays), trees, butterfly, hypercube in combination with high speed data transports like Infiniband or 10G-Ethernet, are the main areas in which the effort to overcome the so-called parallel I/O bottlenecks is most focused. The high I/O flow rates were modelled in an emulated testbed based on the parallel file systems such as Lustre and GPFS, commonly used in High Performance Computing. The test runs on High Performance Computing-For Fusion (8640 cores) and ENEA CRESCO (3392 cores) supercomputers. Message Passing Interface based applications were developed to emulate parallel I/O on Lustre and GPFS using data archival and access solutions like MDSPLUS and Universal Access Layer. These methods of data storage organization are widely diffused in nuclear fusion experiments and are being developed within the EFDA Integrated Tokamak Modelling - Task Force; the authors tried to evaluate their behaviour in a realistic emulation setup.

  10. FORTRAN data files transference from VAX/VMS to ALPHA/UNIX; Traspaso de ficheros FORTRAN de datos de VAX/VMS a ALPHA/UNIX

    Energy Technology Data Exchange (ETDEWEB)

    Sanchez, E.; Milligen, B. Ph van [CIEMAT (Spain)

    1997-09-01

    Several tools have been developed to access the TJ-IU databases, which currently reside on VAX/VMS servers, from the TJ-II Data Acquisition System DEC ALPHA 8400 server. The TJ-I/TJ-IU databases are not homogeneous and contain several types of data files, namely SADE, CAMAC and FORTRAN unformatted files. The tools presented in this report allow one to transfer CAMAC files and the FORTRAN unformatted files defined herein from a VAX/VMS server, for data manipulation on the ALPHA/Digital UNIX server. (Author)

  11. High School and Beyond. 1980 Senior Cohort. Third Follow-Up (1986). Data File User's Manual. Volume II: Survey Instruments. Contractor Report.

    Science.gov (United States)

    Sebring, Penny; And Others

    Survey instruments used in the collection of data for the High School and Beyond base year (1980) through the third follow-up surveys (1986) are provided as Volume II of a user's manual for the senior cohort data file. The complete user's manual is designed to provide the extensive documentation necessary for using the cohort data files. Copies of…

  12. Ground-Based Global Navigation Satellite System Combined Broadcast Ephemeris Data (daily files) from NASA CDDIS

    Data.gov (United States)

    National Aeronautics and Space Administration — This dataset consists of ground-based Global Navigation Satellite System (GNSS) Combined Broadcast Ephemeris Data (daily files of all distinct navigation messages...

  13. Parallel compression of data chunks of a shared data object using a log-structured file system

    Science.gov (United States)

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2016-10-25

    Techniques are provided for parallel compression of data chunks being written to a shared object. A client executing on a compute node or a burst buffer node in a parallel computing system stores a data chunk generated by the parallel computing system to a shared data object on a storage node by compressing the data chunk and providing the compressed data chunk to the storage node that stores the shared object. The client and storage node may employ Log-Structured File techniques. The compressed data chunk can be decompressed by the client when the data chunk is read. A storage node stores a data chunk as part of a shared object by receiving a compressed version of the data chunk from a compute node and storing the compressed version of the data chunk to the shared data object on the storage node.
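
    The client-side flow described, compress on write and decompress on read, can be pictured in a few lines of Python; zlib here merely stands in for whatever codec a real system would use:

        import zlib

        def write_compressed_chunk(chunk: bytes) -> bytes:
            """Client side: compress a data chunk before shipping it to the storage node."""
            return zlib.compress(chunk, 6)

        def read_chunk(stored: bytes) -> bytes:
            """Client side: decompress a stored chunk when it is read back."""
            return zlib.decompress(stored)

        chunk = b"simulation output " * 1000
        stored = write_compressed_chunk(chunk)
        assert read_chunk(stored) == chunk
        print(len(chunk), "->", len(stored))   # highly repetitive data compresses well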

  14. Index files for Belle II - very small skim containers

    Science.gov (United States)

    Sevior, Martin; Bloomfield, Tristan; Kuhr, Thomas; Ueda, I.; Miyake, H.; Hara, T.

    2017-10-01

    The Belle II experiment[1] employs the ROOT file format[2] for recording data and is investigating the use of “index files” to reduce the size of data skims. These files contain pointers to the locations of interesting events within the total Belle II data set and reduce the size of data skims by two orders of magnitude. We implement this scheme on the Belle II grid by recording the parent file metadata and the event location within the parent file. While the scheme works, it is substantially slower than a normal sequential read of standard skim files using default ROOT file parameters. We investigate the performance of the scheme by adjusting the “splitLevel” and “autoflushsize” parameters of the ROOT files in the parent data files.
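
    An index-file skim can be pictured as a list of (parent file, entry number) pointers. Grouping the pointers by parent file and reading entries in ascending order reduces, but does not remove, the seeking that makes such skims slower than sequential reads of dedicated skim files. A toy Python sketch; the file names and the injected I/O callables are invented placeholders:

        index = [("parent_0001.root", 17), ("parent_0001.root", 4021),
                 ("parent_0042.root", 9)]

        def read_skim(index, open_file, read_entry):
            """Group event pointers by parent file so each parent is opened once."""
            by_parent = {}
            for fname, entry in index:
                by_parent.setdefault(fname, []).append(entry)
            for fname, entries in sorted(by_parent.items()):
                handle = open_file(fname)
                for entry in sorted(entries):   # in-order reads reduce seeking
                    yield read_entry(handle, entry)

        # stub I/O, for demonstration only
        events = list(read_skim(index, lambda name: name, lambda f, e: (f, e)))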

  15. Comparison of data file and storage configurations for efficient temporal access of satellite image data

    CSIR Research Space (South Africa)

    Bachoo, A

    2009-01-01

    Traditional storage formats store such a series of images as a sequence of individual files, with each file internally storing the pixels in their spatial order. Consequently, the construction of a time series profile of a single pixel requires reading from...

  16. The ASCO Oncology Composite Provider Utilization File: New Data, New Insights.

    Science.gov (United States)

    Barr, Thomas R; Towle, Elaine L

    2016-01-01

    As we seek to understand the changing practice environment in oncology, the need for accurate information about demand for services, distribution of the delivery system in this sector of the health economy, and other practice trends is apparent. In this article, we present an analysis of the sector using one of the public use files from the Centers for Medicare & Medicaid Services in combination with other publicly available data. Medicare data are particularly useful for this analysis because cancer is associated with aging and Medicare is the primary payer in the United States for patients older than age 65. As a result, nearly all oncologists who serve adult populations are represented in these data. By combining publicly available datasets into what we call the ASCO Provider Utilization File, we can investigate a wide range of supply, demand, and practice issues. We calculate the average work performed per physician, observe regional differences in work production, and quantify the downside risk and upside potential associated with the provision of chemotherapy drugs. Comparing the supply of oncologists by state with physician work relative value units and with estimates of cancer incidence by state reveals intriguing differences in the distribution of physicians and the demand for oncology services. In addition, our analysis demonstrates significant downside practice risk associated with the provision of drug therapy to Medicare beneficiaries. The economic risk associated with the purchase and delivery of chemotherapy is of particular concern as pressure for value increases. This article provides a description of a new dataset and interesting observations from these data.

  17. Search across Different Media: Numeric Data Sets and Text Files

    Directory of Open Access Journals (Sweden)

    Michael Buckland

    2006-12-01

    Digital technology encourages the hope of searching across and between different media forms (text, sound, image, numeric data). Topic searches are described in two different media, text files and socioeconomic numeric databases, and also transverse searching, whereby retrieved text is used to find topically related numeric data and vice versa. Direct transverse searching across different media is impossible. Descriptive metadata provide enabling infrastructure, but usually require mappings between different vocabularies and a search-term recommender system. Statistical association techniques and natural-language processing can help. Searches in socioeconomic numeric databases ordinarily require that place and time be specified.

  18. Developing a File System Structure to Solve Healthy Big Data Storage and Archiving Problems Using a Distributed File System

    Directory of Open Access Journals (Sweden)

    Atilla Ergüzen

    2018-06-01

    Recently, internet use has become widespread, increasing the use of mobile phones, tablets, computers, Internet of Things (IoT) devices and other digital sources. In the health sector, with the help of new-generation digital medical equipment, this digital world has tended to grow in an unpredictable way: the sector holds nearly 10% of all global data and continues to grow beyond what the other sectors produce. This progress has greatly enlarged the amount of produced data, which cannot be handled with conventional methods. In this work, an efficient model for the storage of medical images using a distributed file system structure has been developed. The result is a robust, available, scalable, and serverless solution, particularly suited to storing large amounts of data in the medical field. Furthermore, the security level of the system is very high through the use of static Internet Protocol (IP) addresses, user credentials, and synchronously encrypted file contents. Among the most important features of the system are high performance and easy scalability. In this way, the system can work with fewer hardware elements and be more robust than systems that use name-node architecture. According to the test results, the performance of the designed system is 97% better than a Not Only Structured Query Language (NoSQL) system, 80% better than a relational database management system (RDBMS), and 74% better than an operating system (OS).

  19. Direct utilization of information from nuclear data files in Monte Carlo simulation of neutron and photon transport

    International Nuclear Information System (INIS)

    Androsenko, P.; Joloudov, D.; Kompaniyets, A.

    2001-01-01

    Questions related to the Monte Carlo method for solving the neutron and photon transport equation are discussed. Problems concerning the direct utilization of information from evaluated nuclear data files in run-time calculations are considered. ENDF-6 format libraries have been used for the calculations. Approaches provided by the rules of ENDF-6 Files 2, 3-6, 12-15, 23 and 27, together with algorithms for reconstructing resolved and unresolved resonance-region cross sections at a given energy, are described. Comparisons with calculations made by the NJOY and GRUCON programs and with computed cross-section data are presented. Test computations of neutron leakage spectra for spherical benchmark experiments are also presented. (authors)

  20. Dynamic Non-Hierarchical File Systems for Exascale Storage

    Energy Technology Data Exchange (ETDEWEB)

    Long, Darrell E. [Univ. of California, Santa Cruz, CA (United States); Miller, Ethan L [Univ. of California, Santa Cruz, CA (United States)

    2015-02-24

    This constitutes the final report for “Dynamic Non-Hierarchical File Systems for Exascale Storage”. The ultimate goal of this project was to improve data management in scientific computing and high-end computing (HEC) applications, and to achieve this goal we proposed: to develop the first, HEC-targeted, file system featuring rich metadata and provenance collection, extreme scalability, and future storage hardware integration as core design goals, and to evaluate and develop a flexible non-hierarchical file system interface suitable for providing more powerful and intuitive data management interfaces to HEC and scientific computing users. Data management is swiftly becoming a serious problem in the scientific community – while copious amounts of data are good for obtaining results, finding the right data is often daunting and sometimes impossible. Scientists participating in a Department of Energy workshop noted that most of their time was spent “...finding, processing, organizing, and moving data and it’s going to get much worse”. Scientists should not be forced to become data mining experts in order to retrieve the data they want, nor should they be expected to remember the naming convention they used several years ago for a set of experiments they now wish to revisit. Ideally, locating the data you need would be as easy as browsing the web. Unfortunately, existing data management approaches are usually based on hierarchical naming, a 40-year-old technology designed to manage thousands of files, not exabytes of data. Today’s systems do not take advantage of the rich array of metadata that current high-end computing (HEC) file systems can gather, including content-based metadata and provenance information. As a result, current metadata search approaches are typically ad hoc and often work by providing a parallel management system to the “main” file system, as is done in Linux (the locate utility), personal computers, and enterprise search

  1. Ground-Based Global Navigation Satellite System (GNSS) GLONASS Broadcast Ephemeris Data (hourly files) from NASA CDDIS

    Data.gov (United States)

    National Aeronautics and Space Administration — This dataset consists of ground-based Global Navigation Satellite System (GNSS) GLObal NAvigation Satellite System (GLONASS) Broadcast Ephemeris Data (hourly files)...

  2. File-based data flow in the CMS Filter Farm

    Science.gov (United States)

    Andre, J.-M.; Andronidis, A.; Bawej, T.; Behrens, U.; Branson, J.; Chaze, O.; Cittolin, S.; Darlea, G.-L.; Deldicque, C.; Dobson, M.; Dupont, A.; Erhan, S.; Gigi, D.; Glege, F.; Gomez-Ceballos, G.; Hegeman, J.; Holzner, A.; Jimenez-Estupiñán, R.; Masetti, L.; Meijers, F.; Meschi, E.; Mommsen, R. K.; Morovic, S.; Nunez-Barranco-Fernandez, C.; O'Dell, V.; Orsini, L.; Paus, C.; Petrucci, A.; Pieri, M.; Racz, A.; Roberts, P.; Sakulin, H.; Schwick, C.; Stieger, B.; Sumorok, K.; Veverka, J.; Zaza, S.; Zejdl, P.

    2015-12-01

    During the LHC Long Shutdown 1, the CMS Data Acquisition system underwent a partial redesign to replace obsolete network equipment, use more homogeneous switching technologies, and prepare the ground for future upgrades of the detector front-ends. The software and hardware infrastructure to provide input, execute the High Level Trigger (HLT) algorithms and deal with output data transport and storage has also been redesigned to be completely file-based. This approach provides additional decoupling between the HLT algorithms and the input and output data flow. All the metadata needed for bookkeeping of the data flow and the HLT process lifetimes are also generated in the form of small “documents” using the JSON encoding, by either services in the flow of the HLT execution (for rates etc.) or watchdog processes. These “files” can remain memory-resident or be written to disk if they are to be used in another part of the system (e.g. for aggregation of output data). We discuss how this redesign improves the robustness and flexibility of the CMS DAQ and the performance of the system currently being commissioned for the LHC Run 2.
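
    The bookkeeping “documents” can be pictured as small JSON files published atomically, so that watchdog and aggregation processes never observe a partial write. A minimal Python sketch; the field names and file naming are illustrative, not the actual CMS schema:

        import json, os, tempfile

        def write_bookkeeping_doc(path, run, lumisection, events, errors=0):
            """Emit a small JSON document summarizing one step of the data flow."""
            doc = {"run": run, "ls": lumisection, "events": events, "errors": errors}
            tmp = path + ".tmp"
            with open(tmp, "w") as f:
                json.dump(doc, f)
            os.replace(tmp, path)   # atomic publish: readers see old or new, never partial

        write_bookkeeping_doc(os.path.join(tempfile.gettempdir(), "run000001_ls0023.jsn"),
                              run=1, lumisection=23, events=1187)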

  3. File-Based Data Flow in the CMS Filter Farm

    Energy Technology Data Exchange (ETDEWEB)

    Andre, J.M.; et al.

    2015-12-23

    During the LHC Long Shutdown 1, the CMS Data Acquisition system underwent a partial redesign to replace obsolete network equipment, use more homogeneous switching technologies, and prepare the ground for future upgrades of the detector front-ends. The software and hardware infrastructure to provide input, execute the High Level Trigger (HLT) algorithms and deal with output data transport and storage has also been redesigned to be completely file-based. This approach provides additional decoupling between the HLT algorithms and the input and output data flow. All the metadata needed for bookkeeping of the data flow and the HLT process lifetimes are also generated in the form of small “documents” using the JSON encoding, by either services in the flow of the HLT execution (for rates etc.) or watchdog processes. These “files” can remain memory-resident or be written to disk if they are to be used in another part of the system (e.g. for aggregation of output data). We discuss how this redesign improves the robustness and flexibility of the CMS DAQ and the performance of the system currently being commissioned for the LHC Run 2.

  4. Manual on usage of the Nuclear Reaction Data File (NRDF)

    International Nuclear Information System (INIS)

    1984-10-01

    The Nuclear Reaction Data File (NRDF), built at Hokkaido University, is set up on the computer of the Institute for Nuclear Study, University of Tokyo. While the database grows year after year, it is offered on a trial basis for joint utilization by educational institutions. In section 1, examples of retrieval are presented to familiarize the user with NRDF. In section 2, the terms used in retrieval are tabulated. In section 3, as a summary of the examples, the structure of the retrieval commands is explained. In section 4, cautions on reading retrieval results on a CRT are given. Finally, in section 5, general cautions on the usage of NRDF are given. (Mori, K.)

  5. SIDS-to-ADF File Mapping Manual

    Science.gov (United States)

    McCarthy, Douglas; Smith, Matthew; Poirier, Diane; Smith, Charles A. (Technical Monitor)

    2002-01-01

    The "CFD General Notation System" (CGNS) consists of a collection of conventions, and conforming software, for the storage and retrieval of Computational Fluid Dynamics (CFD) data. It facilitates the exchange of data between sites and applications, and helps stabilize the archiving of aerodynamic data. This effort was initiated in order to streamline the procedures in exchanging data and software between NASA and its customers, but the goal is to develop CGNS into a National Standard for the exchange of aerodynamic data. The CGNS development team is comprised of members from Boeing Commercial Airplane Group, NASA-Ames, NASA-Langley, NASA-Lewis, McDonnell-Douglas Corporation (now Boeing-St. Louis), Air Force-Wright Lab., and ICEM-CFD Engineering. The elements of CGNS address all activities associated with the storage of data on external media and its movement to and from application programs. These elements include: 1) The Advanced Data Format (ADF) Database manager, consisting of both a file format specification and its I/O software, which handles the actual reading and writing of data from and to external storage media; 2) The Standard Interface Data Structures (SIDS), which specify the intellectual content of CFD data and the conventions governing naming and terminology; 3) The SIDS-to-ADF File Mapping conventions, which specify the exact location where the CFD data defined by the SIDS is to be stored within the ADF file(s); and 4) The CGNS Mid-level Library, which provides CFD-knowledgeable routines suitable for direct installation into application codes. The SIDS-toADF File Mapping Manual specifies the exact manner in which, under CGNS conventions, CFD data structures (the SIDS) are to be stored in (i.e., mapped onto) the file structure provided by the database manager (ADF). The result is a conforming CGNS database. Adherence to the mapping conventions guarantees uniform meaning and location of CFD data within ADF files, and thereby allows the construction of

  6. Formalizing a hierarchical file system

    NARCIS (Netherlands)

    Hesselink, Wim H.; Lali, Muhammad Ikram

    An abstract file system is defined here as a partial function from (absolute) paths to data. Such a file system determines the set of valid paths. It allows the file system to be read and written at a valid path, and it allows the system to be modified by the Unix operations for creation, removal,
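
    A dictionary gives a direct executable model of such a partial function from paths to data; the sketch below is one interpretation of the definition, not the authors' formalization:

        class AbstractFS:
            """Partial function from absolute paths (tuples of names) to data."""
            def __init__(self):
                self.store = {}

            def valid(self, path):
                return tuple(path) in self.store

            def read(self, path):
                return self.store[tuple(path)]      # KeyError models "undefined at this path"

            def write(self, path, data):
                self.store[tuple(path)] = data      # creation or overwrite

            def remove(self, path):
                del self.store[tuple(path)]

        fs = AbstractFS()
        fs.write(("home", "u", "f.txt"), b"hi")
        assert fs.valid(("home", "u", "f.txt")) and fs.read(("home", "u", "f.txt")) == b"hi"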

  7. Data file on retention and excretion of inhaled radionuclides calculated using ICRP dosimetric models

    International Nuclear Information System (INIS)

    Ishigure, Nobuhito; Nakano, Takashi; Enomoto, Hiroko; Shimo, Michikuni; Inaba, Jiro

    2000-01-01

    The authors have computed the whole-body or specific-organ content and the daily urinary and faecal excretion rates of some selected radionuclides following acute intake by inhalation and ingestion, where the new ICRP respiratory tract model (ICRP Publication 66) and the latest ICRP biokinetic models were applied. The results were compiled in an MS Excel file, tentatively called MONDAI for reference. MONDAI contains the data for all radionuclides in ICRP Publications 54 and 78 and, in addition, some other radionuclides which are important from the viewpoint of occupational exposure in nuclear industry, research and medicine. They are H-3, P-32, Cr-51, Mn-54, Fe-59, Co-57, Co-58, Co-60, Zn-65, Rb-86, Sr-85, Sr-89, Sr-90, Zr-95, Ru-106, Ag-110m, Sb-124, Sb-125, I-125, I-129, I-131, Cs-134, Cs-137, Ba-140, Ce-141, Ce-144, Hg-203, Ra-226, Ra-228, Th-228, Th-232, U-234, U-235, U-238, Np-237, Pu-238, Pu-239, Pu-240, Am-241, Cm-242, Cm-244 and Cf-252. The day-by-day data up to 1000 days and the data at every 10 days up to 10000 days are presented. The following ICRP default values for the physical characteristics of the radioactive aerosols were used: AMAD=5 micron, geometric SD=2.5, particle density = 3 g/cm³, particle shape factor = 1.5. The subject exposed to the aerosols is the ICRP reference worker doing light work: light exercise with a ventilation rate of 1.5 m³/h for 5.5 h + sitting with a ventilation rate of 0.54 m³/h for 2.5 h. MONDAI was originally made with Version 7.0 of MS Excel for Windows 95, but the file was saved in the form of Ver. 4.0 as well as Ver. 7.0. Therefore, if the user has Ver. 4.0 or a later version, he can open the file and operate it. With the graph-wizard of MS Excel the user can easily make a diagram for the retention or daily excretion of a radionuclide of interest. The dose coefficient (Sv/Bq intake) of each radionuclide for each absorption type given in ICRP Publication 68 was also written in each sheet. Therefore

  8. Use of error files in uncertainty analysis and data adjustment

    International Nuclear Information System (INIS)

    Chestnutt, M.M.; McCracken, A.K.; McCracken, A.K.

    1979-01-01

    Some results are given from uncertainty analyses on Pressurized Water Reactor (PWR) and Fast Reactor Theoretical Benchmarks. Upper limit estimates of calculated quantities are shown to be significantly reduced by the use of ENDF/B data covariance files and recently published few-group covariance matrices. Some problems in the analysis of single-material benchmark experiments are discussed with reference to the Winfrith iron benchmark experiment. Particular attention is given to the difficulty of making use of very extensive measurements which are likely to be a feature of this type of experiment. Preliminary results of an adjustment in iron are shown

  9. Summary remarks and recommended reactions for an international data file for dosimetry applications for LWR, FBR, and MFR reactor research, development and testing programs

    International Nuclear Information System (INIS)

    McElroy, W.N.; Lippincott, E.P.; Grundl, J.A.; Fabry, A.; Dierckx, R.; Farinelli, U.

    1979-01-01

    The need for the use of an internationally accepted data file for dosimetry applications for light water reactor (LWR), fast breeder reactor (FBR), and magnetic fusion reactor (MFR) research, development, and testing programs continues to exist for the Nuclear Industry. The work of this IAEA meeting, therefore, will be another important step in achieving consensus agreement on an internationally recommended file and its purpose, content, structure, selected reactions, and associated uncertainty files. Summary remarks and a listing of recommended reactions for consideration in the formulation of an ''International Data File for Dosimetry Applications'' are presented in subsequent sections of this report

  10. Evaluated neutronic file for indium

    International Nuclear Information System (INIS)

    Smith, A.B.; Chiba, S.; Smith, D.L.; Meadows, J.W.; Guenther, P.T.; Lawson, R.D.; Howerton, R.J.

    1990-01-01

    A comprehensive evaluated neutronic data file for elemental indium is documented. This file, extending from 10⁻⁵ eV to 20 MeV, is presented in the ENDF/B-VI format, and contains all neutron-induced processes necessary for the vast majority of neutronic applications. In addition, an evaluation of the ¹¹⁵In(n,n')¹¹⁵ᵐIn dosimetry reaction is presented as a separate file. Attention is given to quantitative values, with corresponding uncertainty information. These files have been submitted for consideration as a part of the ENDF/B-VI national evaluated-file system. 144 refs., 10 figs., 4 tabs

  11. Archive Inventory Management System (AIMS) — A Fast, Metrics Gathering Framework for Validating and Gaining Insight from Large File-Based Data Archives

    Science.gov (United States)

    Verma, R. V.

    2018-04-01

    The Archive Inventory Management System (AIMS) is a software package for understanding the distribution, characteristics, integrity, and nuances of files and directories in large file-based data archives on a continuous basis.

  12. Formalizing a Hierarchical File System

    NARCIS (Netherlands)

    Hesselink, Wim H.; Lali, M.I.

    2009-01-01

    In this note, we define an abstract file system as a partial function from (absolute) paths to data. Such a file system determines the set of valid paths. It allows the file system to be read and written at a valid path, and it allows the system to be modified by the Unix operations for removal

  13. Design and Implementation of a Metadata-rich File System

    Energy Technology Data Exchange (ETDEWEB)

    Ames, S; Gokhale, M B; Maltzahn, C

    2010-01-19

    Despite continual improvements in the performance and reliability of large scale file systems, the management of user-defined file system metadata has changed little in the past decade. The mismatch between the size and complexity of large scale data stores and their ability to organize and query their metadata has led to a de facto standard in which raw data is stored in traditional file systems, while related, application-specific metadata is stored in relational databases. This separation of data and semantic metadata requires considerable effort to maintain consistency and can result in complex, slow, and inflexible system operation. To address these problems, we have developed the Quasar File System (QFS), a metadata-rich file system in which files, user-defined attributes, and file relationships are all first class objects. In contrast to hierarchical file systems and relational databases, QFS defines a graph data model composed of files and their relationships. QFS incorporates Quasar, an XPATH-extended query language for searching the file system. Results from our QFS prototype show the effectiveness of this approach. Compared to the de facto standard, the QFS prototype shows superior ingest performance and comparable query performance on user metadata-intensive operations and superior performance on normal file metadata operations.
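
    The graph data model can be pictured as files carrying attribute sets plus labeled relationships between them, with queries such as provenance reduced to graph traversal. A toy Python sketch, not the actual QFS structures or the Quasar query language:

        files = {
            "raw/run42.dat":   {"experiment": "run42", "kind": "raw"},
            "plots/run42.png": {"experiment": "run42", "kind": "plot"},
        }
        edges = [("plots/run42.png", "derived-from", "raw/run42.dat")]

        def provenance(target):
            """Follow 'derived-from' relationships back to the original inputs."""
            parents = [dst for src, label, dst in edges
                       if src == target and label == "derived-from"]
            return parents + [anc for p in parents for anc in provenance(p)]

        print(provenance("plots/run42.png"))   # -> ['raw/run42.dat']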

  14. Flat Files - JSNP | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Flat files for the JSNP database. File name: jsnp_flat_files. File URL: ftp://ftp.biosciencedbc.jp/archiv...

  15. Portable File Format (PFF) specifications

    Energy Technology Data Exchange (ETDEWEB)

    Dolan, Daniel H. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-02-01

    Created at Sandia National Laboratories, the Portable File Format (PFF) allows binary data transfer across computer platforms. Although this capability is supported by many other formats, PFF files are still in use at Sandia, particularly in pulsed power research. This report provides detailed PFF specifications for accessing data without relying on legacy code.

  16. Securing the AliEn File Catalogue - Enforcing authorization with accountable file operations

    International Nuclear Information System (INIS)

    Schreiner, Steffen; Banerjee, Subho Sankar; Betev, Latchezar; Carminati, Federico; Vladimirovna Datskova, Olga; Furano, Fabrizio; Grigoras, Alina; Grigoras, Costin; Mendez Lorenzo, Patricia; Peters, Andreas Joachim; Saiz, Pablo; Bagnasco, Stefano; Zhu Jianlin

    2011-01-01

    The AliEn Grid Services, as operated by the ALICE Collaboration in its global physics analysis grid framework, is based on a central File Catalogue together with a distributed set of storage systems and the possibility to register links to external data resources. This paper describes several identified vulnerabilities in the AliEn File Catalogue access protocol regarding fraud and unauthorized file alteration and presents a more secure and revised design: a new mechanism, called LFN Booking Table, is introduced in order to keep track of access authorization in the transient state of files entering or leaving the File Catalogue. Due to a simplification of the original Access Envelope mechanism for xrootd-protocol-based storage systems, fundamental computational improvements of the mechanism were achieved as well as an up to 50% reduction of the credential's size. By extending the access protocol with signed status messages from the underlying storage system, the File Catalogue receives trusted information about a file's size and checksum and the protocol is no longer dependent on client trust. Altogether, the revised design complies with atomic and consistent transactions and allows for accountable, authentic, and traceable file operations. This paper describes these changes as part and beyond the development of AliEn version 2.19.

  17. Renewal and maintenance of a nuclear structure data file used for the calculations of dose conversion factors

    International Nuclear Information System (INIS)

    Togawa, Orihiko; Yamaguchi, Yukichi

    1996-02-01

    The ENSDF decay data are used as fundamental data to compute radiation data in the DOSDAC code system, which was developed at JAERI for the calculation of dose conversion factors. The ENSDF decay data have been periodically revised by reviewing new experimental data in the literature under an international network. The use of this data file enables us to calculate radiation data from the newest, internationally recognized information. In spite of this advantage, the decay data file is seldom used in applied fields, owing to its complicated structure and to several problems that must be solved before radiation data can be calculated from it. This report describes methods for the renewal and maintenance of the ENSDF decay data used for the calculation of dose conversion factors. When the decay data are used directly, attention must sometimes be paid to problems such as defects in the data. In renewing and using the ENSDF decay data, the DOSDAC code system avoids erroneous calculations of radiation data by checking and correcting data defects through four supporting computer codes. (author)

  18. Important comments on KERMA factors and DPA cross-section data in ACE files of JENDL-4.0, JEFF-3.2 and ENDF/B-VII.1

    Science.gov (United States)

    Konno, Chikara; Tada, Kenichi; Kwon, Saerom; Ohta, Masayuki; Sato, Satoshi

    2017-09-01

    We have studied the reasons for differences in KERMA factors and DPA cross-section data among nuclear data libraries. Here the KERMA factors and DPA cross-section data included in the official ACE files of JENDL-4.0, ENDF/B-VII.1 and JEFF-3.2 are examined in more detail. As a result, it is newly found that the KERMA factors and DPA cross-section data of many nuclei differ among JENDL-4.0, ENDF/B-VII.1 and JEFF-3.2, for the following reasons: 1) large secondary-particle production yields, 2) no secondary gamma data, 3) secondary gamma data in Files 12-15 with MT=3, 4) MT=103-107 data without MT=600s-800s data in File 6. Issue 1) is considered to be due to the nuclear data, while issues 2)-4) seem to be due to NJOY. The ACE files of JENDL-4.0, ENDF/B-VII.1 and JEFF-3.2 with these problems should be revised after the erroneous nuclear data and the NJOY problems are corrected.

  19. High School and Beyond: Twins and Siblings' File Users' Manual, User's Manual for Teacher Comment File, Friends File Users' Manual.

    Science.gov (United States)

    National Center for Education Statistics (ED), Washington, DC.

    These three users' manuals are for specific files of the High School and Beyond Study, a national longitudinal study of high school sophomores and seniors in 1980. The three files are computerized databases that are available on magnetic tape. As one component of base year data collection, information identifying twins, triplets, and some non-twin…

  20. ORACL program file for acquisition, storage and analysis of data in radiation measurement and nondestructive measurement of nuclear material, vol. 2

    International Nuclear Information System (INIS)

    Yagi, Hideyuki; Takeuchi, Norio; Gotoh, Hiroshi

    1976-09-01

    The file contains 79 programs for radiation measurement and nondestructive measurement of nuclear material, written in ORACL, the conversational language associated with the GAMMA-III system of ORTEC Incorporated. It covers data transfers between disk, core, MCA and magnetic tape; editing of data on disks; calculation of peak areas; calculation of means and standard deviations; reference to gamma-ray data files; accounting; calendar functions; and more. It also includes a support system for microcomputer development. Usage of the built-in functions of ORACL is presented. (auth.)

  1. Virtual file system for PSDS

    Science.gov (United States)

    Runnels, Tyson D.

    1993-01-01

    This is a case study. It deals with the use of a 'virtual file system' (VFS) for Boeing's UNIX-based Product Standards Data System (PSDS). One of the objectives of PSDS is to store digital standards documents. The file-storage requirements are that the files must be rapidly accessible, stored for long periods of time - as though they were paper - protected from disaster, and accumulating to about 80 billion characters (80 gigabytes). This volume of data will be approached in the first two years of the project's operation. The approach chosen is to install a hierarchical file migration system using optical disk cartridges. Files are migrated from high-performance media to lower-performance optical media based on a least-frequently-used algorithm. The optical media are less expensive per character stored and are removable. Vital statistics about the removable optical disk cartridges are maintained in a database. The assembly of hardware and software acts as a single virtual file system transparent to the PSDS user. The files are copied to 'backup-and-recover' media whose vital statistics are also stored in the database. Seventeen months into operation, PSDS is storing 49 gigabytes. A number of operational and performance problems were overcome. Costs are under control. New and/or alternative uses for the VFS are being considered.
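
    A least-frequently-used selection policy of the kind described above can be sketched in a few lines of Python. This is an illustration only, with hypothetical names and invented data; the actual PSDS migration code is not public.

        import heapq
        from dataclasses import dataclass, field

        @dataclass(order=True)
        class FileRecord:
            access_count: int                    # migration key: how often the file was read
            path: str = field(compare=False)
            size_bytes: int = field(compare=False)

        def select_for_migration(files, bytes_needed):
            """Pop the least-frequently-used files until enough space is freed."""
            heap = list(files)
            heapq.heapify(heap)                  # min-heap ordered by access_count
            chosen, freed = [], 0
            while heap and freed < bytes_needed:
                rec = heapq.heappop(heap)
                chosen.append(rec.path)
                freed += rec.size_bytes
            return chosen

        files = [FileRecord(12, "/stds/a.doc", 4_000_000),
                 FileRecord(1, "/stds/b.doc", 9_000_000),
                 FileRecord(5, "/stds/c.doc", 2_000_000)]
        print(select_for_migration(files, 10_000_000))   # ['/stds/b.doc', '/stds/c.doc']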

  2. New remarks on KERMA factors and DPA cross section data in ACE files

    International Nuclear Information System (INIS)

    Konno, Chikara; Sato, Satoshi; Ohta, Masayuki; Kwon, Saerom; Ochiai, Kentaro

    2016-01-01

    KERMA factors and DPA cross section data are essential for nuclear heating and material damage estimation in fusion reactor designs. Recently we compared the KERMA factors and DPA cross section data in the latest official ACE files of JENDL-4.0, ENDF/B-VII.1, JEFF-3.2 and FENDL-3.0 and found that for many nuclei they do not always agree among the nuclear data libraries. We investigated the nuclear data libraries and the nuclear data processing code NJOY and identified new reasons for the discrepancies: (1) incorrect nuclear data and NJOY bugs, (2) huge helium production cross section data, (3) the gamma production data format in the nuclear data, (4) no detailed secondary particle data (energy–angular distribution data). These problems should be resolved based on this study.

  3. New remarks on KERMA factors and DPA cross section data in ACE files

    Energy Technology Data Exchange (ETDEWEB)

    Konno, Chikara, E-mail: konno.chikara@jaea.go.jp; Sato, Satoshi; Ohta, Masayuki; Kwon, Saerom; Ochiai, Kentaro

    2016-11-01

    KERMA factors and DPA cross section data are essential for nuclear heating and material damage estimation in fusion reactor designs. Recently we compared the KERMA factors and DPA cross section data in the latest official ACE files of JENDL-4.0, ENDF/B-VII.1, JEFF-3.2 and FENDL-3.0 and found that for many nuclei they do not always agree among the nuclear data libraries. We investigated the nuclear data libraries and the nuclear data processing code NJOY and identified new reasons for the discrepancies: (1) incorrect nuclear data and NJOY bugs, (2) huge helium production cross section data, (3) the gamma production data format in the nuclear data, (4) no detailed secondary particle data (energy–angular distribution data). These problems should be resolved based on this study.

  4. National Household Education Surveys Program of 2012: Data File User's Manual. Parent and Family Involvement in Education Survey. Early Childhood Program Participation Survey. NCES 2015-030

    Science.gov (United States)

    McPhee, C.; Bielick, S.; Masterton, M.; Flores, L.; Parmer, R.; Amchin, S.; Stern, S.; McGowan, H.

    2015-01-01

    The 2012 National Household Education Surveys Program (NHES:2012) Data File User's Manual provides documentation and guidance for users of the NHES:2012 data files. The manual provides information about the purpose of the study, the sample design, data collection procedures, data processing procedures, response rates, imputation, weighting and…

  5. Some aspects of the file organization and retrieval strategy in large data-bases

    International Nuclear Information System (INIS)

    Arnaudov, D.D.; Govorun, N.N.

    1977-01-01

    Methods of organizing a large information retrieval system are described. Special attention is paid to the file organization, and an adaptive file structure is described in more detail. The method discussed makes it possible to organize large files in such a way that the response time of the system is minimized as the file grows. In connection with the retrieval strategy, a method is proposed that uses the frequencies of descriptors, and of pairs of descriptors, to forecast the expected number of relevant documents. Programs based on these methods have been written and are used in the information retrieval systems of JINR

  6. Operations Data Files, driving force behind International Space Station operations

    Science.gov (United States)

    Hoppenbrouwers, Tom; Ferra, Lionel; Markus, Michael; Wolff, Mikael

    2017-09-01

    Almost all tasks performed by the astronauts on board the International Space Station (ISS) and by ground controllers in the Mission Control Centre - from the operation and maintenance of station systems to the execution of scientific experiments or high-risk docking manoeuvres of visiting vehicles - would not be possible without Operations Data Files (ODF). ODFs are the user manuals of the Space Station and take multiple forms, ranging from traditional step-by-step procedures, scripts, cue cards and displays to software which guides the crew through the execution of certain tasks. These key operational documents are standardized because they are used on board the Space Station by an international crew that changes every three months. This harmonization effort is also paramount for consistency, as the crew moves from one element to another in a matter of seconds, and from one activity to another. On the ground, a large group of experts from all International Partners drafts, prepares, reviews and approves all Operations Data Files on a daily basis, ensuring their timely availability on board the ISS for all activities. Unavailability of these operational documents would halt the conduct of experiments or cancel milestone events. This paper gives an insight into the ground preparation work for the ODFs (with a focus on ESA ODF processes), presents an overview of ODF formats and their usage within the ISS environment today, and shows how vital they are. Furthermore, the focus is on recently implemented ODF features which significantly ease the use of this documentation and improve the efficiency of the astronauts performing the tasks. Examples are short video demonstrations, interactive 3D animations, Execute Tailored Procedures (XTP-versions), tablet products, etc.

  7. LASIP-III, a generalized processor for standard interface files

    International Nuclear Information System (INIS)

    Bosler, G.E.; O'Dell, R.D.; Resnik, W.M.

    1976-03-01

    The LASIP-III code was developed for processing Version III standard interface data files, which have been specified by the Committee on Computer Code Coordination. This processor performs two distinct tasks: transforming free-field-format BCD data into well-defined binary files, and providing for printing and punching the data in the binary files. While LASIP-III is exported as a complete free-standing code package, techniques are described for easily separating the processor into two modules, viz., one for creating the binary files and one for printing the files. The two modules can be separated into free-standing codes or incorporated into other codes. The LASIP-III code can also be easily expanded for processing additional files, and procedures are described for such an expansion. 2 figures, 8 tables

  8. Comparison of WIMS results using libraries based on new evaluated data files

    International Nuclear Information System (INIS)

    Trkov, A.; Ganesan, S.; Zidi, T.

    1996-01-01

    A number of selected benchmark experiments have been modelled with the WIMS-D/4 lattice code. Calculations were performed using multigroup libraries generated from a number of newly released evaluated data files. Data processing was done with the NJOY91.38 code. Since the data processing methods were the same in all cases, the results serve to determine the impact of differences in the basic data on integral parameters. The calculated integral parameters were also compared to the measured values. The observed differences were small, which means that there are no essential differences between the evaluated data libraries. The results of the analysis therefore cannot discriminate among the evaluated data libraries considered in terms of data quality. For the test cases considered, the results with the new, unadjusted libraries are at least as good as those obtained with the old, adjusted WIMS library which is supplied with the code. (author). 16 refs, 3 tabs

  9. Algorithms and file structures for computational geometry

    International Nuclear Information System (INIS)

    Hinrichs, K.; Nievergelt, J.

    1983-01-01

    Algorithms for solving geometric problems and file structures for storing large amounts of geometric data are of increasing importance in computer graphics and computer-aided design. As examples of recent progress in computational geometry, we explain plane-sweep algorithms, which solve various topological and geometric problems efficiently; and we present the grid file, an adaptable, symmetric multi-key file structure that provides efficient access to multi-dimensional data along any space dimension. (orig.)
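
    As a concrete illustration of the plane-sweep technique mentioned above, the following sketch sweeps a vertical line across event points to find the maximum number of horizontal segments any such line crosses. The data and names are invented; this is a minimal example of the method, not code from the paper.

        def max_overlap(segments):
            """segments: list of (x_start, x_end) pairs."""
            events = []
            for start, end in segments:
                events.append((start, +1))            # segment enters the sweep line
                events.append((end, -1))              # segment leaves the sweep line
            events.sort(key=lambda e: (e[0], e[1]))   # departures first at equal x
            best = active = 0
            for _, delta in events:
                active += delta
                best = max(best, active)
            return best

        print(max_overlap([(0, 4), (2, 6), (5, 9)]))  # -> 2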

  10. mzML2ISA & nmrML2ISA: generating enriched ISA-Tab metadata files from metabolomics XML data.

    Science.gov (United States)

    Larralde, Martin; Lawson, Thomas N; Weber, Ralf J M; Moreno, Pablo; Haug, Kenneth; Rocca-Serra, Philippe; Viant, Mark R; Steinbeck, Christoph; Salek, Reza M

    2017-08-15

    Submission to the MetaboLights repository for metabolomics data currently places the burden of reporting instrument and acquisition parameters in ISA-Tab format on users, who have to do it manually, a process that is time consuming and prone to user input error. Since the large majority of these parameters are embedded in instrument raw data files, an opportunity exists to capture this metadata more accurately. Here we report a set of Python packages that can automatically generate ISA-Tab metadata file stubs from raw XML metabolomics data files. The parsing packages are separated into mzML2ISA (encompassing mzML and imzML formats) and nmrML2ISA (nmrML format only). Overall, the use of mzML2ISA & nmrML2ISA reduces the time needed to capture metadata substantially (capturing 90% of metadata on assay and sample levels), is much less prone to user input errors, improves compliance with minimum information reporting guidelines and facilitates more finely grained data exploration and querying of datasets. mzML2ISA & nmrML2ISA are available under version 3 of the GNU General Public Licence at https://github.com/ISA-tools. Documentation is available from http://2isa.readthedocs.io/en/latest/. reza.salek@ebi.ac.uk or isatools@googlegroups.com. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
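
    The automation described above is possible because mzML stores most acquisition parameters as cvParam XML elements. The following sketch is not the mzML2ISA API but a stand-alone illustration, using only the Python standard library, of pulling instrument-level parameters out of an mzML file.

        import xml.etree.ElementTree as ET

        MZML = "{http://psi.hupo.org/ms/mzml}"    # the mzML XML namespace

        def instrument_params(path):
            """Yield (accession, name, value) for every instrument cvParam."""
            tree = ET.parse(path)
            for cfg in tree.getroot().iter(MZML + "instrumentConfiguration"):
                for cv in cfg.iter(MZML + "cvParam"):
                    yield cv.get("accession"), cv.get("name"), cv.get("value")

        # for acc, name, value in instrument_params("sample.mzML"):
        #     print(acc, name, value)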

  11. RRDF-98. Russian reactor dosimetry file. Summary documentation

    International Nuclear Information System (INIS)

    Pashchenko, A.B.

    1999-01-01

    This document summarizes the contents and documentation of the new version of the Russian Reactor Dosimetry File (RRDF-98) released in December 1998 by the Russian Center on Nuclear Data (CJD) at the Institute of Physics and Power Engineering, Russian Federation. This file contains the original evaluations of cross section data and covariance matrices for 22 reactions which are used for neutron flux dosimetry by foil activation. The majority of the evaluations included in previous versions of the Russian Reactor Dosimetry Files (BOSPOR-80, RRDF-94 and RRDF-96) have been superseded by new evaluations. The evaluated cross sections of RRDF-98 averaged over 252-Cf and 235-U fission spectra are compared with relevant integral data. The data file is available from the IAEA Nuclear Data Section on diskette, cost free. (author)

  12. The International Evaluated Nuclear Structure Data File (ENSDF) in fundamental and applied photonuclear research

    International Nuclear Information System (INIS)

    Boboshin, I.N.; Varlamov, V.V.

    1989-04-01

    In order to provide the necessary nuclear physics data from the ENSDF file to those carrying out fundamental or applied photonuclear research, a specialized software system was set up on an ES computer. A brief description of the block diagram of this software package, and of one of the programs in the package (SUPER), is given. 4 refs, 6 figs

  13. Evaluation of sodium-23 neutron capture cross section data for the ENDF/B V-III file

    International Nuclear Information System (INIS)

    Paik, N.C.; Pitterle, T.A.

    1975-01-01

    The evaluation of neutron cross sections of 23-Na, material number 1156, for the ENDF/B file is described. Cross sections were evaluated between 10^-5 eV and 15 MeV. Experimental data available up to March 1971 were included in the evaluation

  14. RRDF-98. Russian reactor dosimetry file. Summary documentation

    Energy Technology Data Exchange (ETDEWEB)

    Pashchenko, A B

    1999-03-01

    This document summarizes the contents and documentation of the new version of the Russian Reactor Dosimetry File (RRDF-98) released in December 1998 by the Russian Center on Nuclear Data (CJD) at the Institute of Physics and Power Engineering, Russian Federation. This file contains the original evaluations of cross section data and covariance matrices for 22 reactions which are used for neutron flux dosimetry by foil activation. The majority of the evaluations included in previous versions of the Russian Reactor Dosimetry Files (BOSPOR-80, RRDF-94 and RRDF-96) have been superseded by new evaluations. The evaluated cross sections of RRDF-98 averaged over 252-Cf and 235-U fission spectra are compared with relevant integral data. The data file is available from the IAEA Nuclear Data Section on diskette, cost free. (author) 9 refs, 22 figs, 2 tabs

  15. Parallel file system with metadata distributed across partitioned key-value store c

    Science.gov (United States)

    Bent, John M.; Faibish, Sorin; Grider, Gary; Torres, Aaron

    2017-09-19

    Improved techniques are provided for storing metadata associated with a plurality of sub-files associated with a single shared file in a parallel file system. The shared file is generated by a plurality of applications executing on a plurality of compute nodes. A compute node implements a Parallel Log Structured File System (PLFS) library to store at least one portion of the shared file generated by an application executing on the compute node and metadata for the at least one portion of the shared file on one or more object storage servers. The compute node is also configured to implement a partitioned data store for storing a partition of the metadata for the shared file, wherein the partitioned data store communicates with partitioned data stores on other compute nodes using a message passing interface. The partitioned data store can be implemented, for example, using Multidimensional Data Hashing Indexing Middleware (MDHIM).
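
    The core idea - metadata for one shared file spread over many nodes by hashing keys to partitions - can be sketched compactly. The sketch below is illustrative only: the patent names MDHIM over MPI, while here plain in-process dictionaries stand in and all names are invented.

        import hashlib

        class PartitionedMetadataStore:
            def __init__(self, n_partitions):
                # One dict per partition; in a real system each partition would
                # live on a different compute node and be reached via MPI messages.
                self.partitions = [{} for _ in range(n_partitions)]

            def _partition_for(self, key):
                digest = hashlib.sha1(key.encode()).digest()
                return digest[0] % len(self.partitions)

            def put(self, key, value):
                self.partitions[self._partition_for(key)][key] = value

            def get(self, key):
                return self.partitions[self._partition_for(key)].get(key)

        store = PartitionedMetadataStore(n_partitions=4)
        store.put("shared.out/offset/0", {"node": 3, "length": 1048576})
        print(store.get("shared.out/offset/0"))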

  16. XML Files

    Science.gov (United States)

    MedlinePlus XML Files (https://medlineplus.gov/xml.html): MedlinePlus produces XML data sets that you are welcome to download ...

  17. User's guide for the implementation of level one of the proposed American National Standard Specifications for an information interchange data descriptive file on control data 6000/7000 series computers

    CERN Document Server

    Wiley, R A

    1977-01-01

    User's guide for the implementation of level one of the proposed American National Standard Specifications for an information interchange data descriptive file on control data 6000/7000 series computers

  18. Comparative evaluation of debris extruded apically by using, Protaper retreatment file, K3 file and H-file with solvent in endodontic retreatment

    Directory of Open Access Journals (Sweden)

    Chetna Arora

    2012-01-01

    Full Text Available Aim: The aim of this study was to evaluate the apical extrusion of debris, comparing two engine-driven systems and a hand instrumentation technique during root canal retreatment. Materials and Methods: Forty-five human permanent mandibular premolars were prepared using the step-back technique, obturated with gutta-percha/zinc oxide eugenol sealer and the cold lateral condensation technique. The teeth were divided into three groups - Group A: ProTaper retreatment file, Group B: K3 file, Group C: H-file with tetrachloroethylene. All the canals were irrigated with 20 ml distilled water during instrumentation. Debris extruded along with the irrigating solution during the retreatment procedure was carefully collected in preweighed Eppendorf tubes. The tubes were stored in an incubator for 5 days, placed in a desiccator and then re-weighed. The weight of dry debris was calculated by subtracting the weight of the tube before instrumentation from its weight after instrumentation. Data were analyzed using two-way ANOVA and a post hoc test. Results: There was a statistically significant difference in the apical extrusion of debris between hand instrumentation and both the ProTaper retreatment file and the K3 file. The difference in the amount of extruded debris between the ProTaper retreatment file and the K3 file was not statistically significant. All three instrumentation techniques produced apically extruded debris and irrigant. Conclusion: The best way to minimize the extrusion of debris is to adopt a crown-down technique; therefore the use of a rotary technique (ProTaper retreatment file, K3 file) is recommended.

  19. Image File - TP Atlas | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available TP Atlas: Image File. Data name: Image File. DOI: 10.18908/lsdba.nbdc01161-004. Description of data contents: network diagrams (in PNG format) for each project; one project has one pathway file ...

  20. Extending DIRAC File Management with Erasure-Coding for efficient storage

    CERN Document Server

    Skipsey, Samuel Cadellin; Britton, David; Crooks, David; Roy, Gareth

    2015-01-01

    The state of the art in Grid style data management is to achieve increased resilience of data via multiple complete replicas of data files across multiple storage endpoints. While this is effective, it is not the most space-efficient approach to resilience, especially when the reliability of individual storage endpoints is sufficiently high that only a few will be inactive at any point in time. We report on work performed as part of GridPP, extending the Dirac File Catalogue and file management interface to allow the placement of erasure-coded files: each file distributed as N identically-sized chunks of data striped across a vector of storage endpoints, encoded such that any M chunks can be lost and the original file can be reconstructed. The tools developed are transparent to the user, and, as well as allowing up and downloading of data to Grid storage, also provide the possibility of parallelising access across all of the distributed chunks at once, improving data transfer and IO performance. ...

  1. Building a parallel file system simulator

    International Nuclear Information System (INIS)

    Molina-Estolano, E; Maltzahn, C; Brandt, S A; Bent, J

    2009-01-01

    Parallel file systems are gaining in popularity in high-end computing centers as well as commercial data centers. High-end computing systems are expected to scale exponentially and to pose new challenges to their storage scalability in terms of cost and power. To address these challenges, scientists and file system designers will need a thorough understanding of the design space of parallel file systems. Yet there exist few systematic studies of parallel file system behavior at petabyte and exabyte scale. An important reason is the significant cost of getting access to large-scale hardware to test parallel file systems. To contribute to this understanding we are building a parallel file system simulator that can simulate parallel file systems at very large scale. Our goal is to simulate petabyte-scale parallel file systems on a small cluster or even a single machine in reasonable time and with reasonable fidelity. With this simulator, file system experts will be able to tune existing file systems for specific workloads, scientists and file system deployment engineers will be able to better communicate workload requirements, file system designers and researchers will be able to try out design alternatives and innovations at scale, and instructors will be able to study very large-scale parallel file system behavior in the classroom. In this paper we describe our approach and provide preliminary results that are encouraging both in terms of fidelity and simulation scalability.

  2. Formatting data files for repeated-measures analyses in SPSS: Using the Aggregate and Restructure procedures

    Directory of Open Access Journals (Sweden)

    Gyslain Giguère

    2006-03-01

    Full Text Available In this tutorial, we demonstrate how to use the Aggregate and Restructure procedures available in SPSS (versions 11 and up) to prepare data files for repeated-measures analyses. In the first two sections of the tutorial, we briefly describe the Aggregate and Restructure procedures. In the final section, we present an example in which the data from a fictional lexical decision task are prepared for analysis using a mixed-design ANOVA. The tutorial demonstrates that the presented method is the most efficient way to prepare data for repeated-measures analyses in SPSS.
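
    For readers outside SPSS, the same aggregate-then-restructure workflow can be sketched with pandas; the lexical-decision numbers below are invented and the column names are illustrative.

        import pandas as pd

        trials = pd.DataFrame({
            "subject":   [1, 1, 1, 1, 2, 2, 2, 2],
            "condition": ["word", "word", "nonword", "nonword"] * 2,
            "rt_ms":     [512, 498, 640, 655, 530, 541, 702, 689],
        })

        # Step 1 (Aggregate): mean reaction time per subject x condition cell.
        cell_means = trials.groupby(["subject", "condition"], as_index=False)["rt_ms"].mean()

        # Step 2 (Restructure): cast long to wide, one row per subject, as
        # repeated-measures ANOVA procedures expect.
        wide = cell_means.pivot(index="subject", columns="condition", values="rt_ms")
        print(wide)    # columns 'nonword' and 'word', one row per subject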

  3. File Level Provenance Tracking in CMS

    CERN Document Server

    Jones, C D; Paterno, M; Sexton-Kennedy, L; Tanenbaum, W; Riley, D S

    2009-01-01

    The CMS off-line framework stores provenance information within CMS's standard ROOT event data files. The provenance information is used to track how each data product was constructed, including what other data products were read to do the construction. We will present how the framework gathers the provenance information, the efforts necessary to minimise the space used to store the provenance in the file and the tools that will be available to use the provenance.

  4. Review of ENDF/B-VI Fission-Product Cross Sections[Evaluated Nuclear Data File

    Energy Technology Data Exchange (ETDEWEB)

    Wright, R.Q.; MacFarlane, R.E.

    2000-04-01

    In response to concerns raised in Defense Nuclear Facilities Safety Board (DNFSB) Recommendation 93-2, the US Department of Energy (DOE) developed a comprehensive program to help assure that the DOE maintains and enhances its capability to predict the criticality of systems throughout the complex. Tasks developed to implement the response to DNFSB Recommendation 93-2 included Critical Experiments, Criticality Benchmarks, Training, Analytical Methods, and Nuclear Data. The Nuclear Data Task consists of a program of differential measurements at the Oak Ridge Electron Linear Accelerator (ORELA), precise fitting of the differential data with the generalized least-squares fitting code SAMMY to represent the data with Reich-Moore resonance parameters along with covariance (uncertainty) information, and the development of complete evaluations for selected nuclides for inclusion in the Evaluated Nuclear Data File (ENDF/B).

  5. Files synchronization from a large number of insertions and deletions

    Science.gov (United States)

    Ellappan, Vijayan; Kumari, Savera

    2017-11-01

    Synchronization between different versions of files is becoming a major issue that many applications are facing. To make these applications more efficient, an economical algorithm is developed from the previously used File Loading Algorithm. We extend this algorithm in three ways: first, it deals with non-binary files; second, a backup is generated for uploaded files; and third, files are synchronized across insertions and deletions. A user can reconstruct a file from the former file while minimizing the error, and interactive communication is provided without disturbance. The drawback of the previous system is overcome by using synchronization, in which multiple copies of each file/record are created, stored in a backup database, and efficiently restored in case of any unwanted deletion or loss of data. That is, we introduce a protocol that user B may use to reconstruct file X from file Y with suitably low probability of error. Synchronization algorithms find numerous areas of use, including data storage, file sharing, source code control systems, and cloud applications. For example, cloud storage services such as Dropbox synchronize between local copies and cloud backups each time users make changes to local versions. Similarly, synchronization tools are necessary on mobile devices. Specialized synchronization algorithms are used for video and sound editing. Synchronization tools are also capable of performing data duplication.
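
    The flavor of such a protocol can be sketched with fixed-size block checksums: the receiver sends the digests of the blocks it already holds, and the sender replies with only the blocks that differ. All names below are invented; note that handling insertions that shift block boundaries, as the paper targets, additionally requires a rolling hash in the style of rsync.

        import hashlib

        BLOCK = 4096    # illustrative block size

        def block_digests(data: bytes):
            return [hashlib.sha256(data[i:i + BLOCK]).hexdigest()
                    for i in range(0, len(data), BLOCK)]

        def delta(new: bytes, old_digests):
            """Blocks of `new` that the holder of `old_digests` is missing."""
            patch = {}
            for idx in range(0, len(new), BLOCK):
                j = idx // BLOCK
                block = new[idx:idx + BLOCK]
                if j >= len(old_digests) or old_digests[j] != hashlib.sha256(block).hexdigest():
                    patch[j] = block             # send only changed or added blocks
            return patch

        def apply_patch(old: bytes, patch, new_len):
            blocks = [old[i:i + BLOCK] for i in range(0, len(old), BLOCK)]
            for j, block in patch.items():
                while len(blocks) <= j:
                    blocks.append(b"")
                blocks[j] = block
            return b"".join(blocks)[:new_len]    # truncate any stale tail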

  6. An exploratory discussion on business files compilation

    International Nuclear Information System (INIS)

    Gao Chunying

    2014-01-01

    Business files compilation for an enterprise is a distillation and re-creation of its intellectual wealth, from which applicable information can be made available to those who want to use it in a fast, extensive and precise way. Proceeding from the effects of business files compilation on scientific research, production, construction and development, this paper discusses in five points how to define topics, analyze historical materials, search and select data, and process them into an enterprise archives collection. First, it expounds the importance and necessity of business files compilation in the production, operation and development of a company. Second, it presents processing methods from topic definition, material searching and data selection to final examination and correction. Third, it defines principles and classifications so that different categories and levels of processing methods are available for business files compilation. Fourth, it discusses the specific method of implementing a file compilation through a documentation collection, on the principle that topic definition should be geared to demand. Fifth, it addresses the application of information technology to business files compilation, in view of the wide need for business files, so as to raise the level of enterprise archives management. The present discussion focuses on the examination and correction principles of enterprise historical material compilation, the basic classifications, and the major forms of business files compilation achievements. (author)

  7. A study of existing experimental data and validation process for evaluated high energy nuclear data. Report of task force on integral test for JENDL High Energy File in Japanese Nuclear Data Committee

    International Nuclear Information System (INIS)

    Oyama, Yukio; Baba, Mamoru; Watanabe, Yukinobu

    1998-11-01

    JENDL High Energy File (JENDL-HE) is being produced by the Japanese Nuclear Data Committee (JNDC) to provide common fundamental nuclear data in the intermediate energy region for many applications, including basic research, accelerator-driven nuclear waste transmutation, fusion material studies, and medical applications such as radiation therapy. The first version of JENDL-HE, which contains evaluated nuclear data up to 50 MeV, is planned for release in 1998. However, a method of integral testing with which the high-energy nuclear data file can be validated has not been established. Validation of evaluated nuclear data through integral tests is necessary to promote the utilization of JENDL-HE. JNDC set up a task force in 1997 to discuss the problems concerning the integral tests of JENDL-HE. The task force members surveyed and studied the current status of these problems for a year to obtain a guideline for the development of the high-energy nuclear database. This report summarizes the results of the survey and study carried out by the task force for JNDC. (author)

  8. Accessing files in an Internet: The Jade file system

    Science.gov (United States)

    Peterson, Larry L.; Rao, Herman C.

    1991-01-01

    Jade is a new distribution file system that provides a uniform way to name and access files in an internet environment. It makes two important contributions. First, Jade is a logical system that integrates a heterogeneous collection of existing file systems, where heterogeneous means that the underlying file systems support different file access protocols. Jade is designed under the restriction that the underlying file system may not be modified. Second, rather than providing a global name space, Jade permits each user to define a private name space. These private name spaces support two novel features: they allow multiple file systems to be mounted under one directory, and they allow one logical name space to mount other logical name spaces. A prototype of the Jade File System was implemented on Sun Workstations running Unix. It consists of interfaces to the Unix file system, the Sun Network File System, the Andrew File System, and FTP. This paper motivates Jade's design, highlights several aspects of its implementation, and illustrates applications that can take advantage of its features.
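
    The two namespace features highlighted above - several file systems mounted under one directory, consulted in order - can be sketched as a small mount table. The classes below are invented stand-ins for Jade's real interfaces to NFS, AFS and FTP.

        class MountTable:
            def __init__(self):
                self.mounts = {}                 # logical prefix -> list of backends

            def mount(self, prefix, backend):
                self.mounts.setdefault(prefix, []).append(backend)

            def resolve(self, path):
                """Match the longest prefix, then probe its backends in order."""
                for prefix in sorted(self.mounts, key=len, reverse=True):
                    if path.startswith(prefix):
                        rest = path[len(prefix):]
                        for backend in self.mounts[prefix]:
                            if backend.exists(rest):
                                return backend, rest
                raise FileNotFoundError(path)

        class DictBackend:                       # stands in for NFS, AFS, FTP, ...
            def __init__(self, files): self.files = files
            def exists(self, path): return path in self.files

        ns = MountTable()
        ns.mount("/doc", DictBackend({"/a.txt": b"local"}))
        ns.mount("/doc", DictBackend({"/b.txt": b"remote"}))   # second FS, same directory
        print(ns.resolve("/doc/b.txt"))          # found in the second backend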

  9. Accessing files in an internet - The Jade file system

    Science.gov (United States)

    Rao, Herman C.; Peterson, Larry L.

    1993-01-01

    Jade is a new distribution file system that provides a uniform way to name and access files in an internet environment. It makes two important contributions. First, Jade is a logical system that integrates a heterogeneous collection of existing file systems, where heterogeneous means that the underlying file systems support different file access protocols. Jade is designed under the restriction that the underlying file system may not be modified. Second, rather than providing a global name space, Jade permits each user to define a private name space. These private name spaces support two novel features: they allow multiple file systems to be mounted under one directory, and they allow one logical name space to mount other logical name spaces. A prototype of the Jade File System was implemented on Sun Workstations running Unix. It consists of interfaces to the Unix file system, the Sun Network File System, the Andrew File System, and FTP. This paper motivates Jade's design, highlights several aspects of its implementation, and illustrates applications that can take advantage of its features.

  10. pcircle - A Suite of Scalable Parallel File System Tools

    Energy Technology Data Exchange (ETDEWEB)

    2015-10-01

    Most file system software is written for conventional local file systems; it is serialized and cannot take advantage of a large-scale parallel file system. The pcircle software builds on top of ubiquitous MPI in the cluster computing environment and the work-stealing pattern to provide a scalable, high-performance suite of file system tools. In particular, it implements parallel data copy and parallel data checksumming, with advanced features such as asynchronous progress reporting, checkpoint and restart, and integrity checking.
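
    The parallel-checksumming idea can be illustrated in a few lines. The sketch below uses a local process pool rather than pcircle's MPI work stealing, so it shows the concept, not the pcircle implementation.

        import hashlib
        from multiprocessing import Pool
        from pathlib import Path

        def file_digest(path):
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):   # 1 MiB at a time
                    h.update(chunk)
            return path, h.hexdigest()

        def checksum_tree(root, workers=8):
            files = [str(p) for p in Path(root).rglob("*") if p.is_file()]
            with Pool(workers) as pool:          # each worker hashes whole files
                return dict(pool.map(file_digest, files))

        if __name__ == "__main__":
            for name, digest in sorted(checksum_tree(".").items()):
                print(digest, name)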

  11. The U.S. Geological Survey Peak-Flow File Data Verification Project, 2008–16

    Science.gov (United States)

    Ryberg, Karen R.; Goree, Burl B.; Williams-Sether, Tara; Mason, Robert R.

    2017-11-21

    Annual peak streamflow (peak flow) at a streamgage is defined as the maximum instantaneous flow in a water year. A water year begins on October 1 and continues through September 30 of the following year; for example, water year 2015 extends from October 1, 2014, through September 30, 2015. The accuracy, characterization, and completeness of the peak streamflow data are critical in determining flood-frequency estimates that are used daily to design water and transportation infrastructure, delineate flood-plain boundaries, and regulate development and utilization of lands throughout the United States and are essential to understanding the implications of climate and land-use change on flooding and high-flow conditions.As of November 14, 2016, peak-flow data existed for 27,240 unique streamgages in the United States and its territories. The data, collectively referred to as the “peak-flow file,” are available as part of the U.S. Geological Survey (USGS) public web interface, the National Water Information System, at https://nwis.waterdata.usgs.gov/usa/nwis/peak. Although the data have been routinely subjected to periodic review by the USGS Office of Surface Water and screening at the USGS Water Science Center level, these data were not reviewed in a national, systematic manner until 2008 when automated scripts were developed and applied to detect potential errors in peak-flow values and their associated dates, gage heights, and peak-flow qualification codes, as well as qualification codes associated with the gage heights. USGS scientists and hydrographers studied the resulting output, accessed basic records and field notes, and corrected observed errors or, more commonly, confirmed existing data as correct.This report summarizes the changes in peak-flow file data at a national level, illustrates their nature and causation, and identifies the streamgages affected by these changes. Specifically, the peak-flow data were compared for streamgages with peak flow
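
    The water-year convention quoted above reduces to a one-line rule, sketched here for concreteness (the function name is ours):

        from datetime import date

        def water_year(d: date) -> int:
            """A water year runs Oct 1 - Sep 30 and is labeled by its ending year."""
            return d.year + 1 if d.month >= 10 else d.year

        assert water_year(date(2014, 10, 1)) == 2015   # first day of water year 2015
        assert water_year(date(2015, 9, 30)) == 2015   # last day of water year 2015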

  12. source files for manuscript in tex format

    Data.gov (United States)

    U.S. Environmental Protection Agency — Source tex files used to create the manuscript including original figure files and raw data used in tables and inline text. This dataset is associated with the...

  13. Ground-Based Global Navigation Satellite System (GNSS) Compact Observation Data (1-second sampling, sub-hourly files) from NASA CDDIS

    Data.gov (United States)

    National Aeronautics and Space Administration — This dataset consists of ground-based Global Navigation Satellite System (GNSS) Observation Data (1-second sampling, sub-hourly files) from the NASA Crustal Dynamics...

  14. The design and development of GRASS file reservation system

    International Nuclear Information System (INIS)

    Huang Qiulan; Zhu Suijiang; Cheng Yaodong; Chen Gang

    2010-01-01

    GFRS (GRASS File Reservation System) is designed to improve the file access performance of GRASS (Grid-enabled Advanced Storage System), a Hierarchical Storage Management (HSM) system developed at the Computing Center, Institute of High Energy Physics. GRASS provides massive storage management and data migration, but its data migration policy is based simply on factors such as the pool water level and the intervals between migrations, so it lacks precise control over individual files. We therefore designed GFRS to implement user-based file reservation, which reserves and keeps the required files on disk for high-energy physicists. GFRS can improve file access speed for users by avoiding the migration of frequently accessed files to tape. In this paper we first give a brief introduction to the GRASS system and then the detailed architecture and implementation of GFRS. Experimental results from GFRS show good performance, and a simple analysis is made based on them. (authors)

  15. Extending DIRAC File Management with Erasure-Coding for efficient storage.

    Science.gov (United States)

    Cadellin Skipsey, Samuel; Todev, Paulin; Britton, David; Crooks, David; Roy, Gareth

    2015-12-01

    The state of the art in Grid style data management is to achieve increased resilience of data via multiple complete replicas of data files across multiple storage endpoints. While this is effective, it is not the most space-efficient approach to resilience, especially when the reliability of individual storage endpoints is sufficiently high that only a few will be inactive at any point in time. We report on work performed as part of GridPP[1], extending the Dirac File Catalogue and file management interface to allow the placement of erasure-coded files: each file distributed as N identically-sized chunks of data striped across a vector of storage endpoints, encoded such that any M chunks can be lost and the original file can be reconstructed. The tools developed are transparent to the user, and, as well as allowing up and downloading of data to Grid storage, also provide the possibility of parallelising access across all of the distributed chunks at once, improving data transfer and IO performance. We expect this approach to be of most interest to smaller VOs, who have tighter bounds on the storage available to them, but larger (WLCG) VOs may be interested as their total data increases during Run 2. We provide an analysis of the costs and benefits of the approach, along with future development and implementation plans in this area. In general, overheads for multiple file transfers provide the largest issue for competitiveness of this approach at present.
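
    A stripped-down illustration of the erasure-coding idea: the sketch below uses single-parity XOR (N data chunks, any one of which can be rebuilt), a special case of the any-M-of-N codes the DIRAC extension supports. It is illustrative only, not the DIRAC code.

        def xor_bytes(a: bytes, b: bytes) -> bytes:
            return bytes(x ^ y for x, y in zip(a, b))

        def encode(data: bytes, n: int):
            """Split into n equal chunks (zero-padded) plus one XOR parity chunk."""
            size = -(-len(data) // n)            # ceiling division
            chunks = [data[i * size:(i + 1) * size].ljust(size, b"\0") for i in range(n)]
            parity = chunks[0]
            for c in chunks[1:]:
                parity = xor_bytes(parity, c)
            return chunks, parity

        def recover(chunks, parity, lost: int):
            """Rebuild chunk `lost` by XOR-ing the parity with all the survivors."""
            rebuilt = parity
            for i, c in enumerate(chunks):
                if i != lost:                    # only the surviving chunks are used
                    rebuilt = xor_bytes(rebuilt, c)
            return rebuilt

        chunks, parity = encode(b"striped across a vector of storage endpoints", 4)
        assert recover(chunks, parity, lost=2) == chunks[2]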

  16. Strategy on review method for JENDL High Energy File

    Energy Technology Data Exchange (ETDEWEB)

    Yamano, Naoki [Sumitomo Atomic Energy Industries Ltd., Tokyo (Japan)

    1998-11-01

    The status of, and problems with, the review method for the High Energy File of the Japanese Evaluated Nuclear Data Library (JENDL-HE file) are described. Measurements of differential and integral data relevant to the review work for the JENDL-HE file have been examined from the viewpoint of data quality and applicability. In order to carry out the work effectively, a strategy for developing a standard review method is discussed, as well as the necessity of tools to be used in the review scheme. (author)

  17. Grammar-Based Specification and Parsing of Binary File Formats

    Directory of Open Access Journals (Sweden)

    William Underwood

    2012-03-01

    Full Text Available The capability to validate and view or play binary file formats, as well as to convert binary file formats to standard or current file formats, is critically important to the preservation of digital data and records. This paper describes the extension of context-free grammars from strings to binary files. Binary files are arrays of data types, such as long and short integers, floating-point numbers and pointers, as well as characters. The concept of an attribute grammar is extended to these context-free array grammars. This attribute grammar has been used to define a number of chunk-based and directory-based binary file formats. A parser generator has been used with some of these grammars to generate syntax checkers (recognizers) for validating binary file formats. Among the potential benefits of an attribute-grammar-based approach to the specification and parsing of binary file formats is that attribute grammars not only support format validation, but also support the generation of error messages during format validation, the validation of semantic constraints, attribute value extraction (characterization), the generation of viewers or players for file formats, and conversion to current or standard file formats. The significance of these results is that, with these extensions to core computer science concepts, traditional parser/compiler technologies can potentially be used as part of a general, cost-effective curation strategy for binary file formats.
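
    To make the chunk-based case concrete, the hand-written recognizer below parses a made-up RIFF-style layout - a 4-byte ASCII tag, a little-endian uint32 length, then that many payload bytes - and rejects truncated input. It stands in for what the paper generates automatically from an array grammar.

        import struct
        from io import BytesIO

        def parse_chunks(stream):
            chunks = []
            while True:
                header = stream.read(8)
                if not header:
                    return chunks                # clean end of file
                if len(header) < 8:
                    raise ValueError("truncated chunk header")
                tag, length = struct.unpack("<4sI", header)
                payload = stream.read(length)
                if len(payload) < length:
                    raise ValueError(f"chunk {tag!r}: truncated payload")
                chunks.append((tag.decode("ascii"), payload))

        blob = (b"HEAD" + struct.pack("<I", 2) + b"v1" +
                b"DATA" + struct.pack("<I", 3) + b"xyz")
        print(parse_chunks(BytesIO(blob)))       # [('HEAD', b'v1'), ('DATA', b'xyz')]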

  18. On-Board File Management and Its Application in Flight Operations

    Science.gov (United States)

    Kuo, N.

    1998-01-01

    In this paper, the author presents the minimum functions required of an on-board file management system. We explore file manipulation processes and demonstrate how file transfer, along with the file management system, will be utilized to support flight operations and data delivery.

  19. CINDA 83 (1977-1983). The index to literature and computer files on microscopic neutron data

    International Nuclear Information System (INIS)

    1983-01-01

    CINDA, the Computer Index of Neutron Data, contains bibliographical references to measurements, calculations, reviews and evaluations of neutron cross-sections and other microscopic neutron data; it includes also index references to computer libraries of numerical neutron data exchanged between four regional neutron data centres. The present issue, CINDA 83, is an index to the literature on neutron data published after 1976. The basic volume, CINDA-A, together with the present issue, contains the full CINDA file as of 1 April 1983. A supplement to CINDA 83 is foreseen for fall 1983. Next year's issue, which is envisaged to be published in June 1984, will again cover all relevant literature that has appeared after 1976

  20. MR-AFS: a global hierarchical file-system

    International Nuclear Information System (INIS)

    Reuter, H.

    2000-01-01

    The next generation of fusion experiments will use object-oriented technology, creating the need for world-wide sharing of an underlying hierarchical file-system. The Andrew File System (AFS) is a well-known and widely used global distributed file-system. Multiple-Resident-AFS (MR-AFS) combines the features of AFS with hierarchical storage management systems. Files in MR-AFS may therefore be migrated to secondary storage, such as robotic tape libraries. MR-AFS is in use at IPP for the current experiments and for data originating from supercomputer applications. Experiences and scalability issues are discussed

  1. The Global File System

    Science.gov (United States)

    Soltis, Steven R.; Ruwart, Thomas M.; OKeefe, Matthew T.

    1996-01-01

    The Global File System (GFS) is a prototype design for a distributed file system in which cluster nodes physically share storage devices connected via a network such as Fibre Channel. Networks and network-attached storage devices have advanced to a level of performance and extensibility such that the previous disadvantages of shared-disk architectures are no longer valid. This shared-storage architecture attempts to exploit the sophistication of storage device technologies, whereas a server architecture diminishes a device's role to that of a simple component. GFS distributes the file system responsibilities across the processing nodes, storage across the devices, and file system resources across the entire storage pool. GFS caches data on the storage devices instead of in the main memories of the machines. Consistency is established by using a locking mechanism maintained by the storage devices to facilitate atomic read-modify-write operations. The locking mechanism is being prototyped in the Silicon Graphics IRIX operating system and is accessed using standard Unix commands and modules.
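
    The read-modify-write pattern that GFS's device-maintained locks serialize looks like the sketch below; a POSIX advisory file lock stands in for the device lock, so this illustrates only the access pattern, not GFS itself.

        import fcntl

        def atomic_increment(path):
            """Lock, read, modify, write back, unlock: concurrent updaters serialize."""
            with open(path, "r+b") as f:
                fcntl.flock(f, fcntl.LOCK_EX)      # begin critical section
                try:
                    value = int(f.read() or b"0")
                    f.seek(0)
                    f.truncate()
                    f.write(str(value + 1).encode())
                finally:
                    fcntl.flock(f, fcntl.LOCK_UN)  # end critical section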

  2. USEEIO Satellite Files

    Data.gov (United States)

    U.S. Environmental Protection Agency — These files contain the environmental data as particular emissions or resources associated with a BEA sectors that are used in the USEEIO model. They are organized...

  3. Clockwise: A Mixed-Media File System

    NARCIS (Netherlands)

    Bosch, H.G.P.; Jansen, P.G.; Mullender, Sape J.

    This (short) paper presents the Clockwise, a mixed-media file system. The primary goal of the Clockwise is to provide a storage architecture that supports the storage and retrieval of best-effort and real-time file system data. Clockwise provides an abstraction called a dynamic partition that groups

  4. Sandia equation of state data base: seslan File

    Energy Technology Data Exchange (ETDEWEB)

    Kerley, G.I. [Sandia National Labs., Albuquerque, NM (US); Christian-Frear, T.L. [RE/SPEC Inc., Albuquerque, NM (US)

    1993-06-24

    Sandia National Laboratories maintains several libraries of equation of state tables, in a modified Sesame format, for use in hydrocode calculations and other applications. This report discusses one of those libraries, the seslan file, which contains 78 tables from the Los Alamos equation of state library. Minor changes have been made to these tables, making them more convenient for code users and reducing numerical difficulties that occasionally arise in hydrocode calculations.

  5. Status of the JENDL activation file

    International Nuclear Information System (INIS)

    Nakajima, Yutaka

    1996-01-01

    The preliminary JENDL activation file was completed in February 1995 and has been used in the Japanese Nuclear Data Committee and as one of the data sources for the Fusion Evaluated Nuclear Data Library of the IAEA. Since there are already large activation libraries in western Europe and the United States, we are aiming at a more accurate evaluation of the reactions important to nuclear energy development, rather than at covering as many reactions as those libraries. In the preliminary file, 1,158 reaction cross sections have been compiled for 225 nuclides up to 20 MeV. (author)

  6. Federating LHCb datasets using the DIRAC File catalog

    CERN Document Server

    Haen, Christophe; Frank, Markus; Tsaregorodtsev, Andrei

    2015-01-01

    In the distributed computing model of LHCb, the File Catalog (FC) is a central component that keeps track of each file and replica stored on the Grid. It federates the LHCb data files in a logical namespace used by all LHCb applications. As a replica catalog, it is used for brokering jobs to sites where their input data are meant to be present, but it is also used by jobs for finding alternative replicas if necessary. The LCG File Catalog (LFC), used originally by LHCb and other experiments, is now being retired and needs to be replaced. The DIRAC File Catalog (DFC) was developed within the framework of the DIRAC Project and presented during CHEP 2012. From the technical point of view, the code powering the DFC follows aspect-oriented programming (AOP): each type of entity manipulated by the DFC (users, files, replicas, etc.) is treated as a separate 'concern' in AOP terminology. Hence, the database schema can also be adapted to the needs of a Virtual Organization. LHCb opted for a highly tuned MySQL datab...

  7. Visualizing NetCDF Files by Using the EverVIEW Data Viewer

    Science.gov (United States)

    Conzelmann, Craig; Romañach, Stephanie S.

    2010-01-01

    Over the past few years, modelers in South Florida have started using Network Common Data Form (NetCDF) as the standard data container format for storing hydrologic and ecologic modeling inputs and outputs. With its origins in the meteorological discipline, NetCDF was created by the Unidata Program Center at the University Corporation for Atmospheric Research, in conjunction with the National Aeronautics and Space Administration and other organizations. NetCDF is a portable, scalable, self-describing, binary file format optimized for storing array-based scientific data. Despite attributes which make NetCDF desirable to the modeling community, many natural resource managers have few desktop software packages which can consume NetCDF and unlock the valuable data contained within. The U.S. Geological Survey and the Joint Ecosystem Modeling group, an ecological modeling community of practice, are working to address this need with the EverVIEW Data Viewer. Available for several operating systems, this desktop software currently supports graphical displays of NetCDF data as spatial overlays on a three-dimensional globe and views of grid-cell values in tabular form. An included Open Geospatial Consortium compliant, Web-mapping service client and charting interface allows the user to view Web-available spatial data as additional map overlays and provides simple charting visualizations of NetCDF grid values.
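
    What a NetCDF consumer such as the EverVIEW Data Viewer does under the hood can be sketched with the third-party netCDF4 package; the file and variable names below are invented examples.

        from netCDF4 import Dataset

        with Dataset("everglades_stage.nc") as nc:
            print(nc.dimensions.keys(), nc.variables.keys())   # self-describing parts
            stage = nc.variables["stage"]        # e.g. water stage over (time, y, x)
            print(stage.units, stage.shape)      # units travel with the data
            grid = stage[0, :, :]                # one time step: a 2-D grid of values
            print(grid.min(), grid.max())        # ready for tabular or map display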

  8. Ground-Based Global Navigation Satellite System GLONASS (GLObal NAvigation Satellite System) Combined Broadcast Ephemeris Data (daily files) from NASA CDDIS

    Data.gov (United States)

    National Aeronautics and Space Administration — This dataset consists of ground-based Global Navigation Satellite System (GNSS) GLONASS Combined Broadcast Ephemeris Data (daily files of all distinct navigation...

  9. Non-POSIX File System for LHCb Online Event Handling

    CERN Document Server

    Garnier, J C; Cherukuwada, S S

    2011-01-01

    LHCb aims to use its O(20000) CPU cores in the high-level trigger (HLT) and its 120 TB Online storage system for data reprocessing during LHC shutdown periods. These periods can last a few days for technical maintenance or only a few hours during beam interfill gaps. These jobs run on files which are staged in from tape storage to the local storage buffer. The result is again one or more files. Efficient file writing and reading is essential for the performance of the system. Rather than using a traditional shared file system such as NFS or CIFS, we have implemented a custom, light-weight, non-POSIX network file system for the handling of these files. Streaming this file system for data access yields high performance while keeping resource consumption low, and adds features not found in NFS such as high availability and transparent fail-over of the read and write service. The writing part of this streaming service is in successful use for the Online, real-time writing of the d...

  10. TFTR data management system

    International Nuclear Information System (INIS)

    Randerson, L.; Chu, J.; Ludescher, C.; Malsbury, J.; Stark, W.

    1986-01-01

    Developments in the tokamak fusion test reactor (TFTR) data-management system supporting data acquisition and off-line physics data reduction are described. Data from monitor points, timing channels, transient recorder channels, and other devices are acquired and stored for use by on-line tasks. Files are transferred off line automatically. A configuration utility determines data acquired and files transferred. An event system driven by file arrival activates off-line reduction processes. A post-run process transfers files not shipped during runs. Files are archived to tape and are retrievable by digraph and shot number. Automatic skimming based on most recent access, file type, shot numbers, and user-set protections maintains the files required for post-run data reduction

  11. TFTR data management system

    International Nuclear Information System (INIS)

    Randerson, L.; Chu, J.; Ludescher, C.; Malsbury, J.; Stark, W.

    1986-01-01

    Developments in the tokamak fusion test reactor (TFTR) data management system supporting data acquisition and off-line physics data reduction are described. Data from monitor points, timing channels, transient recorder channels, and other devices are acquired and stored for use by on-line tasks. Files are transferred off-line automatically. A configuration utility determines the data acquired and the files transferred. An event system driven by file arrival activates off-line reduction processes. A post-run process transfers files not shipped during runs. Files are archived to tape and are retrievable by digraph and shot number. Automatic skimming based on most recent access, file type, shot numbers, and user-set protection maintains the files required for post-run data reduction

  12. Joint evaluated file qualification for thermal neutron reactors

    International Nuclear Information System (INIS)

    Tellier, H.; Van der Gucht, C.; Vanuxeem, J.

    1986-09-01

    The neutron and nuclear data which are needed by reactor physicists to perform core calculations are brought together in the evaluated files. The files are processed to provide multigroup cross sections. The accuracy of the core calculations depends on the initial data, which is sometimes not accurate enough. Therefore the reactor physicists carry out integral experiments. We show, in this paper, how the use of these integral experiments and the application of a tendency research method can improve the accuracy of the neutron data. This technique was applied to the validation of the joint evaluated file. For this purpose, 56 buckling measurements and 42 isotopic analyses of irradiated fuel were used. Small modifications of the initial data are proposed. The final values are compared with recent recommended values or microscopic data. 8 refs

  13. Joint evaluated file qualification for thermal neutron reactors

    International Nuclear Information System (INIS)

    Tellier, H.; van der Gucht, C.; Vanuxeem, J.

    1986-01-01

    The neutron and nuclear data which are needed by reactor physicists to perform core calculations are brought together in the evaluated files. The files are processed to provide multigroup cross sections. The accuracy of the core calculations depends on the initial data, which is sometimes not accurate enough. Therefore the reactor physicists carry out integral experiments. The authors show, in this paper, how the use of these integral experiments and the application of a tendency research method can improve the accuracy of the neutron data. This technique was applied to the validation of the Joint evaluated file. For this purpose, 56 buckling measurements and 42 isotopic analyses of irradiated fuel were used. Small modifications of the initial data are proposed. The final values are compared with recent recommended values or microscopic data

  14. ACTIV87 Fast neutron activation cross section file 1987

    International Nuclear Information System (INIS)

    Manokhin, V.N.; Pashchenko, A.B.; Plyaskin, V.I.; Bychkov, V.M.; Pronyaev, V.G.; Schwerer, O.

    1989-10-01

    This document summarizes the content of the Fast Neutron Activation Cross Section File based on data from different evaluated data libraries and individual evaluations in ENDF/B-5 format. The entire file or selective retrievals from it are available on magnetic tape, free of charge, from the IAEA Nuclear Data Section. (author)

  15. Formulation of detailed consumables management models for the development (preoperational) period of advanced space transportation system. Volume 4: Flight data file contents

    Science.gov (United States)

    Zamora, M. A.

    1976-01-01

    The contents of the Flight Data File, which constitute the data required by and generated by the Mission Planning Processor, are presented for the construction of the timeline and the determination of the consumables requirements of a given mission.

  16. MMLEADS Public Use File

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Medicare-Medicaid Linked Enrollee Analytic Data Source (MMLEADS) Public Use File (PUF) contains demographic, enrollment, condition prevalence, utilization, and...

  17. Release of the ENDF/B-VII.1 Evaluated Nuclear Data File

    Energy Technology Data Exchange (ETDEWEB)

    Brown, David

    2012-06-30

    The Cross Section Evaluation Working Group (CSEWG) released the ENDF/B-VII.1 library on December 22, 2011. The ENDF/B-VII.1 library is CSEWG's latest recommended evaluated nuclear data file for use in nuclear science and technology applications, and incorporates advances made in the five years since the release of ENDF/B-VII.0, including: many new evaluations in the neutron sublibrary (423 in all, over 190 of which contain covariances), new fission product yields and a greatly improved decay data sublibrary. This summary barely touches on the five years' worth of advances present in the ENDF/B-VII.1 library. We expect that these changes will lead to improved integral performance in reactors and other applications. Furthermore, the expansion of covariance data in this release will allow for better uncertainty quantification, reducing design margins and costs. The ENDF library is an ongoing and evolving effort. Currently, the ENDF data community is embarking on several parallel efforts to improve library management: (1) the adoption of a continuous integration system to provide evaluators 'instant' feedback on the quality of their evaluations and to provide data users with working 'beta' quality libraries in between major releases; (2) the transition to a new hierarchical data format - the Generalized Nuclear Data (GND) format - which we expect to enable new kinds of evaluated data that cannot be accommodated in the legacy ENDF format; and (3) the development of data assimilation and uncertainty propagation techniques to enable the consistent use of integral experimental data in the evaluation process.

  18. ENDF/B-5 Dosimetry Files, mod. 2 1979/81

    International Nuclear Information System (INIS)

    DayDay, N.; Lemmel, H.D.

    1981-09-01

    This document summarizes the contents and documentation of the ENDF/B-5 Dosimetry Files (Point or Group Data) released in October 1979 and modified in August 1981. The files contain data for 36 neutron reactions of 26 isotopes. The entire libraries or selective retrievals from them can be obtained free of charge from the IAEA Nuclear Data Section. (author)

  19. Configuration Management File Manager Developed for Numerical Propulsion System Simulation

    Science.gov (United States)

    Follen, Gregory J.

    1997-01-01

    One of the objectives of the High Performance Computing and Communication Project's (HPCCP) Numerical Propulsion System Simulation (NPSS) is to provide a common and consistent way to manage applications, data, and engine simulations. The NPSS Configuration Management (CM) File Manager integrated with the Common Desktop Environment (CDE) window management system provides a common look and feel for the configuration management of data, applications, and engine simulations for U.S. engine companies. In addition, CM File Manager provides tools to manage a simulation. Features include managing input files, output files, textual notes, and any other material normally associated with simulation. The CM File Manager includes a generic configuration management Application Program Interface (API) that can be adapted for the configuration management repositories of any U.S. engine company.

  20. RAMA: A file system for massively parallel computers

    Science.gov (United States)

    Miller, Ethan L.; Katz, Randy H.

    1993-01-01

    This paper describes a file system design for massively parallel computers which makes very efficient use of a few disks per processor. This overcomes the traditional I/O bottleneck of massively parallel machines by storing the data on disks within the high-speed interconnection network. In addition, the file system, called RAMA, requires little inter-node synchronization, removing another common bottleneck in parallel processor file systems. Support for a large tertiary storage system can easily be integrated into the file system; in fact, RAMA runs most efficiently when tertiary storage is used.

  1. Controlling P2P File-Sharing Networks Traffic

    OpenAIRE

    García Pineda, Miguel; HAMMOUMI, MOHAMMED; Canovas Solbes, Alejandro; Lloret, Jaime

    2011-01-01

    Since the appearance of Peer-To-Peer (P2P) file-sharing networks some time ago, many Internet users have chosen this technology to share and search programs, videos, music, documents, etc. The total number of P2P file-sharing users has been increasing and decreasing in the last decade depending on the creation or end of some well known P2P file-sharing systems. P2P file-sharing networks traffic is currently overloading some data networks and it is a major headache for netw...

  2. Security Application for WAV (Waveform) Audio Files Using the RSA Algorithm

    Directory of Open Access Journals (Sweden)

    Raja Nasrul Fuad

    2017-03-01

    Full Text Available The WAV file format is widely used across various kinds of multimedia and gaming platforms. Ease of access and technological development, with a variety of media, facilitate the exchange of information between places. Data that are important and need to be kept confidential face a wide range of security threats: data can be intercepted and read by third parties during transmission. These problems led to the idea of creating an application whose security functions can protect data using the RSA algorithm. The programming language is C# with the Visual Studio software; the processed data are the sample bytes in the WAV file, while the header is left identical to the original, so the WAV file can still be played even though its information has been concealed. The RSA algorithm can thus be implemented in a programming language so that WAV files can be processed and their data secured.
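
    As an illustration of the header-preserving idea in this record, here is a minimal, hedged sketch. Textbook per-byte RSA expands each byte past 8 bits, so the sketch substitutes a simple SHA-256 counter-mode keystream for the paper's RSA step; the file names and key are placeholders.

```python
import hashlib
import wave

# Header-preserving audio scrambling sketch, assuming a PCM WAV input.
# Only the data chunk is transformed; the WAV header parameters are
# rewritten unchanged, so players still open the file (as noise).

def keystream(key: bytes, n: int) -> bytes:
    out = bytearray()
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return bytes(out[:n])

def scramble(src: str, dst: str, key: bytes) -> None:
    with wave.open(src, "rb") as r:
        params = r.getparams()            # channels, sample width, rate, ...
        frames = r.readframes(r.getnframes())
    ks = keystream(key, len(frames))
    cipher = bytes(a ^ b for a, b in zip(frames, ks))  # XOR is self-inverse
    with wave.open(dst, "wb") as w:
        w.setparams(params)               # identical header metadata
        w.writeframes(cipher)

# scramble("in.wav", "out.wav", b"secret")  # run again on out.wav to recover
```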

  3. PeakML/mzMatch : A File Format, Java Library, R Library, and Tool-Chain for Mass Spectrometry Data Analysis

    NARCIS (Netherlands)

    Scheltema, Richard A.; Jankevics, Andris; Jansen, Ritsert C.; Swertz, Morris A.; Breitling, Rainer

    2011-01-01

    The recent proliferation of high-resolution mass spectrometers has generated a wealth of new data analysis methods. However, flexible integration of these methods into configurations best suited to the research question is hampered by heterogeneous file formats and monolithic software development.

  4. Conversion software for ANSYS APDL 2 FLUENT MHD magnetic file

    International Nuclear Information System (INIS)

    Ghita, G.; Ionescu, S.; Prisecaru, I.

    2016-01-01

    The present paper describes the improvements made to the conversion software ANSYS APDL 2 FLUENT MHD Magnetic File, which is able to extract the data from an ANSYS APDL file and write out a file containing the magnetic field data in the FLUENT magnetohydrodynamics (MHD) format. The MHD module has some features for uniform and non-uniform magnetic fields, but it is limited to sinusoidal or pulsed (square wave) fields with a fixed duty cycle of 50%. The present software underwent major modifications in comparison with the previous version. The most important improvement is a new graphical interface, which provides 3D visualization of both the input and the output files. Processing time was also improved: the new version is twice as fast as the old one. (authors)
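
    Neither the APDL export layout nor the FLUENT MHD magnetic file layout is given in the abstract, so the following minimal sketch only illustrates the general shape of such a converter; the input columns (x y z Bx By Bz) and the output header are assumptions for illustration only.

```python
# Field-data conversion sketch: parse point-wise magnetic field samples from
# a whitespace-separated export and write them out in a simple target layout.

def convert(apdl_path: str, mhd_path: str) -> None:
    points = []
    with open(apdl_path) as f:
        for line in f:
            cols = line.split()
            if len(cols) == 6:                      # assumed: x y z Bx By Bz
                points.append([float(c) for c in cols])
    with open(mhd_path, "w") as out:
        out.write(f"{len(points)}\n")               # hypothetical header line
        for x, y, z, bx, by, bz in points:
            out.write(f"{x:.6e} {y:.6e} {z:.6e} {bx:.6e} {by:.6e} {bz:.6e}\n")
```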

  5. 11 CFR 100.19 - File, filed or filing (2 U.S.C. 434(a)).

    Science.gov (United States)

    2010-01-01

    ... a facsimile machine or by electronic mail if the reporting entity is not required to file..., including electronic reporting entities, may use the Commission's website's on-line program to file 48-hour... the reporting entity is not required to file electronically in accordance with 11 CFR 104.18. [67 FR...

  6. Research of Performance Linux Kernel File Systems

    Directory of Open Access Journals (Sweden)

    Andrey Vladimirovich Ostroukh

    2015-10-01

    Full Text Available The article describes the most common Linux kernel file systems. The research was carried out on a personal computer whose characteristics are given in the article; the study was performed on a typical workstation running GNU/Linux. The software needed for measuring file performance was installed on this machine. Based on the results, conclusions are drawn and recommendations for the use of the file systems are proposed, identifying the best ways to store data.
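
    A study of this kind rests on measurements such as sequential write and read throughput. A minimal sketch of one such measurement follows; block size, file size, and path are illustrative choices, not the article's.

```python
import os
import time

# Sequential write/read throughput for one test file, in MB/s.
def throughput(path="bench.tmp", size_mb=256, block=1 << 20):
    buf = os.urandom(block)
    t0 = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(size_mb):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())          # include the time to reach the disk
    write_s = time.perf_counter() - t0

    t0 = time.perf_counter()
    with open(path, "rb") as f:       # note: reads may be served from the
        while f.read(block):          # page cache unless it is dropped first
            pass
    read_s = time.perf_counter() - t0
    os.remove(path)
    return size_mb / write_s, size_mb / read_s

print("write MB/s, read MB/s:", throughput())
```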

  7. Grid collector: An event catalog with automated file management

    International Nuclear Information System (INIS)

    Wu, Kesheng; Zhang, Wei-Ming; Sim, Alexander; Gu, Junmin; Shoshani, Arie

    2003-01-01

    High Energy Nuclear Physics (HENP) experiments such as STAR at BNL and ATLAS at CERN produce large amounts of data that are stored as files on mass storage systems in computer centers. In these files, the basic unit of data is an event. Analysis is typically performed on a selected set of events. The files containing these events have to be located, copied from mass storage systems to disks before analysis, and removed when no longer needed. These file management tasks are tedious and time consuming. Typically, all events contained in the files are read into memory before a selection is made. Since the time to read the events dominates the overall execution time, reading unwanted events needlessly increases the analysis time. The Grid Collector is a set of software modules that work together to address these two issues. It automates the file management tasks and provides ''direct'' access to the selected events for analyses. It is currently integrated with the STAR analysis framework. The users can select events based on tags, such as, ''production date between March 10 and 20, and the number of charged tracks > 100.'' The Grid Collector locates the files containing relevant events, transfers the files across the Grid if necessary, and delivers the events to the analysis code through the familiar iterators. There have been some research efforts to address the file management issues; the Grid Collector is unique in that it addresses the event access issue together with the file management issues. This makes it more useful to a large variety of users
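
    A minimal sketch of the event-catalog idea: select events by tag predicates and locate only the files that must be staged from mass storage. The catalog layout and tag names here are invented for illustration; the real Grid Collector builds bitmap indexes over the tags.

```python
# One catalog entry per event: (file, event_id, tags).
catalog = [
    ("run100.root", 1, {"date": "2003-03-12", "n_charged": 120}),
    ("run100.root", 2, {"date": "2003-03-14", "n_charged": 80}),
    ("run101.root", 7, {"date": "2003-03-19", "n_charged": 150}),
]

def select(catalog, pred):
    hits = [(f, e) for f, e, tags in catalog if pred(tags)]
    files = sorted({f for f, _ in hits})   # only these need staging from tape
    return files, hits

files, hits = select(
    catalog,
    lambda t: "2003-03-10" <= t["date"] <= "2003-03-20" and t["n_charged"] > 100,
)
print(files)   # ['run100.root', 'run101.root']
print(hits)    # [('run100.root', 1), ('run101.root', 7)]
```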

  8. Grid collector an event catalog with automated file management

    CERN Document Server

    Ke Sheng Wu; Sim, A; Jun Min Gu; Shoshani, A

    2004-01-01

    High Energy Nuclear Physics (HENP) experiments such as STAR at BNL and ATLAS at CERN produce large amounts of data that are stored as files on mass storage systems in computer centers. In these files, the basic unit of data is an event. Analysis is typically performed on a selected set of events. The files containing these events have to be located, copied from mass storage systems to disks before analysis, and removed when no longer needed. These file management tasks are tedious and time consuming. Typically, all events contained in the files are read into memory before a selection is made. Since the time to read the events dominates the overall execution time, reading unwanted events needlessly increases the analysis time. The Grid Collector is a set of software modules that work together to address these two issues. It automates the file management tasks and provides "direct" access to the selected events for analyses. It is currently integrated with the STAR analysis framework. The users can select ev...

  9. Utilizing HDF4 File Content Maps for the Cloud

    Science.gov (United States)

    Lee, Hyokyung Joe

    2016-01-01

    We demonstrate a prototype study showing that HDF4 file content maps can be used to organize data efficiently in a cloud object storage system and so facilitate cloud computing. This approach can be extended to any binary data format and to any existing big data analytics solution powered by cloud computing, because the HDF4 file content map project started as long-term preservation of NASA data that does not require the HDF4 APIs to access the data.
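
    A minimal sketch of the content-map idea: read one dataset out of a binary file using only offset/length pairs, with no format-specific APIs. The JSON map layout here is an assumption for illustration; actual HDF4 file content maps are richer XML documents.

```python
import json

# Read a dataset by byte ranges described in a sidecar content map of the
# assumed form {"dataset": [[offset, length], ...]}. The same seek/read
# pattern maps directly onto ranged GET requests against object storage.

def read_dataset(data_path: str, map_path: str, name: str) -> bytes:
    with open(map_path) as f:
        content_map = json.load(f)
    chunks = []
    with open(data_path, "rb") as f:
        for offset, length in content_map[name]:
            f.seek(offset)
            chunks.append(f.read(length))
    return b"".join(chunks)

# Example map file: {"Temperature": [[512, 4096], [8192, 4096]]}
# data = read_dataset("granule.hdf", "granule.map.json", "Temperature")
```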

  10. 49 CFR 564.5 - Information filing; agency processing of filings.

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 6 2010-10-01 2010-10-01 false Information filing; agency processing of filings... HIGHWAY TRAFFIC SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION REPLACEABLE LIGHT SOURCE INFORMATION (Eff. until 12-01-12) § 564.5 Information filing; agency processing of filings. (a) Each manufacturer...

  11. A secure file manager for UNIX

    Energy Technology Data Exchange (ETDEWEB)

    DeVries, R.G.

    1990-12-31

    The development of a secure file management system for a UNIX-based computer facility with supercomputers and workstations is described. Specifically, UNIX in its usual form does not address: (1) Operation which would satisfy rigorous security requirements. (2) Online space management in an environment where total data demands would be many times the actual online capacity. (3) Making the file management system part of a computer network in which users of any computer in the local network could retrieve data generated on any other computer in the network. The characteristics of UNIX can be exploited to develop a portable, secure file manager which would operate on computer systems ranging from workstations to supercomputers. Implementation considerations making unusual use of UNIX features, rather than requiring extensive internal system changes, are described, and implementation using the Cray Research Inc. UNICOS operating system is outlined.

  12. Program for shaping neutron microconstants for calculations by means of the Monte-Carlo method on the base of estimated data files (NEDAM)

    International Nuclear Information System (INIS)

    Zakharov, L.N.; Markovskij, D.V.; Frank-Kamenetskij, A.D.; Shatalov, G.E.

    1978-01-01

    The program shapes neutron microconstants for calculations by the Monte-Carlo method and is oriented towards a detailed treatment of processes in the fast region. The initial information is files of evaluated data in the UKNDL format. The method combines a group approach to the representation of process probabilities and elastic scattering anisotropy with an individual description of the secondary neutron spectra of non-elastic processes. The NEDAM program is written in the FORTRAN language for the BESM-6 computer and has the following characteristics: the initial evaluated data file length is 20000 words, the multigroup constant file length is 8000 words, and the MARK array length is 1000 words. The calculation time for a single variant is 1-2 min

  13. Cut-and-Paste file-systems: integrating simulators and file systems

    NARCIS (Netherlands)

    Bosch, H.G.P.; Mullender, Sape J.

    1995-01-01

    We have implemented an integrated and configurable file system called the Pegasus file system (PFS) and a trace-driven file-system simulator called Patsy. Patsy is used for off-line analysis of file-system algorithms, PFS is used for on-line file-system data storage. Algorithms are first analyzed in

  14. Generation of SCALE 6 Input Data File for Cross Section Library of PWR Spent Fuel

    International Nuclear Information System (INIS)

    Jeong, Chang Joon; Cho, Dong Keun

    2010-11-01

    In order to obtain the cross section libraries of Korean pressurized water reactor (PWR) spent fuel (SF), SCALE 6 code input files have been generated. The PWR fuel data were obtained from the nuclear design reports (NDR) of the currently operating PWRs. The input files were prepared for 16 fuel types: 4 types of Westinghouse 14x14, 3 types of OPR-1000 16x16, 4 types of Westinghouse 16x16, and 6 types of Westinghouse 17x17. For each fuel type, 5 fuel enrichments have been considered: 1.5, 2.0, 3.0, 4.0 and 5.0 wt%. In the SCALE 6 calculation, an ENDF/B-V 44-group library was used, with 25 burnup steps up to 72000 MWD/T. A 1/4 symmetry model was used for the 16x16 and 17x17 fuel assemblies, and a 1/2 symmetry model for the 14x14 fuel assembly. The generated cross section libraries will be used for the source-term analysis of the PWR SF

  15. COMPOZ data guide

    International Nuclear Information System (INIS)

    Knight, J.R.

    1984-01-01

    The COMPOZ Data Guide used to create the Standard Composition Library is described. Of particular importance is documentation of the COMPOZ input data file structure. Knowledge of the file structure allows users to edit the data file and subsequently create their own site-specific composition library

  16. 40 CFR 716.25 - Adequate file search.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 30 2010-07-01 2010-07-01 false Adequate file search. 716.25 Section... ACT HEALTH AND SAFETY DATA REPORTING General Provisions § 716.25 Adequate file search. The scope of a person's responsibility to search records is limited to records in the location(s) where the required...

  17. Grid collector: An event catalog with automated file management

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Kesheng; Zhang, Wei-Ming; Sim, Alexander; Gu, Junmin; Shoshani, Arie

    2003-10-17

    High Energy Nuclear Physics (HENP) experiments such as STAR at BNL and ATLAS at CERN produce large amounts of data that are stored as files on mass storage systems in computer centers. In these files, the basic unit of data is an event. Analysis is typically performed on a selected set of events. The files containing these events have to be located, copied from mass storage systems to disks before analysis, and removed when no longer needed. These file management tasks are tedious and time consuming. Typically, all events contained in the files are read into memory before a selection is made. Since the time to read the events dominates the overall execution time, reading unwanted events needlessly increases the analysis time. The Grid Collector is a set of software modules that work together to address these two issues. It automates the file management tasks and provides ''direct'' access to the selected events for analyses. It is currently integrated with the STAR analysis framework. The users can select events based on tags, such as, ''production date between March 10 and 20, and the number of charged tracks > 100.'' The Grid Collector locates the files containing relevant events, transfers the files across the Grid if necessary, and delivers the events to the analysis code through the familiar iterators. There have been some research efforts to address the file management issues; the Grid Collector is unique in that it addresses the event access issue together with the file management issues. This makes it more useful to a large variety of users.

  18. Detection Of Alterations In Audio Files Using Spectrograph Analysis

    Directory of Open Access Journals (Sweden)

    Anandha Krishnan G

    2015-08-01

    Full Text Available The present study was carried out to detect changes in audio files using spectrographs. An audio file format is a file format for storing digital audio data on a computer system. A sound spectrograph is a laboratory instrument that displays a graphical representation of the strengths of the various component frequencies of a sound as time passes. The objectives of the study were to find the changes in the spectrographs of audio files after altering them, to compare those changes with the spectrographs of the original files, and to check for similarities and differences between MP3 and WAV. Five different alterations were carried out on each audio file to analyze the differences between the original and the altered file. To alter an MP3 or WAV audio file by cut-copy, the file was opened in Audacity and a different audio segment was pasted into it; the new file was then analyzed to view the differences. By adjusting the necessary parameters the noise was reduced, and the differences between the new file and the original file were analyzed. Each edited audio file was opened in the software named Spek, which after analysis produces a graph of that particular file; the graph was saved for further analysis. The graph of the original audio was compared with the graph of the edited audio file to see the alterations.
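
    A minimal sketch of the comparison step: compute magnitude spectrograms of an original and an edited signal with plain numpy (in place of a dedicated tool such as Spek) and report the frames where they diverge. Frame and hop sizes are illustrative choices.

```python
import numpy as np

def spectrogram(x, frame=1024, hop=512):
    win = np.hanning(frame)
    n = 1 + (len(x) - frame) // hop
    frames = np.stack([x[i * hop : i * hop + frame] * win for i in range(n)])
    return np.abs(np.fft.rfft(frames, axis=1))     # shape: (time, freq)

def find_alteration(original, edited, threshold=1e-6):
    a, b = spectrogram(original), spectrogram(edited)
    diff = np.abs(a - b).mean(axis=1)              # mean deviation per frame
    return np.where(diff > threshold)[0]           # suspicious frame indices

# Toy example: a 1 s tone with an extra component spliced into its second half.
fs = 44100
t = np.arange(fs) / fs
clean = np.sin(2 * np.pi * 440 * t)
tampered = clean.copy()
tampered[fs // 2 :] += 0.1 * np.sin(2 * np.pi * 1000 * t[fs // 2 :])
print(find_alteration(clean, tampered)[:5])        # frames after the splice
```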

  19. Hospital Service Area File

    Data.gov (United States)

    U.S. Department of Health & Human Services — This file is derived from the calendar year inpatient claims data. The records contain number of discharges, length of stay, and total charges summarized by provider...

  20. Impact of up-to-date evaluated nuclear data files on the Monte-Carlo analysis results of metallic fueled BFS critical assemblies

    International Nuclear Information System (INIS)

    Yoo, Jaewoon; Kim, Do-Heon; Kim, Sang-Ji; Kim, Yeong-Il

    2009-01-01

    Three metallic fueled BFS critical assemblies, BFS-73-1, BFS-75-1, and BFS-55-1, were analyzed by using the Monte-Carlo analysis code MCNP4C with five different evaluated data files: ENDF/B-VII.0, JEFF-3.1, JENDL-3.3, JENDL-AC and ENDF/B-VI.6. The impacts of the microscopic cross sections in the up-to-date evaluated nuclear data files were clarified by the analyses. The update of the Zr cross section leads to a calculated k-effective lower than that of ENDF/B-VI.6. The revision of the U-238 inelastic scattering cross section makes a large difference in the predicted k-effectives between the libraries, which depends on the size of the contribution of the inelastic cross section change and the compensation by other reaction types. The results for the spectral indices and reaction rate ratios show the improvement of the up-to-date evaluated nuclear data files for the U-238, Np-237, and Pu-240 fission reactions; however, further improvement is still needed for other minor actinide cross sections. The heterogeneity effects on the k-effective and the relative fission rate distribution were also evaluated in this study, and can be used as correction factors for constructing a homogeneous benchmark configuration while keeping consistency with the actual critical experiment. (author)

  1. PeakML/mzMatch: a file format, Java library, R library, and tool-chain for mass spectrometry data analysis.

    Science.gov (United States)

    Scheltema, Richard A; Jankevics, Andris; Jansen, Ritsert C; Swertz, Morris A; Breitling, Rainer

    2011-04-01

    The recent proliferation of high-resolution mass spectrometers has generated a wealth of new data analysis methods. However, flexible integration of these methods into configurations best suited to the research question is hampered by heterogeneous file formats and monolithic software development. The mzXML, mzData, and mzML file formats have enabled uniform access to unprocessed raw data. In this paper we present our efforts to produce an equally simple and powerful format, PeakML, to uniformly exchange processed intermediary and result data. To demonstrate the versatility of PeakML, we have developed an open source Java toolkit for processing, filtering, and annotating mass spectra in a customizable pipeline (mzMatch), as well as a user-friendly data visualization environment (PeakML Viewer). The PeakML format in particular enables the flexible exchange of processed data between software created by different groups or companies, as we illustrate by providing a PeakML-based integration of the widely used XCMS package with mzMatch data processing tools. As an added advantage, downstream analysis can benefit from direct access to the full mass trace information underlying summarized mass spectrometry results, providing the user with the means to rapidly verify results. The PeakML/mzMatch software is freely available at http://mzmatch.sourceforge.net, with documentation, tutorials, and a community forum.

  2. Provider of Services File

    Data.gov (United States)

    U.S. Department of Health & Human Services — The POS file contains data on characteristics of hospitals and other types of healthcare facilities, including the name and address of the facility and the type of...

  3. File System Virtual Appliances

    Science.gov (United States)

    2010-05-01

    4 KB of data is read or written, data is copied back and forth using trampoline buffers — pages that are shared during proxy initialization — because...

  4. Fast processing the film data file

    International Nuclear Information System (INIS)

    Abramov, B.M.; Avdeev, N.F.; Artemov, A.V.

    1978-01-01

    The problems of processing images obtained from the three-meter magnetic spectrometer on a new PSP-2 automatic device are considered. A detailed description is given of the filtration program, which controls the correct operation of the connection line as well as the scanning parameters and the technical quality of the information. The filtration process can be subdivided into the following main stages: search for fiducial marks; binding of tracks to fiducial marks; reconstruction, from sparks, of track fragments in the chambers. For filtration purposes the BESM-6 computer has been chosen. The complex of filtration programs is shaped as a RAM file, and the required version of the program is assembled by the PATCHY program. The subprograms performing the greater part of the calculations are written in the autocode MADLEN, the rest of the subprograms in FORTRAN and ALGOL. The filtration time for one image is 1.2-2 s of calculation. The BESM-6 computer processes up to 12 thousand images a day

  5. File sharing

    NARCIS (Netherlands)

    van Eijk, N.

    2011-01-01

    'File sharing' has become generally accepted on the Internet. Users share files for downloading music, films, games, software etc. In this note, we have a closer look at the definition of file sharing, the legal and policy-based context as well as enforcement issues. The economic and cultural

  6. Validation of CENDL and JEFF evaluated nuclear data files for TRIGA calculations through the analysis of integral parameters of TRX and BAPL benchmark lattices of thermal reactors

    International Nuclear Information System (INIS)

    Uddin, M.N.; Sarker, M.M.; Khan, M.J.H.; Islam, S.M.A.

    2009-01-01

    The aim of this paper is to present the validation of the evaluated nuclear data files CENDL-2.2 and JEFF-3.1.1 through the analysis of the integral parameters of TRX and BAPL benchmark lattices of thermal reactors, for the neutronics analysis of the TRIGA Mark-II research reactor at AERE, Bangladesh. In this process, the 69-group cross-section library for the lattice code WIMS was generated using the basic evaluated nuclear data files CENDL-2.2 and JEFF-3.1.1 with the help of the nuclear data processing code NJOY99.0. Integral measurements on the thermal reactor lattices TRX-1, TRX-2, BAPL-UO2-1, BAPL-UO2-2 and BAPL-UO2-3 served as standard benchmarks for testing nuclear data files and were selected for this analysis. The integral parameters of the said lattices were calculated using the lattice transport code WIMSD-5B based on the generated 69-group cross-section library. The calculated integral parameters were compared to the measured values as well as to the results of the Monte Carlo code MCNP. It was found that in most cases the values of the integral parameters show good agreement with the experiment and the MCNP results. Besides, the group constants in WIMS format for the isotopes U-235 and U-238 from the two data files were compared using the WIMS library utility code WILLIE, and it was found that the group constants are nearly identical, with insignificant differences. Therefore, this analysis reflects the validation of the evaluated nuclear data files CENDL-2.2 and JEFF-3.1.1 through benchmarking of the integral parameters of TRX and BAPL lattices, and is also a necessary step towards further neutronics analysis of the TRIGA Mark-II research reactor at AERE, Dhaka, Bangladesh.

  7. Methods and approaches to provide feedback from nuclear and covariance data adjustment for improvement of nuclear data files. SG39 meeting, May 2015

    International Nuclear Information System (INIS)

    Wang, Wenming; Yokoyama, Kenji; Kim, Do Heon; Kodeli, Ivan-Alexander; Hursin, Mathieu; Pelloni, Sandro; Palmiotti, Giuseppe; Salvatores, Massimo; Touran, Nicholas; Cabellos De Francisco, Oscar; )

    2015-05-01

    The aim of WPEC subgroup 39 'Methods and approaches to provide feedback from nuclear and covariance data adjustment for improvement of nuclear data files' is to provide criteria and practical approaches to use effectively the results of sensitivity analyses and cross section adjustments for feedback to evaluators and differential measurement experimentalists in order to improve the knowledge of neutron cross sections, uncertainties, and correlations to be used in a wide range of applications. This document is the proceedings of the fourth subgroup meeting, held at the NEA, Issy-les-Moulineaux, France, on 19-20 May 2015. It comprises a summary record of the meeting, two papers on deliverables and all the available presentations (slides) given by the participants: 1 - Status of Deliverables: '1. Methodology' (K. Yokoyama); 2 - Status of Deliverables: '2. Comments on covariance data' (K. Yokoyama); 3 - PROTEUS HCLWR Experiments (M. Hursin); 4 - Preliminary UQ Efforts for TWR Design (N. Touran); 5 - Potential use of beta-eff and other benchmarks for adjustment (I. Kodeli); 6 - k-eff uncertainties for a simple case of Am-241 using different codes and evaluated files (I. Kodeli); 7 - k-eff uncertainties for a simple case of Am-241 using TSUNAMI (O. Cabellos); 8 - REWIND: Ranking Experiments by Weighting to Improve Nuclear Data (G. Palmiotti); 9 - Recent analysis on NUDUNA/MOCABA applications to reactor physics parameters (E. Castro); 10 - INL exploratory study for SEG (A. Hummel); 11 - The Development of Nuclear Data Adjustment Code at CNDC (H. Wu); 12 - SG39 Perspectives (M. Salvatores). A list of issues and actions concludes the document

  8. A scalable infrastructure for CMS data analysis based on OpenStack Cloud and Gluster file system

    Science.gov (United States)

    Toor, S.; Osmani, L.; Eerola, P.; Kraemer, O.; Lindén, T.; Tarkoma, S.; White, J.

    2014-06-01

    The challenge of providing a resilient and scalable computational and data management solution for massive scale research environments requires continuous exploration of new technologies and techniques. In this project the aim has been to design a scalable and resilient infrastructure for CERN HEP data analysis. The infrastructure is based on OpenStack components for structuring a private Cloud with the Gluster File System. We integrate the state-of-the-art Cloud technologies with the traditional Grid middleware infrastructure. Our test results show that the adopted approach provides a scalable and resilient solution for managing resources without compromising on performance and high availability.

  9. A scalable infrastructure for CMS data analysis based on OpenStack Cloud and Gluster file system

    International Nuclear Information System (INIS)

    Toor, S; Eerola, P; Kraemer, O; Lindén, T; Osmani, L; Tarkoma, S; White, J

    2014-01-01

    The challenge of providing a resilient and scalable computational and data management solution for massive scale research environments requires continuous exploration of new technologies and techniques. In this project the aim has been to design a scalable and resilient infrastructure for CERN HEP data analysis. The infrastructure is based on OpenStack components for structuring a private Cloud with the Gluster File System. We integrate the state-of-the-art Cloud technologies with the traditional Grid middleware infrastructure. Our test results show that the adopted approach provides a scalable and resilient solution for managing resources without compromising on performance and high availability.

  10. Summary of JENDL-2 general purpose file

    Energy Technology Data Exchange (ETDEWEB)

    Nakagawa, Tsuneo [ed.

    1984-06-15

    The general purpose file of the second version of Japanese Evaluated Nuclear Data Library (JENDL-2) was released in December 1982. Recently, descriptive data were added to JENDL-2 and at the same time the first revision of numerical data was performed. JENDL-2 (Rev.1) consists of the data for 89 nuclides and about 211,000 records in the ENDF/B-IV format. In this report, full listings of presently added descriptive data are given to summarize the JENDL-2 general purpose file. The 2200-m/sec and 14-MeV cross sections, resonance integrals, Maxwellian and fission spectrum averaged cross sections are given in a table. Average cross sections were also calculated in suitable energy intervals.

  11. Summary of JENDL-2 general purpose file

    International Nuclear Information System (INIS)

    Nakagawa, Tsuneo

    1984-06-01

    The general purpose file of the second version of Japanese Evaluated Nuclear Data Library (JENDL-2) was released in December 1982. Recently, descriptive data were added to JENDL-2 and at the same time the first revision of numerical data was performed. JENDL-2 (Rev.1) consists of the data for 89 nuclides and about 211,000 records in the ENDF/B-IV format. In this report, full listings of presently added descriptive data are given to summarize the JENDL-2 general purpose file. The 2200-m/sec and 14-MeV cross sections, resonance integrals, Maxwellian and fission spectrum averaged cross sections are given in a table. Average cross sections were also calculated in suitable energy intervals. (author)

  12. Performance of the engineering analysis and data system 2 common file system

    Science.gov (United States)

    Debrunner, Linda S.

    1993-01-01

    The Engineering Analysis and Data System (EADS) was used from April 1986 to July 1993 to support large scale scientific and engineering computation (e.g. computational fluid dynamics) at Marshall Space Flight Center. The need for an updated system resulted in a RFP in June 1991, after which a contract was awarded to Cray Grumman. EADS II was installed in February 1993, and by July 1993 most users were migrated. EADS II is a network of heterogeneous computer systems supporting scientific and engineering applications. The Common File System (CFS) is a key component of this system. The CFS provides a seamless, integrated environment to the users of EADS II including both disk and tape storage. UniTree software is used to implement this hierarchical storage management system. The performance of the CFS suffered during the early months of the production system. Several of the performance problems were traced to software bugs which have been corrected. Other problems were associated with hardware. However, the use of NFS in UniTree UCFM software limits the performance of the system. The performance issues related to the CFS have led to a need to develop a greater understanding of the CFS organization. This paper will first describe the EADS II with emphasis on the CFS. Then, a discussion of mass storage systems will be presented, and methods of measuring the performance of the Common File System will be outlined. Finally, areas for further study will be identified and conclusions will be drawn.

  13. Performance of the Galley Parallel File System

    Science.gov (United States)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    As the input/output (I/O) needs of parallel scientific applications increase, file systems for multiprocessors are being designed to provide applications with parallel access to multiple disks. Many parallel file systems present applications with a conventional Unix-like interface that allows the application to access multiple disks transparently. This interface conceals the parallelism within the file system, which increases the ease of programmability, but makes it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. Furthermore, most current parallel file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic parallel workloads. Initial experiments, reported in this paper, indicate that Galley is capable of providing high-performance I/O to applications that access data in patterns that have been observed to be common.

  14. The rice growth image files - The Rice Growth Monitoring for The Phenotypic Functional Analysis | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Data name: the rice growth image files. DOI: 10.18908/lsdba.nbdc00945-004. Description of data contents: the rice growth image files, categorized based on file size. Data file: image files (directory). File URL: ftp://ftp.biosciencedbc.jp/archive/agritogo-rice-phenome/LATEST/image...

  15. Java facilities in processing XML files - JAXB and generating PDF reports

    Directory of Open Access Journals (Sweden)

    Danut-Octavian SIMION

    2008-01-01

    Full Text Available The paper presents the Java programming language facilities for working with XML files using the JAXB (Java Architecture for XML Binding) technology and for generating PDF reports from XML files using Java objects. The XML file can be an existing one containing the data about an entity (Clients, for example) or it might be the result of a SELECT SQL statement. JAXB generates Java classes from XML schema rules through a marshalling/unmarshalling compiler. The PDF file is built from an XML file and uses an XSL-FO formatting file and a Java ResultSet object.

  16. A technique for integrating remote minicomputers into a general computer's file system

    CERN Document Server

    Russell, R D

    1976-01-01

    This paper describes a simple technique for interfacing remote minicomputers used for real-time data acquisition into the file system of a central computer. Developed as part of the ORION system at CERN, this 'File Manager' subsystem enables a program in the minicomputer to access and manipulate files of any type as if they resided on a storage device attached to the minicomputer. Yet, completely transparent to the program, the files are accessed from disks on the central system via high-speed data links, with response times comparable to local storage devices. (6 refs).

  17. A novel platform for in vitro analysis of torque, forces, and three-dimensional file displacements during root canal preparations: application to ProTaper rotary files.

    Science.gov (United States)

    Diop, Amadou; Maurel, Nathalie; Oiknine, Michel; Patoor, Etienne; Machtou, Pierre

    2009-04-01

    We proposed a new testing setup and in vitro experimental procedure allowing the analysis of the forces, torque, and file displacements during the preparation of root canals using nickel-titanium rotary endodontic files. We applied it to the preparation of 20 fresh frozen cadaveric teeth using ProTaper files (Dentsply Maillefer, Ballaigues, Switzerland), according to a clinically used sequence. During the preparations, a clinical hand motion was performed by an endodontist, and we measured the applied torque around the file axis and also the involved three-dimensional forces and 3-dimensional file displacements. Such a biomechanical procedure is useful to better understand the working conditions of the files in terms of loads and displacements. It could be used to analyze the effects of various mechanical and geometric parameters on the files' behavior and to get data for modelling purposes. Finally, it could contribute to studies aiming to improve files design in order to reduce the risks of file fractures.

  18. Image files - RPD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Data file URL: ftp://ftp.biosciencedbc.jp/archive/rpd/LATEST/rpd_gel_image.zip (file size: 38.5 MB)

  19. 12 CFR Appendix F to Part 360 - Customer File Structure

    Science.gov (United States)

    2010-01-01

    ... 12 Banks and Banking 4 2010-01-01 2010-01-01 false Customer File Structure F Appendix F to Part... POLICY RESOLUTION AND RECEIVERSHIP RULES Pt. 360, App. F Appendix F to Part 360—Customer File Structure This is the structure of the data file to provide to the FDIC information related to each customer who...

  20. [PVFS 2000: An operational parallel file system for Beowulf

    Science.gov (United States)

    Ligon, Walt

    2004-01-01

    The approach has been to develop Parallel Virtual File System version 2 (PVFS2), retaining the basic philosophy of the original file system but completely rewriting the code; the architecture consists of server and client components. BMI - BMI is the network abstraction layer. It is designed with a common driver and modules for each protocol supported. The interface is non-blocking, and provides mechanisms for optimizations including pinning user buffers. Currently TCP/IP and GM (Myrinet) modules have been implemented. Trove - Trove is the storage abstraction layer. It provides for storing both data spaces and name/value pairs. Trove can also be implemented using different underlying storage mechanisms including native files, raw disk partitions, SQL and other databases. The current implementation uses native files for data spaces and Berkeley DB for name/value pairs.

  1. A brief overview of the European Fusion File (EFF) Project

    International Nuclear Information System (INIS)

    Kellett, M.A.; Forrest, R.A.; Batistoni, P.

    2004-01-01

    The European Fusion File (EFF) Project is a collaborative project with work funded by the European Fusion Development Agreement (EFDA). The emphasis is on the pooling of resources and removal of duplication of effort, leading to the efficient development of two types of nuclear data libraries for use in fusion power plant design and operation studies. The two branches consist of, on the one hand, a general purpose file for modelling and design capabilities and, on the other, an activation file for the calculation and simulation of dose rates and energy release during operation of a future power plant. Efforts are directed towards a continued improvement of the quality of the nuclear data needed for these analyses. The OECD Nuclear Energy Agency's Data Bank acts as the central repository for the files and all information discussed during twice-yearly meetings. It offers its services at no charge to the Project. (author)

  2. Thermal lattice benchmarks for testing basic evaluated data files, developed with MCNP4B

    International Nuclear Information System (INIS)

    Maucec, M.; Glumac, B.

    1996-01-01

    The development of unit cell and full reactor core models of the DIMPLE S01A, TRX-1 and TRX-2 benchmark experiments, using the Monte Carlo computer code MCNP4B, is presented. Nuclear data from the ENDF/B-V and ENDF/B-VI versions of the cross-section library were used in the calculations. In addition, a comparison is presented to results obtained with similar models and cross-section data from the EJ2-MCNPlib library (which is based upon the JEF-2.2 evaluation) developed at IRC Petten, Netherlands. The results of the criticality calculation with the ENDF/B-VI data library, and a comparison to results obtained using the JEF-2.2 evaluation, confirm the MCNP4B full core model of the DIMPLE reactor as a good benchmark for testing basic evaluated data files. On the other hand, the criticality calculation results obtained using the TRX full core models show less agreement with experiment. It is obvious that without additional data about the TRX geometry, our TRX models are not suitable as Monte Carlo benchmarks. (author)

  3. Renewal-anomalous-heterogeneous files

    International Nuclear Information System (INIS)

    Flomenbom, Ophir

    2010-01-01

    Renewal-anomalous-heterogeneous files are solved. A simple file is made of Brownian hard spheres that diffuse stochastically in an effective 1D channel. Generally, Brownian files are heterogeneous: the spheres' diffusion coefficients are distributed and the initial spheres' density is non-uniform. In renewal-anomalous files, the distribution of waiting times for individual jumps is not exponential as in Brownian files, yet obeys ψ_α(t) ∼ t^(−1−α), 0 < α < 1. The mean square displacement (MSD) of a tagged particle, ⟨r²⟩, obeys ⟨r²⟩ ∼ (⟨r²⟩_nrml)^α, where ⟨r²⟩_nrml is the MSD in the corresponding Brownian file. This scaling is an outcome of an exact relation (derived here) connecting probability density functions of Brownian files and renewal-anomalous files. It is also shown that non-renewal-anomalous files are slower than the corresponding renewal ones.

  4. Log files can and should be prepared for a functionalistic approach

    DEFF Research Database (Denmark)

    Bergenholtz, Henning; Johnsen, Mia

    2007-01-01

    -ups. However, log file analyses have also been characterised by a juggling of numbers based on data calculations of limited direct relevance to practical and theoretical lexicography. This article proposes the development of lexicographically relevant log files for the use in log file analyses in order

  5. Experiences on File Systems: Which is the best file system for you?

    CERN Document Server

    Blomer, J

    2015-01-01

    The distributed file system landscape is scattered. Besides a plethora of research file systems, there is also a large number of production grade file systems with various strengths and weaknesses. The file system, as an abstraction of permanent storage, is appealing because it provides application portability and integration with legacy and third-party applications, including UNIX utilities. On the other hand, the general and simple file system interface makes it notoriously difficult for a distributed file system to perform well under a variety of different workloads. This contribution provides a taxonomy of commonly used distributed file systems and points out areas of research and development that are particularly important for high-energy physics.

  6. HLYWD: a program for post-processing data files to generate selected plots or time-lapse graphics

    International Nuclear Information System (INIS)

    Munro, J.K. Jr.

    1980-05-01

    The program HLYWD is a post-processor of output files generated by large plasma simulation computations or of data files containing a time sequence of plasma diagnostics. It is intended to be used in a production mode for either type of application; i.e., it allows one to generate, along with the graphics sequence, segments containing a title, credits to those who performed the work, text describing the graphics, and an acknowledgement of the funding agency. The current version is designed to generate 3D plots and allows one to select the type of display (linear or semi-log scales), the normalization of function values for display purposes, the viewing perspective, and an option allowing continuous rotation of surfaces. This program was developed with the intention of being relatively easy to use, reasonably flexible, and requiring a minimum investment of the user's time. It uses the TV80 library of graphics software and ORDERLIB system software on the CDC 7600 at the National Magnetic Fusion Energy Computing Center at Lawrence Livermore Laboratory in California

  7. Methods and apparatus for capture and storage of semantic information with sub-files in a parallel computing system

    Science.gov (United States)

    Faibish, Sorin; Bent, John M; Tzelnic, Percy; Grider, Gary; Torres, Aaron

    2015-02-03

    Techniques are provided for storing files in a parallel computing system using sub-files with semantically meaningful boundaries. A method is provided for storing at least one file generated by a distributed application in a parallel computing system. The file comprises one or more of a complete file and a plurality of sub-files. The method comprises the steps of obtaining a user specification of semantic information related to the file; providing the semantic information as a data structure description to a data formatting library write function; and storing the semantic information related to the file with one or more of the sub-files in one or more storage nodes of the parallel computing system. The semantic information provides a description of data in the file. The sub-files can be replicated based on semantically meaningful boundaries.
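
    A minimal sketch of the idea, assuming a JSON sidecar per sub-file carries the user-specified semantic description; the directory layout and the write_subfile() helper are invented for illustration, not the patented interface, which ties into a data formatting library's write path.

```python
import json
import os

# Write a sub-file split on a semantically meaningful boundary together with
# a sidecar describing the data it contains.
def write_subfile(dirname: str, index: int, payload: bytes, semantics: dict) -> None:
    os.makedirs(dirname, exist_ok=True)
    with open(os.path.join(dirname, f"part.{index}"), "wb") as f:
        f.write(payload)
    with open(os.path.join(dirname, f"part.{index}.sem.json"), "w") as f:
        json.dump(semantics, f)          # description of the data in this part

write_subfile(
    "checkpoint.d", 0, b"\x00" * 64,
    {"variable": "temperature", "dtype": "f8", "shape": [8], "timestep": 12},
)
```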

  8. Can E-Filing Reduce Tax Compliance Costs in Developing Countries?

    OpenAIRE

    Yilmaz, Fatih; Coolidge, Jacqueline

    2013-01-01

    The purpose of this study is to investigate the association between electronic filing (e-filing) and the total tax compliance costs incurred by small and medium size businesses in developing countries, based on survey data from South Africa, Ukraine, and Nepal. A priori, most observers expect that use of e-filing should reduce tax compliance costs, but this analysis suggests that the assum...

  9. Storing files in a parallel computing system using list-based index to identify replica files

    Science.gov (United States)

    Faibish, Sorin; Bent, John M.; Tzelnic, Percy; Zhang, Zhenhua; Grider, Gary

    2015-07-21

    Improved techniques are provided for storing files in a parallel computing system using a list-based index to identify file replicas. A file and at least one replica of the file are stored in one or more storage nodes of the parallel computing system. An index for the file comprises at least one list comprising a pointer to a storage location of the file and a storage location of the at least one replica of the file. The file comprises one or more of a complete file and one or more sub-files. The index may also comprise a checksum value for one or more of the file and the replica(s) of the file. The checksum value can be evaluated to validate the file and/or the file replica(s). A query can be processed using the list.
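
    A minimal sketch of such a list-based index, with illustrative field names not taken from the patent: one list of storage locations (primary first, then replicas) plus a checksum used to validate whichever copy is read.

```python
import hashlib
from dataclasses import dataclass, field

@dataclass
class FileIndex:
    locations: list = field(default_factory=list)  # primary first, then replicas
    checksum: str = ""                             # hex digest of the file body

    def add(self, path: str) -> None:
        self.locations.append(path)

    def read_valid(self) -> bytes:
        for path in self.locations:                # fall through to a replica
            with open(path, "rb") as f:
                data = f.read()
            if hashlib.sha256(data).hexdigest() == self.checksum:
                return data                        # first copy that validates
        raise IOError("no replica matched the stored checksum")

# Usage: idx = FileIndex(["/node1/f", "/node2/f"], checksum=known_digest)
#        data = idx.read_valid()
```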

  10. Review of uncertainty files and improved multigroup cross section files for FENDL

    International Nuclear Information System (INIS)

    Ganesan, S.

    1994-03-01

    The IAEA Nuclear Data Section, in co-operation with several national nuclear data centers and research groups, is creating an internationally available Fusion Evaluated Nuclear Data Library (FENDL), which will serve as a comprehensive source of processed and tested nuclear data tailored to the requirements of the Engineering and Development Activities (EDA) of the International Thermonuclear Experimental Reactor (ITER) Project and other fusion-related development projects. The FENDL project of the International Atomic Energy Agency has the task of coordination with the goal of assembling, processing and testing a comprehensive, fusion-relevant Fusion Evaluated Nuclear Data Library with unrestricted international distribution. The present report contains the summary of the IAEA Advisory Group Meeting on ''Review of Uncertainty Files and Improved Multigroup Cross Section Files for FENDL'', held during 8-12 November 1993 at the Tokai Research Establishment, JAERI, Japan, organized in cooperation with the Japan Atomic Energy Research Institute. The report presents the current status of the FENDL activity and the future work plans in the form of conclusions and recommendations of the four Working Groups of the Advisory Group Meeting on (1) experimental and calculational benchmarks, (2) preparation processed libraries for FENDL/ITER, (3) specifying procedures for improving FENDL and (4) selection of activation libraries for FENDL. (author). 1 tab

  11. Nuclear data and related services

    International Nuclear Information System (INIS)

    Tuli, J.K.

    1985-01-01

    National Nuclear Data Center (NNDC) maintains a number of data bases containing bibliographic information and evaluated as well as experimental nuclear properties. An evaluated computer file maintained by the NNDC, called the Evaluated Nuclear Structure Data File (ENSDF), contains nuclear structure information for all known nuclides. The ENSDF is the source for the journal Nuclear Data Sheets which is produced and edited by NNDC. The Evaluated Nuclear Data File (ENDF), on the other hand is designed for storage and retrieval of such evaluated nuclear data as are used in neutronic, photonic, and decay heat calculations in a large variety of applications. The NNDC maintains three bibliographic files: NSR - for nuclear structure and decay data related references, CINDA - a bibliographic file for neutron induced reactions, and CPBIB - for charged particle reactions. Selected retrievals from evaluated data and bibliographic files are possible on-line or on request from NNDC

  12. Australian comments on data catalogues

    Energy Technology Data Exchange (ETDEWEB)

    Symonds, J L [A.A.E.C. Research Establishment, Lucas Heights (Australia)

    1968-05-01

    Between the need for some neutron data and a final evaluated set of data, the need for an action file, a bibliographic and reference file or catalogue, and a data storage and retrieval file is discussed.

  13. A brief overview of the European Fusion File (EFF) project

    International Nuclear Information System (INIS)

    Kellett, M.A.; Forrest, R.A.; Batistoni, P.

    2003-01-01

    The European Fusion File (EFF) Project is a collaborative project with work funded by the European Fusion Development Agreement (EFDA). The emphasis is on the pooling of resources and removal of duplication of effort, leading to the efficient development of two types of nuclear data libraries for use in fusion power plant design and operation studies. The two branches consist of, on the one hand, a transport file for modelling and design capabilities and, on the other, an activation file for the calculation and simulation of dose rates and energy release during operation of a future power plant. The OECD Nuclear Energy Agency's Data Bank acts as the central repository for the files and all information discussed during twice-yearly meetings. It offers its services at no charge to the Project. (author)

  14. A brief overview of the European Fusion File (EFF) project

    International Nuclear Information System (INIS)

    Kellett, M.A.

    2002-01-01

    The European Fusion File (EFF) Project is a collaborative project with work funded by the European Fusion Development Agreement (EFDA). The emphasis is on the pooling of resources and removal of duplication of effort, leading to the efficient development of two types of nuclear data libraries for use in fusion reactor design and operation work. The two branches consist of, on the one hand, a transport file for modelling and design capabilities and, on the other, an activation file for the calculation and simulation of dose rates and energy release during operation of a future reactor. The OECD Nuclear Energy Agency's Data Bank acts as the central repository for the files and all information discussed during twice-yearly meetings, which it holds, offering its services at no charge to the Project. (author)

  15. A computer program for creating keyword indexes to textual data files

    Science.gov (United States)

    Moody, David W.

    1972-01-01

    A keyword-in-context (KWIC) or keyword-out-of-context (KWOC) index is a convenient means of organizing information. This keyword index program can be used to create either KWIC or KWOC indexes of bibliographic references or other types of information punched on cards, typed on optical scanner sheets, or retrieved from various Department of the Interior data bases using the Generalized Information Processing System (GIPSY). The index consists of a 'bibliographic' section and a keyword section based on the permutation of document titles, project titles, environmental impact statement titles, maps, etc., or lists of descriptors. The program can also create a back-of-the-book index to documents from a list of descriptors. By providing the user with a wide range of input and output options, the program provides the researcher, manager, or librarian with a means of maintaining a list and index to documents in a small library, reprint collection, or office file.
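
    A minimal sketch of the KWIC permutation itself: each title is rotated so that every significant word serves once as the index keyword, with its surrounding context. The stopword list and formatting are illustrative.

```python
STOPWORDS = {"a", "an", "and", "for", "in", "of", "on", "the", "to"}

def kwic(titles):
    entries = []
    for title in titles:
        words = title.split()
        for i, w in enumerate(words):
            if w.lower() in STOPWORDS:
                continue
            # Rotate the title so the keyword leads; "/" marks the wrap point.
            context = " ".join(words[i:] + ["/"] + words[:i])
            entries.append((w.lower(), context))
    return sorted(entries)                  # alphabetical by keyword

for key, line in kwic(["Geothermal energy files in computer storage"]):
    print(f"{key:12} {line}")
# computer     computer storage / Geothermal energy files in
# energy       energy files in computer storage / Geothermal
# ...
```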

  16. Geothermal-energy files in computer storage: sites, cities, and industries

    Energy Technology Data Exchange (ETDEWEB)

    O'Dea, P.L.

    1981-12-01

    The site, city, and industrial files are described. The data presented are from the hydrothermal site file containing about three thousand records which describe some of the principal physical features of hydrothermal resources in the United States. Data elements include: latitude, longitude, township, range, section, surface temperature, subsurface temperature, the field potential, and well depth for commercialization. (MHR)

  17. PCF File Format.

    Energy Technology Data Exchange (ETDEWEB)

    Thoreson, Gregory G [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-08-01

    PCF files are binary files designed to contain gamma spectra and neutron count rates from radiation sensors. It is the native format for the GAmma Detector Response and Analysis Software (GADRAS) package [1]. It can contain multiple spectra and information about each spectrum such as energy calibration. This document outlines the format of the file that would allow one to write a computer program to parse and write such files.

  18. AliEnFS - a Linux File System for the AliEn Grid Services

    OpenAIRE

    Peters, Andreas J.; Saiz, P.; Buncic, P.

    2003-01-01

    Among the services offered by the AliEn (ALICE Environment, http://alien.cern.ch) Grid framework there is a virtual file catalogue that allows transparent access to distributed data-sets using various file transfer protocols. alienfs (AliEn File System) integrates the AliEn file catalogue as a new file system type into the Linux kernel using LUFS, a hybrid user-space file system framework (Open Source, http://lufs.sourceforge.net). LUFS uses a special kernel interface level called VFS (Virtual File System)...

  19. Land Boundary Conditions for the Goddard Earth Observing System Model Version 5 (GEOS-5) Climate Modeling System: Recent Updates and Data File Descriptions

    Science.gov (United States)

    Mahanama, Sarith P.; Koster, Randal D.; Walker, Gregory K.; Takacs, Lawrence L.; Reichle, Rolf H.; De Lannoy, Gabrielle; Liu, Qing; Zhao, Bin; Suarez, Max J.

    2015-01-01

    The Earth's land surface boundary conditions in the Goddard Earth Observing System version 5 (GEOS-5) modeling system were updated using recent high spatial and temporal resolution global data products. The updates include: (i) construction of a global 10-arcsec land-ocean-lakes-ice mask; (ii) incorporation of a 10-arcsec Globcover 2009 land cover dataset; (iii) implementation of Level 12 Pfafstetter hydrologic catchments; (iv) use of hybridized SRTM global topography data; (v) construction of the HWSDv1.21-STATSGO2 merged global 30-arcsec soil mineral and carbon data in conjunction with a highly refined soil classification system; (vi) production of diffuse visible and near-infrared 8-day MODIS albedo climatologies at 30-arcsec for the period 2001-2011; and (vii) production of the GEOLAND2 and MODIS merged 8-day LAI climatology at 30-arcsec for GEOS-5. The global data sets were preprocessed and used to construct global raster data files for the software (mkCatchParam) that computes parameters on catchment-tiles for various atmospheric grids. The updates also include a few bug fixes in mkCatchParam, as well as changes (improvements in algorithms, etc.) that allow it to produce tile-space parameters efficiently for high resolution AGCM grids. The update process also includes the construction of data files describing the vegetation type fractions, soil background albedo, nitrogen deposition and mean annual 2 m air temperature to be used with the future Catchment CN model, and the global stream channel network to be used with the future global runoff routing model. This report provides detailed descriptions of the data production process and the data file format of each updated data set.

  20. Building analytical platform with Big Data solutions for log files of PanDA infrastructure

    Science.gov (United States)

    Alekseev, A. A.; Barreiro Megino, F. G.; Klimentov, A. A.; Korchuganova, T. A.; Maendo, T.; Padolski, S. V.

    2018-05-01

    The paper describes the implementation of a high-performance system for the processing and analysis of log files for the PanDA infrastructure of the ATLAS experiment at the Large Hadron Collider (LHC), which is responsible for the workload management of the order of 2M daily jobs across the Worldwide LHC Computing Grid. The solution is based on the ELK technology stack, which includes several components: Filebeat, Logstash, Elasticsearch (ES), and Kibana. Filebeat is used to collect data from logs. Logstash processes the data and exports them to Elasticsearch. ES is responsible for centralized data storage. Data accumulated in ES can be viewed using Kibana. These components were integrated with the PanDA infrastructure and replaced previous log processing systems for increased scalability and usability. The authors describe all the components and their configuration tuning for the current tasks, give the scale of the actual system, and present several real-life examples of how this centralized log processing and storage service is used to showcase the advantages for daily operations.
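
    To make the pipeline's final stage concrete, here is a minimal sketch (not PanDA's actual configuration; the index name and field layout are invented) that indexes one parsed log record with the official Elasticsearch Python client:

        from datetime import datetime, timezone
        from elasticsearch import Elasticsearch  # assumes an ES node is reachable

        es = Elasticsearch("http://localhost:9200")

        # A parsed PanDA-style log line (hypothetical fields, for illustration).
        record = {
            "@timestamp": datetime.now(timezone.utc).isoformat(),
            "level": "INFO",
            "component": "panda-server",
            "message": "job 123456 dispatched to site CERN-PROD",
        }

        # Index the record; Kibana can then visualize documents in panda-logs-*.
        # (document= is the 8.x keyword; older 7.x clients use body= instead.)
        es.index(index="panda-logs-2018.05", document=record)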

  1. LHCB: Non-POSIX File System for the LHCB Online Event Handling

    CERN Multimedia

    Garnier, J-C; Cherukuwada, S S

    2010-01-01

    LHCb aims to use its O(20000) CPU cores in the High Level Trigger (HLT) and its 120 TB Online storage system for data reprocessing during LHC shutdown periods. These periods can last between a few days and several weeks during the winter shutdown, or even only a few hours during beam interfill gaps. These jobs run on files which are staged in from tape storage to the local storage buffer. The results are again one or more files. Efficient file writing and reading is essential for the performance of the system. Rather than using a traditional shared filesystem such as NFS or CIFS we have implemented a custom, light-weight, non-POSIX file system for the handling of these files. Streaming this file system for data access allows high performance to be obtained, while at the same time keeping resource consumption low and adding features not found in NFS, such as high availability and transparent failover of the read and write services. The writing part of this file-system is in successful use for the Online, real-time w...

  2. Verification of data files of TREF-computer program; TREF-ohjelmiston ohjaustiedostojen soveltuvuustutkimus

    Energy Technology Data Exchange (ETDEWEB)

    Ruottu, S.; Halme, A.; Ruottu, A. [Einco Oy, Karhula (Finland)

    1996-12-01

    Originally the aim of the Y43 project was to verify TREF data files for several different processes. However, it appeared that deficient or missing coordination between the experimental and theoretical work made meaningful verification impossible in some cases. Therefore the verification calculations were focused on the catalytic cracking reactor developed by Neste. The studied reactor consisted of prefluidisation and reaction zones. The verification calculations concentrated mainly on physical phenomena like vaporization near the oil injection zone. The main steps of the cracking process can be described as follows: oil (liquid) -> oil (gas) -> oil (catal) -> product (catal) + char (catal) -> product (gas). The catalytic nature of the cracking reaction was accounted for by defining the cracking pseudo-reaction in the catalyst phase. This simplified reaction model was valid only for the vaporization zone. The applied fluid dynamics theory was based on the results of EINCO's earlier LIEKKI projects. (author)

  3. Neutron capture cross-section of fission products in the European activation file EAF-3

    International Nuclear Information System (INIS)

    Kopecky, J.; Delfini, M.G.; Kamp, H.A.J. van der; Gruppelaar, H.; Nierop, D.

    1992-05-01

    This paper contains a description of the work performed to extend and revise the neutron capture data in the European Activation File (EAF-3), with emphasis on nuclides in the fission-product mass range. The starter was the EAF-1 data file from 1989. The present version, EAF/NG-3, contains (n,γ) excitation functions for all nuclides (729 targets) with half-lives exceeding half a day in the mass range from H-1 to Cm-248. The data file is equipped with a preliminary uncertainty file, which will be improved in the near future. (author). 19 refs.; 5 figs.; 3 tabs

  4. The European activation file EAF-4. Summary documentation

    Energy Technology Data Exchange (ETDEWEB)

    Kopecky, J.; Nierop, D.

    1995-12-01

    This report describes the contents of the fourth version of the European Activation File (EAF-4), containing cross-sections for neutron-induced reactions (0-20 MeV energy range), primarily for use in fusion-reactor technology. However, it can be used in other applications as well. The starter was the file EAF-3.1. The present version contains cross-section data for all target nuclides which have half-lives longer than 0.5 days, extended by actinides up to and including fermium (Z=100). Cross-sections to isomeric states are listed separately, and if the isomers live longer than 0.5 days they are also included as targets. The library includes 764 target nuclides with 13,096 reactions with non-zero cross-sections (>10^-8 b) below 20 MeV. The library is available as point-wise data and as multigroup constant data in four different energy group structures (GAM-2, VITAMIN-J, WIMS and XMAS). A complementary uncertainty file has been generated for all reactions, in a one-group structure for threshold reactions and three groups for (n,γ) and (n,f) reactions. The error estimates for this file are adopted either from experimental information or from systematics. (orig.).

  5. 77 FR 12367 - Agency Information Collection and Reporting Activities; Electronic Filing of Bank Secrecy Act...

    Science.gov (United States)

    2012-02-29

    ... capability of electronically filing BSA reports through its system called BSA E-Filing. Effective August 2011... Accounts (FBAR) report. BSA E-Filing is a secure, web-based electronic filing system. It is a flexible... filing institutions or individuals, thereby providing a significant improvement in data quality. BSA E...

  6. Keemei: cloud-based validation of tabular bioinformatics file formats in Google Sheets.

    Science.gov (United States)

    Rideout, Jai Ram; Chase, John H; Bolyen, Evan; Ackermann, Gail; González, Antonio; Knight, Rob; Caporaso, J Gregory

    2016-06-13

    Bioinformatics software often requires human-generated tabular text files as input and has specific requirements for how those data are formatted. Users frequently manage these data in spreadsheet programs, which is convenient for researchers who are compiling the requisite information because the spreadsheet programs can easily be used on different platforms including laptops and tablets, and because they provide a familiar interface. It is increasingly common for many different researchers to be involved in compiling these data, including study coordinators, clinicians, lab technicians and bioinformaticians. As a result, many research groups are shifting toward using cloud-based spreadsheet programs, such as Google Sheets, which support the concurrent editing of a single spreadsheet by different users working on different platforms. Most of the researchers who enter data are not familiar with the formatting requirements of the bioinformatics programs that will be used, so validating and correcting file formats is often a bottleneck prior to beginning bioinformatics analysis. We present Keemei, a Google Sheets Add-on, for validating tabular files used in bioinformatics analyses. Keemei is available free of charge from Google's Chrome Web Store. Keemei can be installed and run on any web browser supported by Google Sheets. Keemei currently supports the validation of two widely used tabular bioinformatics formats, the Quantitative Insights into Microbial Ecology (QIIME) sample metadata mapping file format and the Spatially Referenced Genetic Data (SRGD) format, but is designed to easily support the addition of others. Keemei will save researchers time and frustration by providing a convenient interface for tabular bioinformatics file format validation. By allowing everyone involved with data entry for a project to easily validate their data, it will reduce the validation and formatting bottlenecks that are commonly encountered when human-generated data files are

  7. File Detection On Network Traffic Using Approximate Matching

    Directory of Open Access Journals (Sweden)

    Frank Breitinger

    2014-09-01

    In recent years, Internet technologies have changed enormously, allowing faster Internet connections, higher data rates and mobile usage. Hence, it is possible to send huge amounts of data / files easily, which is often used by insiders or attackers to steal intellectual property. As a consequence, data leakage prevention systems (DLPS) have been developed, which analyze network traffic and alert in case of a data leak. Although the overall concepts of the detection techniques are known, the systems are mostly closed and commercial. Within this paper we present a new technique for network traffic analysis based on approximate matching (a.k.a. fuzzy hashing), which is very common in digital forensics to correlate similar files. This paper demonstrates how to optimize and apply it on single network packets. Our contribution is a straightforward concept which does not need a comprehensive configuration: hash the file and store the digest in the database. Within our experiments we obtained false positive rates between 10^-4 and 10^-5 and an algorithm throughput of over 650 Mbit/s.
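
    The stated workflow (hash the file once, store the digest, score candidate traffic against it) can be sketched with the ssdeep Python bindings, a common approximate-matching library. This is an illustration, not the paper's optimized per-packet algorithm, and the file name and threshold are invented:

        import ssdeep  # pip install ssdeep; bindings for the ssdeep fuzzy-hash library

        # Offline step: fingerprint a sensitive file and store the digest.
        with open("trade_secret.pdf", "rb") as f:   # hypothetical file name
            stored_digest = ssdeep.hash(f.read())

        def inspect_packet(payload: bytes, threshold: int = 50) -> bool:
            """Return True if the payload looks similar to the protected file."""
            score = ssdeep.compare(stored_digest, ssdeep.hash(payload))
            return score >= threshold  # score is 0-100; the threshold is a tuning choice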

  8. Study and development of a document file system with selective access

    International Nuclear Information System (INIS)

    Mathieu, Jean-Claude

    1974-01-01

    The objective of this research thesis was to design and develop a set of software aimed at the efficient management of a document file system using methods of selective access to information. Thus, the three main aspects of file processing (creation, modification, reorganisation) have been addressed. The author first presents the main problems related to the development of a comprehensive automatic documentation system, and their conventional solutions. Some future aspects, notably dealing with the development of peripheral computer technology, are also evoked. He presents the characteristics of the INIS bibliographic records provided by the IAEA which have been used to create the files. In the second part, he briefly describes the general organisation of the file system. This system is based on the use of two main files: an inverse file, which contains for each descriptor a list of the numbers of the documents indexed by this descriptor, and a dictionary of descriptors, or input file, which gives access to the inverse file. The organisation of both files is then described in detail. Other related or associated files are created, and the overall architecture and mechanisms integrated into the file data input software are described, as well as the various processing steps applied to these different files. Performance and possible developments are finally discussed

  9. Stochastic Petri net analysis of a replicated file system

    Science.gov (United States)

    Bechta Dugan, Joanne; Ciardo, Gianfranco

    1989-01-01

    A stochastic Petri-net model of a replicated file system is presented for a distributed environment where replicated files reside on different hosts and a voting algorithm is used to maintain consistency. Witnesses, which simply record the status of the file but contain no data, can be used in addition to or in place of files to reduce overhead. A model sufficiently detailed to include file status (current or out-of-date), as well as failure and repair of hosts where copies or witnesses reside, is presented. The number of copies and witnesses is a parameter of the model. Two different majority protocols are examined, one where a majority of all copies and witnesses is necessary to form a quorum, and the other where only a majority of the copies and witnesses on operational hosts is needed. The latter, known as adaptive voting, is shown to increase file availability in most cases.
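
    As a toy illustration of the two quorum rules compared in the model (assuming the vote counts are already known; names are invented), the sketch below checks whether a quorum exists under static and adaptive voting:

        def has_quorum(current_votes, operational_votes, total_votes, adaptive=False):
            """current_votes: votes of up-to-date copies/witnesses on live hosts.
            operational_votes: all votes on live hosts; total_votes: every vote."""
            base = operational_votes if adaptive else total_votes
            return current_votes > base / 2

        # 3 copies + 2 witnesses = 5 votes; two hosts (2 votes) are down.
        print(has_quorum(2, 3, 5))                 # static: False (needs 3 of all 5)
        print(has_quorum(2, 3, 5, adaptive=True))  # adaptive: True (2 of 3 live votes)

    Losing two hosts blocks static voting but not adaptive voting here, which matches the paper's observation that adaptive voting increases file availability in most cases.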

  10. An internet-based teaching file on clinical nuclear medicine

    International Nuclear Information System (INIS)

    Jiang Zhong; Wu Jinchang

    2001-01-01

    Objective: The goal of this project was to develop an internet-based interactive digital teaching file on nuclide imaging in clinical nuclear medicine, accessible over the internet. Methods: On the basis of the academic teaching contents of the nuclear medicine textbook for undergraduates majoring in nuclear medicine, FrontPage 2000 and HTML, with JavaScript in some parts of the contents, were utilized to develop the internet-based teaching file. Results: A practical and comprehensive teaching file was accomplished and can be accessed over the internet at acceptable speed. Besides the basic teaching contents of nuclide imaging, a large number of typical and rare clinical cases, questionnaires with answers and up-to-date data in the field of nuclear medicine were included in the file. Conclusion: This teaching file meets its goal of providing an easy-to-use and internet-based digital teaching file, with content that is current and enriched, and with presentation modes that are diversified and colorful

  11. Secure-Network-Coding-Based File Sharing via Device-to-Device Communication

    Directory of Open Access Journals (Sweden)

    Lei Wang

    2017-01-01

    In order to increase the efficiency and security of file sharing in next-generation networks, this paper proposes a large-scale file sharing scheme based on secure network coding via device-to-device (D2D) communication. In our scheme, when a user needs to share data with others in the same area, the source node and all the intermediate nodes need to perform a secure network coding operation before forwarding the received data. This process continues until all the mobile devices in the network successfully recover the original file. The experimental results show that secure network coding is feasible and well suited to such file sharing. Moreover, the sharing efficiency and security outperform the traditional replication-based sharing scheme.
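
    For intuition only, here is a minimal sketch of plain (not secure) network coding over GF(2), where encoding is just XOR with random 0/1 coefficients; the secure scheme in the paper additionally protects the coding coefficients, which this fragment does not attempt:

        import random

        def encode(chunks):
            """One coded packet: a random linear combination over GF(2), i.e. XOR."""
            coeffs = [random.randint(0, 1) for _ in chunks]
            if not any(coeffs):
                coeffs[0] = 1                      # avoid the useless all-zero packet
            out = bytearray(len(chunks[0]))
            for c, chunk in zip(coeffs, chunks):
                if c:
                    out = bytearray(a ^ b for a, b in zip(out, chunk))
            return coeffs, bytes(out)

        # Split a file into equal-length chunks, then forward coded packets.
        chunks = [b"abcd", b"efgh", b"ijkl"]
        coeffs, packet = encode(chunks)

    A receiver that has collected enough linearly independent coded packets recovers the original chunks by Gaussian elimination over GF(2).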

  12. Text File Comparator

    Science.gov (United States)

    Kotler, R. S.

    1983-01-01

    The file comparator program IFCOMP is a text file comparator for IBM OS/VS-compatible systems. IFCOMP accepts as input two text files and produces a listing of the differences in pseudo-update form. IFCOMP is very useful in monitoring changes made to software at the source code level.
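
    Python's standard library offers a rough modern analogue of such a difference listing (a unified diff rather than IFCOMP's pseudo-update form; the file names are placeholders):

        import difflib

        # Compare two text files line by line.
        with open("old_version.f", "r") as f:
            old = f.readlines()
        with open("new_version.f", "r") as f:
            new = f.readlines()

        for line in difflib.unified_diff(old, new,
                                         fromfile="old_version.f",
                                         tofile="new_version.f"):
            print(line, end="")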

  13. Next generation WLCG File Transfer Service (FTS)

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    LHC experiments at CERN and worldwide utilize WLCG resources and middleware components to perform distributed computing tasks. One of the most important tasks is reliable file replication. It is a complex problem, suffering from transfer failures, disconnections, transfer duplication, server and network overload, differences in storage systems, etc. To address these problems, EMI and gLite have provided the independent File Transfer Service (FTS) and Grid File Access Library (GFAL) tools. Their development started almost a decade ago; in the meantime, requirements in data management have changed, and the old architecture of FTS and GFAL cannot easily support these changes. Technology has also been progressing: FTS and GFAL do not fit into the new paradigms (cloud and messaging, for example). To be able to serve the next stage of LHC data collecting (from 2013), we need a new generation of these tools: FTS 3 and GFAL 2. We envision a service requiring minimal configuration, which can dynamically adapt to the...

  14. Auto Draw from Excel Input Files

    Science.gov (United States)

    Strauss, Karl F.; Goullioud, Renaud; Cox, Brian; Grimes, James M.

    2011-01-01

    The design process often involves the use of Excel files during project development. To facilitate communication of the information in the Excel files, drawings are often generated. During the design process, the Excel files are updated often to reflect new input. The problem is that the drawings often lag behind the updates, leading to confusion about the current state of the design. The use of this program allows visualization of complex data in a format that is more easily understandable than pages of numbers. Because the graphical output can be updated automatically, the manual labor of diagram drawing can be eliminated. The more frequent update of system diagrams can reduce confusion and errors, and is likely to uncover systemic problems earlier in the design cycle, thus reducing rework and redesign.
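
    A minimal sketch of the pattern, assuming a hypothetical workbook layout (one row per component with name/x/y columns) and using openpyxl and matplotlib rather than the program described above:

        import matplotlib.pyplot as plt
        from openpyxl import load_workbook

        wb = load_workbook("design.xlsx", data_only=True)   # hypothetical workbook
        ws = wb.active

        fig, ax = plt.subplots()
        for name, x, y in ws.iter_rows(min_row=2, max_col=3, values_only=True):
            ax.plot(x, y, "s")                 # draw the component as a square marker
            ax.annotate(name, (x, y))          # label it
        ax.set_title("Auto-generated system diagram")
        fig.savefig("diagram.png")             # rerun whenever the Excel file changes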

  15. A model for optimizing file access patterns using spatio-temporal parallelism

    Energy Technology Data Exchange (ETDEWEB)

    Boonthanome, Nouanesengsy [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Patchett, John [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Geveci, Berk [Kitware Inc., Clifton Park, NY (United States); Ahrens, James [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Bauer, Andy [Kitware Inc., Clifton Park, NY (United States); Chaudhary, Aashish [Kitware Inc., Clifton Park, NY (United States); Miller, Ross G. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Shipman, Galen M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Williams, Dean N. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2013-01-01

    For many years now, I/O read time has been recognized as the primary bottleneck for parallel visualization and analysis of large-scale data. In this paper, we introduce a model that can estimate the read time for a file stored in a parallel filesystem when given the file access pattern. Read times ultimately depend on how the file is stored and the access pattern used to read the file. The file access pattern will be dictated by the type of parallel decomposition used. We employ spatio-temporal parallelism, which combines both spatial and temporal parallelism, to provide greater flexibility to possible file access patterns. Using our model, we were able to configure the spatio-temporal parallelism to design optimized read access patterns that resulted in a speedup factor of approximately 400 over traditional file access patterns.

  16. Reliability analysis of a replication with limited number of journaling files

    International Nuclear Information System (INIS)

    Kimura, Mitsutaka; Imaizumi, Mitsuhiro; Nakagawa, Toshio

    2013-01-01

    Recently, replication mechanisms using journaling files have been widely used for server systems. We have already discussed a model of an asynchronous replication system using journaling files [8]. This paper formulates a stochastic model of a server system with replication that takes into account the number of transmitted journaling files. The server updates the storage database and transmits a journaling file when a client requests a data update. The server transmits the database content to a backup site either at a constant time or after a constant number of transmitted journaling files. We derive the expected numbers of replications and of transmitted journaling files. Further, we calculate the expected cost and discuss the optimal replication interval that minimizes it. Finally, numerical examples are given

  17. Methods and approaches to provide feedback from nuclear and covariance data adjustment for improvement of nuclear data files. SG39 meeting, December 2015

    International Nuclear Information System (INIS)

    Cabellos, Oscar; De Saint Jean, Cyrille; Hursin, Mathieu; Pelloni, Sandro; Ivanov, Evgeny; Kodeli, Ivan; Leconte, Pierre; Palmiotti, Giuseppe; Salvatores, Massimo; Sobes, Vladimir; Yokoyama, Kenji

    2015-12-01

    The aim of WPEC Subgroup 39, 'Methods and approaches to provide feedback from nuclear and covariance data adjustment for improvement of nuclear data files', is to provide criteria and practical approaches for using the results of sensitivity analyses and cross-section adjustments effectively as feedback to evaluators and differential-measurement experimentalists, in order to improve the knowledge of neutron cross sections, uncertainties, and correlations to be used in a wide range of applications. This document is the proceedings of the fifth formal Subgroup 39 meeting held at the Institut Curie, Paris, France, on 4 December 2015. It comprises a Summary Record of the meeting, and all the available presentations (slides) given by the participants: A - Sensitivity methods: - 1: Short update on deliverables (K. Yokoyama); - 2: Is one-shot Bayesian updating equivalent to successive updates? Bayesian inference: some matrix linear algebra (C. De Saint Jean); - 3: Progress in Methodology (G. Palmiotti); - SG39-3: Use of the PIA approach. Possible application to neutron propagation experiments (S. Pelloni); - 4: Update on sensitivity coefficient methods (E. Ivanov); - 5: Stress test for U-235 fission (H. Wu); - 6: Methods and approaches development at ORNL for providing feedback from integral benchmark experiments for improvement of nuclear data files (V. Sobes); B - Integral experiments: - 7a: Update on SEG analysis (G. Palmiotti); - 7b: Status of MANTRA (G. Palmiotti); - 7c: Possible new experiments at NRAD (G. Palmiotti); - 8: B-eff experiments (I. Kodeli); - 9: Ongoing CEA activities related to dedicated integral experiments for nuclear data validation in the fast energy range (P. Leconte); - 10: PROTEUS experiments: an update (M. Hursin); - 11: Short updates on neutron propagation experiments, STEK, CIELO status (O. Cabellos)

  18. Cooperative storage of shared files in a parallel computing system with dynamic block size

    Science.gov (United States)

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2015-11-10

    Improved techniques are provided for parallel writing of data to a shared object in a parallel computing system. A method is provided for storing data generated by a plurality of parallel processes to a shared object in a parallel computing system. The method is performed by at least one of the processes and comprises: dynamically determining a block size for storing the data; exchanging a determined amount of the data with at least one additional process to achieve a block of the data having the dynamically determined block size; and writing the block of the data having the dynamically determined block size to a file system. The determined block size comprises, e.g., a total amount of the data to be stored divided by the number of parallel processes. The file system comprises, for example, a log structured virtual parallel file system, such as a Parallel Log-Structured File System (PLFS).
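
    A rough mpi4py sketch of the idea, under simplifying assumptions (naive gather-based redistribution, remainder bytes ignored); it illustrates the block-size computation, not the patented PLFS mechanism:

        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, nprocs = comm.Get_rank(), comm.Get_size()

        local = (b"%d" % rank) * 1024             # stand-in for per-process payload
        total = comm.allreduce(len(local), op=MPI.SUM)
        block = total // nprocs                   # dynamically determined block size

        # Simplest possible redistribution: gather everything, keep this rank's block.
        # (A real implementation exchanges only bytes that cross rank boundaries,
        # and the last rank would also take the remainder.)
        everything = b"".join(comm.allgather(local))
        mine = everything[rank * block:(rank + 1) * block]

        fh = MPI.File.Open(comm, "shared.out", MPI.MODE_CREATE | MPI.MODE_WRONLY)
        fh.Write_at(rank * block, mine)           # each rank writes at its own offset
        fh.Close()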

  19. Nuclear data online

    International Nuclear Information System (INIS)

    McLane, V.

    1997-01-01

    The National Nuclear Data Center (NNDC) Online Data Service, available since 1986, is continually being upgraded and expanded. Most files are now available for access through the World Wide Web. Bibliographic, experimental, and evaluated data files are available, containing information on neutron, charged-particle, and photon-induced nuclear reaction data, as well as nuclear decay and nuclear structure information. An effort is being made through the world-wide Nuclear Reaction Data Centers collaboration to make the charged-particle reaction data libraries as complete as possible. The data may be downloaded or viewed either as plots or as tabulated data. A variety of output formats are available for most files.

  20. Summary report of the 3. research co-ordination meeting on development of reference input parameter library for nuclear model calculations of nuclear data (Phase 1: Starter File)

    International Nuclear Information System (INIS)

    Oblozinsky, P.

    1997-09-01

    The report contains the summary of the third and last Research Co-ordination Meeting on "Development of Reference Input Parameter Library for Nuclear Model Calculations of Nuclear Data (Phase I: Starter File)", held at the ICTP, Trieste, Italy, from 26 to 29 May 1997. Details are given on the status of the Handbook and the Starter File - two major results of the project. (author)

  1. Segy-change: The swiss army knife for the SEG-Y files

    Directory of Open Access Journals (Sweden)

    Giuseppe Stanghellini

    2017-01-01

    Data collected during active and passive seismic surveys can be stored in many different, more or less standard, formats. One of the most popular is the SEG-Y format, developed since 1975 to store single-line seismic digital data on tapes, and now evolved to store them on hard disks and other media as well. Unfortunately, files that are claimed to be recorded in the SEG-Y format sometimes cannot be processed using available free or commercial packages. Aiming to solve this impasse we present segy-change, a pre-processing software program to view, analyze, change and fix errors present in SEG-Y data files. It is written in the C language, can also be used as a software library, and is compatible with most operating systems. Segy-change allows the user to display and optionally change the values inside all parts of a SEG-Y file: the file header, the trace headers and the data blocks. In addition, it allows a quality check on the data by plotting the traces. We provide instructions and examples on how to use the software.

  2. Segy-change: The swiss army knife for the SEG-Y files

    Science.gov (United States)

    Stanghellini, Giuseppe; Carrara, Gabriela

    Data collected during active and passive seismic surveys can be stored in many different, more or less standard, formats. One of the most popular is the SEG-Y format, developed since 1975 to store single-line seismic digital data on tapes, and now evolved to store them on hard disks and other media as well. Unfortunately, files that are claimed to be recorded in the SEG-Y format sometimes cannot be processed using available free or commercial packages. Aiming to solve this impasse we present segy-change, a pre-processing software program to view, analyze, change and fix errors present in SEG-Y data files. It is written in the C language, can also be used as a software library, and is compatible with most operating systems. Segy-change allows the user to display and optionally change the values inside all parts of a SEG-Y file: the file header, the trace headers and the data blocks. In addition, it allows a quality check on the data by plotting the traces. We provide instructions and examples on how to use the software.
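
    For orientation, a standard SEG-Y file begins with a 3200-byte EBCDIC textual header followed by a 400-byte big-endian binary header. A small Python sketch, independent of segy-change and relying on the conventional rev-1 field offsets (verify against the standard before relying on it), reads a few key fields:

        import struct

        with open("survey.sgy", "rb") as f:       # placeholder file name
            f.seek(3200)                          # skip the EBCDIC textual header
            binhdr = f.read(400)                  # the binary file header

        # Big-endian 16-bit fields at their conventional rev-1 offsets.
        interval_us, = struct.unpack(">H", binhdr[16:18])  # sample interval, microseconds
        n_samples, = struct.unpack(">H", binhdr[20:22])    # samples per data trace
        fmt_code, = struct.unpack(">H", binhdr[24:26])     # 1 = IBM float, 5 = IEEE float

        print(f"dt={interval_us} us, {n_samples} samples/trace, format code {fmt_code}")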

  3. Storage Manager and File Transfer Web Services

    International Nuclear Information System (INIS)

    William A Watson III; Ying Chen; Jie Chen; Walt Akers

    2002-01-01

    Web services are emerging as an interesting mechanism for a wide range of grid services, particularly those focused upon information services and control. When coupled with efficient data transfer services, they provide a powerful mechanism for building a flexible, open, extensible data grid for science applications. In this paper we present our prototype work on a Java Storage Resource Manager (JSRM) web service and a Java Reliable File Transfer (JRFT) web service. A Java client (Grid File Manager) on top of JSRM and JRFT has been developed to demonstrate the capabilities of these web services. The purpose of this work is to show the extent to which SOAP-based web services are an appropriate direction for building a grid-wide data management system, and eventually grid-based portals

  4. Survey on Security Issues in File Management in Cloud Computing Environment

    Science.gov (United States)

    Gupta, Udit

    2015-06-01

    Cloud computing has pervaded every aspect of information technology in the past decade. With the advent of cloud networks, it has become easier to process, in real time, the plethora of data generated by various devices. The privacy of users' data is maintained by data centers around the world, and hence it has become feasible to operate on that data from lightweight portable devices. But with ease of processing comes the security aspect of the data. One such security aspect is secure file transfer, either internally within a cloud or externally from one cloud network to another. File management is central to cloud computing, and it is paramount to address the security concerns which arise out of it. This survey paper aims to elucidate the various protocols which can be used for secure file transfer and to analyze the ramifications of using each protocol.

  5. NASA work unit system file maintenance manual

    Science.gov (United States)

    1972-01-01

    The NASA Work Unit System is a management information system for research tasks (i.e., work units) performed under NASA grants and contracts. It supplies profiles on research efforts and statistics on fund distribution. The file maintenance operator can add, delete and change records at a remote terminal or can submit punched cards to the computer room for batch update. The system is designed for file maintenance by a person with little or no knowledge of data processing techniques.

  6. Angular deflection of rotary nickel titanium files: a comparative study

    Directory of Open Access Journals (Sweden)

    Gianluca Gambarini

    2009-12-01

    A new manufacturing method of twisting nickel titanium wire to produce rotary nickel titanium (RNT) files has recently been developed. The aim of the present study was to evaluate whether the new manufacturing process increased the angular deflection of RNT files, by comparing instruments produced using the new manufacturing method (Twisted Files) versus instruments produced with the traditional grinding process. Testing was performed on a total of 40 instruments of the following commercially available RNT files: Twisted Files (TF), ProFile, K3 and M2 (NRT). All instruments tested had the same dimensions (taper 0.06 and tip size 25). Test procedures strictly followed ISO 3630-1. Data were collected and statistically analyzed by means of the ANOVA test. The results showed that TF demonstrated significantly higher average angular deflection levels (P<0.05) than RNT files manufactured by a grinding process. Since angular deflection represents the amount of rotation (and consequently deformation) that a RNT file can withstand before torsional failure, such a significant improvement is a favorable property for the clinical use of the tested RNT files.

  7. PLEXOS Input Data Generator

    Energy Technology Data Exchange (ETDEWEB)

    2017-02-01

    The PLEXOS Input Data Generator (PIDG) is a tool that enables PLEXOS users to better version their data, automate data processing, collaborate in developing inputs, and transfer data between different production cost modeling and other power systems analysis software. PIDG can process data that is in a generalized format from multiple input sources, including CSV files, PostgreSQL databases, and PSS/E .raw files and write it to an Excel file that can be imported into PLEXOS with only limited manual intervention.
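
    As a hedged sketch of the kind of transformation PIDG automates (this is not PIDG's code; the file names and sheet layout are invented), pandas can turn a generalized CSV input into an Excel sheet ready for import:

        import pandas as pd  # writing .xlsx also requires openpyxl installed

        # Hypothetical generator-parameter table in a generalized CSV format.
        generators = pd.read_csv("generators.csv")

        # Reshape/rename as needed, then write one Excel sheet per object class.
        with pd.ExcelWriter("plexos_input.xlsx") as xl:
            generators.to_excel(xl, sheet_name="Generators", index=False)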

  8. Cyclic fatigue resistance of R-Pilot, HyFlex EDM and PathFile nickel-titanium glide path files in artificial canals with double (S-shaped) curvature.

    Science.gov (United States)

    Uslu, G; Özyürek, T; Yılmaz, K; Gündoğar, M

    2018-05-01

    To examine the cyclic fatigue resistance of R-Pilot, HyFlex EDM and PathFile NiTi glide path files in S-shaped artificial canals. Twenty R-Pilot (12.5/.04), 20 HyFlex EDM (10/.05) and 20 PathFile (19/.02) single-file glide path files were included. The sixty files (n: 20/each) were subjected to static cyclic fatigue testing in double-curved canals until fracture occurred (TF). The number of cycles to fracture (NCF) was calculated by multiplying the rpm value by the TF. The length of the fractured fragment (FL) was determined with a digital microcaliper. Six fractured files (n: 2/each) were examined by SEM to determine the fracture mode. The NCF and FL data were analysed using one-way ANOVA, post hoc Tamhane and Kruskal-Wallis tests with SPSS 21 software. The significance level was set at 5%. In the double-curved canal, all the files fractured first in the apical curvature and then in the coronal curvature. The NCF values revealed that the R-Pilot had the greatest cyclic fatigue resistance, followed by the HyFlex EDM and PathFile, in both the apical and coronal curvatures (P < 0.05). R-Pilot NiTi glide path files, used in a reciprocating motion, had the greatest cyclic fatigue resistance amongst the tested NiTi glide path files in an artificial S-shaped canal. © 2017 International Endodontic Journal. Published by John Wiley & Sons Ltd.

  9. New developments in file-based infrastructure for ATLAS event selection

    Energy Technology Data Exchange (ETDEWEB)

    Gemmeren, P van; Malon, D M [Argonne National Laboratory, Argonne, Illinois 60439 (United States); Nowak, M, E-mail: gemmeren@anl.gov [Brookhaven National Laboratory, Upton, NY 11973-5000 (United States)

    2010-04-01

    In ATLAS software, TAGs are event metadata records that can be stored in various technologies, including ROOT files and relational databases. TAGs are used to identify and extract events that satisfy certain selection predicates, which can be coded as SQL-style queries. TAG collection files support in-file metadata to store information describing all events in the collection. Event Selector functionality has been augmented to provide such collection-level metadata to subsequent algorithms. The ATLAS I/O framework has been extended to allow computational processing of TAG attributes to select or reject events without reading the event data. This capability enables physicists to use more detailed selection criteria than are feasible in an SQL query. For example, the TAGs contain enough information not only to check the number of electrons, but also to calculate their distance to the closest jet, a calculation that would be difficult to express in SQL. Another new development allows ATLAS to write TAGs directly into event data files. This feature can improve performance by supporting advanced event selection capabilities, including computational processing of TAG information, without the need for external TAG file or database access.
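
    To make the contrast with SQL concrete, here is a hedged Python sketch of such a computational selection over TAG-like records; the attribute layout is invented for illustration:

        import math

        def close_to_jet(tag, max_dr=0.4):
            """True if any electron lies within max_dr of a jet in (eta, phi) space."""
            for ele in tag["electrons"]:           # hypothetical TAG attribute layout
                for jet in tag["jets"]:
                    dphi = abs(ele["phi"] - jet["phi"]) % (2 * math.pi)
                    dphi = min(dphi, 2 * math.pi - dphi)
                    if math.hypot(ele["eta"] - jet["eta"], dphi) < max_dr:
                        return True
            return False

        tags = [{"electrons": [{"eta": 0.1, "phi": 0.2}, {"eta": -1.0, "phi": 2.0}],
                 "jets": [{"eta": 2.5, "phi": -1.0}]}]          # one toy event record
        selected = [t for t in tags if len(t["electrons"]) >= 2 and not close_to_jet(t)]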

  10. The European Southern Observatory-MIDAS table file system

    Science.gov (United States)

    Peron, M.; Grosbol, P.

    1992-01-01

    The new and substantially upgraded version of the Table File System in MIDAS is presented as a scientific database system. MIDAS applications for performing database operations on tables are discussed, for instance, the exchange of the data to and from the TFS, the selection of objects, the uncertainty joins across tables, and the graphical representation of data. This upgraded version of the TFS is a full implementation of the binary table extension of the FITS format; in addition, it also supports arrays of strings. Different storage strategies for optimal access of very large data sets are implemented and are addressed in detail. As a simple relational database, the TFS may be used for the management of personal data files. This opens the way to intelligent pipeline processing of large amounts of data. One of the key features of the Table File System is to provide also an extensive set of tools for the analysis of the final results of a reduction process. Column operations using standard and special mathematical functions as well as statistical distributions can be carried out; commands for linear regression and model fitting using nonlinear least square methods and user-defined functions are available. Finally, statistical tests of hypothesis and multivariate methods can also operate on tables.

  11. A mass spectrometry proteomics data management platform.

    Science.gov (United States)

    Sharma, Vagisha; Eng, Jimmy K; Maccoss, Michael J; Riffle, Michael

    2012-09-01

    Mass spectrometry-based proteomics is increasingly being used in biomedical research. These experiments typically generate a large volume of highly complex data, and the volume and complexity are only increasing with time. There exist many software pipelines for analyzing these data (each typically with its own file formats), and as technology improves, these file formats change and new formats are developed. Files produced from these myriad software programs may accumulate on hard disks or tape drives over time, with older files being rendered progressively more obsolete and unusable with each successive technical advancement and data format change. Although initiatives exist to standardize the file formats used in proteomics, they do not address the core failings of a file-based data management system: (1) files are typically poorly annotated experimentally, (2) files are "organically" distributed across laboratory file systems in an ad hoc manner, (3) files formats become obsolete, and (4) searching the data and comparing and contrasting results across separate experiments is very inefficient (if possible at all). Here we present a relational database architecture and accompanying web application dubbed Mass Spectrometry Data Platform that is designed to address the failings of the file-based mass spectrometry data management approach. The database is designed such that the output of disparate software pipelines may be imported into a core set of unified tables, with these core tables being extended to support data generated by specific pipelines. Because the data are unified, they may be queried, viewed, and compared across multiple experiments using a common web interface. Mass Spectrometry Data Platform is open source and freely available at http://code.google.com/p/msdapl/.

  12. HUD GIS Boundary Files

    Data.gov (United States)

    Department of Housing and Urban Development — The HUD GIS Boundary Files are intended to supplement boundary files available from the U.S. Census Bureau. The files are for community planners interested in...

  13. Comparisons of experimental beta-ray spectra important to decay heat predictions with ENSDF [Evaluated Nuclear Structure Data File] evaluations

    International Nuclear Information System (INIS)

    Dickens, J.K.

    1990-03-01

    Graphical comparisons of recently obtained experimental beta-ray spectra with predicted beta-ray spectra based on the Evaluated Nuclear Structure Data File are exhibited for 77 fission products having masses 79--99 and 130--146 and lifetimes between 0.17 and 23650 sec. The comparisons range from very poor to excellent. For beta decay of 47 nuclides, estimates are made of ground-state transition intensities. For 14 cases the value in ENSDF gives results in very good agreement with the experimental data. 12 refs., 77 figs., 1 tab

  14. Global digital data sets of soil type, soil texture, surface slope and other properties: Documentation of archived data tape

    Science.gov (United States)

    Staub, B.; Rosenzweig, C.; Rind, D.

    1987-01-01

    The file structure and coding of four soils data sets derived from the Zobler (1986) world soil file are described. The data were digitized on a one-degree square grid. They are suitable for large-area studies such as climate research with general circulation models, as well as for forestry, agriculture, soils, and hydrology. The first file is a data set of codes for soil unit, land-ice, or water for all the one-degree square cells on Earth. The second file is a data set of codes for texture, land-ice, or water for the same soil units. The third file is a data set of codes for slope, land-ice, or water for the same units. The fourth file is the SOILWRLD data set, containing information on soil properties of land cells from both Matthews' and Food and Agriculture Organization (FAO) sources. The fourth file reconciles land-classification differences between the two and has missing data filled in.

  15. 33 CFR 148.246 - When is a document considered filed and where should I file it?

    Science.gov (United States)

    2010-07-01

    ... filed and where should I file it? 148.246 Section 148.246 Navigation and Navigable Waters COAST GUARD... Formal Hearings § 148.246 When is a document considered filed and where should I file it? (a) If a document to be filed is submitted by mail, it is considered filed on the date it is postmarked. If a...

  16. Basic Stand Alone Medicare Claims Public Use Files

    Data.gov (United States)

    U.S. Department of Health & Human Services — CMS is committed to increasing access to its Medicare claims data through the release of de-identified data files available for public use. They contain...

  17. Versioning Complex Data

    Energy Technology Data Exchange (ETDEWEB)

    Macduff, Matt C.; Lee, Benno; Beus, Sherman J.

    2014-06-29

    Using the history of ARM data files, we designed and demonstrated a feasible data versioning paradigm. Assigning versions to sets of files that are modified under some special assumptions and domain-specific rules was effective in the case of ARM data, which has more than 5000 datastreams and 500 TB of data.

  18. Nuclear Structure References (NSR) file

    International Nuclear Information System (INIS)

    Ewbank, W.B.

    1978-08-01

    The use of the Nuclear Structure References file by the Nuclear Data Project at ORNL is described. Much of the report concerns format information of interest only to those preparing input to the system or otherwise needing detailed knowledge of its internal structure. 17 figures

  19. The AEP Barnbook DATLIB. Nuclear Reaction Cross Sections and Reactivity Parameter Library and Files

    International Nuclear Information System (INIS)

    Feldbacher, R.

    1987-10-01

    Nuclear reaction data for light isotope charged particle reactions (Z<6) have been compiled. This hardcopy contains file headers, plots and an extended bibliography. Numerical data files and processing routines are available on tape at IAEA-NDS. (author). Refs

  20. Generalized File Management System or Proto-DBMS?

    Science.gov (United States)

    Braniff, Tom

    1979-01-01

    The use of a data base management system (DBMS) as opposed to traditional data processing is discussed. The generalized file concept is viewed as an entry level step to the DBMS. The transition process from one system to the other is detailed. (SF)

  1. Wadeable Streams Assessment Data

    Science.gov (United States)

    The Wadeable Streams Assessment (WSA) is a first-ever statistically-valid survey of the biological condition of small streams throughout the U.S. The U.S. Environmental Protection Agency (EPA) worked with the states to conduct the assessment in 2004-2005. Data for each parameter sampled in the Wadeable Streams Assessment (WSA) are available for downloading in a series of files as comma separated values (*.csv). Each *.csv data file has a companion text file (*.txt) that lists a dataset label and individual descriptions for each variable. Users should view the *.txt files first to help guide their understanding and use of the data.

  2. Data formats design of laser irradiation experiments in view of data analysis

    International Nuclear Information System (INIS)

    Su Chunxiao; Yu Xiaoqi; Yang Cunbang; Guo Su; Chen Hongsu

    2002-01-01

    The design rules for the new data file formats of laser irradiation experiments are introduced. Object-oriented programs were designed for studying the experimental data of the laser facilities. The new-format data files combine the experimental data with the diagnostic configuration data, and are applied in data processing and analysis. The editing of diagnostic configuration data in the data acquisition program is also described

  3. XRootd, disk-based, caching proxy for optimization of data access, data placement and data replication

    International Nuclear Information System (INIS)

    Bauerdick, L A T; Bloom, K; Bockelman, B; Bradley, D C; Dasu, S; Dost, J M; Sfiligoi, I; Tadel, A; Tadel, M; Wuerthwein, F; Yagil, A

    2014-01-01

    Following the success of the XRootd-based US CMS data federation, the AAA project investigated extensions of the federation architecture by developing two sample implementations of an XRootd, disk-based, caching proxy. The first one simply starts fetching a whole file as soon as a file open request is received, and is suitable when completely random file access is expected or when it is already known that the whole file will be read. The second implementation supports on-demand downloading of partial files. Extensions to the Hadoop Distributed File System have been developed to allow for an immediate fallback to network access when local HDFS storage fails to provide the requested block. Both cache implementations are in pre-production testing at UCSD.

  4. ARDA Dashboard Data Management Monitoring

    CERN Document Server

    Rocha, R; Andreeva, J; Saiz, P

    2007-01-01

    The ATLAS DDM (Distributed Data Management) system is responsible for the management and distribution of data across the different grid sites. The data is generated at CERN and has to be made available as fast as possible at a large number of centres for production purposes, and later at many other sites for end-user analysis. Monitoring data transfer activity and availability is an essential task for both site administrators and end users doing analysis in their local centres. Data management using the grid depends on a complex set of services: file catalogues for file and file-location bookkeeping, transfer services for file movement, storage managers, and others. In addition there are several flavours of each of these components, tens of sites each managing a distinct installation - over 100 at the present time - and in some organizations data is seen and moved in larger granularity than files - usually called datasets - which makes the successful usage of the standard grid monitoring tools a non strai...

  5. Remote file inquiry (RFI) system

    Science.gov (United States)

    1975-01-01

    System interrogates and maintains user-definable data files from remote terminals, using English-like, free-form query language easily learned by persons not proficient in computer programming. System operates in asynchronous mode, allowing any number of inquiries within limitation of available core to be active concurrently.

  6. Catching errors with patient-specific pretreatment machine log file analysis.

    Science.gov (United States)

    Rangaraj, Dharanipathy; Zhu, Mingyao; Yang, Deshan; Palaniswaamy, Geethpriya; Yaddanapudi, Sridhar; Wooten, Omar H; Brame, Scott; Mutic, Sasa

    2013-01-01

    A robust, efficient, and reliable quality assurance (QA) process is highly desired for modern external beam radiation therapy treatments. Here, we report the results of a semiautomatic, pretreatment, patient-specific QA process based on dynamic machine log file analysis, clinically implemented for intensity modulated radiation therapy (IMRT) treatments delivered by high energy linear accelerators (Varian 2100/2300 EX, Trilogy, iX-D, Varian Medical Systems Inc, Palo Alto, CA). The multileaf collimator (MLC) machine log files are called Dynalog by Varian. Using an in-house developed computer program called "Dynalog QA," we automatically compare the beam delivery parameters in the log files that are generated during pretreatment point dose verification measurements with the treatment plan, to determine any discrepancies in IMRT delivery. Fluence maps are constructed and compared between the delivered and planned beams. Since clinical introduction in June 2009, 912 machine log file analysis QA checks were performed by the end of 2010. Among these, 14 errors causing dosimetric deviation were detected and required further investigation and intervention. These errors were the result of human operating mistakes, flawed treatment planning, and data modification during plan file transfer. Minor errors were also reported in 174 other log file analyses, some of which stemmed from false positives and unreliable results; the origins of these are discussed herein. It has been demonstrated that machine log file analysis is a robust, efficient, and reliable QA process capable of detecting errors originating from human mistakes, flawed planning, and data transfer problems. The possibility of detecting these errors is low using point and planar dosimetric measurements. Copyright © 2013 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.
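
    The core comparison step can be sketched as follows (a minimal illustration; the real Dynalog format is vendor-specified, and the CSV layout and tolerance here are invented):

        import numpy as np

        TOLERANCE_MM = 1.0   # illustrative action threshold

        # Hypothetical arrays: one row per control point, one column per leaf.
        planned = np.loadtxt("plan_leaf_positions.csv", delimiter=",")
        delivered = np.loadtxt("dynalog_leaf_positions.csv", delimiter=",")

        deviation = np.abs(delivered - planned)
        worst = deviation.max()
        if worst > TOLERANCE_MM:
            rows, cols = np.where(deviation > TOLERANCE_MM)
            print(f"FAIL: {len(rows)} samples exceed {TOLERANCE_MM} mm (worst {worst:.2f} mm)")
        else:
            print(f"PASS: worst leaf deviation {worst:.2f} mm")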

  7. FDIC Summary of Deposits (SOD) Download File

    Data.gov (United States)

    Federal Deposit Insurance Corporation — The FDIC's Summary of Deposits (SOD) download file contains deposit data for branches and offices of all FDIC-insured institutions. The Federal Deposit Insurance...

  8. ATLAS, an integrated structural analysis and design system. Volume 4: Random access file catalog

    Science.gov (United States)

    Gray, F. P., Jr. (Editor)

    1979-01-01

    A complete catalog is presented for the random access files used by the ATLAS integrated structural analysis and design system. ATLAS consists of several technical computation modules which output data matrices to corresponding random access file. A description of the matrices written on these files is contained herein.

  9. Lapin Data Interchange Among Database, Analysis and Display Programs Using XML-Based Text Files

    Science.gov (United States)

    2005-01-01

    The purpose of grant NCC3-966 was to investigate and evaluate the interchange of application-specific data among multiple programs, each carrying out part of the analysis and design task. This has been carried out previously by creating a custom program to read data produced by one application and then write that data to a file whose format is specific to the second application that needs all or part of that data. In this investigation, the data of interest is described using the XML markup language, which allows the data to be stored in a text string. Software to transform a task's output data into an XML string, and software to read an XML string and extract all or a portion of the data needed for another application, are used to link two independent applications together as part of an overall design effort. This approach was initially used with a standard analysis program, Lapin, along with standard applications: a spreadsheet program, a relational database program, and a conventional dialog and display program, to demonstrate the successful sharing of data among independent programs. Most of the effort beyond that demonstration has been concentrated on the inclusion of more complex display programs. Specifically, a custom-written windowing program organized around dialogs to control the interactions has been combined with an independent CAD program (Open Cascade) that supports sophisticated display of CAD elements such as lines, spline curves, and surfaces, and turbine-blade data produced by an independent blade design program (UD0300).
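
    The interchange idea (serialize one task's output as an XML string so another tool can extract only what it needs) can be sketched with Python's standard library; the element and attribute names are invented, not those of the Lapin effort:

        import xml.etree.ElementTree as ET

        # Producer side: wrap analysis output in an XML string.
        blade = ET.Element("blade", name="stage1")
        ET.SubElement(blade, "chord", units="m").text = "0.042"
        ET.SubElement(blade, "twist", units="deg").text = "31.5"
        xml_string = ET.tostring(blade, encoding="unicode")

        # Consumer side: read the string and extract only the needed values.
        root = ET.fromstring(xml_string)
        chord = float(root.find("chord").text)   # other fields can be ignored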

  10. A Forensic Log File Extraction Tool for ICQ Instant Messaging Clients

    Directory of Open Access Journals (Sweden)

    Kim Morfitt

    2006-09-01

    Instant messenger programs such as ICQ are often used by hackers and criminals for illicit purposes, and consequently the log files from such programs are of interest in a forensic investigation. This paper outlines research that has resulted in the development of a tool for the extraction of ICQ log file entries. Detailed reconstruction of data from log files was achieved with a number of different versions of the ICQ software. There are several limitations with the current design, including timestamp information not being adjusted for the time zone, the possibility that data could be altered, and the need to reconstruct conversations manually. Future research will aim to address these and other limitations pointed out in this paper.

  11. A File Archival System

    Science.gov (United States)

    Fanselow, J. L.; Vavrus, J. L.

    1984-01-01

    ARCH, file archival system for DEC VAX, provides for easy offline storage and retrieval of arbitrary files on DEC VAX system. System designed to eliminate situations that tie up disk space and lead to confusion when different programmers develop different versions of same programs and associated files.

  12. A large data base on a small computer. Neutron Physics data and bibliography under IDMS

    International Nuclear Information System (INIS)

    Schofield, A.; Pellegrino, L.; Tubbs, N.

    1978-01-01

    The transfer of three associated files to an IDMS data base is reported: the CINDA bibliographic index to neutron physics publications, the cumulated EXFOR exchange tapes used for maintaining parallel data collections at all four centres and the CCDN's internal data storage and retrieval system NEUDADA. With associated dictionaries and inter-file conversion tables the corresponding IDMS data base will be about 160 Mbytes. The main characteristics of the three files are shown

  13. JENDL gas-production cross section file

    International Nuclear Information System (INIS)

    Nakagawa, Tsuneo; Narita, Tsutomu

    1992-05-01

    The JENDL gas-production cross section file was compiled by taking cross-section data from JENDL-3 and by using the ENDF-5 format. The data were given to 23 nuclei or elements in light nuclei and structural materials. Graphs of the cross sections and brief description on their evaluation methods are given in this report. (author)

  14. Experimental Directory Structure (Exdir): An Alternative to HDF5 Without Introducing a New File Format

    Directory of Open Access Journals (Sweden)

    Svenn-Arne Dragly

    2018-04-01

    Natural sciences generate an increasing amount of data in a wide range of formats developed by different research groups and commercial companies. At the same time there is a growing desire to share data along with publications in order to enable reproducible research. Open formats have publicly available specifications which facilitate data sharing and reproducible research. Hierarchical Data Format 5 (HDF5) is a popular open format widely used in neuroscience, often as a foundation for other, more specialized formats. However, drawbacks related to HDF5's complex specification have initiated a discussion for an improved replacement. We propose a novel alternative, the Experimental Directory Structure (Exdir), an open specification for data storage in experimental pipelines which amends drawbacks associated with HDF5 while retaining its advantages. HDF5 stores data and metadata in a hierarchy within a complex binary file which, among other things, is not human-readable, not optimal for version control systems, and lacks support for easy access to raw data from external applications. Exdir, on the other hand, uses file system directories to represent the hierarchy, with metadata stored in human-readable YAML files, datasets stored in binary NumPy files, and raw data stored directly in subdirectories. Furthermore, storing data in multiple files makes it easier to track for version control systems. Exdir is not a file format in itself, but a specification for organizing files in a directory structure. Exdir uses the same abstractions as HDF5 and is compatible with the HDF5 Abstract Data Model. Several research groups are already using data stored in a directory hierarchy as an alternative to HDF5, but no common standard exists. This complicates and limits the opportunity for data sharing and development of common tools for reading, writing, and analyzing data. Exdir facilitates improved data storage, data sharing, reproducible research, and novel

  15. Experimental Directory Structure (Exdir): An Alternative to HDF5 Without Introducing a New File Format.

    Science.gov (United States)

    Dragly, Svenn-Arne; Hobbi Mobarhan, Milad; Lepperød, Mikkel E; Tennøe, Simen; Fyhn, Marianne; Hafting, Torkel; Malthe-Sørenssen, Anders

    2018-01-01

    Natural sciences generate an increasing amount of data in a wide range of formats developed by different research groups and commercial companies. At the same time there is a growing desire to share data along with publications in order to enable reproducible research. Open formats have publicly available specifications which facilitate data sharing and reproducible research. Hierarchical Data Format 5 (HDF5) is a popular open format widely used in neuroscience, often as a foundation for other, more specialized formats. However, drawbacks related to HDF5's complex specification have initiated a discussion for an improved replacement. We propose a novel alternative, the Experimental Directory Structure (Exdir), an open specification for data storage in experimental pipelines which amends drawbacks associated with HDF5 while retaining its advantages. HDF5 stores data and metadata in a hierarchy within a complex binary file which, among other things, is not human-readable, not optimal for version control systems, and lacks support for easy access to raw data from external applications. Exdir, on the other hand, uses file system directories to represent the hierarchy, with metadata stored in human-readable YAML files, datasets stored in binary NumPy files, and raw data stored directly in subdirectories. Furthermore, storing data in multiple files makes it easier to track for version control systems. Exdir is not a file format in itself, but a specification for organizing files in a directory structure. Exdir uses the same abstractions as HDF5 and is compatible with the HDF5 Abstract Data Model. Several research groups are already using data stored in a directory hierarchy as an alternative to HDF5, but no common standard exists. This complicates and limits the opportunity for data sharing and development of common tools for reading, writing, and analyzing data. Exdir facilitates improved data storage, data sharing, reproducible research, and novel insight from
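
    The layout the abstract describes can be sketched directly on the file system, assuming numpy and PyYAML are installed. The official exdir Python package offers an h5py-like API; the directory names and metadata fields below are purely illustrative.

    ```python
    # A minimal sketch of an Exdir-style hierarchy: directories as groups,
    # YAML for metadata, binary NumPy files for datasets.
    import pathlib
    import numpy as np
    import yaml

    root = pathlib.Path("session1.exdir")          # top-level "file" is a directory
    group = root / "recording0"                    # groups are subdirectories
    group.mkdir(parents=True, exist_ok=True)

    # Metadata lives in human-readable YAML, friendly to version control.
    with open(group / "attributes.yaml", "w") as fh:
        yaml.safe_dump({"experimenter": "jane", "sampling_rate_hz": 30000}, fh)

    # Datasets are stored as binary NumPy files inside the hierarchy.
    np.save(group / "spike_times.npy", np.array([0.013, 0.045, 0.102]))

    # Reading back needs nothing beyond numpy/yaml.
    print(np.load(group / "spike_times.npy"))
    ```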

  16. Computer Forensics Method in Analysis of Files Timestamps in Microsoft Windows Operating System and NTFS File System

    Directory of Open Access Journals (Sweden)

    Vesta Sergeevna Matveeva

    2013-02-01

    All existing file browsers display three timestamps for every file in the NTFS file system. Many utilities can manipulate these temporal attributes to conceal traces of file use. However, every file in NTFS has eight timestamps, stored in the file record, that can be used to detect the substitution of attributes. The authors suggest a method for revealing the original timestamps after replacement, together with an automated variant of it for a set of files.
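
    The detection idea can be illustrated with a simplified sketch: NTFS keeps four timestamps in the $STANDARD_INFORMATION attribute (easily altered by user-mode tools) and four more in $FILE_NAME (normally written only by the kernel), so a $SI time earlier than its $FN counterpart suggests backdating. Real tools parse these from the MFT; here the values are supplied by the caller, and the check is only an illustration of the principle, not the authors' method.

    ```python
    # Flag attributes whose $STANDARD_INFORMATION timestamp predates the
    # $FILE_NAME timestamp -- a classic sign of timestamp substitution.
    from datetime import datetime

    def suspicious(si_times: dict, fn_times: dict) -> list:
        """Return the attribute names whose $SI time is earlier than $FN."""
        return [k for k in si_times if k in fn_times and si_times[k] < fn_times[k]]

    si = {"created": datetime(2009, 1, 1, 8, 0), "modified": datetime(2013, 2, 1, 9, 30)}
    fn = {"created": datetime(2013, 2, 1, 9, 0), "modified": datetime(2013, 2, 1, 9, 0)}
    print(suspicious(si, fn))   # ['created'] -> creation time was likely backdated
    ```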

  17. Diagnostics data management on MTX

    International Nuclear Information System (INIS)

    Butner, D.N.; Brown, M.D.; Casper, T.A.; Meyer, W.H.; Moller, J.M.

    1991-09-01

    The Microwave Tokamak Experiment (MTX) is a magnetic fusion energy research experiment to explore electron cyclotron heating using a free electron laser operating in the microwave range. The diagnostic data from MTX is acquired and processed by a distributed, multivendor computer network. Each shot of the experiment produces data files containing up to 15 megabytes of data. Typically, half-second shots are taken every 5 minutes, with 50 to 60 shots taken on a single day. As many as 80 full data shots have been taken on a good day. Data files are created on Hewlett-Packard (HP) computers running Unix, HP computers running BASIC, and a Digital Equipment Corporation (DEC) VAXcluster running VMS. A small portion of the data acquired on the HP systems is immediately stored in a data system on the VAXcluster, but most data is held and processed on the computer on which it was acquired. A commercial database program running on the VAXcluster maintains a history of the data files created for each shot. During the night, data files on all computers are compressed to about one-third their original size and the files on the HP computers are transferred to the VAXcluster. When enough data has accumulated, all data files that have not been previously archived are archived to 8 mm magnetic tape. Once the data is on the VAXcluster, a single defined procedure call may be used to obtain data that was taken on any of the computers in the network. Data that has been archived to tape is maintained on disk for a few days. Users may specify that certain shots be designated "goodshots," whose data files will be maintained on disk for a longer period of time. If a user requests data for a shot that is no longer on disk, retrieval processes on the VAXcluster determine which tapes contain the data, request the computer operator to load the tapes if necessary, and retrieve the files from the tapes. The data is then available for processing by programs running on any computer in the network.

  18. 76 FR 43679 - Filing via the Internet; Notice of Additional File Formats for efiling

    Science.gov (United States)

    2011-07-21

    ... list of acceptable file formats the four-character file extensions for Microsoft Office 2007/2010... files from Office 2007 or Office 2010 in an Office 2003 format prior to submission. Dated: July 15, 2011...

  19. Census Data

    Data.gov (United States)

    Department of Housing and Urban Development — The Bureau of the Census has released Census 2000 Summary File 1 (SF1) 100-Percent data. The file includes the following population items: sex, age, race, Hispanic...

  20. PKA spectrum file

    Energy Technology Data Exchange (ETDEWEB)

    Kawai, M. [Toshiba Corp., Kawasaki, Kanagawa (Japan). Nuclear Engineering Lab.

    1997-03-01

    In the Japanese Nuclear Data Committee, the PKA/KERMA file containing PKA spectra, KERMA factors and DPA cross sections in the energy range between 10⁻⁵ eV and 50 MeV is being prepared from the evaluated nuclear data. The processing code ESPERANT was developed to calculate quantities of PKA, KERMA and DPA from evaluated nuclear data for medium and heavy elements by using the effective single particle emission approximation (ESPEA). For light elements, the PKA spectra are evaluated by the SCINFUL/DDX and EXIFON codes, simultaneously with other neutron cross sections. The DPA cross sections due to charged particles emitted from light elements are evaluated for high neutron energies above 20 MeV. (author)

  1. UPIN Group File

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Group Unique Physician Identifier Number (UPIN) File is the business entity file that contains the group practice UPIN and descriptive information. It does NOT...

  2. High Performance Data Transfer for Distributed Data Intensive Sciences

    Energy Technology Data Exchange (ETDEWEB)

    Fang, Chin [Zettar Inc., Mountain View, CA (United States); Cottrell, R ' Les' A. [SLAC National Accelerator Lab., Menlo Park, CA (United States); Hanushevsky, Andrew B. [SLAC National Accelerator Lab., Menlo Park, CA (United States); Kroeger, Wilko [SLAC National Accelerator Lab., Menlo Park, CA (United States); Yang, Wei [SLAC National Accelerator Lab., Menlo Park, CA (United States)

    2017-03-06

    We report on the development of ZX software providing high-performance data transfer and encryption. The design scales in computation power, network interfaces, and IOPS while carefully balancing the available resources. Two U.S. patent-pending algorithms help tackle data sets containing many small files as well as very large files, and provide insensitivity to network latency. It has a cluster-oriented architecture, using peer-to-peer technologies to ease deployment, operation, usage, and resource discovery. Its unique optimizations enable effective use of flash memory. Using a pair of existing data transfer nodes at SLAC and NERSC, we compared its performance to that of bbcp and GridFTP and determined that they were comparable. With a proof of concept created using two four-node clusters with multiple distributed multi-core CPUs, network interfaces and flash memory, we achieved 155 Gbps memory-to-memory over a 2x100 Gbps link-aggregated channel and 70 Gbps file-to-file with encryption over a 5000 mile 100 Gbps link.

  3. Wave data processing toolbox manual

    Science.gov (United States)

    Sullivan, Charlene M.; Warner, John C.; Martini, Marinna A.; Lightsom, Frances S.; Voulgaris, George; Work, Paul

    2006-01-01

    Researchers routinely deploy oceanographic equipment in estuaries, coastal nearshore environments, and shelf settings. These deployments usually include tripod-mounted instruments to measure a suite of physical parameters such as currents, waves, and pressure. Instruments such as the RD Instruments Acoustic Doppler Current Profiler (ADCP™), the Sontek Argonaut, and the Nortek Aquadopp™ Profiler (AP) can measure these parameters. The data from these instruments must be processed using proprietary software unique to each instrument to convert measurements to real physical values. These processed files are then available for dissemination and scientific evaluation. For example, the proprietary processing program used to process data from the RD Instruments ADCP for wave information is called WavesMon. Depending on the length of the deployment, WavesMon will typically produce thousands of processed data files. These files are difficult to archive, and further analysis of the data becomes cumbersome. More important, these files alone do not include sufficient information pertinent to that deployment (metadata), which could hinder future scientific interpretation. This open-file report describes a toolbox developed to compile, archive, and disseminate the processed wave measurement data from an RD Instruments ADCP, a Sontek Argonaut, or a Nortek AP. This toolbox will be referred to as the Wave Data Processing Toolbox. The Wave Data Processing Toolbox consolidates the processed files output from the proprietary software into two NetCDF files: one file contains the statistics of the burst data and the other file contains the raw burst data (additional details described below). One important advantage of this toolbox is that it converts the data into NetCDF format. Data in NetCDF format is easy to disseminate, is portable to any computer platform, and is viewable with public-domain, freely available software. Another important advantage is that a metadata
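
    The NetCDF output described above can be sketched with the netCDF4 Python package. The variable and attribute names here are illustrative, not the toolbox's actual schema; the point is that deployment metadata travels in the same file as the data.

    ```python
    # Write burst statistics plus metadata to a self-describing NetCDF file.
    import numpy as np
    from netCDF4 import Dataset

    with Dataset("adcp_wave_stats.nc", "w", format="NETCDF4") as nc:
        nc.title = "ADCP wave statistics"              # deployment metadata stored
        nc.instrument = "RD Instruments ADCP"          # alongside the data itself
        nc.createDimension("time", None)               # unlimited record dimension

        t = nc.createVariable("time", "f8", ("time",))
        t.units = "seconds since 2006-01-01 00:00:00"
        hs = nc.createVariable("significant_wave_height", "f4", ("time",))
        hs.units = "m"

        t[:] = np.arange(0, 3600 * 3, 3600)            # one burst per hour
        hs[:] = np.array([1.2, 1.4, 1.1])
    ```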

  4. 12 CFR 5.4 - Filing required.

    Science.gov (United States)

    2010-01-01

    ... CORPORATE ACTIVITIES Rules of General Applicability § 5.4 Filing required. (a) Filing. A depository institution shall file an application or notice with the OCC to engage in corporate activities and... advise an applicant through a pre-filing communication to send the filing or submission directly to the...

  5. Privacy Impact Assessment for the Claims Office Master Files

    Science.gov (United States)

    The Claims Office Master Files System collects information on companies in debt to the EPA. Learn how this data is collected, how it will be used, access to the data, the purpose of data collection, and record retention policies for this data.

  6. Formalizing structured file services for the data storage and retrieval subsystem of the data management system for Spacestation Freedom

    Science.gov (United States)

    Jamsek, Damir A.

    1993-01-01

    A brief example of the use of formal methods techniques in the specification of a software system is presented. The report is part of a larger effort targeted at defining a formal methods pilot project for NASA. One possible application domain that may be used to demonstrate the effective use of formal methods techniques within the NASA environment is presented. It is not intended to provide a tutorial on either formal methods techniques or the application being addressed. It should, however, provide an indication that the application being considered is suitable for a formal methods treatment by showing how such a task may be started. The particular system being addressed is the Structured File Services (SFS), which is a part of the Data Storage and Retrieval Subsystem (DSAR), which in turn is part of the Data Management System (DMS) onboard Spacestation Freedom. This is a software system that is currently under development for NASA. An informal mathematical development is presented. Section 3 contains the same development using Penelope (23), an Ada specification and verification system. The complete text of the English version Software Requirements Specification (SRS) is reproduced in Appendix A.

  7. Indicators of the Legal Security of Indigenous and Community Lands. Data file from LandMark: The Global Platform of Indigenous and Community Lands.

    NARCIS (Netherlands)

    Tagliarino, Nicholas Korte

    2016-01-01

    L. Alden Wily, N. Tagliarino, Harvard Law and International Development Society (LIDS), A. Vidal, C. Salcedo-La Vina, S. Ibrahim, and B. Almeida. 2016. Indicators of the Legal Security of Indigenous and Community Lands. Data file from LandMark: The Global Platform of Indigenous and Community Lands.

  8. The Unstructured Data Sharing System for Natural resources and Environment Science Data of the Chinese Academy of Science

    Directory of Open Access Journals (Sweden)

    Dafang Zhuang

    2007-10-01

    The data sharing system for resource and environment science databases of the Chinese Academy of Science (CAS) is of an open three-tiered architecture, which integrates the geographical databases of about 9 institutes of CAS through mechanisms for distributive unstructured data management, metadata integration, catalogue services, and security control. The data tier consists of several distributive data servers, located in each CAS institute, that support such unstructured data formats as vector files, remote sensing images or other raster files, documents, multimedia files, tables, and other file formats. For spatial data files, a format transformation service is provided. The middle tier involves a centralized metadata server, which stores metadata records for the data on all data servers. The primary function of this tier is the catalog service, supporting the creation, search, browsing, updating, and deletion of catalogs. The client tier involves an integrated client that provides end-users with interfaces to search, browse, and download data, or to create a catalog and upload data.

  9. Nuclear data services of the Nuclear Data Centers Network available at the National Nuclear Data Center

    International Nuclear Information System (INIS)

    McLane, V.

    1997-01-01

    The Nuclear Data Centers Network provides low and medium energy nuclear reaction data to users around the world. Online retrievals are available through the U.S. National Nuclear Data Center, the Nuclear Energy Agency Data Bank, and the IAEA Nuclear Data Section from these extensive bibliographic, experimental data, and evaluated data files. In addition to nuclear reaction data, the various databases also provide nuclear structure and decay data, and other information of interest to users. The WorldWideWeb sites at the National Nuclear Data Center and the NEA Data Bank provide access to some of the Centers' files. (orig.)

  10. Collection and treatment of reliability data for nuclear plants

    International Nuclear Information System (INIS)

    McHugh, B.

    1973-09-01

    This paper describes some of the results achieved with the Argus data bank at the Institution of Thermal Power Engineering at the Chalmers University of Technology. This data bank, or rather data collection system, has been established to cover nuclear activities the world over. The system comprises in essence a number of data files. The prime files are those containing the basic data on the various plants: plant size and type, country and NSSS supplier, and an indication of plant status. Further files contain plant design data and parameters and all available information on construction and commissioning timetables. To cover the operation of plants, two files have been established. One file, which is updated on a monthly basis, contains power production statistics. The other file contains failure data. In this file are recorded the time and duration of plant shutdowns together with the primary reason(s) for them. (M.S.)

  11. Apically extruded dentin debris by reciprocating single-file and multi-file rotary system.

    Science.gov (United States)

    De-Deus, Gustavo; Neves, Aline; Silva, Emmanuel João; Mendonça, Thais Accorsi; Lourenço, Caroline; Calixto, Camila; Lima, Edson Jorge Moreira

    2015-03-01

    This study aims to evaluate the apical extrusion of debris by two reciprocating single-file systems: WaveOne and Reciproc. A conventional multi-file rotary system was used as a reference for comparison. The hypotheses tested were (i) the reciprocating single-file systems extrude more than the conventional multi-file rotary system and (ii) the reciprocating single-file systems extrude similar amounts of dentin debris. After solid selection criteria, 80 mesial roots of lower molars were included in the present study. The use of four different instrumentation techniques resulted in four groups (n = 20): G1 (hand-file technique), G2 (ProTaper), G3 (WaveOne), and G4 (Reciproc). The apparatus used to evaluate the collection of apically extruded debris was a typical double-chamber collector. Statistical analysis was performed for multiple comparisons. No significant difference was found in the amount of debris extruded between the two reciprocating systems. In contrast, the conventional multi-file rotary system group extruded significantly more debris than both reciprocating groups. The hand instrumentation group extruded significantly more debris than all other groups. The present results yielded favorable input for both reciprocating single-file systems, inasmuch as they showed an improved control of apically extruded debris. Apical extrusion of debris has been studied extensively because of its clinical relevance, particularly since it may cause flare-ups resulting from the introduction of bacteria, pulpal tissue, and irrigating solutions into the periapical tissues.

  12. 76 FR 52323 - Combined Notice of Filings; Filings Instituting Proceedings

    Science.gov (United States)

    2011-08-22

    .... Applicants: Young Gas Storage Company, Ltd. Description: Young Gas Storage Company, Ltd. submits tariff..., but intervention is necessary to become a party to the proceeding. The filings are accessible in the.... More detailed information relating to filing requirements, interventions, protests, and service can be...

  13. User-Friendly Data Servers for Climate Studies at the Asia-Pacific Data-Research Center (APDRC)

    Science.gov (United States)

    Yuan, G.; Shen, Y.; Zhang, Y.; Merrill, R.; Waseda, T.; Mitsudera, H.; Hacker, P.

    2002-12-01

    The APDRC was recently established within the International Pacific Research Center (IPRC) at the University of Hawaii. The APDRC mission is to increase understanding of climate variability in the Asia-Pacific region by developing the computational, data-management, and networking infrastructure necessary to make data resources readily accessible and usable by researchers, and by undertaking data-intensive research activities that will both advance knowledge and lead to improvements in data preparation and data products. A focus of recent activity is the implementation of user-friendly data servers. The APDRC is currently running a Live Access Server (LAS) developed at NOAA/PMEL to provide access to and visualization of gridded climate products via the web. The LAS also allows users to download the selected data subsets in various formats (such as binary, netCDF and ASCII). Most of the datasets served by the LAS are also served through our OPeNDAP server (formerly DODS), which allows users to directly access the data using their desktop client tools (e.g. GrADS, Matlab and Ferret). In addition, the APDRC is running an OPeNDAP Catalog/Aggregation Server (CAS) developed by Unidata at UCAR to serve climate data and products such as model output and satellite-derived products. These products are often large (> 2 GB) and are therefore stored as multiple files (stored separately in time or in parameters). The CAS remedies the inconvenience of multiple files and allows access to the whole dataset (or any subset that cuts across the multiple files) via a single request command from any DODS enabled client software. Once the aggregation of files is configured at the server (CAS), the process of aggregation is transparent to the user. The user only needs to know a single URL for the entire dataset, which is, in fact, stored as multiple files. CAS even allows aggregation of files on different systems and at different locations. Currently, the APDRC is serving NCEP, ECMWF
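
    The client side of the OPeNDAP access described above can be sketched with the netCDF4 Python package, which opens a dataset URL directly so that only the requested slice crosses the network. The URL and variable names below are placeholders, not real APDRC endpoints.

    ```python
    # Open a remote aggregated dataset via OPeNDAP and subset it server-side.
    from netCDF4 import Dataset

    url = "http://apdrc.example.edu/dods/sst_aggregated"   # one URL, many files
    with Dataset(url) as nc:
        sst = nc.variables["sst"][0, 10:20, 10:20]         # only this slice is fetched
        print(sst.shape)
    ```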

  14. 18 CFR 11.16 - Filing requirements.

    Science.gov (United States)

    2010-04-01

    ... ACT Charges for Headwater Benefits § 11.16 Filing requirements. (a) Applicability. (1) Any party subject to a headwater benefits determination under this subpart must supply project-specific data, in... are attributable to the annual costs of interest, maintenance, and depreciation, identifying the...

  15. File Encryption and Decryption with the Blowfish Algorithm on Android-Based Mobile Devices

    Directory of Open Access Journals (Sweden)

    Siswo Wardoyo

    2016-03-01

    Cryptography is one of the ways used to secure data in the form of files: files are encrypted so that others cannot read content that is private and confidential. One such method is the Blowfish algorithm, a symmetric-key cipher that uses the same key for encryption and decryption. The application that was built can encrypt files such as images, videos, and documents, and can run on a mobile phone with at least Android version 2.3. The software used to build the application is Eclipse. The results of this research indicate that the application is capable of performing encryption and decryption, with encryption rendering files unintelligible. With a 72-bit (9-character) key, breaking the encryption by brute force would take about 1.49×10⁸ years at a computation speed of 10⁶ keys/sec.
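
    The same symmetric scheme can be sketched in Python with the PyCryptodome package (the paper's actual application is Android/Java). The 9-character (72-bit) key matches the abstract's example, although keys this short are weak by modern standards; file names are illustrative.

    ```python
    # Blowfish file encryption/decryption in CBC mode with PyCryptodome.
    from Crypto.Cipher import Blowfish
    from Crypto.Random import get_random_bytes
    from Crypto.Util.Padding import pad, unpad

    key = b"9charKey!"                          # 72-bit key, as in the abstract
    iv = get_random_bytes(Blowfish.block_size)  # 8-byte IV for CBC mode

    def encrypt_file(src, dst):
        data = open(src, "rb").read()
        cipher = Blowfish.new(key, Blowfish.MODE_CBC, iv)
        # Prepend the IV so decryption is self-contained.
        open(dst, "wb").write(iv + cipher.encrypt(pad(data, Blowfish.block_size)))

    def decrypt_file(src, dst):
        blob = open(src, "rb").read()
        cipher = Blowfish.new(key, Blowfish.MODE_CBC, blob[:Blowfish.block_size])
        open(dst, "wb").write(unpad(cipher.decrypt(blob[Blowfish.block_size:]),
                                    Blowfish.block_size))
    ```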

  16. MXA: a customizable HDF5-based data format for multi-dimensional data sets

    International Nuclear Information System (INIS)

    Jackson, M; Simmons, J P; De Graef, M

    2010-01-01

    A new digital file format is proposed for the long-term archival storage of experimental data sets generated by serial sectioning instruments. The format is known as the multi-dimensional eXtensible Archive (MXA) format and is based on the public domain Hierarchical Data Format (HDF5). The MXA data model and its description by means of an eXtensible Markup Language (XML) file with an associated Document Type Definition (DTD) are described in detail. The public domain MXA package is available through a dedicated web site (mxa.web.cmu.edu), along with implementation details and example data files.

  17. Data modeling and evaluation

    International Nuclear Information System (INIS)

    Bauge, E.; Hilaire, S.

    2006-01-01

    This lecture is devoted to the nuclear data evaluation process, during which the current knowledge (experimental or theoretical) of nuclear reactions is condensed and synthesised into a computer file (the evaluated data file) that application codes can process and use for simulation calculations. After an overview of the content of evaluated nuclear data files, we describe the different methods used for evaluating nuclear data. We specifically focus on the model-based approach, which we use to evaluate data in the continuum region. A few examples from the day-to-day practice of data evaluation illustrate this lecture. Finally, we discuss the most likely perspectives for improvement of the evaluation process in the next decade. (author)

  18. Status of the ENDF/B special applications files

    International Nuclear Information System (INIS)

    Stewart, L.

    1977-01-01

    The newly formed SAFE Subcommittee of the Cross Section Evaluation Working Group is charged with the responsibility for providing, reviewing, and testing several ENDF/B special purpose evaluated files. This responsibility currently encompasses dosimetry, activation, hydrogen and helium production, and radioactive decay data required by a variety of users. New formats have been approved by CSEWG for the inclusion of the activation and hydrogen and helium production cross-section libraries. The decay data will be in the same format as that already employed by the Fission Product and Actinide Subcommittee of CSEWG. While an extensive dosimetry file was available on the ENDF/B-IV library for fast reactor applications, other data are needed to extend the range of applications, especially to higher incident neutron energies. This Subcommittee has long-range plans to provide evaluated neutron interaction data that can be recommended for use in many specialized applications. 1 figure, 3 tables

  19. FHEO Filed Cases

    Data.gov (United States)

    Department of Housing and Urban Development — The dataset is a list of all the Title VIII fair housing cases filed by FHEO from 1/1/2007 - 12/31/2012 including the case number, case name, filing date, state and...

  20. The Mark 3 data base handler

    Science.gov (United States)

    Ryan, J. W.; Ma, C.; Schupler, B. R.

    1980-01-01

    A data base handler which would act to tie Mark 3 system programs together is discussed. The data base handler is written in FORTRAN and is implemented on the Hewlett-Packard 21MX and the IBM 360/91. The system design objectives were to (1) provide for an easily specified method of data interchange among programs, (2) provide for a high level of data integrity, (3) accommodate changing requirements, (4) promote program accountability, (5) provide a single source of program constants, and (6) provide a central point for data archiving. The system consists of two distinct parts: a set of files existing on disk packs and tapes, and a set of utility subroutines which allow users to access the information in these files. Users never directly read or write the files and need not know the details of how the data are formatted in the files. To the users, the storage medium is format free. A user does need to know something about the sequencing of his data in the files but nothing about data in which he has no interest.

  1. The International Reactor Dosimetry File (IRDF-85)

    International Nuclear Information System (INIS)

    Cullen, D.E.; McLaughlin, P.K.

    1985-04-01

    This document describes the contents of the second version of the International Reactor Dosimetry File (IRDF-85), distributed by the Nuclear Data Section of the International Atomic Energy Agency. This library superseded IRDF-82. (author)

  2. A data grid prototype for distributed data production in CMS

    CERN Document Server

    Hafeez, M; Stockinger, H E

    2001-01-01

    The CMS experiment at CERN is setting up a grid infrastructure required to fulfil the needs imposed by Terabyte-scale productions for the next few years. The goal is to automate the production and at the same time allow the users to interact with the system, if required, to make decisions which would optimise performance. We present the architecture, design and functionality of our first working Objectivity file replication prototype. The middleware of choice is the Globus toolkit, which provides promising functionality. Our results prove the ability of the Globus toolkit to be used as an underlying technology for a world-wide Data Grid. The required data management functionality includes high speed file transfers, secure access to remote files, selection and synchronisation of replicas and managing the meta information. The whole system is expected to be flexible enough to incorporate site-specific policies. The data management granularity is the file rather than the object level. The first prototype is curre...

  3. Methods and apparatus for multi-resolution replication of files in a parallel computing system using semantic information

    Science.gov (United States)

    Faibish, Sorin; Bent, John M.; Tzelnic, Percy; Grider, Gary; Torres, Aaron

    2015-10-20

    Techniques are provided for storing files in a parallel computing system using different resolutions. A method is provided for storing at least one file generated by a distributed application in a parallel computing system. The file comprises one or more of a complete file and a sub-file. The method comprises the steps of obtaining semantic information related to the file; generating a plurality of replicas of the file with different resolutions based on the semantic information; and storing the file and the plurality of replicas of the file in one or more storage nodes of the parallel computing system. The different resolutions comprise, for example, a variable number of bits and/or a different sub-set of data elements from the file. A plurality of the sub-files can be merged to reproduce the file.
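
    The multi-resolution idea can be illustrated naively: keep the full file plus replicas with fewer data elements and fewer bits per element. This is only a sketch of the concept under those two stated reductions, not the patented method.

    ```python
    # Generate replicas of a checkpoint array at progressively coarser
    # resolutions (subsampled elements, reduced floating-point precision).
    import numpy as np

    def make_replicas(data: np.ndarray):
        """Return {name: replica} at progressively coarser resolutions."""
        return {
            "full": data,                              # complete file
            "half": data[::2].astype(np.float32),      # half the elements, 32-bit
            "quarter": data[::4].astype(np.float16),   # quarter, 16-bit
        }

    checkpoint = np.linspace(0.0, 1.0, 1_000_000, dtype=np.float64)
    for name, rep in make_replicas(checkpoint).items():
        print(name, rep.nbytes, "bytes")
    ```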

  4. Combination of Rivest-Shamir-Adleman Algorithm and End of File Method for Data Security

    Science.gov (United States)

    Rachmawati, Dian; Amalia, Amalia; Elviwani

    2018-03-01

    Data security is one of the crucial issues in the delivery of information. One of the ways to secure data is to encode it into something incomprehensible to human beings by using cryptographic techniques. The Rivest-Shamir-Adleman (RSA) cryptographic algorithm has been proven robust for securing messages. Since this algorithm uses two different keys (i.e., a public key and a private key) at the time of encryption and decryption, it is classified as an asymmetric cryptography algorithm. Steganography is a method used to secure a message by inserting the bits of the message into a larger medium such as an image. One of the known steganography methods is End of File (EoF). In this research, the ciphertext resulting from the RSA algorithm is compiled into an array form and appended to the end of the image. The result of the EoF method is an image with a line of black gradations beneath it; this line contains the secret message. This combination of cryptography and steganography is expected to increase the security of the message, since the message encryption technique (RSA) is mixed with the data hiding technique (EoF).
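
    The described combination can be sketched with PyCryptodome: encrypt a short message with RSA (OAEP padding), then hide the ciphertext by appending it after the end of an image file. The marker and file names are illustrative, not the paper's implementation; many image viewers simply ignore trailing bytes.

    ```python
    # RSA-encrypt a message, then hide the ciphertext past the image's EOF.
    from Crypto.PublicKey import RSA
    from Crypto.Cipher import PKCS1_OAEP

    MARKER = b"--EOF-STEGO--"

    open("cover.png", "wb").write(b"\x89PNG placeholder bytes")   # stand-in image

    key = RSA.generate(2048)
    ciphertext = PKCS1_OAEP.new(key.publickey()).encrypt(b"secret message")

    # Embed: append marker + ciphertext after the image's own bytes.
    with open("cover.png", "ab") as img:
        img.write(MARKER + ciphertext)

    # Extract and decrypt: everything after the marker is the ciphertext.
    blob = open("cover.png", "rb").read()
    recovered = PKCS1_OAEP.new(key).decrypt(blob.split(MARKER, 1)[1])
    print(recovered)
    ```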

  5. Experimental Analysis of File Transfer Rates over Wide-Area Dedicated Connections

    Energy Technology Data Exchange (ETDEWEB)

    Rao, Nageswara S. [ORNL; Liu, Qiang [ORNL; Sen, Satyabrata [ORNL; Hinkel, Gregory Carl [ORNL; Imam, Neena [ORNL; Foster, Ian [University of Chicago; Kettimuthu, R. [Argonne National Laboratory (ANL); Settlemyer, Bradley [Los Alamos National Laboratory (LANL); Wu, Qishi [University of Memphis; Yun, Daqing [Harrisburg University

    2016-12-01

    File transfers over dedicated connections, supported by large parallel file systems, have become increasingly important in high-performance computing and big data workflows. It remains a challenge to achieve peak rates for such transfers due to the complexities of file I/O, host, and network transport subsystems, and equally importantly, their interactions. We present extensive measurements of disk-to-disk file transfers using Lustre and XFS file systems mounted on multi-core servers over a suite of 10 Gbps emulated connections with 0-366 ms round trip times. Our results indicate that large buffer sizes and many parallel flows do not always guarantee high transfer rates. Furthermore, large variations in the measured rates necessitate repeated measurements to ensure confidence in inferences based on them. We propose a new method to efficiently identify the optimal joint file I/O and network transport parameters using a small number of measurements. We show that for XFS and Lustre with direct I/O, this method identifies configurations achieving 97% of the peak transfer rate while probing only 12% of the parameter space.

  6. JENDL dosimetry file 99 (JENDL/D-99)

    International Nuclear Information System (INIS)

    Kobayashi, Katsuhei; Iwasaki, Shin

    2002-01-01

    The JENDL Dosimetry File 99 (JENDL/D-99), which is a revised version of the JENDL Dosimetry File 91 (JENDL/D-91), has been compiled and released for the determination of neutron flux and energy spectra. This work was undertaken to remove the inconsistency between the cross sections and their covariances in JENDL/D-91, since the covariances were mainly taken from IRDF-85 although the cross sections were based on JENDL-3. Dosimetry cross sections have been evaluated for 67 reactions on 47 nuclides together with covariances. The cross sections for 34 major reactions and their covariances were simultaneously generated, and the remaining 33 reaction data were mainly taken from JENDL/D-91. The latest measurements were taken into account in the evaluation. The resultant evaluated data are given for the neutron energy region below 20 MeV in both point-wise and group-wise files in the ENDF-6 format. In order to confirm the reliability of the evaluated data, several integral tests have been carried out: comparisons with average cross sections measured in fission neutron fields, fast/thermal reactor spectra, DT neutron fields and Li(d,n) neutron fields. It was found from the comparisons that the cross sections calculated from JENDL/D-99 are generally in good agreement with the measured data. The contents of JENDL/D-99 and the results of the integral tests are described in this report. All of the dosimetry cross sections are shown in graphical form in the Appendix. (author)

  7. ENDF/B-5. Fission Product Yields File

    International Nuclear Information System (INIS)

    Schwerer, O.

    1985-10-01

    The ENDF/B-5 Fission Product Yields File contains a complete set of independent and cumulative fission product yields, representing the final data from ENDF/B-5 as received at the IAEA Nuclear Data Section in June 1985. Yields for 11 fissioning nuclides at one or more incident neutron energies are included. The data are available cost-free on magnetic tape from the IAEA Nuclear Data Section. (author). 4 refs

  8. IAGA Geomagnetic Data Analysis format - Analysis_IAGA

    Science.gov (United States)

    Toader, Victorin-Emilian; Marmureanu, Alexandru

    2013-04-01

    Geomagnetic research involves continuous monitoring of the Earth's magnetic field and software for processing large amounts of data. The Analysis_IAGA program reads and analyses files in the IAGA2002 format used within the INTERMAGNET observatory network. The data are made available by INTERMAGNET (http://www.intermagnet.org/Data_e.php) and NOAA - National Geophysical Data Center (ftp://ftp.ngdc.noaa.gov/wdc/geomagnetism/data/observatories/definitive) cost-free for scientific use. The users of this software are those who study geomagnetism or use this data along with other atmospheric or seismic factors. Analysis_IAGA allows the visualization of files for the same station, with the feature of merging data for analyzing longer time intervals. Each file contains data collected within a 24 hour time interval with a sampling rate of 60 seconds or 1 second. Adding a large number of files may be done by dividing the sampling frequency. Also, the program can combine data files gathered from multiple stations as long as the sampling rate and time intervals are the same. Different channels may be selected, visualized and filtered individually. Channel properties can be saved and edited in a file. Data can be processed (spectral power, P/F, estimated frequency, Bz/Bx, Bz/By, convolutions and correlations on pairs of axes, discrete differentiation) and visualized along with the original signals on the same panel. With the help of cursors/magnifiers, time differences can be calculated. Each channel can be analyzed separately. Signals can be filtered using bandpass, lowpass and highpass filters (Butterworth, Chebyshev, Inverse Chebyshev, Elliptic, Bessel, Median, ZeroPath). Separate graphics visualize the spectral power, frequency spectrum histogram, the evolution of the estimated frequency, P/H, and the spectral power. Adaptive JTFA spectrograms can be selected: CSD (Cone-Shaped Distribution), CWD (Choi-Williams Distribution), Gabor, STFT (short-time Fourier transform), WVD (Wigner
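
    A minimal reader for IAGA2002 text files can be sketched in Python, under the assumptions that header lines precede a column-header line beginning with "DATE" and that values around 88888/99999 are the conventional missing-data sentinels; this is only a sketch, not a full implementation of the specification, and the file name is illustrative.

    ```python
    # Parse an IAGA2002 minute/second file into timestamps and component rows.
    from datetime import datetime

    def read_iaga2002(path):
        times, rows = [], []
        with open(path) as fh:
            in_data = False
            for line in fh:
                if not in_data:
                    in_data = line.startswith("DATE")   # end of header block
                    continue
                parts = line.split()
                if len(parts) < 4:
                    continue
                times.append(datetime.strptime(parts[0] + " " + parts[1],
                                               "%Y-%m-%d %H:%M:%S.%f"))
                rows.append([float(v) if float(v) < 88888 else None
                             for v in parts[3:]])       # skip the DOY column
        return times, rows

    # times, rows = read_iaga2002("abc20130401d.min")   # file name illustrative
    ```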

  9. Specific uses of a data management system for data reduction and tabulation

    International Nuclear Information System (INIS)

    Little, C.A.; Blair, M.S.; Barclay, T.R.

    1984-01-01

    The Remedial Action Survey and Certification Activities (RASCA) group processes large amounts of data for each of the many properties surveyed each year. In previous years, data manipulation (e.g., converting from cpm to R/hr) was performed using hand calculators. A system has recently been developed which largely automates the conversion of all field data and their tabulation for reporting purposes. The system consists of three items of hardware and two items of software. The hardware includes a Commodore Business Machines (CBM) Model 8032, an 8050 dual 5 1/4 inch floppy disk drive, and a Gemini dot-matrix printer. The software includes a commercial data management system, Manager (developed by Canadian Micro Distributors), and an in-house program (DATA TABLES) written to read the Manager files and print the tables. Manager is a very flexible data management system that allows entry of data into sequential files which are sortable over any selected variable. Data are entered into a sequential file and stored on a floppy disk for use at a later time. When all data have been correctly edited and proofed, the DATA TABLES program is invoked to read the sequential files and print out report-ready tables. Efficiency, and especially accuracy, in preparing data for reporting have been greatly increased

  10. 76 FR 62092 - Filing Procedures

    Science.gov (United States)

    2011-10-06

    ... INTERNATIONAL TRADE COMMISSION Filing Procedures AGENCY: International Trade Commission. ACTION: Notice of issuance of Handbook on Filing Procedures. SUMMARY: The United States International Trade Commission (``Commission'') is issuing a Handbook on Filing Procedures to replace its Handbook on Electronic...

  11. Photon-HDF5: An Open File Format for Timestamp-Based Single-Molecule Fluorescence Experiments.

    Science.gov (United States)

    Ingargiola, Antonino; Laurence, Ted; Boutelle, Robert; Weiss, Shimon; Michalet, Xavier

    2016-01-05

    We introduce Photon-HDF5, an open and efficient file format to simplify exchange and long-term accessibility of data from single-molecule fluorescence experiments based on photon-counting detectors such as single-photon avalanche diodes, photomultiplier tubes, or arrays of such detectors. The format is based on HDF5, a widely used platform- and language-independent hierarchical file format for which user-friendly viewers are available. Photon-HDF5 can store raw photon data (timestamp, channel number, etc.) from any acquisition hardware, but also setup and sample descriptions, information on provenance, authorship and other metadata, and is flexible enough to include any kind of custom data. The format specifications are hosted on a public website, which is open to contributions by the biophysics community. As an initial resource, the website provides code examples to read Photon-HDF5 files in several programming languages and a reference Python library (phconvert) to create new Photon-HDF5 files and convert several existing file formats into Photon-HDF5. To encourage adoption by the academic and commercial communities, all software is released under the MIT open source license.
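
    Storing photon timestamps in HDF5 can be sketched with h5py. The group and field names below only follow the general spirit of Photon-HDF5; the authoritative layout is defined by the published specification and the reference phconvert library.

    ```python
    # Save photon timestamps plus metadata in a hierarchical HDF5 file.
    import numpy as np
    import h5py

    timestamps = np.sort(np.random.randint(0, 10**8, size=10_000))

    with h5py.File("measurement.h5", "w") as f:
        pd = f.create_group("photon_data")
        pd.create_dataset("timestamps", data=timestamps, compression="gzip")
        pd["timestamps"].attrs["timestamps_unit"] = 12.5e-9   # seconds per clock tick
        f.create_group("setup").attrs["num_spots"] = 1        # metadata lives alongside
    ```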

  12. Data Publishing and Sharing Via the THREDDS Data Repository

    Science.gov (United States)

    Wilson, A.; Caron, J.; Davis, E.; Baltzer, T.

    2007-12-01

    The terms "Team Science" and "Networked Science" have been coined to describe a virtual organization of researchers tied via some intellectual challenge, but often located in different organizations and locations. A critical component to these endeavors is publishing and sharing of content, including scientific data. Imagine pointing your web browser to a web page that interactively lets you upload data and metadata to a repository residing on a remote server, which can then be accessed by others in a secure fasion via the web. While any content can be added to this repository, it is designed particularly for storing and sharing scientific data and metadata. Server support includes uploading of data files that can subsequently be subsetted, aggregrated, and served in NetCDF or other scientific data formats. Metadata can be associated with the data and interactively edited. The THREDDS Data Repository (TDR) is a server that provides client initiated, on demand, location transparent storage for data of any type that can then be served by the THREDDS Data Server (TDS). The TDR provides functionality to: * securely store and "own" data files and associated metadata * upload files via HTTP and gridftp * upload a collection of data as single file * modify and restructure repository contents * incorporate metadata provided by the user * generate additional metadata programmatically * edit individual metadata elements The TDR can exist separately from a TDS, serving content via HTTP. Also, it can work in conjunction with the TDS, which includes functionality to provide: * access to data in a variety of formats via -- OPeNDAP -- OGC Web Coverage Service (for gridded datasets) -- bulk HTTP file transfer * a NetCDF view of datasets in NetCDF, OPeNDAP, HDF-5, GRIB, and NEXRAD formats * serving of very large volume datasets, such as NEXRAD radar * aggregation into virtual datasets * subsetting via OPeNDAP and NetCDF Subsetting services This talk will discuss TDR

  13. Evaluation of Contact Friction in Fracture of Rotationally Bent Nitinol Endodontic Files

    Science.gov (United States)

    Haimed, Tariq Abu

    2011-12-01

    maximum strain amplitude (MSA) for each file size were determined based on images of the files inside the glass tubes. The force of insertion for each file type under each condition was also measured inside 45 and 60 degree glass tube paths, both static and dynamic. The results showed that the NCF of Ni-Ti files is strongly inversely related to the CF, which ranged from 0.15 for ODS and 3-HEPT coated files to 0.43 for the bleach irrigant. High CF (in the presence of bleach) significantly reduced the NCF. Conversely, lower CF (in the presence of other solutions and file coatings) resulted in significantly higher NCF. CF was found to be directly related to the surface tension of the media used. Similarly, high MSA, typical of a low radius of curvature and a high bending angle, significantly diminished the fatigue life of Ni-Ti files. The integral of the force-of-insertion versus time curve was highest for bleach irrigation, which also showed the highest CF. Scanning electron microscope inspection of file fracture surfaces illustrated a 2-step progressive failure mode characterized by creation of a smooth initial fatigue area (striation marks) followed by catastrophic ductile fracture (dimple area) when the intact file shaft area was sufficiently reduced. The bleach-lubricated files failed earlier and with a smaller fatigue area (23%) than all other groups (31-35%), indicating premature fracture in the presence of higher frictional forces. The acquired data demonstrate that the combination of low MSA and low CF (by using coatings or solutions with low surface tension), related to the magnitude of the superficial drag force, can lead to statistically longer rotational bending lifetimes for Ni-Ti files. Based on the data of this study, lubricant solutions with low surface tension could significantly improve the fracture life of Ni-Ti files in a glass root canal model. Laboratory testing using natural teeth should be performed to evaluate the effect of using such solutions on the fatigue

  14. 12 CFR 1780.9 - Filing of papers.

    Science.gov (United States)

    2010-01-01

    ... 12 Banks and Banking 7 2010-01-01 2010-01-01 false Filing of papers. 1780.9 Section 1780.9 Banks... papers. (a) Filing. Any papers required to be filed shall be addressed to the presiding officer and filed... Director or the presiding officer. All papers filed by electronic media shall also concurrently be filed in...

  15. Continuous-Energy Data Checks

    Energy Technology Data Exchange (ETDEWEB)

    Haeck, Wim [Radioprotection and Nuclear Safety Institute, Fontenay-aux-Roses (France); Conlin, Jeremy Lloyd [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); McCartney, Austin Paul [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Parsons, Donald Kent [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-05-25

    The purpose of this report is to provide an overview of all Quality Assurance tests that have to be performed on a nuclear data set to be transformed into an ACE-formatted nuclear data file. The ACE file is capable of containing different types of data, such as continuous-energy neutron data, thermal scattering data, etc. Within this report, we limit ourselves to continuous-energy neutron data.

  16. Prototype of a file-based high-level trigger in CMS

    International Nuclear Information System (INIS)

    Bauer, G; Darlea, G-L; Gomez-Ceballos, G; Bawej, T; Chaze, O; Coarasa, J A; Deldicque, C; Dobson, M; Dupont, A; Gigi, D; Glege, F; Gomez-Reino, R; Hartl, C; Hegeman, J; Masetti, L; Behrens, U; Branson, J; Cittolin, S; Holzner, A; Erhan, S

    2014-01-01

    The DAQ system of the CMS experiment at the LHC is being upgraded during the accelerator shutdown in 2013/14. To reduce the interdependency of the DAQ system and the high-level trigger (HLT), we investigate the feasibility of using a file-system-based HLT. Events of ∼1 MB size are built at the level-1 trigger rate of 100 kHz. The events are assembled by ∼50 builder units (BUs). Each BU writes the raw events at ∼2 GB/s to a local file system shared with O(10) filter-unit machines (FUs) running the HLT code. The FUs read the raw data from the file system, select O(1%) of the events, and write the selected events together with monitoring metadata back to disk. This data is then aggregated over several steps and made available for offline reconstruction and online monitoring. We present the challenges, technical choices, and performance figures from the prototyping phase. In addition, the steps to the final system implementation will be discussed.

  17. Beyond a Terabyte File System

    Science.gov (United States)

    Powers, Alan K.

    1994-01-01

    The Numerical Aerodynamics Simulation Facility's (NAS) CRAY C916/1024 accesses a "virtual" on-line file system, which is expanding beyond a terabyte of information. This paper will present some options for fine-tuning the Data Migration Facility (DMF) to stretch the on-line disk capacity and explore the transitions to newer devices (STK 4490, ER90, RAID).

  18. The Jade File System. Ph.D. Thesis

    Science.gov (United States)

    Rao, Herman Chung-Hwa

    1991-01-01

    File systems have long been the most important and most widely used form of shared permanent storage. File systems in traditional time-sharing systems, such as Unix, support a coherent sharing model for multiple users. Distributed file systems implement this sharing model in local area networks. However, most distributed file systems fail to scale from local area networks to an internet. Four characteristics of scalability were recognized: size, wide area, autonomy, and heterogeneity. Owing to size and wide area, techniques such as broadcasting, central control, and central resources, which are widely adopted by local area network file systems, are not adequate for an internet file system. An internet file system must also support the notion of autonomy because an internet is made up of a collection of independent organizations. Finally, heterogeneity is the nature of an internet file system, not only because of its size, but also because of the autonomy of the organizations in an internet. The Jade File System, which provides a uniform way to name and access files in the internet environment, is presented. Jade is a logical system that integrates a heterogeneous collection of existing file systems, where heterogeneous means that the underlying file systems support different file access protocols. Because of autonomy, Jade is designed under the restriction that the underlying file systems may not be modified. In order to avoid the complexity of maintaining an internet-wide, global name space, Jade permits each user to define a private name space. In Jade's design, we pay careful attention to avoiding unnecessary network messages between clients and file servers in order to achieve acceptable performance. Jade's name space supports two novel features: (1) it allows multiple file systems to be mounted under one directory; and (2) it permits one logical name space to mount other logical name spaces. A prototype of Jade was implemented to examine and validate its

  19. Digital Elevation Model (DEM) file of topographic elevations for the Death Valley region of southern Nevada and southeastern California processed from US Geological Survey 1-degree Digital Elevation Model data files

    International Nuclear Information System (INIS)

    Turner, A.K.; D'Agnese, F.A.; Faunt, C.C.

    1996-01-01

    Elevation data have been compiled into a digital data base for an ∼100,000-km² area of the southern Great Basin, the Death Valley region of southern Nevada, and southeastern California, located between lat 35°N, long 115°W, and lat 38°N, long 118°W. This region includes the Nevada Test Site, Yucca Mountain, and adjacent parts of southern Nevada and eastern California, and encompasses the Death Valley regional ground-water system. Because digital maps are often useful for applications other than that for which they were originally intended, and because the area corresponds to a region under continuing investigation by several groups, these digital files are being released by the USGS

  20. ENDF/B-4 General Purpose File 1974

    International Nuclear Information System (INIS)

    Schwerer, O.

    1980-04-01

    This document summarizes contents and documentation of the 1974 version of the General Purpose File of the ENDF/B Library maintained by the National Nuclear Data Center (NNDC) at the Brookhaven National Laboratory, USA. The Library contains numerical neutron reaction data for 90 isotopes or elements. The entire Library or selective retrievals from it can be obtained on magnetic tape from the IAEA Nuclear Data Section. (author)

  1. TIGER/Line Shapefile, 2010, Series Information File for the 2010 Census Block State-based Shapefile with Housing and Population Data

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The TIGER/Line Files are shapefiles and related database files (.dbf) that are an extract of selected geographic and cartographic information from the U.S. Census...

  2. General activities of JAERI nuclear data center and Japanese nuclear data committee

    International Nuclear Information System (INIS)

    Fukahori, Tokio

    1999-01-01

    The nuclear data center of the Japan Atomic Energy Research Institute (JAERI/NDC) plays the role of the Japanese domestic nuclear data center and of a gateway to foreign data centers. As the domestic nuclear data center, the activities of JAERI/NDC are 1) compiling the Japanese Evaluated Nuclear Data Library (JENDL) for both general and special purposes, 2) importing and exporting nuclear data, 3) providing nuclear data services for domestic users, and 4) acting as the secretariat of the Japanese Nuclear Data Committee (JNDC). The JENDL General Purpose Files compiled up to now are JENDL-1, 2, 3, 3.1 and 3.2. The data for 340 nuclei in the energy range from 10⁻⁵ eV to 20 MeV are available in JENDL-3.2. JENDL Special Purpose Files were also prepared in order to meet requests from specific application fields. JNDC has about 140 members and consists of the Main Committee, Steering Committee, Subcommittee on Nuclear Data, Subcommittee on Reactor Constants, Subcommittee on Nuclear Fuel Cycle, and Standing Groups. These subcommittees perform evaluations for the files described above, check the JENDL files through benchmark and integral testing, consider standard group constants, and evaluate decay heat, nuclide generation/depletion, and fission product yields. (author)

  3. DJFS: Providing Highly Reliable and High‐Performance File System with Small‐Sized NVRAM

    Directory of Open Access Journals (Sweden)

    Junghoon Kim

    2017-11-01

    Full Text Available File systems and applications try to implement their own update protocols to guarantee data consistency, which is one of the most crucial aspects of computing systems. However, we found that storage devices are substantially under‐utilized when preserving data consistency because they generate massive storage write traffic with many disk cache flush operations and force‐unit‐access (FUA) commands. In this paper, we present DJFS (Delta‐Journaling File System), which provides both a high level of performance and data consistency for different applications. We made three technical contributions to achieve our goal. First, to remove all storage accesses with disk cache flush operations and FUA commands, DJFS uses a small‐sized NVRAM for the file system journal. Second, to reduce the access latency and space requirements of NVRAM, DJFS journals the compressed differences (deltas) of the modified blocks. Finally, to relieve explicit checkpointing overhead, DJFS aggressively reflects the checkpoint transactions to the file system area in units of a specified region. Our evaluation with the TPC‐C SQLite benchmark shows that, using our novel optimization schemes, DJFS outperforms Ext4 by up to 64.2 times with only 128 MB of NVRAM.
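
    The delta-journaling step can be pictured in a few lines of code. The sketch below is a simplification under the assumption that a plain XOR is an acceptable delta encoder; it journals only the compressed difference between the old and new versions of a block, and mostly-unchanged blocks XOR to long runs of zero bytes, which compress very well.

        import zlib

        BLOCK = 4096

        def delta_journal_record(old_block: bytes, new_block: bytes) -> bytes:
            # Journal only the compressed difference between the old and new
            # versions of a modified block, in the spirit of DJFS.
            xor = bytes(a ^ b for a, b in zip(old_block.ljust(BLOCK, b"\0"),
                                              new_block.ljust(BLOCK, b"\0")))
            return zlib.compress(xor)

        def replay(old_block: bytes, record: bytes) -> bytes:
            # Recover the new block from the old block plus the journal record.
            xor = zlib.decompress(record)
            return bytes(a ^ b for a, b in zip(old_block.ljust(BLOCK, b"\0"), xor))

        old = b"hello world".ljust(BLOCK, b"\0")
        new = b"hello brave new world".ljust(BLOCK, b"\0")
        rec = delta_journal_record(old, new)
        assert replay(old, rec) == new
        assert len(rec) < BLOCK   # the record is far smaller than a full block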

  4. Data processing in the integrated data base for spent fuel and radioactive waste

    International Nuclear Information System (INIS)

    Forsberg, C.W.; Morrison, G.W.; Notz, K.J.

    1984-01-01

    The Integrated Data Base (IDB) Program at Oak Ridge National Laboratory (ORNL) produces for the U.S. Department of Energy (DOE) the official spent fuel and radioactive waste inventories and projections for the United States through the year 2020. Inventory data are collected and checked for consistency, projection data are calculated based on specified assumptions, and both are converted to a standard format. Spent fuel and waste radionuclides are decayed as a function of time. The resulting information constitutes the core data files called the Past/Present/Future (P/P/F) data base. A data file management system, SAS®, is used to retrieve the data and create several types of output: an annual report, an electronic summary data file designed for IBM-PC®-compatible computers, and special-request reports.
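
    Decaying a stored inventory as a function of time reduces, for a single nuclide with no significant decay chain, to the exponential decay law. A minimal sketch (the Cs-137 half-life is the only physical input; chain ingrowth is deliberately ignored):

        import math

        def decayed_activity(a0_curies, half_life_years, elapsed_years):
            """A(t) = A0 * exp(-ln2 * t / T_half) for one nuclide, no chain."""
            return a0_curies * math.exp(-math.log(2.0) * elapsed_years / half_life_years)

        # Project a 1000 Ci Cs-137 inventory (T_half ~ 30.1 y) from 1984 to 2020:
        print(decayed_activity(1000.0, 30.1, 36.0))  # ~437 Ci remaining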

  5. The Galley Parallel File System

    Science.gov (United States)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    Most current multiprocessor file systems are designed to use multiple disks in parallel, using the high aggregate bandwidth to meet the growing I/O requirements of parallel scientific applications. Many multiprocessor file systems provide applications with a conventional Unix-like interface, allowing the application to access multiple disks transparently. This interface conceals the parallelism within the file system, increasing the ease of programmability, but making it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. In addition to providing an insufficient interface, most current multiprocessor file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic scientific multiprocessor workloads. We discuss Galley's file structure and application interface, as well as the performance advantages offered by that interface.

  6. LHCb Data Management: consistency, integrity and coherence of data

    CERN Document Server

    Bargiotti, Marianne

    2007-01-01

    The Large Hadron Collider (LHC) at CERN will start operating in 2007. The LHCb experiment is preparing for the real data handling and analysis via a series of data challenges and production exercises. The aim of these activities is to demonstrate the readiness of the computing infrastructure based on WLCG (Worldwide LHC Computing Grid) technologies, to validate the computing model and to provide useful samples of data for detector and physics studies. DIRAC (Distributed Infrastructure with Remote Agent Control) is the gateway to WLCG. The Dirac Data Management System (DMS) relies on both WLCG Data Management services (LCG File Catalogues, Storage Resource Managers and File Transfer Service) and LHCb specific components (Bookkeeping Metadata File Catalogue). Although the Dirac DMS has been extensively used over the past years and has proved to achieve a high grade of maturity and reliability, the complexity of both the DMS and its interactions with numerous WLCG components as well as the instability of facilit...

  7. Plug Load Data

    Data.gov (United States)

    National Aeronautics and Space Administration — We provide MATLAB binary files (.mat) and comma separated values files of data collected from a pilot study of a plug load management system that allows for the...

  8. Ex Vivo Comparison of Mtwo and RaCe Rotary File Systems in Root Canal Deviation: One File Only versus the Conventional Method.

    Science.gov (United States)

    Aminsobhani, Mohsen; Razmi, Hasan; Nozari, Solmaz

    2015-07-01

    Cleaning and shaping of the root canal system is an important step in endodontic therapy. New instruments incorporate new preparation techniques that can improve the efficacy of cleaning and shaping. The aim of this study was to compare the efficacy of the Mtwo and RaCe rotary file systems in straightening the canal curvature using only one file or the conventional method. Sixty mesial roots of extracted human mandibular molars were prepared with RaCe and Mtwo nickel-titanium (NiTi) rotary files using the conventional and one-file-only methods. The working length was 18 mm and the curvatures of the root canals were between 15° and 45°. By superimposing x-ray images taken before and after the instrumentation, deviation of the canals was assessed using Adobe Photoshop CS3 software. Preparation time was recorded. Data were analyzed using three-way ANOVA and Tukey's post hoc test. There were no significant differences between RaCe and Mtwo or between the two root canal preparation methods in root canal deviation in buccolingual and mesiodistal radiographs (P>0.05). Changes of root canal curvature in the >35° subgroups were significantly greater than in the subgroups with smaller canal curvatures. Preparation time was shorter with the one-file-only technique. According to the results, the two rotary systems and the two root canal preparation methods had equal efficacy in straightening the canals, but the preparation time was shorter in the one-file-only groups.

  9. XRootd, disk-based, caching-proxy for optimization of data-access, data-placement and data-replication

    CERN Document Server

    Tadel, Matevz

    2013-01-01

    Following the smashing success of the XRootd-based USCMS data federation, the AAA project investigated extensions of the federation architecture by developing two sample implementations of an XRootd, disk-based, caching proxy. The first one simply starts fetching a whole file as soon as a file-open request is received and is suitable when completely random file access is expected or it is already known that the whole file will be read. The second implementation supports on-demand downloading of partial files. Extensions to the Hadoop file system have been developed to allow for an immediate fallback to network access when local HDFS storage fails to provide the requested block. Tools needed to analyze and to tweak block replication factors and to inject downloaded blocks into a running HDFS installation have also been developed. Both cache implementations are in operation at UCSD and several tests were also performed at UNL and UW-M. Operational experience and applications to automatic storage healing and opportunistic compu...

  10. New data storage and retrieval systems for JET data

    Energy Technology Data Exchange (ETDEWEB)

    Layne, Richard E-mail: richard.layne@ukaea.org.uk; Wheatley, Martin E-mail: martin.wheatley@ukaea.org.uk

    2002-06-01

    Since the start of the Joint European Torus (JET), an IBM mainframe has been the main platform for data analysis and storage (J. Comput. Phys. 73 (1987) 85). The mainframe was removed in June 2001 and Solaris and Linux are now the main data storage and analysis platforms. New data storage and retrieval systems have therefore been developed: the Data Warehouse, the JET pulse file server, and the processed pulse file system. In this paper, the new systems will be described, and the design decisions that led to the final systems will be outlined.

  11. New data storage and retrieval systems for JET data

    International Nuclear Information System (INIS)

    Layne, Richard; Wheatley, Martin

    2002-01-01

    Since the start of the Joint European Torus (JET), an IBM mainframe has been the main platform for data analysis and storage (J. Comput. Phys. 73 (1987) 85). The mainframe was removed in June 2001 and Solaris and Linux are now the main data storage and analysis platforms. New data storage and retrieval systems have therefore been developed: the Data Warehouse, the JET pulse file server, and the processed pulse file system. In this paper, the new systems will be described, and the design decisions that led to the final systems will be outlined

  12. Secure-Network-Coding-Based File Sharing via Device-to-Device Communication

    OpenAIRE

    Wang, Lei; Wang, Qing

    2017-01-01

    In order to increase the efficiency and security of file sharing in the next-generation networks, this paper proposes a large scale file sharing scheme based on secure network coding via device-to-device (D2D) communication. In our scheme, when a user needs to share data with others in the same area, the source node and all the intermediate nodes need to perform secure network coding operation before forwarding the received data. This process continues until all the mobile devices in the netw...

  13. Linking road accident data to other files : an integrated road accident recordkeeping system. Contribution in Proceedings of Seminar P 'Road Safety' held at the 14th PTHC Summer Annual Meeting, University of Sussex, England, from 14-17 July 1986. Volume P 284, p. 55-86.

    OpenAIRE

    Harris, S.

    1986-01-01

    The road accident data which the police collect is of great value to road safety research and is used extensively. This data increases greatly in value if it can be linked to other files which contain more detailed information on exposure. Linking road accident data to other files results in what we call an Integrated Road Accident Recordkeeping System, in which the combined value of the linked files is greater than the sum of their individual values.

  14. Conversion of Input Data between KENO and MCNP File Formats for Computer Criticality Assessments

    International Nuclear Information System (INIS)

    Schwarz, Randolph A.; Carter, Leland L.; Schwarz, Alysia L.

    2006-01-01

    KENO is a Monte Carlo criticality code that is maintained by Oak Ridge National Laboratory (ORNL). KENO is included in the SCALE (Standardized Computer Analysis for Licensing Evaluation) package. KENO is often used because it was specifically designed for criticality calculations. Because KENO has convenient geometry input, including the treatment of lattice arrays of materials, it is frequently used for production calculations. Monte Carlo N-Particle (MCNP) is a Monte Carlo transport code maintained by Los Alamos National Laboratory (LANL). MCNP has a powerful 3D geometry package and an extensive cross section database. It is a general-purpose code and may be used for calculations involving shielding or medical facilities, for example, but can also be used for criticality calculations. MCNP is becoming increasingly more popular for performing production criticality calculations. Both codes have their own specific advantages. After a criticality calculation has been performed with one of the codes, it is often desirable (or may be a safety requirement) to repeat the calculation with the other code to compare the important parameters using a different geometry treatment and cross section database. This manual conversion of input files between the two codes is labor intensive. The industry needs the capability of converting geometry models between MCNP and KENO without a large investment in manpower. The proposed conversion package will aid the user in converting between the codes. It is not intended to be used as a "black box". The resulting input file will need to be carefully inspected by criticality safety personnel to verify that the intent of the calculation is preserved in the conversion. The purpose of this package is to help the criticality specialist in the conversion process by converting the geometry, materials, and pertinent data cards.

  15. 78 FR 21930 - Aquenergy Systems, Inc.; Notice of Intent To File License Application, Filing of Pre-Application...

    Science.gov (United States)

    2013-04-12

    ... Systems, Inc.; Notice of Intent To File License Application, Filing of Pre-Application Document, and Approving Use of the Traditional Licensing Process a. Type of Filing: Notice of Intent to File License...: November 11, 2012. d. Submitted by: Aquenergy Systems, Inc., a fully owned subsidiaries of Enel Green Power...

  16. 12 CFR 16.33 - Filing fees.

    Science.gov (United States)

    2010-01-01

    ... Banking COMPTROLLER OF THE CURRENCY, DEPARTMENT OF THE TREASURY SECURITIES OFFERING DISCLOSURE RULES § 16.33 Filing fees. (a) Filing fees must accompany certain filings made under the provisions of this part... Comptroller of the Currency Fees published pursuant to § 8.8 of this chapter. (b) Filing fees must be paid by...

  17. 75 FR 4689 - Electronic Tariff Filings

    Science.gov (United States)

    2010-01-29

    ... elements ``are required to properly identify the nature of the tariff filing, organize the tariff database... (or other pleading) and the Type of Filing code chosen will be resolved in favor of the Type of Filing...'s wish expressed in its transmittal letter or in other pleadings, the Commission may not review a...

  18. Experiment in data tagging in information-accessing services containing energy-related data. Final report 1975--78

    International Nuclear Information System (INIS)

    1978-08-01

    This report describes the results of an experiment conducted by Chemical Abstracts Service (CAS), on the use of 'data tags' in a machine-readable output file for incorporation into an on-line search service. 'Data tags' are codes which uniquely identify specific types of numerical data in the corresponding source documents referenced in the file. Editorial and processing procedures were established for the identification of data types; the recording, editing, verification, and correction of the data tags; and their compilation into a special version of ENERGY, a CAS computer-readable abstract text file. Possible data tagging plans are described and criteria for extended studies in data tagging and accessing are outlined

  19. New thermal neutron scattering files for ENDF/B-VI release 2

    International Nuclear Information System (INIS)

    MacFarlane, R.E.

    1994-03-01

    At thermal neutron energies, the binding of the scattering nucleus in a solid, liquid, or gas affects the cross section and the distribution of secondary neutrons. These effects are described in the thermal sub-library of Version VI of the Evaluated Nuclear Data Files (ENDF/B-VI) using the File 7 format. In the original release of the ENDF/B-VI library, the data in File 7 were obtained by converting the thermal scattering evaluations of ENDF/B-III to the ENDF-6 format. These original evaluations were prepared at General Atomics (GA) in the late sixties, and they suffer from accuracy limitations imposed by the computers of the day. This report describes new evaluations for six of the thermal moderator materials and six new cold moderator materials. The calculations were made with the LEAPR module of NJOY, which uses methods based on the British code LEAP, together with the original GA physics models, to obtain new ENDF files that are accurate over a wider range of energy and momentum transfer than the existing files. The new materials are H in H₂O, Be metal, Be in BeO, C in graphite, H in ZrH, Zr in ZrH, liquid ortho-hydrogen, liquid para-hydrogen, liquid ortho-deuterium, liquid para-deuterium, liquid methane, and solid methane.

  20. Data archiving and analysis for CWDD

    International Nuclear Information System (INIS)

    Coleman, T.A.; Novick, A.H.; Meystrik, C.C.; Marselle, J.R.

    1992-01-01

    A computer system has been developed to handle the archiving and analysis of data acquired during operations of the Continuous Wave Deuterium Demonstrator (CWDD). Data files generated by the CWDD Instrumentation and Control system are transferred across a local area network to the CWDD Archive system, where they are enlisted into the archive and stored on removable-media optical disk drives. A relational database management system maintains an on-line database catalog of all archived files. This database contains information about file contents and formats, and holds signal parameter configuration tables needed to extract and interpret data from the files. Software has been developed to assist the selection and retrieval of data on demand based upon references in the catalog. Data retrieved from the archive is transferred to commercial data visualization applications for viewing, plotting and analysis.

  1. Data Management for Mars Exploration Rovers

    Science.gov (United States)

    Snyder, Joseph F.; Smyth, David E.

    2004-01-01

    Data Management for the Mars Exploration Rovers (MER) project is a comprehensive system addressing the needs of development, test, and operations phases of the mission. During development of flight software, including the science software, the data management system can be simulated using any POSIX file system. During testing, the on-board file system can be bit compared with files on the ground to verify proper behavior and end-to-end data flows. During mission operations, end-to-end accountability of data products is supported, from science observation concept to data products within the permanent ground repository. Automated and human-in-the-loop ground tools allow decisions regarding retransmitting, re-prioritizing, and deleting data products to be made using higher level information than is available to a protocol-stack approach such as the CCSDS File Delivery Protocol (CFDP).

  2. The UK chemical nuclear data library: a summary of the data available in ENDF/B format

    International Nuclear Information System (INIS)

    Davies, B.S.J.

    1981-11-01

    The UK Chemical Nuclear Data Committee files have been considerably revised and extended. The files now embrace: fission yields (C31), fission product decay data (UKFPDD-2), activation product decay data (UKPADD-1), and heavy element decay data (UKHEDD-1). The fission yield data is based on Crouch's third round of adjustment and includes yields to isomeric states. The decay data files include data on half-life, decay modes, branching ratios and alpha, beta and gamma radiation energies and intensities. The data have all been recommended by the UK Chemical Nuclear Data Committee for use in the UK reactor programme; they are stored on magnetic tape at AERE Harwell, AEE Winfrith and CEGB Berkeley Nuclear Laboratories. (author)

  3. 78 FR 75554 - Combined Notice of Filings

    Science.gov (United States)

    2013-12-12

    ...-000. Applicants: Young Gas Storage Company, Ltd. Description: Young Fuel Reimbursement Filing to be.... Protests may be considered, but intervention is necessary to become a party to the proceeding. eFiling is... qualifying facilities filings can be found at: http://www.ferc.gov/docs-filing/efiling/filing-req.pdf . For...

  4. MICE data handling on the Grid

    International Nuclear Information System (INIS)

    Martyniak, J

    2014-01-01

    The international Muon Ionisation Cooling Experiment (MICE) is designed to demonstrate the principle of muon ionisation cooling for the first time, for application to a future Neutrino factory or Muon Collider. The experiment is currently under construction at the ISIS synchrotron at the Rutherford Appleton Laboratory (RAL), UK. In this paper we present a system – the Raw Data Mover, which allows us to store and distribute MICE raw data – and a framework for offline reconstruction and data management. The aim of the Raw Data Mover is to upload raw data files onto a safe tape storage as soon as the data have been written out by the DAQ system and marked as ready to be uploaded. Internal integrity of the files is verified and they are uploaded to the RAL Tier-1 Castor Storage Element (SE) and placed on two tapes for redundancy. We also make another copy at a separate disk-based SE at this stage to make it easier for users to access data quickly. Both copies are check-summed and the replicas are registered with an instance of the LCG File Catalog (LFC). On success a record with basic file properties is added to the MICE Metadata DB. The reconstruction process is triggered by new raw data records filled in by the mover system described above. Off-line reconstruction jobs for new raw files are submitted to RAL Tier-1 and the output is stored on tape. Batch reprocessing is done at multiple MICE enabled Grid sites and output files are shipped to central tape or disk storage at RAL using a custom File Transfer Controller.
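
    The mover's main loop is simple to sketch. In the outline below, `upload` and `register` are hypothetical stand-ins for the storage-element copy and the MICE Metadata DB insert; only the checksumming is concrete:

        import hashlib, pathlib

        def checksum(path, algo="md5", chunk=1 << 20):
            # Stream the file so arbitrarily large raw files fit in memory.
            h = hashlib.new(algo)
            with open(path, "rb") as f:
                for block in iter(lambda: f.read(chunk), b""):
                    h.update(block)
            return h.hexdigest()

        def move_raw_file(path, upload, register):
            """One pass of a mover in the spirit of the MICE Raw Data Mover.
            `upload(path, endpoint)` and `register(name, size, md5)` are
            hypothetical hooks for the SE copies and the metadata record."""
            p = pathlib.Path(path)
            md5 = checksum(p)
            for endpoint in ("tape-se", "disk-se"):   # two redundant replicas
                upload(p, endpoint)
            register(p.name, p.stat().st_size, md5)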

  5. Small file aggregation in a parallel computing system

    Science.gov (United States)

    Faibish, Sorin; Bent, John M.; Tzelnic, Percy; Grider, Gary; Zhang, Jingwang

    2014-09-02

    Techniques are provided for small file aggregation in a parallel computing system. An exemplary method for storing a plurality of files generated by a plurality of processes in a parallel computing system comprises aggregating the plurality of files into a single aggregated file; and generating metadata for the single aggregated file. The metadata comprises an offset and a length of each of the plurality of files in the single aggregated file. The metadata can be used to unpack one or more of the files from the single aggregated file.
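
    The offset/length bookkeeping described in the abstract fits in a dozen lines. A minimal sketch (names are invented; a real implementation would persist the metadata alongside the aggregated file in the parallel file system):

        def aggregate(files):
            """Pack many small files into one blob plus (offset, length) metadata.
            `files` maps file name -> bytes."""
            blob, meta, offset = bytearray(), {}, 0
            for name, data in files.items():
                meta[name] = {"offset": offset, "length": len(data)}
                blob += data
                offset += len(data)
            return bytes(blob), meta

        def unpack(blob, meta, name):
            # Use the metadata to pull one member back out of the aggregate.
            entry = meta[name]
            return blob[entry["offset"]:entry["offset"] + entry["length"]]

        blob, meta = aggregate({"rank0.out": b"alpha", "rank1.out": b"beta"})
        assert unpack(blob, meta, "rank1.out") == b"beta"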

  6. 5 CFR 1203.13 - Filing pleadings.

    Science.gov (United States)

    2010-01-01

    ... delivery, by facsimile, or by e-filing in accordance with § 1201.14 of this chapter. If the document was... submitted by e-filing, it is considered to have been filed on the date of electronic submission. (e... 5 Administrative Personnel 3 2010-01-01 2010-01-01 false Filing pleadings. 1203.13 Section 1203.13...

  7. PFS: a distributed and customizable file system

    NARCIS (Netherlands)

    Bosch, H.G.P.; Mullender, Sape J.

    1996-01-01

    In this paper we present our ongoing work on the Pegasus File System (PFS), a distributed and customizable file system that can be used for off-line file system experiments and on-line file system storage. PFS is best described as an object-oriented component library from which either a true file system or a file-system simulator can be constructed. Each of the components in the library is easily replaced by another implementation to accommodate a wide range of applications.

  8. Screw-in forces during instrumentation by various file systems.

    Science.gov (United States)

    Ha, Jung-Hong; Kwak, Sang Won; Kim, Sung-Kyo; Kim, Hyeon-Cheol

    2016-11-01

    The purpose of this study was to compare the maximum screw-in forces generated during the movement of various nickel-titanium (NiTi) file systems. Forty simulated canals in resin blocks were randomly divided into 4 groups for the following instruments: Mtwo size 25/0.07 (MTW, VDW GmbH), Reciproc R25 (RPR, VDW GmbH), ProTaper Universal F2 (PTU, Dentsply Maillefer), and ProTaper Next X2 (PTN, Dentsply Maillefer) (n = 10). All the artificial canals were prepared to obtain a standardized lumen by using ProTaper Universal F1. Screw-in forces were measured using a custom-made experimental device (AEndoS-k, DMJ system) during instrumentation with each NiTi file system using the designated movement. The rotation speed was set at 350 rpm with an automatic 4 mm pecking motion at a speed of 1 mm/sec. The pecking depth was increased by 1 mm for each pecking motion until the file reached the working length. Forces were recorded during file movement, and the maximum force was extracted from the data. Maximum screw-in forces were analyzed by one-way ANOVA and Tukey's post hoc comparison at a significance level of 95%. Reciproc and ProTaper Universal files generated the highest maximum screw-in forces among all the instruments, while Mtwo and ProTaper Next showed the lowest (p < 0.05). The use of files with a smaller cross-sectional area, for higher flexibility, is recommended.

  9. 76 FR 61351 - Combined Notice of Filings #1

    Science.gov (United States)

    2011-10-04

    ... MBR Baseline Tariff Filing to be effective 9/22/2011. Filed Date: 09/22/2011. Accession Number... submits tariff filing per 35.1: ECNY MBR Re-File to be effective 9/22/2011. Filed Date: 09/22/2011... Industrial Energy Buyers, LLC submits tariff filing per 35.1: NYIEB MBR Re-File to be effective 9/22/2011...

  10. Deceit: A flexible distributed file system

    Science.gov (United States)

    Siegel, Alex; Birman, Kenneth; Marzullo, Keith

    1989-01-01

    Deceit, a distributed file system (DFS) being developed at Cornell, focuses on flexible file semantics in relation to efficiency, scalability, and reliability. Deceit servers are interchangeable and collectively provide the illusion of a single, large server machine to any clients of the Deceit service. Non-volatile replicas of each file are stored on a subset of the file servers. The user is able to set parameters on a file to achieve different levels of availability, performance, and one-copy serializability. Deceit also supports a file version control mechanism. In contrast with many recent DFS efforts, Deceit can behave like a plain Sun Network File System (NFS) server and can be used by any NFS client without modifying any client software. The current Deceit prototype uses the ISIS Distributed Programming Environment for all communication and process group management, an approach that reduces system complexity and increases system robustness.

  11. D0 Superconducting Solenoid Quench Data and Slow Dump Data Acquisition

    International Nuclear Information System (INIS)

    Markley, D.

    1998-01-01

    This Dzero engineering note describes the method by which the 2 Tesla superconducting solenoid fast-dump and slow-dump data are accumulated, tracked, and stored. The 2 Tesla solenoid has eleven data points that need to be tracked and then stored when a fast dump or a slow dump occurs. The TI555 (Texas Instruments) PLC (Programmable Logic Controller), which controls the DC power circuit that powers the solenoid, also has access to all the voltage taps and other equipment in the circuit. The TI555 constantly logs these eleven points in a rotating memory buffer. When either a fast dump (dump switch opens) or a slow dump (power supply turns off) occurs, the TI555 organizes the respective data and downloads the data to a file on DO-CCRS2. The data in this file are moved over Ethernet and stored in a CSV (comma-separated values) file, which can easily be examined by Microsoft Excel or any other spreadsheet. The 2 Tesla solenoid control system also locks in first-fault information. The TI555 decodes the first fault and passes it along to the program collecting the data and storing it on DO-CCRS2. This first-fault information is then part of the file.
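
    The rotating-buffer-plus-dump pattern is easy to mimic. A minimal sketch (the sample count and file layout are invented; the real PLC logic obviously differs):

        import collections, csv

        class QuenchLogger:
            """Sketch of the rotating buffer: keep the last N samples of the
            eleven monitored points and dump them to a CSV file on a trip."""
            def __init__(self, n_samples=1000, n_points=11):
                self.buf = collections.deque(maxlen=n_samples)
                self.n_points = n_points

            def sample(self, values):          # called on every scan cycle
                assert len(values) == self.n_points
                self.buf.append(values)

            def dump(self, path, first_fault=""):
                # Write the first-fault code, then the samples oldest to newest.
                with open(path, "w", newline="") as f:
                    w = csv.writer(f)
                    w.writerow(["first_fault", first_fault])
                    w.writerows(self.buf)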

  12. IMPROVED COMPRESSION OF XML FILES FOR FAST IMAGE TRANSMISSION

    Directory of Open Access Journals (Sweden)

    S. Manimurugan

    2011-02-01

    Full Text Available The eXtensible Markup Language (XML) is a format that is widely used as a tool for data exchange and storage. It is being increasingly used in secure transmission of image data over wireless networks and the World Wide Web. Verbose in nature, XML files can be tens of megabytes long. Thus, to reduce their size and to allow faster transmission, compression becomes vital. Several general-purpose compression tools have been proposed, without satisfactory results. This paper proposes a novel technique using a modified BWT for compressing XML files in a lossless fashion. The experimental results show that the performance of the proposed technique outperforms both general-purpose and XML-specific compressors.
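
    The paper's modification of the BWT is not detailed in the abstract, but the textbook transform it builds on is short enough to show. The transform sorts all rotations of the input and keeps the last column, grouping similar contexts (such as repeated XML tags) so that a simple back-end coder compresses them well; the quadratic inverse below is for illustration only.

        def bwt(text: str, sentinel: str = "\x00") -> str:
            """Burrows-Wheeler transform: last column of the sorted rotations."""
            s = text + sentinel                      # unique end marker
            rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
            return "".join(rot[-1] for rot in rotations)

        def inverse_bwt(last: str, sentinel: str = "\x00") -> str:
            # Repeatedly prepend the last column and re-sort to rebuild the table.
            table = [""] * len(last)
            for _ in range(len(last)):
                table = sorted(last[i] + table[i] for i in range(len(last)))
            return next(row for row in table if row.endswith(sentinel))[:-1]

        xml = "<a><b>x</b><b>y</b></a>"
        assert inverse_bwt(bwt(xml)) == xml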

  13. 10 CFR 110.89 - Filing and service.

    Science.gov (United States)

    2010-01-01

    ...: Rulemakings and Adjudications Staff or via the E-Filing system, following the procedure set forth in 10 CFR 2.302. Filing by mail is complete upon deposit in the mail. Filing via the E-Filing system is completed... residence with some occupant of suitable age and discretion; (2) Following the requirements for E-Filing in...

  14. 49 CFR 1104.6 - Timely filing required.

    Science.gov (United States)

    2010-10-01

    ... offers next day delivery to Washington, DC. If the e-filing option is chosen (for those pleadings and documents that are appropriate for e-filing, as determined by reference to the information on the Board's Web site), then the e-filed pleading or document is timely filed if the e-filing process is completed...

  15. The European Activation File, EAF-2005

    International Nuclear Information System (INIS)

    Forrest, R.A.

    2005-01-01

    The current version of the European Activation File is EAF-2003. This contains various libraries of nuclear data required for activation calculations. An important component is the neutron-induced cross-section library. Plans to expose fusion components to high neutron fluxes include the IFMIF materials testing facility. This accelerator-based device will produce neutrons with a high-energy tail up to about 55 MeV. In order to carry out activation calculations on materials exposed to such neutrons it is necessary to extend the energy range of the cross-section library. Work on extending the energy range to 60 MeV is nearing completion. A test version (EAF-2004) was produced at the end of 2003, showing the feasibility of the chosen approach. This library required calculated data to extend the existing data from 20 to 60 MeV and to enlarge it with new classes of reactions with high thresholds. A summary of the new library EAF-2005, which is under development and is planned for distribution at the beginning of 2005, is given. The other files in EAF-2005 are briefly described; these cover cross-section uncertainty information and decay data. Both have been extended beyond the current version to allow activation calculations at energies up to 60 MeV

  16. Optimizing Instruction Scheduling and Register Allocation for Register-File-Connected Clustered VLIW Architectures

    Science.gov (United States)

    Tang, Haijing; Wang, Siye; Zhang, Yanjun

    2013-01-01

    Clustering has become a common trend in very long instruction words (VLIW) architecture to solve the problem of area, energy consumption, and design complexity. Register-file-connected clustered (RFCC) VLIW architecture uses the mechanism of global register file to accomplish the inter-cluster data communications, thus eliminating the performance and energy consumption penalty caused by explicit inter-cluster data move operations in traditional bus-connected clustered (BCC) VLIW architecture. However, the limited number of access ports to the global register file has become an issue which must be well addressed; otherwise the performance and energy consumption would be harmed. In this paper, we presented compiler optimization techniques for an RFCC VLIW architecture called Lily, which is designed for encryption systems. These techniques aim at optimizing performance and energy consumption for Lily architecture, through appropriate manipulation of the code generation process to maintain a better management of the accesses to the global register file. All the techniques have been implemented and evaluated. The result shows that our techniques can significantly reduce the penalty of performance and energy consumption due to access port limitation of global register file. PMID:23970841

  17. Optimizing Instruction Scheduling and Register Allocation for Register-File-Connected Clustered VLIW Architectures

    Directory of Open Access Journals (Sweden)

    Haijing Tang

    2013-01-01

    Full Text Available Clustering has become a common trend in very long instruction words (VLIW) architecture to solve the problem of area, energy consumption, and design complexity. Register-file-connected clustered (RFCC) VLIW architecture uses the mechanism of global register file to accomplish the inter-cluster data communications, thus eliminating the performance and energy consumption penalty caused by explicit inter-cluster data move operations in traditional bus-connected clustered (BCC) VLIW architecture. However, the limited number of access ports to the global register file has become an issue which must be well addressed; otherwise the performance and energy consumption would be harmed. In this paper, we presented compiler optimization techniques for an RFCC VLIW architecture called Lily, which is designed for encryption systems. These techniques aim at optimizing performance and energy consumption for Lily architecture, through appropriate manipulation of the code generation process to maintain a better management of the accesses to the global register file. All the techniques have been implemented and evaluated. The result shows that our techniques can significantly reduce the penalty of performance and energy consumption due to access port limitation of global register file.

  18. FROM CAD MODEL TO 3D PRINT VIA “STL” FILE FORMAT

    Directory of Open Access Journals (Sweden)

    Cătălin IANCU

    2010-06-01

    Full Text Available The paper presents the STL file format, which is now used for transferring information from CAD software to a 3D printer, for obtaining the solid model in rapid prototyping and computer-aided manufacturing. The structure of the STL format, its history, limitations, and further development are also presented, as well as the new version to come and other similar file formats. In conclusion, the STL files used to transfer data from a CAD package to a 3D printer have a series of limitations, and therefore new formats will soon replace them.
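
    ASCII STL itself is only a handful of keywords: each facet carries a normal and exactly three vertices. The helper below writes a one-facet file and is a sketch, not tied to any particular CAD package:

        def write_ascii_stl(path, name, triangles):
            """Emit a minimal ASCII STL file: each facet is a normal plus exactly
            three vertices, bracketed by the fixed keywords of the format."""
            with open(path, "w") as f:
                f.write(f"solid {name}\n")
                for normal, verts in triangles:
                    f.write("  facet normal %e %e %e\n" % normal)
                    f.write("    outer loop\n")
                    for v in verts:
                        f.write("      vertex %e %e %e\n" % v)
                    f.write("    endloop\n  endfacet\n")
                f.write(f"endsolid {name}\n")

        # One facet of a unit triangle; a real mesh tessellates every surface.
        write_ascii_stl("tri.stl", "demo",
                        [((0.0, 0.0, 1.0),
                          ((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)))])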

  19. DICOM-supported software configuration by XML files

    International Nuclear Information System (INIS)

    LucenaG, Bioing Fabian M; Valdez D, Andres E; Gomez, Maria E; Nasisi, Oscar H

    2007-01-01

    A method is proposed for configuring informatics systems that support the DICOM standard using XML files. The difference from other proposals is that this system does not encode the information of a DICOM object file, but encodes the standard itself in an XML file. The development itself is the format of the XML files mentioned, designed so that they can support what DICOM standardizes in multiple languages. In this way, the same configuration file (or files) can be used in different systems. Together with the generated XML configuration file, we also wrote a set of CSS and XSL files, so the same file can be visualized in a standard browser as a query system for the DICOM standard; this emerging use was not a main objective but provides great utility and versatility. We also present some usage examples of the configuration file, mainly in relation to the loading of DICOM information objects. Finally, in the conclusions we show the utility that the system has already provided when the edition of the DICOM standard changed from 2006 to 2007

  20. 12 CFR 908.25 - Filing of papers.

    Science.gov (United States)

    2010-01-01

    ... 12 Banks and Banking 7 2010-01-01 2010-01-01 false Filing of papers. 908.25 Section 908.25 Banks... RULES OF PRACTICE AND PROCEDURE IN HEARINGS ON THE RECORD General Rules § 908.25 Filing of papers. (a) Filing. Any papers required to be filed shall be addressed to the presiding officer and filed with the...

  1. Input data for inferring species distributions in Kyphosidae world-wide

    Directory of Open Access Journals (Sweden)

    Steen Wilhelm Knudsen

    2016-09-01

    Full Text Available Input data files for inferring the relationships within the family Kyphosidae, as presented in Knudsen and Clements (2016) [1], are provided here together with the resulting topologies, to allow the reader to explore the topologies in detail. The input data files comprise seven nexus files with sequence alignments of mtDNA and nDNA markers for performing Bayesian analysis. A matrix of recoded character states inferred from the morphology examined in museum specimens representing Dichistiidae, Girellidae, Kyphosidae, Microcanthidae and Scorpididae is also provided, and can be used for performing a parsimony analysis to infer the relationships among these perciform families. The nucleotide input data files comprise both multiple and single representatives of the various species to allow for inference of the relationships among the species in Kyphosidae and between the families closely related to Kyphosidae. The ‘.xml’ files with various constrained relationships among the families potentially closely related to Kyphosidae are also provided to allow the reader to rerun and explore the results from the stepping-stone analysis. The resulting topologies are supplied in newick file formats together with the input data files for Bayesian analysis, together with the ‘.xml’ files. Re-running the input data files in the appropriate software will enable the reader to examine log files and tree files themselves. Keywords: Sea chub, Drummer, Kyphosus, Scorpis, Girella

  2. PFS: a distributed and customizable file system

    OpenAIRE

    Bosch, H.G.P.; Mullender, Sape J.

    1996-01-01

    In this paper we present our ongoing work on the Pegasus File System (PFS), a distributed and customizable file system that can be used for off-line file system experiments and on-line file system storage. PFS is best described as an object-oriented component library from which either a true file system or a file-system simulator can be constructed. Each of the components in the library is easily replaced by another implementation to accommodate a wide range of applications.

  3. Detecting Malicious Code by Binary File Checking

    Directory of Open Access Journals (Sweden)

    Marius POPA

    2014-01-01

    Full Text Available Object, library, and executable code is stored in binary files. The functionality of a binary file is altered when its content or program source code is changed, causing undesired effects. A direct content change is possible when the intruder knows the structural information of the binary file. The paper describes the structural properties of binary object files, how their content can be controlled by a possible intruder, and the ways to identify malicious code in such kinds of files. Because object files are inputs to linking processes, early detection of malicious content is crucial to avoid infection of the binary executable files.
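
    A first structural check along these lines is to verify that a file's magic number matches the format it claims to be; real detection would go on to inspect section tables, import tables, and entry points. A minimal sketch using well-known magic numbers:

        # First-pass structural check on binary files: compare the leading
        # bytes against the magic numbers of common object/executable formats.
        MAGIC = {
            b"\x7fELF": "ELF object/executable",
            b"MZ": "PE/DOS executable",
            b"!<arch>\n": "static library archive (ar)",
        }

        def identify(path):
            with open(path, "rb") as f:
                head = f.read(8)
            for magic, kind in MAGIC.items():
                if head.startswith(magic):
                    return kind
            return "unknown: content may have been altered or is not a binary"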

  4. RX LAPAN Rocket data Program With Dbase III Plus

    International Nuclear Information System (INIS)

    Sauman

    2001-01-01

    The component data for the LAPAN RX rocket are taken from the workshop production and assembly of the RX rocket. In this application software, the test data are organized into two data files, i.e. a test file and a rocket file. Besides providing facilities to add, edit and delete data, this software also provides a data manipulation facility to support the analysis and identification of RX rocket failures and successes

  5. Central Personnel Data File (CPDF) Status Data

    Data.gov (United States)

    Office of Personnel Management — Precursor to the Enterprise Human Resources Integration-Statistical Data Mart (EHRI-SDM). It contains data about the employee and their position, along with various...

  6. An asynchronous writing method for restart files in the gysela code in prevision of exascale systems*

    Directory of Open Access Journals (Sweden)

    Thomine O.

    2013-12-01

    Full Text Available The present work deals with an optimization procedure developed in the full-f global GYrokinetic SEmi-LAgrangian code (GYSELA). Optimizing the writing of the restart files is necessary to reduce the computing impact of crashes. These files require a very large memory space, particularly so for very large mesh sizes. The limited bandwidth of the data pipe between the computing nodes and the storage system induces a non-scalable part in the GYSELA code, which increases with the mesh size. Indeed, the transfer time from RAM to disk depends linearly on the file size. A non-synchronized writing-to-file procedure is therefore crucial. A new GYSELA module has been developed. This asynchronous procedure allows the frequent writing of the restart files, whilst preventing a severe slowdown due to the limited writing bandwidth. This method has been improved to generate a checksum control of the restart files, and to automatically rerun the code in case of a crash for any cause.
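
    The core of such a scheme is to hand a consistent copy of the state to a background writer so the solver can continue time-stepping. A minimal sketch (a thread and an MD5 sidecar file stand in for GYSELA's actual MPI-based implementation):

        import hashlib, threading

        def write_restart_async(snapshot: bytes, path: str) -> threading.Thread:
            """Drain an already-serialized state copy to disk in the background
            and append a checksum, while computation continues."""
            def writer():
                with open(path, "wb") as f:
                    f.write(snapshot)
                with open(path + ".md5", "w") as f:
                    f.write(hashlib.md5(snapshot).hexdigest())
            t = threading.Thread(target=writer, daemon=True)
            t.start()
            return t   # join() before the next checkpoint reuses the buffer

        t = write_restart_async(b"\x00" * (1 << 20), "restart.bin")
        # ... continue time stepping here ...
        t.join()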

  7. 77 FR 74839 - Combined Notice of Filings

    Science.gov (United States)

    2012-12-18

    ..., LP. Description: National Grid LNG, LP submits tariff filing per 154.203: Adoption of NAESB Version 2... with Order to Amend NAESB Version 2.0 Filing to be effective 12/1/2012. Filed Date: 12/11/12. Accession...: Refile to comply with Order on NAESB Version 2.0 Filing to be effective 12/1/2012. Filed Date: 12/11/12...

  8. Dynamic file-access characteristics of a production parallel scientific workload

    Science.gov (United States)

    Kotz, David; Nieuwejaar, Nils

    1994-01-01

    Multiprocessors have permitted astounding increases in computational performance, but many cannot meet the intense I/O requirements of some scientific applications. An important component of any solution to this I/O bottleneck is a parallel file system that can provide high-bandwidth access to tremendous amounts of data in parallel to hundreds or thousands of processors. Most successful systems are based on a solid understanding of the expected workload, but thus far there have been no comprehensive workload characterizations of multiprocessor file systems. This paper presents the results of a three week tracing study in which all file-related activity on a massively parallel computer was recorded. Our instrumentation differs from previous efforts in that it collects information about every I/O request and about the mix of jobs running in a production environment. We also present the results of a trace-driven caching simulation and recommendations for designers of multiprocessor file systems.

  9. Common data model access; a unified layer to access data from data analysis point of view

    International Nuclear Information System (INIS)

    Poirier, S.; Buteau, A.; Ounsy, M.; Rodriguez, C.; Hauser, N.; Lam, T.; Xiong, N.

    2012-01-01

    For almost 20 years, the scientific community of neutron and synchrotron institutes has been dreaming of a common data format for exchanging experimental results and applications for reducing and analyzing the data. Using HDF5 as a data container has become the standard in many facilities. The big issue is the standardization of the data organization (schema) within the HDF5 container. By introducing a new level of indirection for data access, the Common-Data-Model-Access (CDMA) framework proposes a solution and allows separation of responsibilities between data reduction developers and the institute. Data reduction developers are responsible for data reduction code; the institute provides a plug-in to access the data. The CDMA is a core API that accesses data through a data-format plug-in mechanism and scientific application definitions (sets of keywords) coming from a consensus between scientists and institutes. Using an innovative 'mapping' system between application definitions and physical data organizations, the CDMA allows data reduction applications to be developed independently of both the data file container and the schema. Each institute develops a data access plug-in for its own data file formats, along with the mapping between application definitions and its data files. Thus data reduction applications can be developed from a strictly scientific point of view and are immediately able to process data acquired from several institutes. (authors)
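
    The keyword-to-location mapping is the heart of the idea and can be sketched in a few lines; the keywords, paths, and class names below are invented examples, not the real CDMA API:

        # Applications ask for physics keywords; a per-institute plug-in maps
        # each keyword to the location inside that institute's file schema.
        class InstitutePlugin:
            def __init__(self, mapping, reader):
                self.mapping = mapping   # keyword -> location in the file
                self.reader = reader     # callable(path_in_file) -> data

            def get(self, keyword):
                return self.reader(self.mapping[keyword])

        plugin = InstitutePlugin(
            {"detector_image": "/entry/scan/data/image",
             "monochromator_energy": "/entry/instrument/mono/energy"},
            reader=lambda p: f"<data read from {p}>",  # stand-in for HDF5 access
        )
        # Reduction code is written once, against keywords only:
        image = plugin.get("detector_image")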

  10. The EcoData retriever: improving access to existing ecological data.

    Directory of Open Access Journals (Sweden)

    Benjamin D Morris

    Full Text Available Ecological research relies increasingly on the use of previously collected data. Use of existing datasets allows questions to be addressed more quickly, more generally, and at larger scales than would otherwise be possible. As a result of large-scale data collection efforts, and an increasing emphasis on data publication by journals and funding agencies, a large and ever-increasing amount of ecological data is now publicly available via the internet. Most ecological datasets do not adhere to any agreed-upon standards in format, data structure, or method of access. Some may be broken up across multiple files, stored in compressed archives, and violate basic principles of data structure. As a result, acquiring and utilizing available datasets can be a time-consuming and error-prone process. The EcoData Retriever is an extensible software framework which automates the tasks of discovering, downloading, and reformatting ecological data files for storage in a local data file or relational database. The automation of these tasks saves significant time for researchers and substantially reduces the likelihood of errors resulting from manual data manipulation and unfamiliarity with the complexities of individual datasets.
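
    The download-normalize-load cycle the Retriever automates looks roughly like this; the URL and table name are placeholders for an entry in a dataset registry, and the header cleanup is deliberately simplistic:

        import csv, io, sqlite3, urllib.request

        def fetch_into_sqlite(url, table, db_path="eco.db"):
            """Download a CSV dataset, normalize the header, and load it
            into a local relational database."""
            raw = urllib.request.urlopen(url).read().decode("utf-8", "replace")
            rows = list(csv.reader(io.StringIO(raw)))
            header = [h.strip().lower().replace(" ", "_") for h in rows[0]]
            db = sqlite3.connect(db_path)
            db.execute(f"CREATE TABLE IF NOT EXISTS {table} ({', '.join(header)})")
            db.executemany(
                f"INSERT INTO {table} VALUES ({', '.join('?' * len(header))})",
                rows[1:])
            db.commit()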

  11. Methods and approaches to provide feedback from nuclear and covariance data adjustment for improvement of nuclear data files. SG39 meeting, May 2016

    International Nuclear Information System (INIS)

    Herman, Michal Wladyslaw; Cabellos De Francisco, Oscar; Beck, Bret; Ignatyuk, Anatoly V.; Palmiotti, Giuseppe; Grudzevich, Oleg T.; Salvatores, Massimo; Chadwick, Mark; Pelloni, Sandro; Diez De La Obra, Carlos Javier; Wu, Haicheng; Sobes, Vladimir; Rearden, Bradley T.; Yokoyama, Kenji; Hursin, Mathieu; Penttila, Heikki; Kodeli, Ivan-Alexander; Plevnik, Lucijan; Plompen, Arjan; Gabrielli, Fabrizio; Leal, Luiz Carlos; Aufiero, Manuele; Fiorito, Luca; Hummel, Andrew; Siefman, Daniel; Leconte, Pierre

    2016-05-01

    The aim of WPEC subgroup 39 'Methods and approaches to provide feedback from nuclear and covariance data adjustment for improvement of nuclear data files' is to provide criteria and practical approaches to use effectively the results of sensitivity analyses and cross section adjustments for feedback to evaluators and differential measurement experimentalists in order to improve the knowledge of neutron cross sections, uncertainties, and correlations to be used in a wide range of applications. WPEC subgroup 40-CIELO (Collaborative International Evaluated Library Organization) provides a new working paradigm to facilitate evaluated nuclear reaction data advances. It brings together experts from across the international nuclear reaction data community to identify and document discrepancies among existing evaluated data libraries, measured data, and model calculation interpretations, and aims to make progress in reconciling these discrepancies to create more accurate ENDF-formatted files. SG40-CIELO focusses on 6 important isotopes: 1H, 16O, 56Fe, 235,238U, 239Pu. This document is the proceedings of the seventh formal Subgroup 39 meeting and of the Joint SG39+SG40 Session held at the NEA, OECD Conference Center, Paris, France on 10-11 May 2016. It comprises a Summary Record of the meeting, and all the available presentations (slides) given by the participants: A - Welcome and actions review (Oscar CABELLOS); B - Methods: - XGPT: uncertainty propagation and data assimilation from continuous energy covariance matrix and resonance parameters covariances (Manuele AUFIERO); - Optimal experiment utilization (REWINDing PIA), (G. Palmiotti); C - Experiment analysis, sensitivity calculations and benchmarks: - Tripoli-4 analysis of SEG experiments (Andrew HUMMEL); - Tripoli-4 analysis of BERENICE experiments (P. DUFAY, Cyrille DE SAINT JEAN); - Preparation of sensitivities of k-eff, beta-eff and shielding benchmarks for adjustment exercise (Ivo KODELI); - SA and

  12. 77 FR 35371 - Combined Notice of Filings #1

    Science.gov (United States)

    2012-06-13

    .... Applicants: Duke Energy Miami Fort, LLC. Description: MBR Filing to be effective 10/1/2012. Filed Date: 6/5...-000. Applicants: Duke Energy Piketon, LLC. Description: MBR Filing to be effective 10/1/2012. Filed...-1959-000. Applicants: Duke Energy Stuart, LLC. Description: MBR Filing to be effective 10/1/2012. Filed...

  13. HCUP State Emergency Department Databases (SEDD) - Restricted Access File

    Data.gov (United States)

    U.S. Department of Health & Human Services — The State Emergency Department Databases (SEDD) contain the universe of emergency department visits in participating States. Restricted access data files are...

  14. A software to report and file by personal computer

    International Nuclear Information System (INIS)

    Di Giandomenico, E.; Filippone, A.; Esposito, A.; Bonomo, L.

    1989-01-01

    During the past four years the authors have been gaining experience in reporting radiological examinations by personal computer. Today they describe the project of a new software package which allows the reporting and filing of roentgenograms. This program was realized by a radiologist, using a well-known data base management system: dBASE III. The program was shaped to fit the radiologist's needs: it helps with reporting, and allows the filing of radiological data with the diagnostic codes used by the American College of Radiology. In this paper the authors describe the data base structure and indicate the software functions which make its use possible. Thus, this paper is not aimed at advertising a new reporting program, but at demonstrating how the radiologist can himself manage some aspects of his work with the help of a personal computer

  15. File structure and organization in the automation system for operative account of equipment and materials in JINR

    International Nuclear Information System (INIS)

    Gulyaeva, N.D.; Markova, N.F.; Nikitina, V.I.; Tentyukova, G.N.

    1975-01-01

    The structure and organization of files in the information bank for the first variant of a JINR material and technical supply subsystem are described. An automated system for the operative stock-taking of equipment has been developed on the basis of the SDS-6200 computer. Information is stored on magnetic discs. The arrangement of each file depends on its purpose and the structure of its data. Access to the files can be random or sequential. The files are divided into groups: primary document files, long-term references, and information on items that may change as a result of administrative decisions [ru

  16. Kepler Data Release 25 Notes (Q0-Q17)

    Science.gov (United States)

    Mullally, Susan E.; Caldwell, Douglas A.; Barclay, Thomas Stewart; Barentsen, Geert; Clarke, Bruce Donald; Bryson, Stephen T.; Burke, Christopher James; Campbell, Jennifer Roseanna; Catanzarite, Joseph H.; Christiansen, Jessie; et al.

    2016-01-01

    These Data Release Notes provide information specific to the current reprocessing and re-export of the Q0-Q17 data. The data products included in this data release include target pixel files, light curve files, FFIs,CBVs, ARP, Background, and Collateral files. This release marks the final processing of the Kepler Mission Data. See Tables 1 and 2 for a list of the reprocessed Kepler cadence data. See Table 3 for a list of the available FFIs. The Long Cadence Data, Short Cadence Data, and FFI data are documented in these data release notes. The ancillary files (i.e., cotrending basis vectors, artifact removal pixels, background, and collateral data) are described in the Archive Manual (Thompson et al., 2016).

  17. 76 FR 63291 - Combined Notice Of Filings #1

    Science.gov (United States)

    2011-10-12

    ... filing per 35: MBR Tariff to be effective 9/23/2011. Filed Date: 09/23/2011. Accession Number: 20110923.... submits tariff filing per 35: MBR Tariff to be effective 9/23/2011. Filed Date: 09/23/2011. Accession.... submits tariff filing per 35: MBR Tariff to be effective 9/23/2011. Filed Date: 09/23/2011. Accession...

  18. Globus File Transfer Services | High-Performance Computing | NREL

    Science.gov (United States)

    installed on the systems at both ends of the data transfer. The NREL endpoint is nrel#globus. Click Login on the Globus website. On the login page, select "Globus ID" as the login method and click Login. From the Manage Data drop-down menu, select Transfer Files. Then click Get

  19. Biomass Data | Geospatial Data Science | NREL

    Science.gov (United States)

    Biomass Data These datasets detail the biomass resources available in the United States. Available files: Biomethane (Zip, 72.2 MB, last updated 10/30/2014, metadata: Biomethane.xml); Solid Biomass (Zip, 69.5

  20. Automatic generation of configuration files for a distributed control system

    CERN Document Server

    Cupérus, J

    1995-01-01

    The CERN PS accelerator complex is composed of 9 interlinked accelerators for the production and acceleration of various kinds of particles. The hardware is controlled through CAMAC, VME, G64, and GPIB modules, which in turn are controlled by more than 100 microprocessors in VME crates. Producing startup files for all these microprocessors, with the correct drivers, programs, and parameters in each of them, is quite a challenge. The problem is solved by generating the startup files automatically from the description of the control system in a relational database. The generation process detects inconsistencies and incomplete information. Included in the startup files are data which are formally comments, but which can be interpreted for run-time checking of interface modules and program activity.
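
    The generation step amounts to querying the relational description and emitting one startup file per crate. A minimal sketch (the table layout and the startup-file syntax are invented for illustration):

        import sqlite3

        # Relational description of the control system (illustrative schema).
        db = sqlite3.connect(":memory:")
        db.executescript("""
          CREATE TABLE crate_module(crate TEXT, slot INTEGER, driver TEXT, params TEXT);
          INSERT INTO crate_module VALUES
            ('vme-ps-01', 2, 'camac_if', 'branch=1'),
            ('vme-ps-01', 4, 'gpib_if',  'address=9');
        """)

        # Emit one startup file per crate, listing its drivers and parameters.
        for crate, in db.execute("SELECT DISTINCT crate FROM crate_module"):
            rows = db.execute("SELECT slot, driver, params FROM crate_module"
                              " WHERE crate=? ORDER BY slot", (crate,))
            with open(f"{crate}.startup", "w") as f:
                for slot, driver, params in rows:
                    f.write(f"load {driver} slot={slot} {params}\n")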