WorldWideScience

Sample records for access computer file

  1. File access prediction using neural networks.

    Science.gov (United States)

    Patra, Prashanta Kumar; Sahu, Muktikanta; Mohapatra, Subasish; Samantray, Ronak Kumar

    2010-06-01

    One of the most vexing issues in the design of a high-speed computer is the wide gap between memory and disk access times. Static file access predictors have been used to address this problem. In this paper, we propose dynamic file access predictors based on neural networks that, with proper tuning, significantly improve accuracy, success-per-reference, and effective-success-rate-per-reference. In particular, we verified that incorrect predictions were reduced from 53.11% to 43.63% for the proposed neural network prediction method with a standard configuration, compared with the recent popularity (RP) method. With manual tuning for each trace, we are able to improve the misprediction rate and effective-success-rate-per-reference beyond the standard configuration. Simulations on distributed file system (DFS) traces reveal that an exact-fit radial basis function (RBF) network gives better predictions on high-end systems, whereas a multilayer perceptron (MLP) trained with Levenberg-Marquardt (LM) backpropagation performs best on systems with good computational capability. Probabilistic and competitive predictors are the most suitable for workstations with limited resources, and the former is more efficient than the latter for servers handling the most system calls. Finally, we conclude that the MLP with the LM backpropagation algorithm has a better file prediction success rate than the simple perceptron, last successor, stable successor, and best-k-out-of-m predictors.
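    The baseline predictors this abstract compares against are easy to sketch. Below is a minimal, illustrative Python implementation of the last-successor and stable-successor predictors (the neural-network predictors themselves are beyond a short sketch); the class names and the toy trace are invented for illustration.

```python
class LastSuccessor:
    """Predict that the file which followed f last time will follow it again."""
    def __init__(self):
        self.succ = {}

    def predict(self, f):
        return self.succ.get(f)

    def update(self, f, nxt):
        self.succ[f] = nxt


class StableSuccessor:
    """Change the prediction only after the same successor is seen twice in a row."""
    def __init__(self):
        self.pred, self.last = {}, {}

    def predict(self, f):
        return self.pred.get(f)

    def update(self, f, nxt):
        if self.last.get(f) == nxt:   # confirmed twice -> adopt as stable prediction
            self.pred[f] = nxt
        self.pred.setdefault(f, nxt)  # first observation seeds the prediction
        self.last[f] = nxt


def success_rate(predictor, trace):
    """Fraction of correct next-file predictions over an access trace."""
    hits = total = 0
    for cur, nxt in zip(trace, trace[1:]):
        guess = predictor.predict(cur)
        if guess is not None:
            total += 1
            hits += (guess == nxt)
        predictor.update(cur, nxt)
    return hits / total if total else 0.0


trace = ["a", "b", "a", "b", "a", "c", "a", "b", "a", "b"]
rate_last = success_rate(LastSuccessor(), trace)
rate_stable = success_rate(StableSuccessor(), trace)
```

    On this toy trace the stable-successor predictor tolerates the one-off access to "c" and scores higher; this is the kind of gap the neural predictors aim to widen further.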

  2. F2AC: A Lightweight, Fine-Grained, and Flexible Access Control Scheme for File Storage in Mobile Cloud Computing

    Directory of Open Access Journals (Sweden)

    Wei Ren

    2016-01-01

    Current file storage service models for cloud servers assume that users either belong to a single layer with different privileges or cannot authorize privileges iteratively. Thus, access control is neither fine-grained nor flexible. Besides, most access control methods at cloud servers rely mainly on computationally intensive cryptographic algorithms and, in particular, may not be able to support highly dynamic ad hoc groups with addition and removal of group members. In this paper, we propose F2AC, a lightweight, fine-grained, and flexible access control scheme for file storage in mobile cloud computing. F2AC can not only achieve iterative authorization, authentication with tailored policies, and access control for dynamically changing access groups, but also provide access privilege transition and revocation. A new access control model called the directed tree with linked leaf model is proposed for further implementation in data structures and algorithms. An extensive analysis is given to justify the soundness and completeness of F2AC.
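    The "directed tree" flavor of iterative authorization with cascading revocation can be sketched with a tiny data structure. This is an illustrative Python sketch, not the actual F2AC scheme (which also covers tailored policies and the linked-leaf construction); all names are invented.

```python
class AccessTree:
    """Grants form a tree rooted at the file owner; revoking a node cuts off its subtree."""

    def __init__(self, owner):
        self.parent = {owner: None}   # the owner is the root of the tree
        self.revoked = set()

    def grant(self, granter, grantee):
        # iterative authorization: any currently authorized user may grant further
        if not self.authorized(granter):
            raise PermissionError(f"{granter} is not authorized to grant")
        self.parent[grantee] = granter

    def revoke(self, user):
        self.revoked.add(user)

    def authorized(self, user):
        # walk up toward the root; fail on any unknown or revoked ancestor
        while user is not None:
            if user not in self.parent or user in self.revoked:
                return False
            user = self.parent[user]
        return True


tree = AccessTree("alice")          # alice owns the file
tree.grant("alice", "bob")          # alice authorizes bob
tree.grant("bob", "carol")          # bob re-grants iteratively
tree.revoke("bob")                  # revoking bob cascades to carol
```

    Revocation is O(1) here because authorization is checked lazily on each access by walking the grant chain, one simple way to realize the cascading behavior the abstract describes.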

  3. Algorithms and file structures for computational geometry

    International Nuclear Information System (INIS)

    Hinrichs, K.; Nievergelt, J.

    1983-01-01

    Algorithms for solving geometric problems and file structures for storing large amounts of geometric data are of increasing importance in computer graphics and computer-aided design. As examples of recent progress in computational geometry, we explain plane-sweep algorithms, which solve various topological and geometric problems efficiently; and we present the grid file, an adaptable, symmetric multi-key file structure that provides efficient access to multi-dimensional data along any space dimension. (orig.)

  4. Access to DIII-D data located in multiple files and multiple locations

    International Nuclear Information System (INIS)

    McHarg, B.B. Jr.

    1993-10-01

    The General Atomics DIII-D tokamak fusion experiment is now collecting over 80 MB of data per discharge once every 10 min, and that quantity is expected to double within the next year. The size of the data files, even in compressed format, is becoming increasingly difficult to handle. Data is also now being acquired on a variety of UNIX systems as well as MicroVAX and MODCOMP computer systems. The existing computers collect all the data into a single shot file, and this collection takes an ever-increasing amount of time as the total quantity of data grows. Data is not available to experimenters until it has been collected into the shot file, which conflicts with the substantial need for timely data examination between shots. The experimenters are also spread over many different types of computer systems, possibly located at other sites. To improve data availability and handling, software has been developed to allow individual computer systems to create their own shot files locally. The data interface routine PTDATA that is used to access DIII-D data has been modified so that a user's code on any computer can access data from any computer where that data might be located. This data access is transparent to the user. Breaking up the shot file into separate files in multiple locations also impacts software used for data archiving, data management, and data restoration.

  5. ATLAS, an integrated structural analysis and design system. Volume 4: Random access file catalog

    Science.gov (United States)

    Gray, F. P., Jr. (Editor)

    1979-01-01

    A complete catalog is presented for the random access files used by the ATLAS integrated structural analysis and design system. ATLAS consists of several technical computation modules that output data matrices to corresponding random access files. A description of the matrices written to these files is contained herein.

  6. Accessing files in an Internet: The Jade file system

    Science.gov (United States)

    Peterson, Larry L.; Rao, Herman C.

    1991-01-01

    Jade is a new distributed file system that provides a uniform way to name and access files in an internet environment. It makes two important contributions. First, Jade is a logical system that integrates a heterogeneous collection of existing file systems, where heterogeneous means that the underlying file systems support different file access protocols. Jade is designed under the restriction that the underlying file systems may not be modified. Second, rather than providing a global name space, Jade permits each user to define a private name space. These private name spaces support two novel features: they allow multiple file systems to be mounted under one directory, and they allow one logical name space to mount other logical name spaces. A prototype of the Jade File System was implemented on Sun Workstations running Unix. It consists of interfaces to the Unix file system, the Sun Network File System, the Andrew File System, and FTP. This paper motivates Jade's design, highlights several aspects of its implementation, and illustrates applications that can take advantage of its features.

  7. Accessing files in an internet - The Jade file system

    Science.gov (United States)

    Rao, Herman C.; Peterson, Larry L.

    1993-01-01

    Jade is a new distributed file system that provides a uniform way to name and access files in an internet environment. It makes two important contributions. First, Jade is a logical system that integrates a heterogeneous collection of existing file systems, where heterogeneous means that the underlying file systems support different file access protocols. Jade is designed under the restriction that the underlying file systems may not be modified. Second, rather than providing a global name space, Jade permits each user to define a private name space. These private name spaces support two novel features: they allow multiple file systems to be mounted under one directory, and they allow one logical name space to mount other logical name spaces. A prototype of the Jade File System was implemented on Sun Workstations running Unix. It consists of interfaces to the Unix file system, the Sun Network File System, the Andrew File System, and FTP. This paper motivates Jade's design, highlights several aspects of its implementation, and illustrates applications that can take advantage of its features.

  8. Study and development of a document file system with selective access

    International Nuclear Information System (INIS)

    Mathieu, Jean-Claude

    1974-01-01

    The objective of this research thesis was to design and develop a set of software for the efficient management of a document file system using methods of selective access to information. The three main aspects of file processing (creation, modification, reorganisation) have been addressed. The author first presents the main problems related to the development of a comprehensive automatic documentation system, and their conventional solutions. Some future prospects, notably concerning the development of peripheral computer technology, are also discussed. He presents the characteristics of the INIS bibliographic records provided by the IAEA, which have been used to create the files. In the second part, he briefly describes the general organisation of the file system. This system is based on two main files: an inverse file, which contains for each descriptor the list of numbers of the files indexed by that descriptor, and a dictionary of descriptors (input file), which gives access to the inverse file. The organisation of both of these files is then described in detail. Other related or associated files are created, and the overall architecture and mechanisms integrated into the file data input software are described, as well as the various processing applied to these different files. Performance and possible developments are finally discussed.

  9. A technique for integrating remote minicomputers into a general computer's file system

    CERN Document Server

    Russell, R D

    1976-01-01

    This paper describes a simple technique for interfacing remote minicomputers used for real-time data acquisition into the file system of a central computer. Developed as part of the ORION system at CERN, this 'File Manager' subsystem enables a program in the minicomputer to access and manipulate files of any type as if they resided on a storage device attached to the minicomputer. Yet, completely transparent to the program, the files are accessed from disks on the central system via high-speed data links, with response times comparable to local storage devices. (6 refs).

  10. Dynamic file-access characteristics of a production parallel scientific workload

    Science.gov (United States)

    Kotz, David; Nieuwejaar, Nils

    1994-01-01

    Multiprocessors have permitted astounding increases in computational performance, but many cannot meet the intense I/O requirements of some scientific applications. An important component of any solution to this I/O bottleneck is a parallel file system that can provide high-bandwidth access to tremendous amounts of data in parallel to hundreds or thousands of processors. Most successful systems are based on a solid understanding of the expected workload, but thus far there have been no comprehensive workload characterizations of multiprocessor file systems. This paper presents the results of a three-week tracing study in which all file-related activity on a massively parallel computer was recorded. Our instrumentation differs from previous efforts in that it collects information about every I/O request and about the mix of jobs running in a production environment. We also present the results of a trace-driven caching simulation and recommendations for designers of multiprocessor file systems.

  11. A model for optimizing file access patterns using spatio-temporal parallelism

    Energy Technology Data Exchange (ETDEWEB)

    Boonthanome, Nouanesengsy [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Patchett, John [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Geveci, Berk [Kitware Inc., Clifton Park, NY (United States); Ahrens, James [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Bauer, Andy [Kitware Inc., Clifton Park, NY (United States); Chaudhary, Aashish [Kitware Inc., Clifton Park, NY (United States); Miller, Ross G. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Shipman, Galen M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Williams, Dean N. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2013-01-01

    For many years now, I/O read time has been recognized as the primary bottleneck for parallel visualization and analysis of large-scale data. In this paper, we introduce a model that can estimate the read time for a file stored in a parallel filesystem when given the file access pattern. Read times ultimately depend on how the file is stored and the access pattern used to read the file. The file access pattern will be dictated by the type of parallel decomposition used. We employ spatio-temporal parallelism, which combines both spatial and temporal parallelism, to provide greater flexibility to possible file access patterns. Using our model, we were able to configure the spatio-temporal parallelism to design optimized read access patterns that resulted in a speedup factor of approximately 400 over traditional file access patterns.
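    The core idea, that read time depends on both file layout and access pattern, can be illustrated with a deliberately simplified cost model: a fixed per-request latency plus bytes over bandwidth. The model and all numbers below are illustrative assumptions, not the paper's calibrated model.

```python
def read_time(total_bytes, n_requests, bandwidth_bps, latency_s):
    """Toy cost model: every request pays a fixed latency; data moves at full bandwidth."""
    return n_requests * latency_s + total_bytes / bandwidth_bps

GIB = 2 ** 30

# Reading 1 GiB as one contiguous request vs. 4096 small strided requests,
# over an assumed 1 GB/s filesystem with 5 ms per-request latency.
contiguous = read_time(GIB, 1, 1e9, 0.005)
strided = read_time(GIB, 4096, 1e9, 0.005)
```

    Even this crude model shows the strided pattern paying roughly 20 s of pure request latency, which is why choosing the spatio-temporal decomposition (and hence the access pattern) to match the file layout can yield the large speedups the paper reports.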

  12. A digital imaging teaching file by using the internet, HTML and personal computers

    International Nuclear Information System (INIS)

    Chun, Tong Jin; Jeon, Eun Ju; Baek, Ho Gil; Kang, Eun Joo; Baik, Seung Kug; Choi, Han Yong; Kim, Bong Ki

    1996-01-01

    A film-based teaching file takes up space, and the need to search through such a file limits the extent to which it is likely to be used. Furthermore, it is not easy for doctors in a medium-sized hospital to encounter a variety of cases, so for these reasons we created an easy-to-use digital imaging teaching file with HTML (Hypertext Markup Language) and images downloaded via World Wide Web (WWW) services on the Internet. This was suitable for use by computer novices. We used WWW Internet services as a resource for various images and three different IBM-PC compatible computers (386DX, 486DX-II, and Pentium) to download the images and develop a digitized teaching file. These computers were connected to the Internet through a high-speed dial-up modem (28.8 Kbps); Twinsock and Netscape were used to navigate the Internet. Korean word-processing software (version 3.0) was used to create the HTML (Hypertext Markup Language) files, and the downloaded images were linked to the HTML files. In this way, a digital imaging teaching file program was created. Access to a Web service via the Internet required a fast computer (at least a 486DX-II with 8 MB RAM) for comfortable use; this also ensured that the quality of the downloaded images was not degraded during downloading and that they were good enough to use as a teaching file. The time needed to retrieve the text and related images depends on the size of the file, the speed of the network, and the network traffic at the time of connection. For computer novices, a digital image teaching file using HTML is easy to use. Our method of creating a digital imaging teaching file using the Internet and HTML is straightforward, and radiologists with little computer experience who want to study various digital radiologic imaging cases should find it easy to use.

  13. Considering User's Access Pattern in Multimedia File Systems

    Science.gov (United States)

    Cho, KyoungWoon; Ryu, YeonSeung; Won, Youjip; Koh, Kern

    2002-12-01

    Legacy buffer cache management schemes for multimedia servers are grounded in the assumption that applications access multimedia files sequentially. However, the user access pattern may not be sequential in some circumstances: in a distance-learning application, for example, the user may exploit the VCR-like functions (rewind and play) of the system and access particular segments of video repeatedly in the middle of sequential playback. Such a looping reference can cause significant performance degradation in interval-based caching algorithms, and thus an appropriate buffer cache management scheme is required to deliver desirable performance even under workloads that exhibit looping reference behavior. We propose the Adaptive Buffer cache Management (ABM) scheme, which intelligently adapts to the file access characteristics. For each opened file, ABM applies either LRU replacement or interval-based caching, depending on the Looping Reference Indicator, which indicates how strong the temporally localized access pattern is. According to our experiments, ABM exhibits a better buffer cache miss ratio than interval-based caching or LRU, especially when the workload exhibits not only sequential but also looping reference behavior.
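    The policy-switching idea can be sketched in a few lines. The paper's Looping Reference Indicator is defined over its own workload model; the backward-jump fraction and the threshold below are simplified stand-ins for illustration only.

```python
def looping_reference_indicator(block_trace):
    """Fraction of consecutive accesses that jump backwards (replayed segments)."""
    if len(block_trace) < 2:
        return 0.0
    back = sum(1 for a, b in zip(block_trace, block_trace[1:]) if b <= a)
    return back / (len(block_trace) - 1)


def choose_policy(block_trace, threshold=0.2):
    """Strong looping behavior favors LRU; mostly sequential streams favor interval caching."""
    if looping_reference_indicator(block_trace) >= threshold:
        return "LRU"
    return "interval-based caching"


sequential = list(range(1, 11))        # straight playback of blocks 1..10
looping = [1, 2, 3, 2, 3, 2, 3, 4]     # rewind-and-replay in the middle
```

    A per-file indicator like this lets the cache manager pick the policy that matches each stream, which is the adaptivity ABM is named for.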

  14. Small file aggregation in a parallel computing system

    Science.gov (United States)

    Faibish, Sorin; Bent, John M.; Tzelnic, Percy; Grider, Gary; Zhang, Jingwang

    2014-09-02

    Techniques are provided for small file aggregation in a parallel computing system. An exemplary method for storing a plurality of files generated by a plurality of processes in a parallel computing system comprises aggregating the plurality of files into a single aggregated file; and generating metadata for the single aggregated file. The metadata comprises an offset and a length of each of the plurality of files in the single aggregated file. The metadata can be used to unpack one or more of the files from the single aggregated file.
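    The offset/length metadata described in the abstract can be sketched directly. This is an illustrative Python sketch of the pack/unpack idea, not the patented implementation; the file names are invented.

```python
import io

def aggregate(files):
    """Pack a {name: bytes} mapping into one blob plus per-file (offset, length) metadata."""
    blob, meta = io.BytesIO(), {}
    for name, data in files.items():
        meta[name] = (blob.tell(), len(data))   # where this file starts, and its size
        blob.write(data)
    return blob.getvalue(), meta

def unpack(blob, meta, name):
    """Recover one original file from the aggregated blob using its metadata entry."""
    offset, length = meta[name]
    return blob[offset:offset + length]

files = {"rank0.out": b"hello", "rank1.out": b"parallel world"}
blob, meta = aggregate(files)
```

    Writing one aggregated file instead of thousands of small ones reduces metadata pressure on the parallel filesystem, which is the motivation for the technique.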

  15. Storing files in a parallel computing system using list-based index to identify replica files

    Science.gov (United States)

    Faibish, Sorin; Bent, John M.; Tzelnic, Percy; Zhang, Zhenhua; Grider, Gary

    2015-07-21

    Improved techniques are provided for storing files in a parallel computing system using a list-based index to identify file replicas. A file and at least one replica of the file are stored in one or more storage nodes of the parallel computing system. An index for the file comprises at least one list comprising a pointer to a storage location of the file and a storage location of the at least one replica of the file. The file comprises one or more of a complete file and one or more sub-files. The index may also comprise a checksum value for one or more of the file and the replica(s) of the file. The checksum value can be evaluated to validate the file and/or the file replica(s). A query can be processed using the list.
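    The list-based index with a checksum can be sketched as a small dictionary. The checksum algorithm is not specified in the abstract, so SHA-256 here is an assumption, and all paths and names are illustrative.

```python
import hashlib

def build_index(primary, replicas, data):
    """Index entry: a list of storage locations plus a checksum of the file contents."""
    return {
        "locations": [primary, *replicas],
        "checksum": hashlib.sha256(data).hexdigest(),
    }

def validate(index, data):
    """Re-hash the bytes read back and compare against the stored checksum."""
    return hashlib.sha256(data).hexdigest() == index["checksum"]

data = b"simulation checkpoint"
index = build_index("/node3/f.chk", ["/node7/f.chk.r1"], data)
```

    On a read, a query walks the location list and the checksum decides whether the copy just read (or a replica) is valid, matching the validation role the abstract describes.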

  16. WinSCP for Windows File Transfers | High-Performance Computing | NREL

    Science.gov (United States)

    WinSCP can be used to securely transfer files between your local computer running Microsoft Windows and a remote computer running Linux.

  17. The Use of Proxy Caches for File Access in a Multi-Tier Grid Environment

    International Nuclear Information System (INIS)

    Brun, R; Duellmann, D; Ganis, G; Janyst, L; Peters, A J; Rademakers, F; Sindrilaru, E; Hanushevsky, A

    2011-01-01

    The use of proxy caches has been extensively studied in the HEP environment for efficient access to database data and showed significant performance gains with only very moderate operational effort at higher grid tiers (T2, T3). In this contribution we propose to apply the same concept to file access and analyse the possible performance gains, the operational impact on site services, and the applicability to different HEP use cases. Based on proof-of-concept studies with a modified XROOT proxy server, we review the cache efficiency and overheads for the access patterns of typical ROOT-based analysis programs. We conclude with a discussion of the potential role of this new component at the different tiers of a distributed computing grid.

  18. Study and development of a document file system with selective access; Etude et realisation d'un systeme de fichiers documentaires a acces selectif

    Energy Technology Data Exchange (ETDEWEB)

    Mathieu, Jean-Claude

    1974-06-21

    The objective of this research thesis was to design and develop a set of software for the efficient management of a document file system using methods of selective access to information. The three main aspects of file processing (creation, modification, reorganisation) have been addressed. The author first presents the main problems related to the development of a comprehensive automatic documentation system, and their conventional solutions. Some future prospects, notably concerning the development of peripheral computer technology, are also discussed. He presents the characteristics of the INIS bibliographic records provided by the IAEA, which have been used to create the files. In the second part, he briefly describes the general organisation of the file system. This system is based on two main files: an inverse file, which contains for each descriptor the list of numbers of the files indexed by that descriptor, and a dictionary of descriptors (input file), which gives access to the inverse file. The organisation of both of these files is then described in detail. Other related or associated files are created, and the overall architecture and mechanisms integrated into the file data input software are described, as well as the various processing applied to these different files. Performance and possible developments are finally discussed.

  19. Securing the AliEn File Catalogue - Enforcing authorization with accountable file operations

    International Nuclear Information System (INIS)

    Schreiner, Steffen; Banerjee, Subho Sankar; Betev, Latchezar; Carminati, Federico; Vladimirovna Datskova, Olga; Furano, Fabrizio; Grigoras, Alina; Grigoras, Costin; Mendez Lorenzo, Patricia; Peters, Andreas Joachim; Saiz, Pablo; Bagnasco, Stefano; Zhu Jianlin

    2011-01-01

    The AliEn Grid Services, as operated by the ALICE Collaboration in its global physics analysis grid framework, is based on a central File Catalogue together with a distributed set of storage systems and the possibility to register links to external data resources. This paper describes several identified vulnerabilities in the AliEn File Catalogue access protocol regarding fraud and unauthorized file alteration and presents a more secure and revised design: a new mechanism, called LFN Booking Table, is introduced in order to keep track of access authorization in the transient state of files entering or leaving the File Catalogue. Due to a simplification of the original Access Envelope mechanism for xrootd-protocol-based storage systems, fundamental computational improvements of the mechanism were achieved as well as an up to 50% reduction of the credential's size. By extending the access protocol with signed status messages from the underlying storage system, the File Catalogue receives trusted information about a file's size and checksum and the protocol is no longer dependent on client trust. Altogether, the revised design complies with atomic and consistent transactions and allows for accountable, authentic, and traceable file operations. This paper describes these changes as part and beyond the development of AliEn version 2.19.

  20. HCUP State Emergency Department Databases (SEDD) - Restricted Access File

    Data.gov (United States)

    U.S. Department of Health & Human Services — The State Emergency Department Databases (SEDD) contain the universe of emergency department visits in participating States. Restricted access data files are...

  1. 22 CFR 1429.21 - Computation of time for filing papers.

    Science.gov (United States)

    2010-04-01

    ... 22 Foreign Relations 2 2010-04-01 2010-04-01 true Computation of time for filing papers. 1429.21... MISCELLANEOUS AND GENERAL REQUIREMENTS General Requirements § 1429.21 Computation of time for filing papers. In... subchapter requires the filing of any paper, such document must be received by the Board or the officer or...

  2. File management for experiment control parameters within a distributed function computer network

    International Nuclear Information System (INIS)

    Stubblefield, F.W.

    1976-10-01

    An attempt to design and implement a computer system for control of and data collection from a set of laboratory experiments reveals that many of the experiments in the set require an extensive collection of parameters for their control. The operation of the experiments can be greatly simplified if a means can be found for storing these parameters between experiments and automatically accessing them as they are required. A subsystem for managing files of such experiment control parameters is discussed. 3 figures

  3. 5 CFR 293.504 - Composition of, and access to, the Employee Medical File System.

    Science.gov (United States)

    2010-01-01

    ... Employee Medical File System. 293.504 Section 293.504 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PERSONNEL RECORDS Employee Medical File System Records § 293.504 Composition of, and access to, the Employee Medical File System. (a) All employee occupational medical records...

  4. RAMA: A file system for massively parallel computers

    Science.gov (United States)

    Miller, Ethan L.; Katz, Randy H.

    1993-01-01

    This paper describes a file system design for massively parallel computers which makes very efficient use of a few disks per processor. This overcomes the traditional I/O bottleneck of massively parallel machines by storing the data on disks within the high-speed interconnection network. In addition, the file system, called RAMA, requires little inter-node synchronization, removing another common bottleneck in parallel processor file systems. Support for a large tertiary storage system can easily be integrated into the file system; in fact, RAMA runs most efficiently when tertiary storage is used.

  5. Storing files in a parallel computing system based on user-specified parser function

    Science.gov (United States)

    Faibish, Sorin; Bent, John M; Tzelnic, Percy; Grider, Gary; Manzanares, Adam; Torres, Aaron

    2014-10-21

    Techniques are provided for storing files in a parallel computing system based on a user-specified parser function. A plurality of files generated by a distributed application in a parallel computing system are stored by obtaining a parser from the distributed application for processing the plurality of files prior to storage; and storing one or more of the plurality of files in one or more storage nodes of the parallel computing system based on the processing by the parser. The plurality of files comprise one or more of a plurality of complete files and a plurality of sub-files. The parser can optionally store only those files that satisfy one or more semantic requirements of the parser. The parser can also extract metadata from one or more of the files and the extracted metadata can be stored with one or more of the plurality of files and used for searching for files.
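    The filter-and-extract role of the parser can be sketched as a callable applied before storage. This is illustrative Python only; the function names and the particular "semantic requirement" (keeping only checkpoint files) are invented.

```python
def store_with_parser(files, parser, storage):
    """Store only files the parser accepts; keep any metadata it extracts alongside."""
    for name, data in files.items():
        result = parser(name, data)
        if result is None:          # file fails the parser's semantic requirements
            continue
        storage[name] = {"data": data, "meta": result}

def checkpoint_parser(name, data):
    """Example parser: accept only checkpoint files and record their size as metadata."""
    if not name.endswith(".chk"):
        return None
    return {"bytes": len(data)}

storage = {}
store_with_parser(
    {"a.chk": b"state", "debug.log": b"noise", "b.chk": b"more state"},
    checkpoint_parser,
    storage,
)
```

    Because the metadata is captured at store time, later searches can consult it without reopening the stored files, matching the search use the abstract mentions.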

  6. Portable File Format (PFF) specifications

    Energy Technology Data Exchange (ETDEWEB)

    Dolan, Daniel H. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-02-01

    Created at Sandia National Laboratories, the Portable File Format (PFF) allows binary data transfer across computer platforms. Although this capability is supported by many other formats, PFF files are still in use at Sandia, particularly in pulsed power research. This report provides detailed PFF specifications for accessing data without relying on legacy code.

  7. Computer access security code system

    Science.gov (United States)

    Collins, Earl R., Jr. (Inventor)

    1990-01-01

    A security code system for controlling access to computer and computer-controlled entry situations comprises a plurality of subsets of alpha-numeric characters disposed in random order in matrices of at least two dimensions forming theoretical rectangles, cubes, etc., such that when access is desired, at least one pair of previously unused character subsets not found in the same row or column of the matrix is chosen at random and transmitted by the computer. The proper response to gain access is transmittal of subsets which complete the rectangle, and/or a parallelepiped whose opposite corners were defined by first groups of code. Once used, subsets are not used again to absolutely defeat unauthorized access by eavesdropping, and the like.
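    The rectangle-completion challenge in this patent abstract can be sketched for the two-dimensional case. This is an illustrative Python sketch under assumed parameters (a 4x4 matrix of letters), not the patented system.

```python
import random

def make_matrix(n, seed=42):
    """An n-by-n matrix of distinct characters in random order (the shared secret)."""
    chars = [chr(ord("A") + i) for i in range(n * n)]
    random.Random(seed).shuffle(chars)
    return [chars[i * n:(i + 1) * n] for i in range(n)]

def expected_response(matrix, cell1, cell2):
    """Given two challenge cells sharing neither a row nor a column, the correct
    response is the pair of characters at the opposite corners of their rectangle."""
    (r1, c1), (r2, c2) = cell1, cell2
    assert r1 != r2 and c1 != c2
    return {matrix[r1][c2], matrix[r2][c1]}

secret = make_matrix(4)
challenge = ((0, 1), (2, 3))                    # two corners chosen by the computer
response = expected_response(secret, *challenge)
```

    Because each character pair is used only once, an eavesdropper who records a challenge-response exchange cannot replay it, which is the property the abstract emphasizes.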

  8. Arranging and finding folders and files on your Windows 7 computer

    CERN Document Server

    Steps, Studio Visual

    2014-01-01

    If you have lots of documents on your desk, it may prove to be impossible to find the document you are looking for. In order to easily find certain documents, they are often stored in a filing cabinet and arranged in a logical order. The folders on your computer serve the same purpose. They do not just contain files; they can also contain other folders. You can create an unlimited number of folders, and each folder can contain any number of subfolders and files. You can use Windows Explorer, also called the folder window, to work with the files and folders on your computer. You can copy, delete, move, find, and sort files, among other things. Or you can transfer files and folders to a USB stick, an external hard drive, a CD, DVD or Blu-Ray disk. In this practical guide we will show you how to use the folder window, and help you arrange your own files.

  9. The design and development of GRASS file reservation system

    International Nuclear Information System (INIS)

    Huang Qiulan; Zhu Suijiang; Cheng Yaodong; Chen Gang

    2010-01-01

    GFRS (GRASS File Reservation System) is designed to improve the file access performance of GRASS (Grid-enabled Advanced Storage System), a Hierarchical Storage Management (HSM) system developed at the Computing Center, Institute of High Energy Physics. GRASS provides massive storage management and data migration, but its data migration policy is based simply on factors such as pool water level and the intervals between migrations, so it lacks precise control over files. We therefore designed GFRS to implement user-based file reservation, which reserves and keeps required files on disk for high energy physicists. GFRS can improve file access speed for users by avoiding the migration of frequently accessed files to tape. In this paper we first give a brief introduction to the GRASS system and then present the detailed architecture and implementation of GFRS. Experimental results from GFRS show good performance, and a simple analysis is made based on them. (authors)

  10. An Efficient Format for Nearly Constant-Time Access to Arbitrary Time Intervals in Large Trace Files

    Directory of Open Access Journals (Sweden)

    Anthony Chan

    2008-01-01

    A powerful method to aid in understanding the performance of parallel applications uses log or trace files containing time-stamped events and states (pairs of events). These trace files can be very large, often hundreds or even thousands of megabytes. Because of the cost of accessing and displaying such files, other methods are often used that reduce the size of the trace files at the cost of sacrificing detail or other information. This paper describes a hierarchical trace file format that provides for display of an arbitrary time window in a time independent of the total size of the file and roughly proportional to the number of events within the time window. This format eliminates the need to sacrifice data to achieve a smaller trace file size (since storage is inexpensive, it is necessary only to make efficient use of bandwidth to that storage). The format can be used to organize a trace file or to create a separate file of annotations that may be used with conventional trace files. We present an analysis of the time to access all of the events relevant to an interval of time, and we describe experiments demonstrating the performance of this file format.
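    The key property, window lookup time independent of total trace size, comes from indexed access to sorted timestamps. The paper's format is hierarchical and file-based; the sketch below shows only the underlying idea using a binary search over an in-memory sorted event list, with invented data.

```python
import bisect

def events_in_window(timestamps, events, t0, t1):
    """Return events with t0 <= timestamp <= t1 in O(log n + k) time,
    where k is the number of events inside the window."""
    lo = bisect.bisect_left(timestamps, t0)
    hi = bisect.bisect_right(timestamps, t1)
    return events[lo:hi]

# toy trace: timestamps must be kept sorted for the binary search to be valid
timestamps = [0.1, 0.4, 0.9, 1.3, 2.0, 2.2, 3.5]
events = ["open", "read", "read", "write", "read", "close", "open"]
window = events_in_window(timestamps, events, 0.9, 2.2)
```

    A hierarchical on-disk index generalizes the same idea: each lookup touches only the index blocks covering the requested window, never the whole file.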

  11. 13 CFR 120.1010 - SBA access to SBA Lender, Intermediary, and NTAP files.

    Science.gov (United States)

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false SBA access to SBA Lender, Intermediary, and NTAP files. 120.1010 Section 120.1010 Business Credit and Assistance SMALL BUSINESS ADMINISTRATION BUSINESS LOANS Risk-Based Lender Oversight Supervision § 120.1010 SBA access to SBA Lender...

  12. Database organization for computer-aided characterization of laser diode

    International Nuclear Information System (INIS)

    Oyedokun, Z.O.

    1988-01-01

    Computer-aided data logging involves a huge amount of data which must be properly managed for optimized storage space and easy access, retrieval, and utilization. An organization method is developed to enhance the advantages of computer-based data logging of semiconductor injection laser testing: one that optimizes storage space, permits authorized users easy access, and inhibits penetration. This method is based on a unique file identification protocol, a tree structure, and command-file-oriented access procedures

  13. Ocean Surface Topography Mission (OSTM) /Jason-2: Auxiliary Files (NODC Accession 0044983)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This accession contains the data descriptions for the OSTM/Jason-2 Auxiliary data files, which are served through the NOAA/NESDIS Comprehensive Large Array-data...

  14. Ocean Surface Topography Mission (OSTM) /Jason-2: Ancillary Files (NODC Accession 0044982)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This accession contains the data descriptions for the OSTM/Jason-2 Ancillary data files, which are served through the NOAA/NESDIS Comprehensive Large Array-data...

  15. NET: an inter-computer file transfer command

    International Nuclear Information System (INIS)

    Burris, R.D.

    1978-05-01

    The NET command was defined and supported in order to facilitate file transfer between computers. Among the goals of the implementation were greatest possible ease of use, maximum power (i.e., support of a diversity of equipment and operations), and protection of the operating system

  16. Methods and apparatus for multi-resolution replication of files in a parallel computing system using semantic information

    Science.gov (United States)

    Faibish, Sorin; Bent, John M.; Tzelnic, Percy; Grider, Gary; Torres, Aaron

    2015-10-20

    Techniques are provided for storing files in a parallel computing system using different resolutions. A method is provided for storing at least one file generated by a distributed application in a parallel computing system. The file comprises one or more of a complete file and a sub-file. The method comprises the steps of obtaining semantic information related to the file; generating a plurality of replicas of the file with different resolutions based on the semantic information; and storing the file and the plurality of replicas of the file in one or more storage nodes of the parallel computing system. The different resolutions comprise, for example, a variable number of bits and/or a different sub-set of data elements from the file. A plurality of the sub-files can be merged to reproduce the file.
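    As a rough illustration of the replication idea (not the patented method itself), replicas at different resolutions can be produced by subsampling the data elements; the function below keeps every k-th element for each requested stride, with names chosen for this sketch only.

```python
def make_replicas(data, strides):
    """Generate lower-resolution replicas of a dataset by subsampling.

    data: a list of samples, standing in for the file contents
    strides: subsampling factors; stride k keeps every k-th element,
    so stride 1 is the full-resolution copy
    Returns a dict mapping each stride to its replica.
    """
    return {k: data[::k] for k in strides}
```

    Semantic information would drive the choice of strides (or of which data elements to keep) per file; here the strides are simply passed in.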

  17. Utilizing HDF4 File Content Maps for the Cloud

    Science.gov (United States)

    Lee, Hyokyung Joe

    2016-01-01

    We demonstrate in a prototype study that an HDF4 file content map can be used to organize data efficiently in a cloud object storage system and thereby facilitate cloud computing. This approach can be extended to any binary data format and to any existing big-data analytics solution powered by cloud computing, because the HDF4 file content map project started as a long-term preservation effort for NASA data and does not require the HDF4 APIs to access the data.

  18. Documentation of CATHENA input files for the APOLLO computer

    International Nuclear Information System (INIS)

    1988-06-01

    Input files created for the VAX version of the CATHENA two-fluid code have been modified and documented for simulation on the AECB's APOLLO computer system. The input files describe the RD-14 thermalhydraulic loop, the RD-14 steam generator, the RD-12 steam generator blowdown test facility, the Stern Laboratories Cold Water Injection Facility (CWIT), and a CANDU 600 reactor. Sample CATHENA predictions are given and compared with experimental results where applicable. 24 refs

  19. Peregrine System | High-Performance Computing | NREL

    Science.gov (United States)

    classes of nodes that users access. Login nodes: Peregrine has four login nodes, each of which has Intel E5 processors; in addition to the /scratch file systems, the /mss file system is mounted on all login nodes. Compute nodes: Peregrine has 2592 compute nodes.

  20. Distributed computing for FTU data handling

    Energy Technology Data Exchange (ETDEWEB)

    Bertocchi, A. E-mail: bertocchi@frascati.enea.it; Bracco, G.; Buceti, G.; Centioli, C.; Giovannozzi, E.; Iannone, F.; Panella, M.; Vitale, V

    2002-06-01

    The growth of data warehouses in tokamak experiments is leading fusion laboratories to provide new IT solutions for data handling. In the last three years, the Frascati Tokamak Upgrade (FTU) experimental database was migrated from an IBM mainframe to a Unix distributed computing environment. The migration effort took into account the following items: (1) a new data storage solution based on a storage area network over Fibre Channel; (2) the Andrew File System (AFS) for wide area network file sharing; (3) a 'one measure/one file' philosophy replacing 'one shot/one file' to provide faster read/write data access; (4) more powerful services, such as AFS, CORBA and MDSplus, to allow users to access the FTU database from different clients regardless of their operating system; (5) wide availability of data analysis tools, from the locally developed utility SHOW to the multi-platform Matlab, Interactive Data Language and jScope (all these tools can now also access the Joint European Torus data, in the framework of the remote data access activity); (6) a batch-computing cluster of Alpha/Compaq Tru64 CPUs based on CODINE/GRD to optimize the utilization of software and hardware resources.

  1. Methods and apparatus for capture and storage of semantic information with sub-files in a parallel computing system

    Science.gov (United States)

    Faibish, Sorin; Bent, John M; Tzelnic, Percy; Grider, Gary; Torres, Aaron

    2015-02-03

    Techniques are provided for storing files in a parallel computing system using sub-files with semantically meaningful boundaries. A method is provided for storing at least one file generated by a distributed application in a parallel computing system. The file comprises one or more of a complete file and a plurality of sub-files. The method comprises the steps of obtaining a user specification of semantic information related to the file; providing the semantic information as a data structure description to a data formatting library write function; and storing the semantic information related to the file with one or more of the sub-files in one or more storage nodes of the parallel computing system. The semantic information provides a description of data in the file. The sub-files can be replicated based on semantically meaningful boundaries.
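    One hypothetical way to realize "sub-files with semantically meaningful boundaries" is to split records at a user-specified semantic key and store a manifest describing the layout alongside the sub-files. The function and file names below are illustrative, not taken from the patent.

```python
import json
import os

def store_with_semantics(records, key, out_dir):
    """Split records into sub-files at semantically meaningful
    boundaries (here: one sub-file per distinct value of `key`) and
    write a JSON manifest describing the layout next to them.
    """
    groups = {}
    for rec in records:
        groups.setdefault(rec[key], []).append(rec)
    manifest = {"boundary_key": key, "sub_files": {}}
    for value, group in groups.items():
        path = os.path.join(out_dir, "part-%s.json" % value)
        with open(path, "w") as f:
            json.dump(group, f)
        manifest["sub_files"][str(value)] = path
    with open(os.path.join(out_dir, "manifest.json"), "w") as f:
        json.dump(manifest, f)
    return manifest
```

    A reader can then consult the manifest to fetch only the semantically relevant sub-file, or merge all sub-files to reproduce the complete file.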

  2. The image database management system of teaching file using personal computer

    International Nuclear Information System (INIS)

    Shin, M. J.; Kim, G. W.; Chun, T. J.; Ahn, W. H.; Baik, S. K.; Choi, H. Y.; Kim, B. G.

    1995-01-01

    For the systematic management and easy use of teaching files in a radiology department, the authors set up a database management system for teaching files using a personal computer. We used a personal computer (IBM PC compatible, 486DX2) with an image capture card (Window Vision, Dooin Elect, Seoul, Korea) and a video camera recorder (8 mm, CCD-TR105, Sony, Tokyo, Japan) for the acquisition and storage of images. We developed the database program using FoxPro for Windows 2.6 (Microsoft, Seattle, USA) running under Windows 3.1 (Microsoft, Seattle, USA). Each record consisted of hospital number, name, sex, age, examination date, keyword, radiologic examination modalities, final diagnosis, radiologic findings, references, and representative images. The images were acquired and stored in 8-bit bitmap format (540 X 390 ∼ 545 X 414, 256 gray levels) and displayed on a 17-inch flat monitor (1024 X 768, Samtron, Seoul, Korea). Image acquisition and storage could be done simply on the reading viewbox, without special devices. The image quality on the computer monitor was lower than that of the original film on the viewbox, but in general the characteristics of each lesion could be differentiated. Easy retrieval of data was possible for the purposes of a teaching file system. Without high-cost appliances, we were able to complete an image database system for teaching files using a personal computer by a relatively inexpensive method

  3. Processing of evaluated neutron data files in ENDF format on personal computers

    International Nuclear Information System (INIS)

    Vertes, P.

    1991-11-01

    A computer code package - FDMXPC - has been developed for processing evaluated data files in ENDF format. The earlier version of this package is supplemented with modules performing calculations using Reich-Moore and Adler-Adler resonance parameters. The processing of evaluated neutron data files by personal computers requires special programming considerations outlined in this report. The scope of the FDMXPC program system is demonstrated by means of numerical examples. (author). 5 refs, 4 figs, 4 tabs

  4. Recalling ISX shot data files from the off-line archive

    International Nuclear Information System (INIS)

    Stanton, J.S.

    1981-02-01

    This document describes a set of computer programs designed to allow access to ISX shot data files stored on off-line disk packs. The programs accept user requests for data files and build a queue of pending requests. When an operator is available to mount the necessary disk packs, the system copies the requested files to an on-line disk area. The programs run on the Fusion Energy Division's DECsystem-10 computer. The request queue is implemented under the System 1022 database management system. The support programs are coded in MACRO-10 and FORTRAN-10

  5. Fast probabilistic file fingerprinting for big data.

    Science.gov (United States)

    Tretyakov, Konstantin; Laur, Sven; Smant, Geert; Vilo, Jaak; Prins, Pjotr

    2013-01-01

    Biological data acquisition is raising new challenges, both in data analysis and handling. Not only is it proving hard to analyze the data at the rate it is generated today, but simply reading and transferring data files can be prohibitively slow due to their size. This primarily concerns logistics within and between data centers, but is also important for workstation users in the analysis phase. Common usage patterns, such as comparing and transferring files, are proving computationally expensive and are tying down shared resources. We present an efficient method for calculating file uniqueness for large scientific data files that takes less computational effort than existing techniques. This method, called Probabilistic Fast File Fingerprinting (PFFF), exploits the variation present in biological data and computes file fingerprints by sampling randomly from the file instead of reading it in full. Consequently, it has a flat performance characteristic, correlated with data variation rather than file size. We demonstrate that probabilistic fingerprinting can be as reliable as existing hashing techniques, with provably negligible risk of collisions. We measure the performance of the algorithm on a number of data storage and access technologies, identifying its strengths as well as limitations. Probabilistic fingerprinting may significantly reduce the use of computational resources when comparing very large files. Utilisation of probabilistic fingerprinting techniques can increase the speed of common file-related workflows, both in the data center and for workbench analysis. The implementation of the algorithm is available as an open-source tool named pfff, as a command-line tool as well as a C library. The tool can be downloaded from http://biit.cs.ut.ee/pfff.
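    The sampling idea can be sketched in a few lines: hash the file size plus a fixed number of pseudo-randomly located chunks, so runtime stays flat as files grow. This is a conceptual sketch only, not the pfff tool's actual algorithm; parameter names and defaults are assumptions.

```python
import hashlib
import os
import random

def probabilistic_fingerprint(path, n_samples=64, chunk=1024, seed=0):
    """Fingerprint a file by hashing its size plus a fixed number of
    pseudo-randomly located chunks, instead of reading it in full.

    Runtime is flat in file size; the collision risk depends on the
    data actually varying between files, as biological data typically
    does. A fixed seed keeps fingerprints comparable across files.
    """
    size = os.path.getsize(path)
    rng = random.Random(seed)
    h = hashlib.sha256(str(size).encode())
    with open(path, "rb") as f:
        for _ in range(n_samples):
            f.seek(rng.randrange(max(size - chunk, 1)))
            h.update(f.read(chunk))
    return h.hexdigest()
```

    Two identical files always produce the same fingerprint; two files that differ only outside every sampled chunk would collide, which is why the method relies on the data varying throughout the file.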

  6. Building a parallel file system simulator

    International Nuclear Information System (INIS)

    Molina-Estolano, E; Maltzahn, C; Brandt, S A; Bent, J

    2009-01-01

    Parallel file systems are gaining in popularity in high-end computing centers as well as commercial data centers. High-end computing systems are expected to scale exponentially and to pose new challenges to their storage scalability in terms of cost and power. To address these challenges, scientists and file system designers will need a thorough understanding of the design space of parallel file systems. Yet there exist few systematic studies of parallel file system behavior at petabyte and exabyte scale. An important reason is the significant cost of getting access to large-scale hardware to test parallel file systems. To contribute to this understanding we are building a parallel file system simulator that can simulate parallel file systems at very large scale. Our goal is to simulate petabyte-scale parallel file systems on a small cluster or even a single machine with reasonable time and fidelity. With this simulator, file system experts will be able to tune existing file systems for specific workloads, scientists and file system deployment engineers will be able to better communicate workload requirements, file system designers and researchers will be able to try out design alternatives and innovations at scale, and instructors will be able to study very large-scale parallel file system behavior in the classroom. In this paper we describe our approach and provide preliminary results that are encouraging both in terms of fidelity and simulation scalability.

  7. Survey on Security Issues in File Management in Cloud Computing Environment

    Science.gov (United States)

    Gupta, Udit

    2015-06-01

    Cloud computing has pervaded every aspect of information technology in the past decade. With the advent of cloud networks, it has become easier to process, in real time, the plethora of data generated by various devices. Users' data are maintained by data centers around the world, and hence it has become feasible to operate on those data from lightweight portable devices. But with ease of processing comes the security aspect of the data. One such security aspect is secure file transfer, either internally within a cloud or externally from one cloud network to another. File management is central to cloud computing, and it is paramount to address the security concerns that arise from it. This survey paper aims to elucidate the various protocols that can be used for secure file transfer and to analyze the ramifications of using each protocol.

  8. Informatics in Radiology (infoRAD): personal computer security: part 2. Software Configuration and file protection.

    Science.gov (United States)

    Caruso, Ronald D

    2004-01-01

    Proper configuration of software security settings and proper file management are necessary and important elements of safe computer use. Unfortunately, the configuration of software security options is often not user friendly. Safe file management requires the use of several utilities, most of which are already installed on the computer or available as freeware. Among these file operations are setting passwords, defragmentation, deletion, wiping, removal of personal information, and encryption. For example, Digital Imaging and Communications in Medicine medical images need to be anonymized, or "scrubbed," to remove patient identifying information in the header section prior to their use in a public educational or research environment. The choices made with respect to computer security may affect the convenience of the computing process. Ultimately, the degree of inconvenience accepted will depend on the sensitivity of the files and communications to be protected and the tolerance of the user. Copyright RSNA, 2004
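    The DICOM "scrubbing" step mentioned above can be illustrated with a toy header scrubber; the header is modeled as a plain dict and the tag list is a small illustrative subset, not a complete confidentiality profile. Real anonymization should use a proper DICOM toolkit and follow the standard's confidentiality profiles.

```python
# Illustrative subset of patient-identifying DICOM attributes.
PATIENT_TAGS = {"PatientName", "PatientID", "PatientBirthDate",
                "PatientAddress", "OtherPatientIDs"}

def scrub_header(header, placeholder="ANONYMIZED"):
    """Return a copy of `header` with patient-identifying fields
    replaced by a placeholder; technical fields pass through."""
    return {tag: (placeholder if tag in PATIENT_TAGS else value)
            for tag, value in header.items()}
```

    Working on a copy rather than in place keeps the original file intact, mirroring the article's advice to separate protected originals from material released for teaching or research.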

  9. Adding Data Management Services to Parallel File Systems

    Energy Technology Data Exchange (ETDEWEB)

    Brandt, Scott [Univ. of California, Santa Cruz, CA (United States)

    2015-03-04

    The objective of this project, called DAMASC for “Data Management in Scientific Computing”, is to coalesce data management with parallel file system management to present a declarative interface to scientists for managing, querying, and analyzing extremely large data sets efficiently and predictably. Managing extremely large data sets is a key challenge of exascale computing. The overhead, energy, and cost of moving massive volumes of data demand designs where computation is close to storage. In current architectures, compute/analysis clusters access data in a physically separate parallel file system and largely leave it to the scientist to reduce data movement. Over the past decades the high-end computing community has adopted middleware with multiple layers of abstractions and specialized file formats such as NetCDF-4 and HDF5. These abstractions provide a limited set of high-level data processing functions, but have inherent functionality and performance limitations: middleware that provides access to the highly structured contents of scientific data files stored in the (unstructured) file systems can only optimize to the extent that file system interfaces permit; the highly structured formats of these files often impede native file system performance optimizations. We are developing Damasc, an enhanced high-performance file system with native rich data management services. Damasc will enable efficient queries and updates over files stored in their native byte-stream format while retaining the inherent performance of file system data storage via declarative queries and updates over views of underlying files. Damasc has four key benefits for the development of data-intensive scientific code: (1) applications can use important data-management services, such as declarative queries, views, and provenance tracking, that are currently available only within database systems; (2) the use of these services becomes easier, as they are provided within a familiar file

  10. A software to report and file by personal computer

    International Nuclear Information System (INIS)

    Di Giandomenico, E.; Filippone, A.; Esposito, A.; Bonomo, L.

    1989-01-01

    During the past four years the authors have been gaining experience in reporting radiological examinations by personal computer. Here they describe a new software project that allows the reporting and filing of roentgenograms. This program was realized by a radiologist using a well-known database management system, dBASE III. The program was shaped to fit the radiologist's needs: it helps with reporting and allows radiological data to be filed with the diagnostic codes used by the American College of Radiology. In this paper the authors describe the database structure and indicate the software functions that make its use possible. Thus, this paper is not aimed at advertising a new reporting program, but at demonstrating how radiologists can themselves manage some aspects of their work with the help of a personal computer

  11. Computation: A New Open Access Journal of Computational Chemistry, Computational Biology and Computational Engineering

    OpenAIRE

    Karlheinz Schwarz; Rainer Breitling; Christian Allen

    2013-01-01

    Computation (ISSN 2079-3197; http://www.mdpi.com/journal/computation) is an international scientific open access journal focusing on fundamental work in the field of computational science and engineering. Computational science has become essential in many research areas by contributing to solving complex problems in fundamental science all the way to engineering. The very broad range of application domains suggests structuring this journal into three sections, which are briefly characterized ...

  12. Interoperable Access to NCAR Research Data Archive Collections

    Science.gov (United States)

    Schuster, D.; Ji, Z.; Worley, S. J.; Manross, K.

    2014-12-01

    The National Center for Atmospheric Research (NCAR) Research Data Archive (RDA) provides free access to 600+ observational and gridded dataset collections. The RDA is designed to support atmospheric and related sciences research, is updated frequently where datasets have ongoing production, and serves data to 10,000 unique users annually. The traditional data access options include web-based direct archive file downloads, user-selected data subsets and format conversions produced by server-side computations, and client- and cURL-based APIs for routine scripted data retrieval. To enhance user experience and utility, the RDA now also offers THREDDS Data Server (TDS) access for many highly valued dataset collections. TDS-offered datasets are presented as aggregations, enabling users to access an entire dataset collection, which may comprise thousands of files, through a single virtual file. The OPeNDAP protocol, supported by the TDS, allows compatible tools to open and access these virtual files remotely, and makes the native data file format transparent to the end user. The combined functionality (TDS/OPeNDAP) gives users the ability to browse, select, visualize, and download data from a complete dataset collection without having to transfer archive files to a local host. This presentation will review the TDS basics and describe the specific TDS implementation on the RDA's diverse archive of GRIB-1, GRIB-2, and gridded NetCDF formatted dataset collections. Potential future TDS implementation on in-situ observational dataset collections will be discussed. Illustrative sample cases will be used to highlight the benefits to end users of this interoperable data access to the RDA.

  13. Characteristics of file sharing and peer to peer networking | Opara ...

    African Journals Online (AJOL)

    Characteristics of file sharing and peer to peer networking. ... distributing or providing access to digitally stored information, such as computer programs, ... including in multicast systems, anonymous communications systems, and web caches.

  14. Comparative evaluation of effect of rotary and reciprocating single-file systems on pericervical dentin: A cone-beam computed tomography study.

    Science.gov (United States)

    Zinge, Priyanka Ramdas; Patil, Jayaprakash

    2017-01-01

    The aim of this study is to evaluate and compare the effect of the OneShape and Neolix rotary single-file systems and the WaveOne and Reciproc reciprocating single-file systems on pericervical dentin (PCD) using cone-beam computed tomography (CBCT). A total of 40 freshly extracted mandibular premolars were collected and divided into two groups: Group A, rotary (A1, Neolix; A2, OneShape) and Group B, reciprocating (B1, WaveOne; B2, Reciproc). Preoperative scans of each tooth were taken, followed by conventional access cavity preparation and working length determination with a #10 K-file. Each canal was instrumented according to the respective file system, and postinstrumentation CBCT scans of the teeth were obtained. Slices 90 μm thick were obtained 4 mm apical and coronal to the cementoenamel junction. The PCD thickness was calculated as the shortest distance from the canal outline to the closest adjacent root surface, measured on four surfaces (facial, lingual, mesial, and distal) for all groups in the two scans. There was no significant difference between the rotary and reciprocating single-file systems in their effect on PCD, but Group B2 showed the most significant loss of tooth structure on the mesial, lingual, and distal surfaces. The Reciproc single-file system removed more PCD than the other experimental groups, whereas the Neolix single-file system had the least effect on PCD.

  15. Protecting your files on the DFS file system

    CERN Multimedia

    Computer Security Team

    2011-01-01

    The Windows Distributed File System (DFS) hosts user directories for all NICE users, plus much more data.    Files can be accessed from anywhere via a dedicated web portal (http://cern.ch/dfs). Due to the ease of access to DFS within CERN, it is of utmost importance to properly protect access to sensitive data. As the use of DFS access control mechanisms is not obvious to all users, passwords, certificates or sensitive files might get exposed. At least this happened in the past to the Andrew File System (AFS, the Linux equivalent of DFS) and led to bad publicity due to a journalist accessing supposedly "private" AFS folders (SonntagsZeitung 2009/11/08). This problem does not only affect the individual user but also has a bad impact on CERN's reputation when it comes to IT security. Therefore, all departments and LHC experiments agreed recently to apply more stringent protections to all DFS user folders. The goal of this data protection policy is to assist users in pro...

  16. Protecting your files on the AFS file system

    CERN Multimedia

    2011-01-01

    The Andrew File System is a world-wide distributed file system linking hundreds of universities and organizations, including CERN. Files can be accessed from anywhere, via dedicated AFS client programs or via web interfaces that export the file contents on the web. Due to the ease of access to AFS it is of utmost importance to properly protect access to sensitive data in AFS. As the use of AFS access control mechanisms is not obvious to all users, passwords, private SSH keys or certificates have been exposed in the past. In one specific instance, this also led to bad publicity due to a journalist accessing supposedly "private" AFS folders (SonntagsZeitung 2009/11/08). This problem does not only affect the individual user but also has a bad impact on CERN's reputation when it comes to IT security. Therefore, all departments and LHC experiments agreed in April 2010 to apply more stringent folder protections to all AFS user folders. The goal of this data protection policy is to assist users in...

  17. Disk access controller for Multi 8 computer

    International Nuclear Information System (INIS)

    Segalard, Jean

    1970-01-01

    After having presented the initial characteristics and weaknesses of the software provided for the control of a memory disk coupled with a Multi 8 computer, the author reports the development and improvement of this controller software. He presents the different constitutive parts of the computer and the operation of the disk coupling and of the direct access to memory. He reports the development of the disk access controller: software organisation, loader, subprograms and statements

  18. Comparison of canal transportation and centering ability of hand Protaper files and rotary Protaper files by using micro computed tomography

    OpenAIRE

    Amit Gandhi; Taru Gandhi

    2011-01-01

    Introduction and objective: The aim of the present study was to compare root canal preparation with rotary ProTaper files and hand ProTaper files to find a better instrumentation technique for maintaining root canal geometry with the aid of computed tomography. Material and methods: Twenty curved root canals with at least 10 degrees of curvature were divided into 2 groups of 10 teeth each. In group I the canals were prepared with hand ProTaper files and in group II the canals were prepared wit...

  19. Towards an Approach of Semantic Access Control for Cloud Computing

    Science.gov (United States)

    Hu, Luokai; Ying, Shi; Jia, Xiangyang; Zhao, Kai

    With the development of cloud computing, mutual understandability among distributed Access Control Policies (ACPs) has become an important issue in the security field of cloud computing. Semantic Web technology provides a solution for the semantic interoperability of heterogeneous applications. In this paper, we analyze existing access control methods and present a new Semantic Access Control Policy Language (SACPL) for describing ACPs in cloud computing environments. An Access Control Oriented Ontology System (ACOOS) is designed as the semantic basis of SACPL. The ontology-based SACPL language can effectively solve the interoperability issue of distributed ACPs. This study enriches the research on applying Semantic Web technology in the security field, and provides a new way of thinking about access control in cloud computing.
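    The benefit of an ontology-backed policy language can be illustrated with a toy subsumption check: a request is granted if each of its concepts is the policy's concept or a sub-concept of it, so semantically related policies written with different vocabularies still interoperate. The hierarchy and concept names below are invented for this sketch, not part of SACPL or ACOOS.

```python
# Toy concept hierarchy: child concept -> parent concept.
SUBCLASS = {"Doctor": "Staff", "MedicalReport": "Report"}

def subsumes(general, specific):
    """True if `specific` equals `general` or is one of its sub-concepts."""
    while specific is not None:
        if specific == general:
            return True
        specific = SUBCLASS.get(specific)
    return False

def allowed(policy, request):
    """policy and request are (subject, action, resource) triples of
    concept names; the request matches if every request concept is
    subsumed by the corresponding policy concept."""
    return all(subsumes(p, r) for p, r in zip(policy, request))
```

    For example, a policy permitting ("Staff", "read", "Report") grants a request by a Doctor to read a MedicalReport, because the ontology says Doctor is a kind of Staff and MedicalReport a kind of Report.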

  20. Computer Security Systems Enable Access.

    Science.gov (United States)

    Riggen, Gary

    1989-01-01

    A good security system enables access and protects information from damage or tampering, but the most important aspects of a security system aren't technical. A security procedures manual addresses the human element of computer security. (MLW)

  1. Automated Computer Access Request System

    Science.gov (United States)

    Snook, Bryan E.

    2010-01-01

    The Automated Computer Access Request (AutoCAR) system is a Web-based account provisioning application that replaces the time-consuming paper-based computer-access request process at Johnson Space Center (JSC). AutoCAR combines rules-based and role-based functionality in one application to provide a centralized system that is easily and widely accessible. The system features a work-flow engine that facilitates request routing, a user registration directory containing contact information and user metadata, an access request submission and tracking process, and a system administrator account management component. This provides full, end-to-end disposition approval chain accountability from the moment a request is submitted. By blending both rules-based and role-based functionality, AutoCAR has the flexibility to route requests based on a user's nationality, JSC affiliation status, and other export-control requirements, while ensuring a user's request is addressed by either a primary or backup approver. All user accounts that are tracked in AutoCAR are recorded and mapped to the native operating system schema on the target platform where the user accounts reside. This allows for future extensibility for supporting creation, deletion, and account management directly on the target platforms by way of AutoCAR. The system's directory-based lookup and day-to-day change analysis of directory information determines personnel moves, deletions, and additions, and automatically notifies a user via e-mail to revalidate his/her account access as a result of such changes. AutoCAR is a Microsoft classic active server page (ASP) application hosted on a Microsoft Internet Information Server (IIS).

  2. Models of the solvent-accessible surface of biopolymers

    Energy Technology Data Exchange (ETDEWEB)

    Smith, R.E.

    1996-09-01

    Many biopolymers such as proteins, DNA, and RNA have been studied because they have important biomedical roles and may be good targets for therapeutic action in treating diseases. This report describes how plastic models of the solvent-accessible surface of biopolymers were made. Computer files containing sets of triangles were calculated, then used on a stereolithography machine to make the models. Small (2 in.) models were made to test whether the computer calculations were done correctly. Also, files of the type (.stl) required by any ISO 9001 rapid prototyping machine were written onto a CD-ROM for distribution to American companies.

  3. Computer Access and Flowcharting as Variables in Learning Computer Programming.

    Science.gov (United States)

    Ross, Steven M.; McCormick, Deborah

    Manipulation of flowcharting was crossed with in-class computer access to examine flowcharting effects in the traditional lecture/laboratory setting and in a classroom setting where online time was replaced with manual simulation. Seventy-two high school students (24 male and 48 female) enrolled in a computer literacy course served as subjects.…

  4. Cooperative storage of shared files in a parallel computing system with dynamic block size

    Science.gov (United States)

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2015-11-10

    Improved techniques are provided for parallel writing of data to a shared object in a parallel computing system. A method is provided for storing data generated by a plurality of parallel processes to a shared object in a parallel computing system. The method is performed by at least one of the processes and comprises: dynamically determining a block size for storing the data; exchanging a determined amount of the data with at least one additional process to achieve a block of the data having the dynamically determined block size; and writing the block of the data having the dynamically determined block size to a file system. The determined block size comprises, e.g., a total amount of the data to be stored divided by the number of parallel processes. The file system comprises, for example, a log structured virtual parallel file system, such as a Parallel Log-Structured File System (PLFS).
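
The block-size rule stated in the abstract (the total amount of data to be stored divided by the number of parallel processes) can be sketched as follows; the function names are illustrative, not taken from the patent.

```python
def dynamic_block_size(total_bytes, num_procs):
    """Total amount of data divided by the process count, rounded up so
    that num_procs blocks cover all of the data."""
    return -(-total_bytes // num_procs)  # ceiling division

def block_range(rank, total_bytes, num_procs):
    """Byte range a given process writes after exchanging data with other
    processes to assemble one full block of the determined size."""
    size = dynamic_block_size(total_bytes, num_procs)
    start = rank * size
    return start, min(start + size, total_bytes)

print(dynamic_block_size(1000, 8))  # 125
print(block_range(7, 1000, 8))      # (875, 1000)
```

With this split, each process owns one contiguous, equally sized block, which is what lets the file system see large aligned writes instead of many small interleaved ones.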

  5. #DDOD Use Case: Access to Medicare Part D Drug Event File (PDE) for cost transparency

    Data.gov (United States)

    U.S. Department of Health & Human Services — SUMMARY DDOD use case to request access to Medicare Part D Drug Event File (PDE) for cost transparency to pharmacies and patients. WHAT IS A USE CASE? A “Use Case”...

  6. 76 FR 63291 - Combined Notice Of Filings #1

    Science.gov (United States)

    2011-10-12

    ... filing per 35: MBR Tariff to be effective 9/23/2011. Filed Date: 09/23/2011. Accession Number: 20110923.... submits tariff filing per 35: MBR Tariff to be effective 9/23/2011. Filed Date: 09/23/2011. Accession.... submits tariff filing per 35: MBR Tariff to be effective 9/23/2011. Filed Date: 09/23/2011. Accession...

  7. Optimising LAN access to grid enabled storage elements

    International Nuclear Information System (INIS)

    Stewart, G A; Dunne, B; Elwell, A; Millar, A P; Cowan, G A

    2008-01-01

    When operational, the Large Hadron Collider experiments at CERN will collect tens of petabytes of physics data per year. The worldwide LHC computing grid (WLCG) will distribute this data to over two hundred Tier-1 and Tier-2 computing centres, enabling particle physicists around the globe to access the data for analysis. Although different middleware solutions exist for effective management of storage systems at collaborating institutes, the patterns of access envisaged for Tier-2s fall into two distinct categories. The first involves bulk transfer of data between different Grid storage elements using protocols such as GridFTP. This data movement will principally involve writing ESD and AOD files into Tier-2 storage. Secondly, once datasets are stored at a Tier-2, physics analysis jobs will read the data from the local SE. Such jobs require a POSIX-like interface to the storage so that individual physics events can be extracted. In this paper we consider the performance of POSIX-like access to files held in Disk Pool Manager (DPM) storage elements, a popular lightweight SRM storage manager from EGEE.

  8. Cloud Computing Cryptography "State-of-the-Art"

    OpenAIRE

    Omer K. Jasim; Safia Abbas; El-Sayed M. El-Horbaty; Abdel-Badeeh M. Salem

    2013-01-01

    Cloud computing technology is very useful in present day to day life, it uses the internet and the central remote servers to provide and maintain data as well as applications. Such applications in turn can be used by the end users via the cloud communications without any installation. Moreover, the end users’ data files can be accessed and manipulated from any other computer using the internet services. Despite the flexibility of data and application accessing and usage that cloud computing e...

  9. A Web Service for File-Level Access to Disk Images

    Directory of Open Access Journals (Sweden)

    Sunitha Misra

    2014-07-01

    Digital forensics tools have many potential applications in the curation of digital materials in libraries, archives and museums (LAMs). Open source digital forensics tools can help LAM professionals to extract digital contents from born-digital media and make more informed preservation decisions. Many of these tools have ways to display the metadata of the digital media, but few provide file-level access without having to mount the device or use complex command-line utilities. This paper describes a project to develop software that supports access to the contents of digital media without having to mount or download the entire image. The work examines two approaches in creating this tool: first, a graphical user interface running on a local machine; second, a web-based application running in a web browser. The project incorporates existing open source forensics tools and libraries including The Sleuth Kit and libewf along with the Flask web application framework and custom Python scripts to generate web pages supporting disk image browsing.

  10. The global unified parallel file system (GUPFS) project: FY 2002 activities and results

    Energy Technology Data Exchange (ETDEWEB)

    Butler, Gregory F.; Lee, Rei Chi; Welcome, Michael L.

    2003-04-07

    The Global Unified Parallel File System (GUPFS) project is a multiple-phase, five-year project at the National Energy Research Scientific Computing (NERSC) Center to provide a scalable, high performance, high bandwidth, shared file system for all the NERSC production computing and support systems. The primary purpose of the GUPFS project is to make it easier to conduct advanced scientific research using the NERSC systems. This is to be accomplished through the use of a shared file system providing a unified file namespace, operating on consolidated shared storage that is directly accessed by all the NERSC production computing and support systems. During its first year, FY 2002, the GUPFS project focused on identifying, testing, and evaluating existing and emerging shared/cluster file system, SAN fabric, and storage technologies; identifying NERSC user input/output (I/O) requirements, methods, and mechanisms; and developing appropriate benchmarking methodologies and benchmark codes for a parallel environment. This report presents the activities and progress of the GUPFS project during its first year, the results of the evaluations conducted, and plans for near-term and longer-term investigations.

  11. An Extended Two-Phase Method for Accessing Sections of Out-of-Core Arrays

    Directory of Open Access Journals (Sweden)

    Rajeev Thakur

    1996-01-01

    A number of applications on parallel computers deal with very large data sets that cannot fit in main memory. In such applications, data must be stored in files on disks and fetched into memory during program execution. Parallel programs with large out-of-core arrays stored in files must read/write smaller sections of the arrays from/to files. In this article, we describe a method for accessing sections of out-of-core arrays efficiently. Our method, the extended two-phase method, uses collective I/O: processors cooperate to combine several I/O requests into fewer larger-granularity requests, to reorder requests so that the file is accessed in proper sequence, and to eliminate simultaneous I/O requests for the same data. In addition, the I/O workload is divided among processors dynamically, depending on the access requests. We present performance results obtained from two real out-of-core parallel applications (matrix multiplication and a Laplace's equation solver) and several synthetic access patterns, all on the Intel Touchstone Delta. These results indicate that the extended two-phase method significantly outperformed a direct (noncollective) method for accessing out-of-core array sections.
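
The combining and reordering step of the collective I/O described above can be sketched as interval coalescing over (offset, length) requests. This illustrates the idea only; it is not the authors' implementation.

```python
def coalesce(requests, max_gap=0):
    """Sort (offset, length) requests so the file is accessed in ascending
    order, and merge overlapping or adjacent requests into fewer,
    larger-granularity requests."""
    merged = []
    for offset, length in sorted(requests):
        if merged and offset <= merged[-1][1] + max_gap:
            merged[-1][1] = max(merged[-1][1], offset + length)  # extend run
        else:
            merged.append([offset, offset + length])             # start run
    return [(start, end - start) for start, end in merged]

# Four requests from different processes become two sequential ones.
reqs = [(100, 50), (0, 10), (10, 20), (140, 30)]
print(coalesce(reqs))  # [(0, 30), (100, 70)]
```

A real two-phase implementation would follow this with the "exchange" phase, redistributing the fetched data among processes according to who actually requested each piece.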

  12. Grid collector: An event catalog with automated file management

    International Nuclear Information System (INIS)

    Wu, Kesheng; Zhang, Wei-Ming; Sim, Alexander; Gu, Junmin; Shoshani, Arie

    2003-01-01

    High Energy Nuclear Physics (HENP) experiments such as STAR at BNL and ATLAS at CERN produce large amounts of data that are stored as files on mass storage systems in computer centers. In these files, the basic unit of data is an event. Analysis is typically performed on a selected set of events. The files containing these events have to be located, copied from mass storage systems to disks before analysis, and removed when no longer needed. These file management tasks are tedious and time consuming. Typically, all events contained in the files are read into memory before a selection is made. Since the time to read the events dominates the overall execution time, reading unwanted events needlessly increases the analysis time. The Grid Collector is a set of software modules that work together to address these two issues. It automates the file management tasks and provides ''direct'' access to the selected events for analyses. It is currently integrated with the STAR analysis framework. The users can select events based on tags such as ''production date between March 10 and 20, and the number of charged tracks > 100.'' The Grid Collector locates the files containing relevant events, transfers the files across the Grid if necessary, and delivers the events to the analysis code through the familiar iterators. There have been some research efforts to address the file management issues; the Grid Collector is unique in that it addresses the event access issue together with the file management issues. This makes it more useful to a large variety of users.
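
The tag-based selection described above can be sketched as a predicate evaluated over an event catalog; the field names and file names below are hypothetical, not the Grid Collector's actual schema.

```python
from datetime import date

# Hypothetical tag records, one per event; field names are invented.
catalog = [
    {"file": "st_physics_01.root", "prod_date": date(2003, 3, 12), "n_charged": 142},
    {"file": "st_physics_02.root", "prod_date": date(2003, 3, 25), "n_charged": 180},
    {"file": "st_physics_03.root", "prod_date": date(2003, 3, 15), "n_charged": 80},
]

def files_to_fetch(catalog, start, end, min_tracks):
    """Locate the files holding events that satisfy the tag predicate,
    so only those files need to be transferred across the Grid."""
    return sorted({e["file"] for e in catalog
                   if start <= e["prod_date"] <= end
                   and e["n_charged"] > min_tracks})

# ''production date between March 10 and 20, and charged tracks > 100''
print(files_to_fetch(catalog, date(2003, 3, 10), date(2003, 3, 20), 100))
# ['st_physics_01.root']
```

The point of the design is visible even in this toy: the predicate runs over small tag records, so whole event files are never read just to be discarded.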

  13. Accessible engineering drawings for visually impaired machine operators.

    Science.gov (United States)

    Ramteke, Deepak; Kansal, Gayatri; Madhab, Benu

    2014-01-01

    An engineering drawing provides manufacturing information to a machine operator. An operator plans and executes machining operations based on this information. A visually impaired (VI) operator does not have direct access to the drawings. Drawing information is provided to them verbally or by using sample parts. Both methods have limitations that affect the quality of output. Use of engineering drawings is a standard practice for every industry; this hampers employment of a VI operator. Accessible engineering drawings are required to increase both the independence and the employability of VI operators. Today, Computer Aided Design (CAD) software is used for making engineering drawings, which are saved in CAD files. Required information is extracted from the CAD files and converted into Braille or voice. The authors of this article propose a method to make engineering drawing information directly accessible to a VI operator.

  14. Long term file migration. Part I: file reference patterns

    International Nuclear Information System (INIS)

    Smith, A.J.

    1978-08-01

    In most large computer installations, files are moved between on-line disk and mass storage (tape, integrated mass storage device) either automatically by the system or specifically at the direction of the user. This is the first of two papers which study the selection of algorithms for the automatic migration of files between mass storage and disk. The use of the text editor data sets at the Stanford Linear Accelerator Center (SLAC) computer installation is examined through the analysis of thirteen months of file reference data. Most files are used very few times. Of those that are used sufficiently frequently that their reference patterns may be examined, about a third show declining rates of reference during their lifetime; of the remainder, very few (about 5%) show correlated interreference intervals, and interreference intervals (in days) appear to be more skewed than would occur with the Bernoulli process. Thus, about two-thirds of all sufficiently active files appear to be referenced as a renewal process with a skewed interreference distribution. A large number of other file reference statistics (file lifetimes, interreference distributions, moments, means, number of uses/file, file sizes, file rates of reference, etc.) are computed and presented. The results are applied in the following paper to the development and comparative evaluation of file migration algorithms. 17 figures, 13 tables
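
The interreference intervals analyzed in the paper can be computed from a reference trace as follows; the (day, file) trace format is a simplification for illustration.

```python
from collections import defaultdict

def interreference_intervals(trace):
    """trace: (day, file_name) references in time order. Returns, per file,
    the gaps in days between successive references to that file."""
    last_use, gaps = {}, defaultdict(list)
    for day, name in trace:
        if name in last_use:
            gaps[name].append(day - last_use[name])
        last_use[name] = day
    return dict(gaps)

trace = [(1, "a"), (2, "b"), (4, "a"), (9, "a"), (30, "b")]
print(interreference_intervals(trace))  # {'a': [3, 5], 'b': [28]}
```

Statistics of these per-file gap lists (skewness, correlation between successive gaps) are the kind of quantities the paper uses to decide whether references look like a renewal process.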

  15. File structure and organization in the automation system for operative account of equipment and materials in JINR

    International Nuclear Information System (INIS)

    Gulyaeva, N.D.; Markova, N.F.; Nikitina, V.I.; Tentyukova, G.N.

    1975-01-01

    The structure and organization of files in the information bank for the first variant of a JINR material and technical supply subsystem are described. An automated system for operative stock-taking of equipment, based on the SDS-6200 computer, has been developed. Information is stored on magnetic discs. The arrangement of each file depends on its purpose and the structure of its data. Access to the files can be arbitrary or consecutive. The files are divided into groups: primary document files, long-term references, and information on items that may change as a result of administrative decisions.

  16. Prefetching in file systems for MIMD multiprocessors

    Science.gov (United States)

    Kotz, David F.; Ellis, Carla Schlatter

    1990-01-01

    The question of whether prefetching blocks of a file into the block cache can effectively reduce overall execution time of a parallel computation, even under favorable assumptions, is considered. Experiments have been conducted with an interleaved file system testbed on the Butterfly Plus multiprocessor. Results of these experiments suggest that (1) the hit ratio, the accepted measure in traditional caching studies, may not be an adequate measure of performance when the workload consists of parallel computations and parallel file access patterns, (2) caching with prefetching can significantly improve the hit ratio and the average time to perform an I/O (input/output) operation, and (3) an improvement in overall execution time has been observed in most cases. In spite of these gains, prefetching sometimes results in increased execution times (a negative result, given the optimistic nature of the study). The authors explore why it is not trivial to translate savings on individual I/O requests into consistently better overall performance and identify the key problems that need to be addressed in order to improve the potential of prefetching techniques in the environment.
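
The interaction between prefetching and hit ratio can be illustrated with a toy LRU block cache that optionally prefetches the next block on every access. This simulation is only an illustration of the metric, not the authors' testbed.

```python
from collections import OrderedDict

def run_trace(trace, cache_size, prefetch=False):
    """Toy LRU block cache; with prefetch on, block b+1 is fetched along
    with every access to block b. Returns the hit ratio over the trace."""
    cache, hits = OrderedDict(), 0

    def insert(block):
        cache[block] = True
        cache.move_to_end(block)
        if len(cache) > cache_size:
            cache.popitem(last=False)  # evict least recently used

    for block in trace:
        if block in cache:
            hits += 1
            cache.move_to_end(block)
        else:
            insert(block)
        if prefetch:
            insert(block + 1)
    return hits / len(trace)

sequential = list(range(100))
print(run_trace(sequential, 8))                 # 0.0: every block misses
print(run_trace(sequential, 8, prefetch=True))  # 0.99: only the first miss
```

The sequential trace shows why hit ratio alone can mislead: prefetching raises it from 0% to 99%, yet in a real system each prefetch still costs an I/O, so total execution time may improve far less than the hit ratio suggests.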

  17. DJFS: Providing Highly Reliable and High‐Performance File System with Small‐Sized NVRAM

    Directory of Open Access Journals (Sweden)

    Junghoon Kim

    2017-11-01

    File systems and applications try to implement their own update protocols to guarantee data consistency, which is one of the most crucial aspects of computing systems. However, we found that the storage devices are substantially under‐utilized when preserving data consistency because they generate massive storage write traffic with many disk cache flush operations and force‐unit‐access (FUA) commands. In this paper, we present DJFS (Delta‐Journaling File System), which provides both a high level of performance and data consistency for different applications. We made three technical contributions to achieve our goal. First, to remove all storage accesses with disk cache flush operations and FUA commands, DJFS uses small‐sized NVRAM for a file system journal. Second, to reduce the access latency and space requirements of NVRAM, DJFS journals the compressed differences of the modified blocks. Finally, to relieve explicit checkpointing overhead, DJFS aggressively reflects the checkpoint transactions to the file system area in units of the specified region. Our evaluation on the TPC‐C SQLite benchmark shows that, using our novel optimization schemes, DJFS outperforms Ext4 by up to 64.2 times with only 128 MB of NVRAM.
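
The delta-journaling idea (journal a compressed difference of the modified block rather than the whole block) can be sketched with XOR deltas and zlib. DJFS itself is kernel code, so this user-space sketch is only an illustration of the space saving.

```python
import zlib

BLOCK = 4096  # bytes per file system block

def journal_record(old_block, new_block):
    """XOR the old and new block images and compress the result; a block
    with only a few modified bytes yields a very small journal record."""
    delta = bytes(a ^ b for a, b in zip(old_block, new_block))
    return zlib.compress(delta)

def replay(old_block, record):
    """Recover the new block image from the old image plus the record."""
    delta = zlib.decompress(record)
    return bytes(a ^ b for a, b in zip(old_block, delta))

old = bytes(BLOCK)              # a 4 KiB block of zeros
new = bytearray(old)
new[100:104] = b"ABCD"          # modify four bytes
new = bytes(new)
record = journal_record(old, new)
print(len(record), "bytes journaled instead of", BLOCK)
```

Because an XOR delta of a mostly unchanged block is mostly zeros, it compresses to a few dozen bytes, which is what lets a small NVRAM hold the journal.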

  18. ACCESS TO A COMPUTER SYSTEM. BETWEEN LEGAL PROVISIONS AND TECHNICAL REALITY

    Directory of Open Access Journals (Sweden)

    Maxim DOBRINOIU

    2016-05-01

    Nowadays, with the rise of cybersecurity incidents and a very complex IT&C environment, national legal systems must adapt in order to properly address the new and modern forms of criminality in cyberspace. Illegal access to a computer system remains one of the most important cyber-related crimes, due to its prevalence but also because it is a door opened to computer data and sometimes a vehicle for other tech crimes. At the same time, information society services have slightly changed the IT paradigm and represent the new interface between users and systems. It is true that services rely on computer systems, but accessing services now goes beyond simply accessing computer systems as commonly understood by most legislations. The article intends to explain other sides of access related to computer systems and services, with the purpose of advancing possible legal solutions to certain case scenarios.

  19. Grid collector: An event catalog with automated file management

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Kesheng; Zhang, Wei-Ming; Sim, Alexander; Gu, Junmin; Shoshani, Arie

    2003-10-17

    High Energy Nuclear Physics (HENP) experiments such as STAR at BNL and ATLAS at CERN produce large amounts of data that are stored as files on mass storage systems in computer centers. In these files, the basic unit of data is an event. Analysis is typically performed on a selected set of events. The files containing these events have to be located, copied from mass storage systems to disks before analysis, and removed when no longer needed. These file management tasks are tedious and time consuming. Typically, all events contained in the files are read into memory before a selection is made. Since the time to read the events dominates the overall execution time, reading unwanted events needlessly increases the analysis time. The Grid Collector is a set of software modules that work together to address these two issues. It automates the file management tasks and provides ''direct'' access to the selected events for analyses. It is currently integrated with the STAR analysis framework. The users can select events based on tags such as ''production date between March 10 and 20, and the number of charged tracks > 100.'' The Grid Collector locates the files containing relevant events, transfers the files across the Grid if necessary, and delivers the events to the analysis code through the familiar iterators. There have been some research efforts to address the file management issues; the Grid Collector is unique in that it addresses the event access issue together with the file management issues. This makes it more useful to a large variety of users.

  20. New developments in file-based infrastructure for ATLAS event selection

    Energy Technology Data Exchange (ETDEWEB)

    Gemmeren, P van; Malon, D M [Argonne National Laboratory, Argonne, Illinois 60439 (United States); Nowak, M, E-mail: gemmeren@anl.go [Brookhaven National Laboratory, Upton, NY 11973-5000 (United States)

    2010-04-01

    In ATLAS software, TAGs are event metadata records that can be stored in various technologies, including ROOT files and relational databases. TAGs are used to identify and extract events that satisfy certain selection predicates, which can be coded as SQL-style queries. TAG collection files support in-file metadata to store information describing all events in the collection. Event Selector functionality has been augmented to provide such collection-level metadata to subsequent algorithms. The ATLAS I/O framework has been extended to allow computational processing of TAG attributes to select or reject events without reading the event data. This capability enables physicists to use more detailed selection criteria than are feasible in an SQL query. For example, the TAGs contain enough information not only to check the number of electrons, but also to calculate their distance to the closest jet, a calculation that would be difficult to express in SQL. Another new development allows ATLAS to write TAGs directly into event data files. This feature can improve performance by supporting advanced event selection capabilities, including computational processing of TAG information, without the need for external TAG file or database access.
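
A computational selection step of the kind described (a check SQL would express poorly) can be sketched over hypothetical TAG attributes. The delta-R angular distance below is the standard collider-physics measure, but the tag layout is invented for illustration.

```python
from math import hypot, pi

def delta_r(eta1, phi1, eta2, phi2):
    """Standard collider-physics angular distance between two objects."""
    dphi = abs(phi1 - phi2)
    if dphi > pi:
        dphi = 2 * pi - dphi  # wrap the azimuthal angle
    return hypot(eta1 - eta2, dphi)

def accept(tag, min_dr=0.4):
    """Accept the event only if every electron is at least min_dr away
    from every jet; the tag layout here is hypothetical."""
    return all(delta_r(e_eta, e_phi, j_eta, j_phi) > min_dr
               for e_eta, e_phi in tag["electrons"]
               for j_eta, j_phi in tag["jets"])

tag = {"electrons": [(0.5, 1.0)], "jets": [(0.5, 1.3), (2.0, -1.0)]}
print(accept(tag))  # False: the first jet is only delta_r = 0.3 away
```

A min-over-jets with trigonometry and a wrap-around angle is awkward in SQL, which is the abstract's point: running the predicate in the I/O framework over TAG attributes keeps the event data unread.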

  1. Household computer and Internet access: The digital divide in a pediatric clinic population

    Science.gov (United States)

    Carroll, Aaron E.; Rivara, Frederick P.; Ebel, Beth; Zimmerman, Frederick J.; Christakis, Dimitri A.

    2005-01-01

    Past studies have noted a digital divide, or inequality in computer and Internet access related to socioeconomic class. This study sought to measure how many households in a pediatric primary care outpatient clinic had household access to computers and the Internet, and whether this access differed by socio-economic status or other demographic information. We conducted a phone survey of a population-based sample of parents with children ages 0 to 11 years old. Analyses assessed predictors of having home access to a computer, the Internet, and high-speed Internet service. Overall, 88.9% of all households owned a personal computer, and 81.4% of all households had Internet access. Among households with Internet access, 48.3% had high speed Internet at home. There were statistically significant associations between parental income or education and home computer ownership and Internet access. However, the impact of this difference was lessened by the fact that over 60% of families with annual household income of $10,000–$25,000, and nearly 70% of families with only a high-school education had Internet access at home. While income and education remain significant predictors of household computer and internet access, many patients and families at all economic levels have access, and might benefit from health promotion interventions using these modalities. PMID:16779012

  2. Use of DBMS-10 for storage and retrieval of evaluated nuclear data files

    International Nuclear Information System (INIS)

    Dunford, C.L.

    1977-01-01

    The use of a data base management system (DBMS) for storage of, and retrieval from, the many scientific data bases maintained by the National Nuclear Data Center is currently being investigated. It would appear that a commercially available DBMS package would save the Center considerable money and manpower when adding new data files to the library and in the long-term maintenance of current data files. Current DBMS technology and experience with an internal DBMS system suggests an inherent inefficiency in processing large data networks where significant portions are accessed in a sequential manner. Such a file is the Evaluated Nuclear Data File (ENDF/B), which contains many large data tables, each one normally accessed in a sequential manner. After gaining some experience and success in small applications of the commercially available DBMS package, DBMS-10, on the Center's DECsystem-10 computer, it was decided to select a large data base as a test case before making a final decision on the implementation of DBMS-10 for all data bases. The obvious approach is to utilize the DBMS to index a random-access file. In this way one is able to increase the storage and retrieval efficiency at the one-time cost of additional programming effort. 2 figures

  3. Use of DBMS-10 for storage and retrieval of evaluated nuclear data files

    International Nuclear Information System (INIS)

    Dunford, C.L.

    1978-01-01

    The use of a data base management system (DBMS) for storage of, and retrieval from, the many scientific data bases maintained by the National Nuclear Data Center is currently being investigated. It would appear that a commercially available DBMS package would save the Center considerable money and manpower when adding new data files to our library and in the long-term maintenance of our current data files. Current DBMS technology and experience with our internal DBMS system suggests an inherent inefficiency in processing large data networks where significant portions are accessed in a sequential manner. Such a file is the Evaluated Nuclear Data File (ENDF/B) which contains many large data tables, each one normally accessed in a sequential manner. After gaining some experience and success in small applications of the commercially available DBMS package, DBMS-10, on the Center's DECsystem-10 computer, it was decided to select one of our large data bases as a test case before making a final decision on the implementation of DBMS-10 for all our data bases. The obvious approach is to utilize the DBMS to index a random access file. In this way one is able to increase the storage and retrieval efficiency at the one-time cost of additional programming effort
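
The proposed scheme (use the DBMS to index a random-access file) can be sketched with a plain offset index and `seek()`. This stands in for DBMS-10, which would hold the index itself; the record keys and payloads are illustrative.

```python
import json
import os
import tempfile

def build_index(data_path, records):
    """Write variable-length records sequentially and return a
    key -> (offset, length) map that plays the role of the DBMS index."""
    index = {}
    with open(data_path, "wb") as f:
        for key, record in records:
            payload = json.dumps(record).encode()
            index[key] = (f.tell(), len(payload))
            f.write(payload)
    return index

def fetch(data_path, index, key):
    """Random access: seek straight to the record, no sequential scan."""
    offset, length = index[key]
    with open(data_path, "rb") as f:
        f.seek(offset)
        return json.loads(f.read(length))

path = os.path.join(tempfile.mkdtemp(), "tables.dat")
index = build_index(path, [("U-235/fission", {"points": 4000}),
                           ("Fe-56/capture", {"points": 1200})])
print(fetch(path, index, "Fe-56/capture"))  # {'points': 1200}
```

The trade-off matches the abstract's: the bulk data stays in a flat file that is still efficient to read sequentially, while the index buys direct access to any one table at the one-time cost of the extra indexing code.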

  4. 77 FR 66458 - Combined Notice of Filings #1

    Science.gov (United States)

    2012-11-05

    ... Service Company of Colorado. Description: 2012--10--26 PSCo MBR Filing to be effective 12/26/ 2012. Filed...--SPS MBR Filing to be effective 12/26/2012. Filed Date: 10/26/12. Accession Number: 20121026-5123...: Revised Application for MBR Authorization to be effective 10/16/2012. Filed Date: 10/25/12. Accession...

  5. Computer Security: When a person leaves - access rights remain!

    CERN Multimedia

    Computer Security Team

    2014-01-01

    We have been contacted recently by an embarrassed project manager who just figured out that a student who left at the end of 2013 still had access rights to read the whole project folder in February 2014: “How can that be?! In any other company, access rights would be purged at the same time as an employment contract terminates." Not so at CERN.   CERN has always been an open site with an open community. Physical access to the site is lightweight and you just need to have your CERN access card at hand. Further restrictions have only been put in place where safety or security really require them, and CERN does not require you to keep your access card on display. The same holds for the digital world. Once registered at CERN - either by contract, via your experiment or through the Users' office - you own a computing account that provides you with access to a wide variety of computing services. For example, last year 9,730 students/technicians/engineers/researchers/sta...

  6. Optimizing the Use of Storage Systems Provided by Cloud Computing Environments

    Science.gov (United States)

    Gallagher, J. H.; Potter, N.; Byrne, D. A.; Ogata, J.; Relph, J.

    2013-12-01

    Cloud computing systems present a set of features that include familiar computing resources (albeit augmented to support dynamic scaling of processing power) bundled with a mix of conventional and unconventional storage systems. The Linux base on which many cloud environments (e.g., Amazon) are built makes it tempting to assume that any Unix software will run efficiently in this environment without change. OPeNDAP and NODC collaborated on a short project to explore how the S3 and Glacier storage systems provided by the Amazon cloud computing infrastructure could be used with a data server developed primarily to access data stored in a traditional Unix file system. Our work used the Amazon cloud system, but we strived for designs that could be adapted easily to other systems like OpenStack. Lastly, we evaluated different architectures from a computer security perspective. We found that there are considerable issues associated with treating S3 as if it is a traditional file system, even though doing so is conceptually simple. These issues include performance penalties: using a software tool that emulates a traditional file system to store data in S3 performs poorly compared to storing data directly in S3. We also found there are important benefits beyond performance to ensuring that data written to S3 can be directly accessed without relying on a specific software tool. To provide a hierarchical organization to the data stored in S3, we wrote 'catalog' files, using XML. These catalog files map discrete files to S3 access keys. Like a traditional file system's directories, the catalogs can also contain references to other catalogs, providing a simple but effective hierarchy overlaid on top of S3's flat storage space. An added benefit to these catalogs is that they can be viewed in a web browser; our storage scheme provides both efficient access for the data server and access via a web browser. We also looked at the Glacier storage system and
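
An XML catalog of the kind described (mapping logical files to S3 access keys, with references to child catalogs providing hierarchy) might look like the sketch below; the element and attribute names are assumptions, not NODC's actual schema.

```python
import xml.etree.ElementTree as ET

def make_catalog(name, files, subcatalogs=()):
    """Build a catalog mapping logical file names to S3 access keys, with
    optional references to child catalogs (by their own S3 keys)."""
    root = ET.Element("catalog", name=name)
    for logical_name, s3_key in sorted(files.items()):
        ET.SubElement(root, "file", name=logical_name, s3key=s3_key)
    for s3_key in subcatalogs:
        ET.SubElement(root, "catalog_ref", s3key=s3_key)
    return ET.tostring(root, encoding="unicode")

xml = make_catalog("sst/2013",
                   {"day001.nc": "a1b2c3", "day002.nc": "9f8e7d"},
                   subcatalogs=["catalog-2014-key"])
print(xml)
```

Because each `catalog_ref` points at another catalog object stored in the same flat key space, walking the references reconstructs a directory tree even though S3 itself has no directories.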

  7. Computation: A New Open Access Journal of Computational Chemistry, Computational Biology and Computational Engineering

    Directory of Open Access Journals (Sweden)

    Karlheinz Schwarz

    2013-09-01

    Computation (ISSN 2079-3197; http://www.mdpi.com/journal/computation) is an international scientific open access journal focusing on fundamental work in the field of computational science and engineering. Computational science has become essential in many research areas by contributing to solving complex problems in fundamental science all the way to engineering. The very broad range of application domains suggests structuring this journal into three sections, which are briefly characterized below. In each section a further focusing will be provided by occasionally organizing special issues on topics of high interest, collecting papers on fundamental work in the field. More applied papers should be submitted to their corresponding specialist journals. To help us achieve our goal with this journal, we have an excellent editorial board to advise us on the exciting current and future trends in computation from methodology to application. We very much look forward to hearing all about the research going on across the world. [...

  8. Design and creation of a direct access nuclear data file

    International Nuclear Information System (INIS)

    Charpentier, P.

    1981-06-01

    General considerations on the structure of instructions and files are reviewed. The design, organization and mode of use of the different files (instruction file, index files, inverted files) and of the automatic analysis and inquiry programs are examined.

  9. Dynamic computing random access memory

    International Nuclear Information System (INIS)

    Traversa, F L; Bonani, F; Pershin, Y V; Di Ventra, M

    2014-01-01

    The present von Neumann computing paradigm involves a significant amount of information transfer between a central processing unit and memory, with concomitant limitations in the actual execution speed. However, it has been recently argued that a different form of computation, dubbed memcomputing (Di Ventra and Pershin 2013 Nat. Phys. 9 200–2) and inspired by the operation of our brain, can resolve the intrinsic limitations of present day architectures by allowing for computing and storing of information on the same physical platform. Here we show a simple and practical realization of memcomputing that utilizes easy-to-build memcapacitive systems. We name this architecture dynamic computing random access memory (DCRAM). We show that DCRAM provides massively-parallel and polymorphic digital logic, namely it allows for different logic operations with the same architecture, by varying only the control signals. In addition, by taking into account realistic parameters, its energy expenditures can be as low as a few fJ per operation. DCRAM is fully compatible with CMOS technology, can be realized with current fabrication facilities, and therefore can really serve as an alternative to the present computing technology. (paper)

  10. Code 672 observational science branch computer networks

    Science.gov (United States)

    Hancock, D. W.; Shirk, H. G.

    1988-01-01

    In general, networking increases productivity due to the speed of transmission, easy access to remote computers, ability to share files, and increased availability of peripherals. Two different networks within the Observational Science Branch are described in detail.

  11. Equity and Computers for Mathematics Learning: Access and Attitudes

    Science.gov (United States)

    Forgasz, Helen J.

    2004-01-01

    Equity and computer use for secondary mathematics learning was the focus of a three year study. In 2003, a survey was administered to a large sample of grade 7-10 students. Some of the survey items were aimed at determining home access to and ownership of computers, and students' attitudes to mathematics, computers, and computer use for…

  12. Gender, Computer Access and Use as Predictors of Nigerian ...

    African Journals Online (AJOL)

    This study X-rayed the contributions of gender, access to computer and computer use to the Nigerian undergraduates' computer proficiency. Three hundred and fifteen (315) undergraduates from the Faculty of Education of. Olabisi Onabanjo University, Nigeria served as the sample for this study. The instruments used for ...

  13. Approaches in highly parameterized inversion-PESTCommander, a graphical user interface for file and run management across networks

    Science.gov (United States)

    Karanovic, Marinko; Muffels, Christopher T.; Tonkin, Matthew J.; Hunt, Randall J.

    2012-01-01

    Models of environmental systems have become increasingly complex, incorporating increasingly large numbers of parameters in an effort to represent physical processes on a scale approaching that at which they occur in nature. Consequently, the inverse problem of parameter estimation (specifically, model calibration) and subsequent uncertainty analysis have become increasingly computation-intensive endeavors. Fortunately, advances in computing have made computational power equivalent to that of dozens to hundreds of desktop computers accessible through a variety of alternate means: modelers have various possibilities, ranging from traditional Local Area Networks (LANs) to cloud computing. Commonly used parameter estimation software is well suited to take advantage of the availability of such increased computing power. Unfortunately, logistical issues become increasingly important as an increasing number and variety of computers are brought to bear on the inverse problem. To facilitate efficient access to disparate computer resources, the PESTCommander program documented herein has been developed to provide a Graphical User Interface (GUI) that facilitates the management of model files ("file management") and remote launching and termination of "slave" computers across a distributed network of computers ("run management"). In version 1.0 described here, PESTCommander can access and ascertain resources across traditional Windows LANs: however, the architecture of PESTCommander has been developed with the intent that future releases will be able to access computing resources (1) via trusted domains established in Wide Area Networks (WANs) in multiple remote locations and (2) via heterogeneous networks of Windows- and Unix-based operating systems. The design of PESTCommander also makes it suitable for extension to other computational resources, such as those that are available via cloud computing. Version 1.0 of PESTCommander was developed primarily to work with the

  14. 75 FR 62522 - Combined Notice of Filings No. 3

    Science.gov (United States)

    2010-10-12

    ... filing per 154.203: NAESB EDI Form Filing to be effective 11/1/ 2010. Filed Date: 09/30/2010. Accession....9 EDI Form to be effective 11/1/2010. Filed Date: 09/30/2010. Accession Number: 20100930-5348...

  15. File and metadata management for BESIII distributed computing

    International Nuclear Information System (INIS)

    Nicholson, C; Zheng, Y H; Lin, L; Deng, Z Y; Li, W D; Zhang, X M

    2012-01-01

    The BESIII experiment at the Institute of High Energy Physics (IHEP), Beijing, uses the high-luminosity BEPCII e+e− collider to study physics in the τ-charm energy region around 3.7 GeV; BEPCII has produced the world's largest samples of J/ψ and ψ′ events to date. An order of magnitude increase in the data sample size over the 2011-2012 data-taking period demanded a move from a very centralized to a distributed computing environment, as well as the development of an efficient file and metadata management system. While BESIII is on a smaller scale than some other HEP experiments, this poses particular challenges for its distributed computing and data management system. These constraints include limited resources and manpower, and low quality of network connections to IHEP. Drawing on the rich experience of the HEP community, a system has been developed which meets these constraints. The design and development of the BESIII distributed data management system, including its integration with other BESIII distributed computing components, such as job management, are presented here.

  16. Transfer of numeric ASCII data files between Apple and IBM personal computers.

    Science.gov (United States)

    Allan, R W; Bermejo, R; Houben, D

    1986-01-01

    Listings for programs designed to transfer numeric ASCII data files between Apple and IBM personal computers are provided, with accompanying descriptions of how the software operates. Details of the hardware used are also given. The programs may be easily adapted for transferring data between other microcomputers.
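Apple and IBM PC text files of that era differed, among other things, in line endings (CR versus CR/LF), so a transfer utility typically normalises them. A minimal sketch of that translation, assuming this is the conversion needed; the helper name is illustrative, not taken from the original BASIC listings:

```python
def cr_to_crlf(data: bytes) -> bytes:
    """Convert Apple-style CR line endings to IBM PC CR/LF.

    Normalises any existing CR/LF pairs first so text that is already
    converted is not double-converted.
    """
    return data.replace(b"\r\n", b"\r").replace(b"\r", b"\r\n")

print(cr_to_crlf(b"10 PRINT\r20 END\r"))
```

The reverse direction is the single replacement `data.replace(b"\r\n", b"\r")`.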

  17. Identity based Encryption and Biometric Authentication Scheme for Secure Data Access in Cloud Computing

    DEFF Research Database (Denmark)

    Cheng, Hongbing; Rong, Chunming; Tan, Zheng-Hua

    2012-01-01

    Cloud computing will be a main information infrastructure in the future; it consists of many large datacenters which are usually geographically distributed and heterogeneous. How to design secure data access for a cloud computing platform is a big challenge. In this paper, we propose a secure data access scheme based on identity-based encryption and biometric authentication for cloud computing. Firstly, we describe the security concerns of cloud computing and then propose an integrated data access scheme for cloud computing; the procedure of the proposed scheme includes parameter setup, key distribution, feature template creation, cloud data processing and secure data access control. Finally, we compare the proposed scheme with other schemes through comprehensive analysis and simulation. The results show that the proposed data access scheme is feasible and secure for cloud computing.

  18. Migrating Educational Data and Services to Cloud Computing: Exploring Benefits and Challenges

    Science.gov (United States)

    Lahiri, Minakshi; Moseley, James L.

    2013-01-01

    "Cloud computing" is currently the "buzzword" in the Information Technology field. Cloud computing facilitates convenient access to information and software resources as well as easy storage and sharing of files and data, without the end users being aware of the details of the computing technology behind the process. This…

  19. Grid collector an event catalog with automated file management

    CERN Document Server

    Ke Sheng Wu; Sim, A; Jun Min Gu; Shoshani, A

    2004-01-01

    High Energy Nuclear Physics (HENP) experiments such as STAR at BNL and ATLAS at CERN produce large amounts of data that are stored as files on mass storage systems in computer centers. In these files, the basic unit of data is an event. Analysis is typically performed on a selected set of events. The files containing these events have to be located, copied from mass storage systems to disks before analysis, and removed when no longer needed. These file management tasks are tedious and time consuming. Typically, all events contained in the files are read into memory before a selection is made. Since the time to read the events dominates the overall execution time, reading the unwanted events needlessly increases the analysis time. The Grid Collector is a set of software modules that work together to address these two issues. It automates the file management tasks and provides "direct" access to the selected events for analyses. It is currently integrated with the STAR analysis framework. The users can select ev...
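The core idea, an event catalog that maps selected events to the files containing them so that only those files are staged in, can be sketched as follows. The record layout and field names here are invented for illustration and are not Grid Collector's actual schema:

```python
# Toy event catalog: each entry records where an event lives plus a
# few selection attributes. In Grid Collector this index is built and
# queried automatically; here it is a plain list of dicts.
catalog = [
    {"event_id": 1, "file": "run1.root", "n_tracks": 12},
    {"event_id": 2, "file": "run1.root", "n_tracks": 3},
    {"event_id": 3, "file": "run2.root", "n_tracks": 20},
]

def files_for(predicate):
    """Return the minimal set of files holding events that pass the cut,
    so only those files need to be copied from mass storage."""
    return {rec["file"] for rec in catalog if predicate(rec)}

print(sorted(files_for(lambda rec: rec["n_tracks"] > 10)))
```

With a real catalog the same query shape avoids reading files whose events would all fail the selection.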

  20. Design and Implementation of File Access and Control System Based on Dynamic Web

    Institute of Scientific and Technical Information of China (English)

    GAO Fuxiang; YAO Lan; BAO Shengfei; YU Ge

    2006-01-01

    A dynamic Web application, which can help the departments of enterprise to collaborate with each other conveniently, is proposed. Several popular design solutions are introduced at first. Then, dynamic Web system is chosen for developing the file access and control system. Finally, the paper gives the detailed process of the design and implementation of the system, which includes some key problems such as solutions of document management and system security. Additionally, the limitations of the system as well as the suggestions of further improvement are also explained.

  1. ChemEngine: harvesting 3D chemical structures of supplementary data from PDF files.

    Science.gov (United States)

    Karthikeyan, Muthukumarasamy; Vyas, Renu

    2016-01-01

    Digital access to chemical journals resulted in a vast array of molecular information that is now available in the supplementary material files in PDF format. However, extracting this molecular information, generally from a PDF document format is a daunting task. Here we present an approach to harvest 3D molecular data from the supporting information of scientific research articles that are normally available from publisher's resources. In order to demonstrate the feasibility of extracting truly computable molecules from PDF file formats in a fast and efficient manner, we have developed a Java based application, namely ChemEngine. This program recognizes textual patterns from the supplementary data and generates standard molecular structure data (bond matrix, atomic coordinates) that can be subjected to a multitude of computational processes automatically. The methodology has been demonstrated via several case studies on different formats of coordinates data stored in supplementary information files, wherein ChemEngine selectively harvested the atomic coordinates and interpreted them as molecules with high accuracy. The reusability of extracted molecular coordinate data was demonstrated by computing Single Point Energies that were in close agreement with the original computed data provided with the articles. It is envisaged that the methodology will enable large scale conversion of molecular information from supplementary files available in the PDF format into a collection of ready-to-compute molecular data to create an automated workflow for advanced computational processes. Software along with source codes and instructions available at https://sourceforge.net/projects/chemengine/files/?source=navbar.
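The textual-pattern recognition ChemEngine performs can be illustrated with a toy coordinate parser. The regular expression and the "symbol x y z" layout are assumptions about one common supplementary-data format, not ChemEngine's actual rules:

```python
import re

# Hypothetical pattern for "ElementSymbol x y z" coordinate lines as
# they might appear in extracted supplementary text.
COORD = re.compile(
    r"^\s*([A-Z][a-z]?)\s+(-?\d+\.\d+)\s+(-?\d+\.\d+)\s+(-?\d+\.\d+)\s*$"
)

def extract_atoms(text: str):
    """Collect (symbol, x, y, z) tuples from lines matching the pattern,
    skipping surrounding prose such as section headers."""
    atoms = []
    for line in text.splitlines():
        m = COORD.match(line)
        if m:
            sym, x, y, z = m.groups()
            atoms.append((sym, float(x), float(y), float(z)))
    return atoms

sample = """Optimized geometry:
C   0.0000   0.0000   0.0000
O   0.0000   0.0000   1.2080
"""
print(extract_atoms(sample))
```

A production tool additionally needs unit handling, multi-molecule segmentation and validation against chemical rules, which is where most of the real effort lies.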

  2. 78 FR 20901 - Combined Notice of Filings #1

    Science.gov (United States)

    2013-04-08

    ... Marketing Inc. submits Category Seller Clarification to be effective 3/29/2013. Filed Date: 3/29/13... submits Cost- Based Tariffs Compliance Filing to be effective 4/1/2013. Filed Date: 3/29/13. Accession... to be effective N/A. Filed Date: 3/29/13. Accession Number: 20130329-5223. Comments Due: 5 p.m. ET 4...

  3. An integrated solution for remote data access

    Science.gov (United States)

    Sapunenko, Vladimir; D'Urso, Domenico; dell'Agnello, Luca; Vagnoni, Vincenzo; Duranti, Matteo

    2015-12-01

    Data management constitutes one of the major challenges that a geographically-distributed e-Infrastructure has to face, especially when remote data access is involved. We discuss an integrated solution which enables transparent and efficient access to on-line and near-line data through high latency networks. The solution is based on the joint use of the General Parallel File System (GPFS) and of the Tivoli Storage Manager (TSM). Both products, developed by IBM, are well known and extensively used in the HEP computing community. Owing to a new feature introduced in GPFS 3.5, so-called Active File Management (AFM), the definition of a single, geographically-distributed namespace, characterised by automated data flow management between different locations, becomes possible. As a practical example, we present the implementation of AFM-based remote data access between two data centres located in Bologna and Rome, demonstrating the validity of the solution for the use case of the AMS experiment, an astro-particle experiment supported by the INFN CNAF data centre with large disk space requirements (more than 1.5 PB).

  4. Secure Dynamic access control scheme of PHR in cloud computing.

    Science.gov (United States)

    Chen, Tzer-Shyong; Liu, Chia-Hui; Chen, Tzer-Long; Chen, Chin-Sheng; Bau, Jian-Guo; Lin, Tzu-Ching

    2012-12-01

    With the development of information technology and medical technology, medical information has evolved from traditional paper records into electronic medical records, which are now widely used. A new style of medical information exchange, the personal health record (PHR), is gradually developing. A PHR is a health record maintained and recorded by the individual. An ideal personal health record integrates personal medical information from different sources and provides a complete and correct personal health and medical summary through the Internet or portable media, under the requirements of security and privacy. Many personal health records are already in use. The patient-centered PHR information exchange system allows the public to autonomously maintain and manage personal health records, which is convenient for storing, accessing, and sharing personal medical records. With the emergence of Cloud computing, PHR services have moved to storing data on Cloud servers, so that resources can be flexibly utilized and operation costs reduced. Nevertheless, patients face privacy problems when storing PHR data in the Cloud, and a secure protection scheme is required to encrypt the medical records of each patient stored on a Cloud server. In the encryption process, it is a challenge to achieve accurate access to medical records while preserving flexibility and efficiency. A new PHR access control scheme for Cloud computing environments is proposed in this study. Using a Lagrange interpolation polynomial to establish a secure and effective PHR information access scheme, it allows accurate and secure access to PHRs and is suitable for enormous numbers of users. Moreover, the scheme dynamically supports multiple users in Cloud computing environments with personal privacy and grants legal authorities access to PHRs. From security and effectiveness analyses, the proposed PHR access
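The abstract names Lagrange interpolation polynomials as the basis of the access scheme. A standard use of Lagrange interpolation in access control is threshold secret reconstruction, as in Shamir's scheme: a secret is the constant term of a degree t-1 polynomial, and any t shares recover it. The sketch below shows only that building block, over a prime field, and is not the paper's full protocol:

```python
P = 2**61 - 1  # a Mersenne prime; the field choice is illustrative

def lagrange_at_zero(shares):
    """Recover f(0) from t points (x_i, y_i) on a degree t-1 polynomial
    over GF(P), using the Lagrange basis evaluated at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P        # numerator:   prod (0 - x_j)
                den = den * (xi - xj) % P    # denominator: prod (x_i - x_j)
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

# f(x) = 42 + 7x + 3x^2, so the secret f(0) is 42; shares at x = 1, 2, 3.
shares = [(1, 52), (2, 68), (3, 90)]
print(lagrange_at_zero(shares))  # recovers 42
```

`pow(den, -1, P)` (Python 3.8+) computes the modular inverse; fewer than t shares reveal nothing about f(0), which is what makes the construction useful for multi-user access control.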

  5. 47 CFR 69.3 - Filing of access service tariffs.

    Science.gov (United States)

    2010-10-01

    ... tariff becomes effective, if such company or companies did not file such a tariff in the preceding... two-year period. Such tariffs shall be filed with a scheduled effective date of July 1. Such tariff... section shall not preclude the filing of revisions to those annual tariffs that will become effective on...

  6. WinHPC System Configuration | High-Performance Computing | NREL

    Science.gov (United States)

    The cluster comprises a head node, a login node (WinHPC02) and worker/compute nodes, running Windows Server 2008 R2 HPC Edition. The head node acts as the file, DNS, and license server. The login node, WinHPC02, is where users connect to access the system. Node 03 has dual Intel Xeon E5530 processors.

  7. Task-and-role-based access-control model for computational grid

    Institute of Scientific and Technical Information of China (English)

    LONG Tao; HONG Fan; WU Chi; SUN Ling-li

    2007-01-01

    Access control in a grid environment is a challenging issue because the heterogeneous nature and independent administration of geographically dispersed resources in grid require access control to use fine-grained policies. We established a task-and-role-based access-control model for computational grid (CG-TRBAC model), integrating the concepts of role-based access control (RBAC) and task-based access control (TBAC). In this model, condition restrictions are defined and concepts specifically tailored to Workflow Management System are simplified or omitted so that role assignment and security administration fit computational grid better than traditional models; permissions are mutable with the task status and system variables, and can be dynamically controlled. The CG-TRBAC model is proved flexible and extendible. It can implement different control policies. It embodies the security principle of least privilege and executes active dynamic authorization. A task attribute can be extended to satisfy different requirements in a real grid system.
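The model's central idea, permissions that vary with both role and task status, can be sketched with a small policy table. The roles, statuses and actions below are invented for illustration and are not taken from the CG-TRBAC paper:

```python
from dataclasses import dataclass

@dataclass
class Task:
    status: str  # e.g. "running", "done"

# Hypothetical policy: (role, task status) -> allowed actions.
# Because the key includes task status, permissions are mutable as the
# task progresses, echoing the model's active dynamic authorization.
POLICY = {
    ("analyst", "running"): {"read", "submit"},
    ("analyst", "done"):    {"read"},
    ("admin",   "running"): {"read", "submit", "cancel"},
}

def permitted(role: str, task: Task, action: str) -> bool:
    """Grant an action only if the policy lists it for this role in the
    task's current status (least privilege by default: empty set)."""
    return action in POLICY.get((role, task.status), set())

print(permitted("analyst", Task("running"), "submit"))  # True
print(permitted("analyst", Task("done"), "submit"))     # False
```

A real grid implementation would additionally evaluate condition restrictions on system variables (time, load, resource ownership) before granting the action.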

  8. Protecting Files Hosted on Virtual Machines With Out-of-Guest Access Control

    Science.gov (United States)

    2017-12-01

    When an operating system (OS) runs on a virtual machine (VM), a hypervisor, the software that facilitates virtualization of computer hardware, sits beneath the guest and can mediate its file accesses. Files hosted on the VM are protected with out-of-guest access control enforced through a SACL-based policy: permission checks are performed for system calls such as open() and openat() (Figure 3.8 of the thesis shows the code for these checks), and for rename operations a match on the newname argument is additionally checked, ensuring that users and groups can act on a protected file only as the SACL-enforced policy allows.

  9. 76 FR 52323 - Combined Notice of Filings; Filings Instituting Proceedings

    Science.gov (United States)

    2011-08-22

    .... Applicants: Young Gas Storage Company, Ltd. Description: Young Gas Storage Company, Ltd. submits tariff..., but intervention is necessary to become a party to the proceeding. The filings are accessible in the.... More detailed information relating to filing requirements, interventions, protests, and service can be...

  10. Computer and internet access for long-term care residents: perceived benefits and barriers.

    Science.gov (United States)

    Tak, Sunghee H; Beck, Cornelia; McMahon, Ed

    2007-05-01

    In this study, the authors examined residents' computer and Internet access, as well as benefits and barriers to access in nursing homes. Administrators of 64 nursing homes in a national chain completed surveys. Fourteen percent of the nursing homes provided computers for residents to use, and 11% had Internet access. Some residents owned personal computers in their rooms. Administrators perceived the benefits of computer and Internet use for residents as facilitating direct communication with family and providing mental exercise, education, and enjoyment. Perceived barriers included cost and space for computer equipment and residents' cognitive and physical impairments. Implications of residents' computer activities were discussed for nursing care. Further research is warranted to examine therapeutic effects of computerized activities and their cost effectiveness.

  11. Computer networks and their implications for nuclear data

    International Nuclear Information System (INIS)

    Carlson, J.

    1992-01-01

    Computer networks represent a valuable resource for accessing information. Just as the computer has revolutionized the ability to process and analyze information, networks have and will continue to revolutionize data collection and access. A number of services are in routine use that would not be possible without the presence of an (inter)national computer network (which will be referred to as the internet). Services such as electronic mail, remote terminal access, and network file transfers are almost a required part of any large scientific/research organization. These services only represent a small fraction of the potential uses of the internet; however, the remainder of this paper discusses some of these uses and some technological developments that may influence these uses

  12. 75 FR 23752 - Combined Notice of Filings #1

    Science.gov (United States)

    2010-05-04

    ...Corp. Description: PacifiCorp submits Revised Network Integration Transmission Service Agreement dated... Network Integration Transmission Service Agreement et al. Filed Date: 04/26/2010. Accession Numbers... filing per 35.12: Initial Market Based Rates to be effective 6/1/2010. Filed Date: 04/27/2010. Accession...

  13. Computers in plasma physics: remote data access and magnetic configuration design

    International Nuclear Information System (INIS)

    Blackwell, B.D.; McMillan, B.F.; Searle, A.C.; Gardner, H.J.; Price, D.M.; Fredian, T.W.

    2000-01-01

    Full text: Two graphically intensive examples of the application of computers in plasma physics are described: remote data access for plasma confinement experiments, and a code for real-time magnetic field tracing and optimisation. The application for both of these is the H-1NF National Plasma Fusion Research Facility, a Commonwealth Major National Research Facility within the Research School of Physical Science, Institute of Advanced Studies, ANU. It is based on the 'flexible' heliac stellarator H-1, a plasma confinement device in which the confining fields are generated solely by external conductors. These complex, fully three dimensional magnetic fields are used as examples for the magnetic design application, and data from plasma physics experiments are used to illustrate the remote access techniques. As plasma fusion experiments grow in size, increased remote access allows physicists to participate in experiments and data analysis from their home base. Three types of access will be described and demonstrated - a simple Java-based web interface, an example TCP client-server built around the widely used MDSPlus data system and the visualisation package IDL (RSI Inc), and a virtual desktop environment (VNC; AT&T Research) that simulates terminals local to the plasma facility. A client-server TCP/IP web interface to the programmable logic controller, which provides the user interface to the programmable high-power magnet power supplies, is described. A very general configuration file allows great flexibility, and allows new displays and interfaces to be created (usually) without changes to the underlying C++ and Java code. The magnetic field code BLINE provides accurate calculation of complex magnetic fields, and 3D visualisation in real time, using a low-cost multiprocessor computer and an OpenGL-compatible graphics accelerator. A fast, flexible multi-mesh interpolation method is used for tracing vacuum magnetic field lines created by arbitrary filamentary

  14. The ENSDF radioactivity data base for IBM-PC and computer network access

    International Nuclear Information System (INIS)

    Ekstroem, P.; Spanier, L.

    1989-08-01

    A database for about 15000 gamma rays from 2777 radioactive nuclides derived from the international Evaluated Nuclear Structure Data File (ENSDF) is described together with supporting computer codes. The database is available on a PC diskette, cost-free, from the IAEA Nuclear Data Section. (author)

  15. 75 FR 61733 - Combined Notice of Filings #1

    Science.gov (United States)

    2010-10-06

    ... MBR Tariff making the required change. Filed Date: 09/27/2010. Accession Number: 20100928-0206... Grove MBR Baseline to be effective 9/27/2010. Filed Date: 09/27/2010. Accession Number: 20100927-5227..., LLC. Description: Champion Energy Services, LLC submits tariff filing per 35.12: MBR application to be...

  16. 76 FR 12353 - Combined Notice of Filings #2

    Science.gov (United States)

    2011-03-07

    ... PowerSecure Inc. at Washington, NC Walmart. Filed Date: 07/02/2010. Accession Number: 20100702-5029...: Self-Certification of PowerSecure Inc. at Laurinburg, NC Walmart. Filed Date: 07/02/2010. Accession...Secure, Inc. Description: Self-Certification of PowerSecure Inc. at Wilson, NC Walmart. Filed Date: 07/02...

  17. Legal, privacy, security, access and regulatory issues in cloud computing

    CSIR Research Space (South Africa)

    Dlodlo, N

    2011-04-01

    Full Text Available: A gap in reporting exists on legal, privacy, security, access and regulatory issues. This paper raises awareness of the legal, privacy, security, access and regulatory issues that are associated with the advent of cloud computing. An in...

  18. Computer Access and Computer Use for Science Performance of Racial and Linguistic Minority Students

    Science.gov (United States)

    Chang, Mido; Kim, Sunha

    2009-01-01

    This study examined the effects of computer access and computer use on the science achievement of elementary school students, with focused attention on the effects for racial and linguistic minority students. The study used the Early Childhood Longitudinal Study (ECLS-K) database and conducted statistical analyses with proper weights and…

  19. Towards ubiquitous access of computer-assisted surgery systems.

    Science.gov (United States)

    Liu, Hui; Lufei, Hanping; Shi, Weishong; Chaudhary, Vipin

    2006-01-01

    Traditional stand-alone computer-assisted surgery (CAS) systems impede the ubiquitous and simultaneous access by multiple users. With advances in computing and networking technologies, ubiquitous access to CAS systems becomes possible and promising. Based on our preliminary work, CASMIL, a stand-alone CAS server developed at Wayne State University, we propose a novel mobile CAS system, UbiCAS, which allows surgeons to retrieve, review and interpret multimodal medical images, and to perform some critical neurosurgical procedures on heterogeneous devices from anywhere at any time. Furthermore, various optimization techniques, including caching, prefetching, pseudo-streaming-model, and compression, are used to guarantee the QoS of the UbiCAS system. UbiCAS enables doctors at remote locations to actively participate in remote surgeries and share patient information in real time before, during, and after the surgery.

  20. Next generation WLCG File Transfer Service (FTS)

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    LHC experiments at CERN and worldwide utilize WLCG resources and middleware components to perform distributed computing tasks. One of the most important tasks is reliable file replication. It is a complex problem, suffering from transfer failures, disconnections, transfer duplication, server and network overload, differences in storage systems, etc. To address these problems, EMI and gLite have provided the independent File Transfer Service (FTS) and Grid File Access Library (GFAL) tools. Their development started almost a decade ago; in the meantime, requirements in data management have changed, and the old architecture of FTS and GFAL cannot easily support these changes. Technology has also been progressing: FTS and GFAL do not fit into the new paradigms (cloud, messaging, for example). To be able to serve the next stage of LHC data collecting (from 2013), we need a new generation of these tools: FTS 3 and GFAL 2. We envision a service requiring minimal configuration, which can dynamically adapt to the...

  1. Trust in social computing. The case of peer-to-peer file sharing networks

    Directory of Open Access Journals (Sweden)

    Heng Xu

    2011-09-01

    Full Text Available Social computing and online communities are changing the fundamental way people share information and communicate with each other. Social computing focuses on how users may have more autonomy to express their ideas and participate in social exchanges in various ways, one of which may be peer-to-peer (P2P file sharing. Given the greater risk of opportunistic behavior by malicious or criminal communities in P2P networks, it is crucial to understand the factors that affect individual’s use of P2P file sharing software. In this paper, we develop and empirically test a research model that includes trust beliefs and perceived risks as two major antecedent beliefs to the usage intention. Six trust antecedents are assessed including knowledge-based trust, cognitive trust, and both organizational and peer-network factors of institutional trust. Our preliminary results show general support for the model and offer some important implications for software vendors in P2P sharing industry and regulatory bodies.

  2. Computer self efficacy as correlate of on-line public access ...

    African Journals Online (AJOL)

    The use of Online Public Access Catalogue (OPAC) by students has a lot of advantages and computer self-efficacy is a factor that could determine its effective utilization. Little appears to be known about colleges of education students' use of OPAC, computer self-efficacy and the relationship between OPAC and computer ...

  3. Computer automation of a health physics program record

    International Nuclear Information System (INIS)

    Bird, E.M.; Flook, B.A.; Jarrett, R.D.

    1984-01-01

    A multi-user computer data base management system (DBMS) has been developed to automate USDA's national radiological safety program. It maintains information on approved users of radioactive material and radiation emanating equipment, as a central file which is accessed whenever information on the user is required. Files of inventory, personnel dosimetry records, laboratory and equipment surveys, leak tests, bioassay reports, and all other information are linked to each approved user by an assigned code that identifies the user by state, agency, and facility. The DBMS is menu-driven with provisions for addition, modification and report generation of information maintained in the system. This DBMS was designed as a single entry system to reduce the redundancy of data entry. Prompts guide the user at decision points and data validation routines check for proper data entry. The DBMS generates lists of current inventories, leak test forms, inspection reports, scans for overdue reports from users, and generates follow-up letters. The DBMS system operates on a Wang OIS computer and utilizes its compiled BASIC, List Processing, Word Processing, and indexed (ISAM) file features. This system is a very fast relational database supporting many users simultaneously while providing several methods of data protection. All data files are compatible with List Processing. Information in these files can be examined, sorted, modified, or outputted to word processing documents using software supplied by Wang. This has reduced the need for special one-time programs and provides alternative access to the data

  4. Embedded systems for supporting computer accessibility.

    Science.gov (United States)

    Mulfari, Davide; Celesti, Antonio; Fazio, Maria; Villari, Massimo; Puliafito, Antonio

    2015-01-01

    Nowadays, customized AT software solutions allow their users to interact with various kinds of computer systems. Such tools are generally available on personal devices (e.g., smartphones, laptops and so on) commonly used by a person with a disability. In this paper, we investigate a way of using the aforementioned AT equipment in order to access many different devices without assistive preferences. The solution takes advantage of open source hardware and its core component consists of an affordable Linux embedded system: it grabs data coming from the assistive software, which runs on the user's personal device, then, after processing, it generates native keyboard and mouse HID commands for the target computing device controlled by the end user. This process supports any operating system available on the target machine and it requires no specialized software installation; therefore the user with a disability can rely on a single assistive tool to control a wide range of computing platforms, including conventional computers and many kinds of mobile devices, which receive input commands through the USB HID protocol.
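The paper describes generating native keyboard HID commands for the target device. A boot-protocol USB keyboard report is an 8-byte structure: one modifier byte, one reserved byte, then up to six key usage codes. The sketch below builds such reports; writing them to a USB gadget device such as /dev/hidg0 on Linux is one common way an embedded board emulates a keyboard, but that device path is an assumption, not a detail from the paper:

```python
def key_report(modifier: int = 0, *keycodes: int) -> bytes:
    """Build an 8-byte boot-protocol keyboard report:
    [modifier, reserved, key1..key6]. Extra keycodes are dropped,
    missing slots are zero-padded (zero = no key pressed)."""
    keys = list(keycodes)[:6]
    keys += [0] * (6 - len(keys))
    return bytes([modifier, 0] + keys)

HID_A = 0x04  # HID usage ID for the letter 'a'

press = key_report(0, HID_A)   # press 'a'
release = key_report()         # release all keys
print(press.hex(), release.hex())
# On a Linux USB gadget one would write these reports to /dev/hidg0
# (path is an assumption): press, then release, to type one character.
```

Sending a press report followed by an all-zero release report types a single character on the target machine, whatever its operating system, because the boot keyboard protocol is handled by every host's generic HID driver.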

  5. 77 FR 9912 - Combined Notice of Filings #1

    Science.gov (United States)

    2012-02-21

    .... Docket Numbers: ER12-75-003. Applicants: Public Power, LLC. Description: Compliance Filing for MBR Tariff... Energy MBR Tariff to be effective 2/1/2012. Filed Date: 2/10/12. Accession Number: 20120210-5132... Cancellation of MBR Tariff to be effective 2/10/2012. Filed Date: 2/10/12. Accession Number: 20120210-5153...

  6. Remote data access in computational jobs on the ATLAS data grid

    CERN Document Server

    Begy, Volodimir; The ATLAS collaboration; Lassnig, Mario

    2018-01-01

    This work describes the technique of remote data access from computational jobs on the ATLAS data grid. In comparison to traditional data movement and stage-in approaches it is well suited for data transfers which are asynchronous with respect to the job execution. Hence, it can be used for optimization of data access patterns based on various policies. In this study, remote data access is realized with the HTTP and WebDAV protocols, and is investigated in the context of intra- and inter-computing site data transfers. In both cases, the typical scenarios for application of remote data access are identified. The paper also presents an analysis of parameters influencing the data goodput between heterogeneous storage element - worker node pairs on the grid.
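    The kind of remote read the paper describes can be sketched with an HTTP range request: a job fetches only the byte range it needs rather than staging in the whole file. A minimal standard-library sketch (the URL is a placeholder, not a real ATLAS storage endpoint):

```python
# Build a ranged HTTP GET so a computational job reads a slice of a remote
# file instead of copying all of it to the worker node first.
import urllib.request

def ranged_request(url: str, start: int, length: int) -> urllib.request.Request:
    """GET request for bytes [start, start + length) of a remote file."""
    req = urllib.request.Request(url)
    req.add_header("Range", f"bytes={start}-{start + length - 1}")
    return req

req = ranged_request("https://storage.example.org/data/events.root", 1024, 4096)
# urllib.request.urlopen(req) would return only that slice, provided the
# storage element supports range requests (HTTP 206 Partial Content).
```

WebDAV access as used in the study works over the same HTTP machinery, which is what makes it asynchronous with respect to job execution: each read is an independent request rather than a one-shot stage-in.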

  7. Prospective evaluation of an internet-linked handheld computer critical care knowledge access system.

    Science.gov (United States)

    Lapinsky, Stephen E; Wax, Randy; Showalter, Randy; Martinez-Motta, J Carlos; Hallett, David; Mehta, Sangeeta; Burry, Lisa; Stewart, Thomas E

    2004-12-01

    Critical care physicians may benefit from immediate access to medical reference material. We evaluated the feasibility and potential benefits of a handheld computer-based knowledge access system linking a central academic intensive care unit (ICU) to multiple community-based ICUs. Four community hospital ICUs with 17 physicians participated in this prospective interventional study. Following training in the use of an internet-linked, updateable handheld computer knowledge access system, the physicians used the handheld devices in their clinical environment for a 12-month intervention period. Feasibility of the system was evaluated by tracking use of the handheld computer and by conducting surveys and focus group discussions. Before and after the intervention period, participants underwent simulated patient care scenarios designed to evaluate the information sources they accessed, as well as the speed and quality of their decision making. Participants generated admission orders during each scenario, which were scored by blinded evaluators. Ten physicians (59%) used the system regularly, predominantly for nonmedical applications (median 32.8/month, interquartile range [IQR] 28.3-126.8), with medical software accessed less often (median 9/month, IQR 3.7-13.7). Eight out of 13 physicians (62%) who completed the final scenarios chose to use the handheld computer for information access. The median time to access information on the handheld computer was 19 s (IQR 15-40 s). This group exhibited a significant improvement in admission order score as compared with those who used other resources (P = 0.018). Benefits and barriers to use of this technology were identified. An updateable handheld computer system is feasible as a means of point-of-care access to medical reference material and may improve clinical decision making. However, during the study, acceptance of the system was variable. Improved training and new technology may overcome some of the barriers we

  8. Lessons Learned in Deploying the World s Largest Scale Lustre File System

    Energy Technology Data Exchange (ETDEWEB)

    Dillow, David A [ORNL; Fuller, Douglas [ORNL; Wang, Feiyi [ORNL; Oral, H Sarp [ORNL; Zhang, Zhe [ORNL; Hill, Jason J [ORNL; Shipman, Galen M [ORNL

    2010-01-01

    The Spider system at the Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) is the world's largest-scale Lustre parallel file system. Envisioned as a shared parallel file system capable of delivering both the bandwidth and capacity requirements of the OLCF's diverse computational environment, the project had a number of ambitious goals. To support the workloads of the OLCF's diverse computational platforms, the aggregate performance and storage capacity of Spider exceed those of our previously deployed systems by factors of 6x (240 GB/s) and 17x (10 petabytes), respectively. Furthermore, Spider supports over 26,000 clients concurrently accessing the file system, which exceeds our previously deployed systems by nearly 4x. In addition to these scalability challenges, moving to a center-wide shared file system required dramatically improved resiliency and fault-tolerance mechanisms. This paper details our efforts in designing, deploying, and operating Spider. Through a phased approach of research and development, prototyping, deployment, and transition to operations, this work has resulted in a number of insights into large-scale parallel file system architectures, from both the design and the operational perspectives. We present in this paper our solutions to issues such as network congestion, performance baselining and evaluation, file system journaling overheads, and high availability in a system with tens of thousands of components. We also discuss areas of continued challenge, such as stressed metadata performance and the need for file system quality of service, alongside our efforts to address them. Finally, operational aspects of managing a system of this scale are discussed along with real-world data and observations.

  9. ArrayBridge: Interweaving declarative array processing with high-performance computing

    Energy Technology Data Exchange (ETDEWEB)

    Xing, Haoyuan [The Ohio State Univ., Columbus, OH (United States); Floratos, Sofoklis [The Ohio State Univ., Columbus, OH (United States); Blanas, Spyros [The Ohio State Univ., Columbus, OH (United States); Byna, Suren [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Prabhat, Prabhat [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Wu, Kesheng [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Brown, Paul [Paradigm4, Inc., Waltham, MA (United States)

    2017-05-04

    Scientists are increasingly turning to datacenter-scale computers to produce and analyze massive arrays. Despite decades of database research that extols the virtues of declarative query processing, scientists still write, debug and parallelize imperative HPC kernels even for the most mundane queries. This impedance mismatch has been partly attributed to the cumbersome data loading process; in response, the database community has proposed in situ mechanisms to access data in scientific file formats. Scientists, however, desire more than a passive access method that reads arrays from files. This paper describes ArrayBridge, a bi-directional array view mechanism for scientific file formats, that aims to make declarative array manipulations interoperable with imperative file-centric analyses. Our prototype implementation of ArrayBridge uses HDF5 as the underlying array storage library and seamlessly integrates into the SciDB open-source array database system. In addition to fast querying over external array objects, ArrayBridge produces arrays in the HDF5 file format just as easily as it can read from it. ArrayBridge also supports time travel queries from imperative kernels through the unmodified HDF5 API, and automatically deduplicates between array versions for space efficiency. Our extensive performance evaluation in NERSC, a large-scale scientific computing facility, shows that ArrayBridge exhibits statistically indistinguishable performance and I/O scalability to the native SciDB storage engine.

  10. 76 FR 14963 - Combined Notice of Filings #2

    Science.gov (United States)

    2011-03-18

    ... MBR Tariff to be effective 8/ 10/2010. Filed Date: 03/09/2011. Accession Number: 20110309-5000..., LLC submits tariff filing per 35.13(a)(2)(iii): Amendment to MBR Tariff to be effective 8/10/2010... to MBR Tariff to be effective 8/10/2010. Filed Date: 03/09/2011. Accession Number: 20110309-5002...

  11. Image Steganography of Multiple File Types with Encryption and Compression Algorithms

    Directory of Open Access Journals (Sweden)

    Ernest Andreigh C. Centina

    2017-05-01

    Full Text Available The goals of this study were to develop a system intended for securing files through the technique of image steganography integrated with cryptography, utilizing the ZLIB Algorithm for compressing and decompressing secret files, the DES Algorithm for encryption and decryption, and the Least Significant Bit Algorithm for file embedding and extraction, to protect highly confidential files from exploits by unauthorized persons. Ensuing to this, the system is in accordance with ISO 9126 international quality standards. Every quality criterion of the system was evaluated by 10 Information Technology professionals, and the arithmetic mean and standard deviation of the survey were computed. The result exhibits that most of them strongly agreed that the system is excellently effective based on Functionality, Reliability, Usability, Efficiency, Maintainability and Portability conformance to ISO 9126 standards. The system was found to be a useful tool for both government agencies and private institutions, for it could keep secret not only the message but also the existence of that particular message or file, yet maintain the privacy of highly confidential and sensitive files from unauthorized access.
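    The compress-then-embed pipeline in this abstract can be sketched with the standard library. zlib stands in for the ZLIB step; the DES stage is omitted here because Python's standard library provides no DES implementation (in the described system it would sit between compression and embedding):

```python
# LSB steganography sketch: compress the secret, prefix a 4-byte length
# header, then overwrite the least significant bit of successive cover bytes.
import zlib

def embed(cover: bytearray, secret: bytes) -> bytearray:
    payload = zlib.compress(secret)
    data = len(payload).to_bytes(4, "big") + payload       # length header + payload
    bits = [(byte >> (7 - i)) & 1 for byte in data for i in range(8)]
    if len(bits) > len(cover):
        raise ValueError("cover too small for payload")
    stego = bytearray(cover)                               # leave the cover untouched
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | bit                 # set the LSB only
    return stego

def extract(stego: bytearray) -> bytes:
    def read_bytes(offset, count):
        out = bytearray()
        for b in range(count):
            byte = 0
            for i in range(8):                             # MSB-first, matching embed
                byte = (byte << 1) | (stego[(offset + b) * 8 + i] & 1)
            out.append(byte)
        return bytes(out)
    length = int.from_bytes(read_bytes(0, 4), "big")
    return zlib.decompress(read_bytes(4, length))
```

Because only the least significant bit of each cover byte changes, the visual difference in an image cover is imperceptible, which is what conceals the very existence of the file.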

  12. Computer access and Internet use by urban and suburban emergency department customers.

    Science.gov (United States)

    Bond, Michael C; Klemt, Ryan; Merlis, Jennifer; Kopinski, Judith E; Hirshon, Jon Mark

    2012-07-01

    Patients are increasingly using the Internet (43% in 2000 vs. 70% in 2006) to obtain health information, but is there a difference in the ability of urban and suburban emergency department (ED) customers to access the Internet? To assess computer and Internet resources available to and used by people waiting to be seen in an urban ED and a suburban ED. Individuals waiting in the ED were asked survey questions covering demographics, type of insurance, access to a primary care provider, reason for their ED visit, computer access, and ability to access the Internet for health-related matters. There were 304 individuals who participated, 185 in the urban ED and 119 in the suburban ED. Urban subjects were more likely than suburban to be women, black, have low household income, and were less likely to have insurance. The groups were similar in regard to average age, education, and having a primary care physician. Suburban respondents were more likely to own a computer, but the majority in both groups had access to computers and the Internet. Their frequency of accessing the Internet was similar, as were their reasons for using it. Individuals from the urban ED were less willing to schedule appointments via the Internet but more willing to contact their health care provider via e-mail. The groups were equally willing to use the Internet to fill prescriptions and view laboratory results. Urban and suburban ED customers had similar access to the Internet. Both groups were willing to use the Internet to access personal health information. Copyright © 2012 Elsevier Inc. All rights reserved.

  13. 78 FR 61941 - Combined Notice of Filings #1

    Science.gov (United States)

    2013-10-07

    ... Amendment to MBR Tariff to be effective 9/25/2013. Filed Date: 9/24/13. Accession Number: 20130924-5080.... Description: Minco Wind II, LLC Amendment to MBR Tariff to be effective 9/25/2013. Filed Date: 9/24/13..., LLC. Description: MBR tariff cancellation to be effective 9/25/2013. Filed Date: 9/24/13. Accession...

  14. 76 FR 12725 - Combined Notice of Filings #2

    Science.gov (United States)

    2011-03-08

    ... MBR Tariff to be effective 3/2/2011. Filed Date: 03/01/2011 Accession Number: 20110301-5090 Comment... Substitute First Revised MBR Tariff to be effective 3/2/ 2011. Filed Date: 03/01/2011 Accession Number... 35.17(b): Revised Application for MBR and MBR Tariffs to be effective 4/1/2011. Filed Date: 02/28...

  15. Youth with cerebral palsy with differing upper limb abilities: how do they access computers?

    Science.gov (United States)

    Davies, T Claire; Chau, Tom; Fehlings, Darcy L; Ameratunga, Shanthi; Stott, N Susan

    2010-12-01

    To identify the current level of awareness of different computer access technologies and the choices made regarding mode of access by youth with cerebral palsy (CP) and their families. Survey. Two tertiary-level rehabilitation centers in New Zealand and Canada. Youth (N=60) with CP, Manual Ability Classification Scale (MACS) levels I to V, age 13 to 25 years. Not applicable. Questionnaire. Fifty (83%) of the 60 youth were aware of at least 1 available assistive technology (AT), such as touch screens and joysticks. However, only 34 youth (57%) were familiar with the accessibility options currently available in the most common operating systems. Thirty-three (94%) of 35 youth who were MACS I and II used a standard mouse and keyboard, while few chose to use assistive technology or accessibility options. In contrast, 10 (40%) of 25 youth who were MACS III to V used a variety of assistive technologies such as touch screens, joysticks, trackballs, and scanning technologies. This group also had the highest use of accessibility options, although only 15 (60%) of the 25 were aware of them. Most youth with CP were aware of, and used, assistive technologies to enhance their computer access but were less knowledgeable about accessibility options. Accessibility options allow users to modify their own computer interface and can thus enhance computer access for youth with CP. Clinicians should be knowledgeable enough to give informed advice in this area of computer access, thus ensuring that all youth with CP can benefit from both AT and accessibility options, as required. Copyright © 2010 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  16. Comparative Analysis of Canal Centering Ability of Different Single File Systems Using Cone Beam Computed Tomography- An In-Vitro Study.

    Science.gov (United States)

    Agarwal, Rolly S; Agarwal, Jatin; Jain, Pradeep; Chandra, Anil

    2015-05-01

    The ability of an endodontic instrument to remain centered in the root canal system is one of the most important characteristics influencing the clinical performance of a particular file system. Thus, it is important to assess the canal centering ability of newly introduced single file systems before they can be considered a viable replacement for full-sequence rotary file systems. The aim of the study was to compare the canal transportation, centering ability, and time taken for preparation of curved root canals after instrumentation with the single file systems One Shape and Wave One, using cone-beam computed tomography (CBCT). Sixty mesiobuccal canals of mandibular molars with an angle of curvature ranging from 20° to 35° were divided into three groups of 20 samples each: ProTaper PT (group I), the full-sequence rotary control group; OneShape OS (group II), single file continuous rotation; and WaveOne WO (group III), single file reciprocating motion. Pre-instrumentation and post-instrumentation three-dimensional CBCT images were obtained from root cross-sections at 3 mm, 6 mm and 9 mm from the apex. Scanned images were then assessed to determine canal transportation and centering ability. The data collected were evaluated using one-way analysis of variance (ANOVA) with Tukey's honestly significant difference test. It was observed that there were no differences in the magnitude of transportation between the rotary instruments (p >0.05) at both 3 mm and 6 mm from the apex. At 9 mm from the apex, Group I PT showed significantly higher mean canal transportation and lower centering ability (0.19±0.08 and 0.39±0.16) as compared to Group II OS (0.12±0.07 and 0.54±0.24) and Group III WO (0.13±0.06 and 0.55±0.18), while the differences between OS and WO were not statistically significant. It was concluded that there were minor differences between the tested groups. Single file systems demonstrated average canal transportation and centering ability comparable to full sequence

  17. Comparative Analysis of Canal Centering Ability of Different Single File Systems Using Cone Beam Computed Tomography- An In-Vitro Study

    Science.gov (United States)

    Agarwal, Jatin; Jain, Pradeep; Chandra, Anil

    2015-01-01

    Background The ability of an endodontic instrument to remain centered in the root canal system is one of the most important characteristics influencing the clinical performance of a particular file system. Thus, it is important to assess the canal centering ability of newly introduced single file systems before they can be considered a viable replacement for full-sequence rotary file systems. Aim The aim of the study was to compare the canal transportation, centering ability, and time taken for preparation of curved root canals after instrumentation with the single file systems One Shape and Wave One, using cone-beam computed tomography (CBCT). Materials and Methods Sixty mesiobuccal canals of mandibular molars with an angle of curvature ranging from 20° to 35° were divided into three groups of 20 samples each: ProTaper PT (group I), the full-sequence rotary control group; OneShape OS (group II), single file continuous rotation; and WaveOne WO (group III), single file reciprocating motion. Pre-instrumentation and post-instrumentation three-dimensional CBCT images were obtained from root cross-sections at 3 mm, 6 mm and 9 mm from the apex. Scanned images were then assessed to determine canal transportation and centering ability. The data collected were evaluated using one-way analysis of variance (ANOVA) with Tukey's honestly significant difference test. Results It was observed that there were no differences in the magnitude of transportation between the rotary instruments (p >0.05) at both 3 mm and 6 mm from the apex. At 9 mm from the apex, Group I PT showed significantly higher mean canal transportation and lower centering ability (0.19±0.08 and 0.39±0.16) as compared to Group II OS (0.12±0.07 and 0.54±0.24) and Group III WO (0.13±0.06 and 0.55±0.18), while the differences between OS and WO were not statistically significant. Conclusion It was concluded that there were minor differences between the tested groups. Single file systems demonstrated average canal

  18. Path Not Found: Disparities in Access to Computer Science Courses in California High Schools

    Science.gov (United States)

    Martin, Alexis; McAlear, Frieda; Scott, Allison

    2015-01-01

    "Path Not Found: Disparities in Access to Computer Science Courses in California High Schools" exposes one of the foundational causes of underrepresentation in computing: disparities in access to computer science courses in California's public high schools. This report provides new, detailed data on these disparities by student body…

  19. An analysis of file system and installation of the file management system for NOS operating system

    International Nuclear Information System (INIS)

    Lee, Young Jai; Park, Sun Hee; Hwang, In Ah; Kim, Hee Kyung

    1992-06-01

    In this technical report, we analyze the NOS file structure for the Cyber 170-875 and Cyber 960-31 computer systems. We also describe the functions, procedures, operation, and use of VDS. VDS is used to manage large files effectively on the Cyber computer system. The purpose of the VDS installation is to increase virtual disk storage by utilizing magnetic tape, to assist users of the computer system in managing their files, and to enhance the performance of the KAERI Cyber computer system. (Author)

  20. The global unified parallel file system (GUPFS) project: FY 2003 activities and results

    Energy Technology Data Exchange (ETDEWEB)

    Butler, Gregory F.; Baird William P.; Lee, Rei C.; Tull, Craig E.; Welcome, Michael L.; Whitney Cary L.

    2004-04-30

    The Global Unified Parallel File System (GUPFS) project is a multiple-phase project at the National Energy Research Scientific Computing (NERSC) Center whose goal is to provide a scalable, high-performance, high-bandwidth, shared file system for all of the NERSC production computing and support systems. The primary purpose of the GUPFS project is to make the scientific users more productive as they conduct advanced scientific research at NERSC by simplifying the scientists' data management tasks and maximizing storage and data availability. This is to be accomplished through the use of a shared file system providing a unified file namespace, operating on consolidated shared storage that is accessible by all the NERSC production computing and support systems. In order to successfully deploy a scalable high-performance shared file system with consolidated disk storage, three major emerging technologies must be brought together: (1) shared/cluster file systems software, (2) cost-effective, high-performance storage area network (SAN) fabrics, and (3) high-performance storage devices. Although they are evolving rapidly, these emerging technologies individually are not targeted towards the needs of scientific high-performance computing (HPC). The GUPFS project is in the process of assessing these emerging technologies to determine the best combination of solutions for a center-wide shared file system, to encourage the development of these technologies in directions needed for HPC, particularly at NERSC, and to then put them into service. With the development of an evaluation methodology and benchmark suites, and with the updating of the GUPFS testbed system, the project did a substantial number of investigations and evaluations during FY 2003. The investigations and evaluations involved many vendors and products. From our evaluation of these products, we have found that most vendors and many of the products are more focused on the commercial market. Most vendors

  1. 78 FR 14530 - Combined Notice of Filings #2

    Science.gov (United States)

    2013-03-06

    ...: California Independent System Operator Corporation submits tariff filing per 35.13(a)(2)(iii): 2013-02-27 Pay... per 35.17(b): 2013-02-28--OASIS Att J Errata to be effective 4/15/2013. Filed Date: 2/27/13. Accession... Due: 5 p.m. ET 3/20/13. The filings are accessible in the Commission's eLibrary system by clicking on...

  2. Lecture 7: Worldwide LHC Computing Grid Overview

    CERN Multimedia

    CERN. Geneva

    2013-01-01

    This presentation will introduce in an informal, but technically correct way the challenges that are linked to the needs of massively distributed computing architectures in the context of the LHC offline computing. The topics include technological and organizational aspects touching many aspects of LHC computing, from data access, to maintenance of large databases and huge collections of files, to the organization of computing farms and monitoring. Fabrizio Furano holds a Ph.D in Computer Science and has worked in the field of Computing for High Energy Physics for many years. Some of his preferred topics include application architectures, system design and project management, with focus on performance and scalability of data access. Fabrizio has experience in a wide variety of environments, from private companies to academic research in particular in object oriented methodologies, mainly using C++. He has also teaching experience at university level in Software Engineering and C++ Programming.

  3. Performance of the Galley Parallel File System

    Science.gov (United States)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    As the input/output (I/O) needs of parallel scientific applications increase, file systems for multiprocessors are being designed to provide applications with parallel access to multiple disks. Many parallel file systems present applications with a conventional Unix-like interface that allows the application to access multiple disks transparently. This interface conceals the parallelism within the file system, which increases the ease of programmability, but makes it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. Furthermore, most current parallel file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic parallel workloads. Initial experiments, reported in this paper, indicate that Galley is capable of providing high-performance I/O to applications that access data in patterns that have been observed to be common.

  4. PHREEQCI; a graphical user interface for the geochemical computer program PHREEQC

    Science.gov (United States)

    Charlton, Scott R.; Macklin, Clifford L.; Parkhurst, David L.

    1997-01-01

    PhreeqcI is a Windows-based graphical user interface for the geochemical computer program PHREEQC. PhreeqcI provides the capability to generate and edit input data files, run simulations, and view text files containing simulation results, all within the framework of a single interface. PHREEQC is a multipurpose geochemical program that can perform speciation, inverse, reaction-path, and 1D advective reaction-transport modeling. Interactive access to all of the capabilities of PHREEQC is available with PhreeqcI. The interface is written in Visual Basic and will run on personal computers under the Windows 3.1, Windows 95, and Windows NT operating systems.

  5. Dimensional quality control of Ti-Ni dental file by optical coordinate metrology and computed tomography

    DEFF Research Database (Denmark)

    Yagüe-Fabra, J.A.; Tosello, Guido; Ontiveros, S.

    2014-01-01

    Endodontic dental files usually present complex 3D geometries, which make the complete measurement of the component very challenging with conventional micro metrology tools. Computed Tomography (CT) can represent a suitable alternative to micro metrology tools based on optical and tactile techniques. However, the establishment of CT systems' traceability when measuring 3D complex geometries is still an open issue. In this work, to verify the quality of the CT dimensional measurements, the dental file has been measured both with a μCT system and an optical CMM (OCMM). The uncertainty

  6. 76 FR 61351 - Combined Notice of Filings #1

    Science.gov (United States)

    2011-10-04

    ... MBR Baseline Tariff Filing to be effective 9/22/2011. Filed Date: 09/22/2011. Accession Number... submits tariff filing per 35.1: ECNY MBR Re-File to be effective 9/22/2011. Filed Date: 09/22/2011... Industrial Energy Buyers, LLC submits tariff filing per 35.1: NYIEB MBR Re-File to be effective 9/22/2011...

  7. PCF File Format.

    Energy Technology Data Exchange (ETDEWEB)

    Thoreson, Gregory G [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-08-01

    PCF files are binary files designed to contain gamma spectra and neutron count rates from radiation sensors. It is the native format for the GAmma Detector Response and Analysis Software (GADRAS) package [1]. It can contain multiple spectra and information about each spectrum such as energy calibration. This document outlines the format of the file that would allow one to write a computer program to parse and write such files.
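    Parsing a binary spectrum file of this kind generally means unpacking fixed-size records, for instance with Python's struct module. The record layout below is purely hypothetical and for illustration only; the actual PCF layout is the one this document specifies, and a real parser must follow it field by field:

```python
# Illustrative binary-record parser. The hypothetical record here is: number
# of channels (uint32), live time and real time (float32), then the channel
# counts as float32 values. This is NOT the real PCF layout.
import io
import struct

def read_spectrum(stream):
    n_channels, live_time, real_time = struct.unpack("<Iff", stream.read(12))
    counts = struct.unpack(f"<{n_channels}f", stream.read(4 * n_channels))
    return {"live_time": live_time, "real_time": real_time, "counts": counts}

# Round-trip demo: pack a tiny 4-channel spectrum and parse it back.
buf = io.BytesIO(struct.pack("<Iff4f", 4, 10.0, 12.0, 1.0, 2.0, 3.0, 4.0))
spec = read_spectrum(buf)
```

The same pattern, one struct format string per record type, extends to files holding multiple spectra with per-spectrum energy calibration: loop read_spectrum until the stream is exhausted.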

  8. An Annotated and Cross-Referenced Bibliography on Computer Security and Access Control in Computer Systems.

    Science.gov (United States)

    Bergart, Jeffrey G.; And Others

    This paper represents a careful study of published works on computer security and access control in computer systems. The study includes a selective annotated bibliography of some eighty-five important published results in the field and, based on these papers, analyzes the state of the art. In annotating these works, the authors try to be…

  9. Distributed PACS using distributed file system with hierarchical meta data servers.

    Science.gov (United States)

    Hiroyasu, Tomoyuki; Minamitani, Yoshiyuki; Miki, Mitsunori; Yokouchi, Hisatake; Yoshimi, Masato

    2012-01-01

    In this research, we propose a new distributed PACS (Picture Archiving and Communication System) that can integrate the several PACSs that exist in individual medical institutions. A conventional PACS stores each DICOM file in a single database. In the proposed system, by contrast, a DICOM file is separated into metadata and image data, which are stored individually. With this mechanism, since operations need not access the entire file, tasks such as finding files and changing titles can be performed at high speed. At the same time, because a distributed file system is utilized, access to image files also achieves high speed and high fault tolerance. A further significant point of the proposed system is the simplicity of integrating several PACSs: only the metadata servers need to be integrated to construct the combined system. The system also scales file access with the number and size of files. On the other hand, because the metadata server is centralized, it is the weak point of this system. To address this defect, hierarchical metadata servers are introduced. This mechanism not only increases fault tolerance but also improves the scalability of file access. To evaluate the proposed system, a prototype was implemented using Gfarm, and file search times under Gfarm and NFS were compared.
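    The metadata/image split at the heart of the proposal can be sketched in a few lines: queries consult only the small metadata records and never touch the bulk pixel data. The names and the dfs:// reference scheme below are illustrative, not the paper's actual interfaces:

```python
# Toy model of a split PACS store: metadata lives on a (small, fast) metadata
# server; pixel data lives on a distributed file system, referenced by URI.
meta_store = {}   # metadata server: DICOM UID -> small attribute record
image_store = {}  # distributed file system: DICOM UID -> bulk pixel bytes

def store(uid, metadata, pixel_data):
    meta_store[uid] = dict(metadata, image_ref=f"dfs://images/{uid}")
    image_store[uid] = pixel_data

def find(**criteria):
    """Search by attributes; only metadata records are read, no image bytes."""
    return [uid for uid, m in meta_store.items()
            if all(m.get(k) == v for k, v in criteria.items())]

store("1.2.3", {"modality": "CT"}, b"<pixel bytes>")
store("1.2.4", {"modality": "MR"}, b"<pixel bytes>")
```

Integrating two such PACSs then amounts to merging (or federating) their meta_store contents; the image stores stay where they are and remain reachable through each record's image_ref.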

  10. Parallel Access of Out-Of-Core Dense Extendible Arrays

    Energy Technology Data Exchange (ETDEWEB)

    Otoo, Ekow J; Rotem, Doron

    2007-07-26

    Datasets used in scientific and engineering applications are often modeled as dense multi-dimensional arrays. For very large datasets, the corresponding array models are typically stored out-of-core as array files. The array elements are mapped onto linear consecutive locations that correspond to the linear ordering of the multi-dimensional indices. Two conventional mappings used are the row-major order and the column-major order of multi-dimensional arrays. Such conventional mappings of dense array files severely limit the performance of applications and the extendibility of the dataset. First, an array file organized in, say, row-major order gives abysmal performance to applications that subsequently access the data in column-major order. Second, any subsequent expansion of the array file is limited to only one dimension. Expansion of such out-of-core conventional arrays along arbitrary dimensions requires storage reorganization that can be very expensive. We present a solution for storing out-of-core dense extendible arrays that resolves both limitations. The method uses a mapping function F*(), together with information maintained in axial vectors, to compute the linear address of an extendible array element when passed its k-dimensional index. We also give the inverse function F*⁻¹() for deriving the k-dimensional index when given the linear address. We show how the mapping function, in combination with MPI-IO and a parallel file system, allows the extendible array to grow without reorganization and without significant performance degradation for applications accessing elements in any desired order. We give methods for reading and writing sub-arrays into and out of parallel applications that run on a cluster of workstations. The axial vectors are replicated and maintained in each node that accesses sub-array elements.
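    The index-to-address mapping the abstract describes can be illustrated with the familiar row-major case: a function maps a k-dimensional index to a linear address, and its inverse recovers the index. Note that this plain version deliberately lacks the paper's key property; the actual F*() also consults axial vectors so the array can grow along any dimension without reorganization:

```python
# Row-major linearization of a k-dimensional index, and its inverse.
def linearize(index, shape):
    """Map a k-dimensional index to its linear (row-major) address."""
    addr = 0
    for i, n in zip(index, shape):
        addr = addr * n + i       # Horner-style accumulation over dimensions
    return addr

def delinearize(addr, shape):
    """Recover the k-dimensional index from a linear (row-major) address."""
    index = []
    for n in reversed(shape):     # peel off the fastest-varying dimension first
        index.append(addr % n)
        addr //= n
    return tuple(reversed(index))
```

The limitation the paper targets is visible here: linearize depends on every extent in shape, so growing any dimension but the first changes the address of existing elements and would force the file to be rewritten, which is exactly what the axial-vector scheme avoids.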

  11. 10 CFR 2.302 - Filing of documents.

    Science.gov (United States)

    2010-01-01

    ... this part shall be electronically transmitted through the E-Filing system, unless the Commission or... all methods of filing have been completed. (e) For filings by electronic transmission, the filer must... digital ID certificates, the NRC permits participants in the proceeding to access the E-Filing system to...

  12. File Type Identification of File Fragments using Longest Common Subsequence (LCS)

    Science.gov (United States)

    Rahmat, R. F.; Nicholas, F.; Purnamawati, S.; Sitompul, O. S.

    2017-01-01

    A computer forensic analyst is the person in charge of investigation and evidence tracking. In certain cases, a file needed as digital evidence has been deleted. Such a file is difficult to reconstruct, because it has often lost its header and cannot be identified while being restored. Therefore, a method is required for identifying the file type of file fragments. In this research, we propose a Longest Common Subsequence (LCS) method consisting of three steps, namely training, testing and validation, to identify the file type of file fragments. From all testing results we conclude that our proposed method works well and achieves 92.91% accuracy in identifying the file type of file fragments.
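
    The core scoring idea — comparing a fragment against per-type reference sequences by LCS length — can be sketched as follows. The signature bytes and helper names here are illustrative; the paper's training step would construct real reference sequences from labeled fragments.

```python
# Hedged sketch: classify a file fragment by Longest Common Subsequence
# length against per-type reference byte sequences. Signatures below are
# illustrative stand-ins for sequences learned in a training phase.

def lcs_length(a, b):
    """Classic O(len(a)*len(b)) dynamic-programming LCS length."""
    prev = [0] * (len(b) + 1)
    for x in a:
        cur = [0]
        for j, y in enumerate(b, 1):
            cur.append(prev[j - 1] + 1 if x == y else max(prev[j], cur[j - 1]))
        prev = cur
    return prev[-1]

def classify(fragment, signatures):
    """Pick the type whose reference sequence shares the longest LCS."""
    return max(signatures, key=lambda t: lcs_length(fragment, signatures[t]))

signatures = {"png": b"\x89PNG\r\n\x1a\n", "pdf": b"%PDF-1."}
assert classify(b"xx%PDF-1.4yy", signatures) == "pdf"
```

    A subsequence (unlike a substring) tolerates interleaved bytes, which is what makes LCS usable on headerless mid-file fragments.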

  13. Comparison of canal transportation and centering ability of twisted files, Pathfile-ProTaper system, and stainless steel hand K-files by using computed tomography.

    Science.gov (United States)

    Gergi, Richard; Rjeily, Joe Abou; Sader, Joseph; Naaman, Alfred

    2010-05-01

    The purpose of this study was to compare canal transportation and centering ability of 2 rotary nickel-titanium (NiTi) systems (Twisted Files [TF] and Pathfile-ProTaper [PP]) with conventional stainless steel K-files. Ninety root canals with severe curvature and short radius were selected. Canals were divided randomly into 3 groups of 30 each. After preparation with TF, PP, and stainless steel files, the amount of transportation that occurred was assessed by using computed tomography. Three sections from apical, mid-root, and coronal levels of the canal were recorded. Amount of transportation and centering ability were assessed. The 3 groups were statistically compared with analysis of variance and Tukey honestly significant difference test. Less transportation and better centering ability occurred with TF rotary instruments (P < .0001). K-files showed the highest transportation followed by PP system. PP system showed significant transportation when compared with TF (P < .0001). The TF system was found to be the best for all variables measured in this study. Copyright (c) 2010 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.
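
    The abstract does not state the formulas used; a common way to quantify transportation and centering from pre- and post-instrumentation CT sections (after Gambill and Webber) can be sketched as below. This is illustrative and may differ from the study's exact measurement protocol; variable names are ours.

```python
# Hedged sketch of commonly used canal-transportation metrics (not
# necessarily this study's exact method). m1/d1: pre-instrumentation
# distances from the canal wall to the mesial/distal root surface;
# m2/d2: the same distances after instrumentation. Assumes non-negative
# movements (material removed on both sides).

def transportation(m1, m2, d1, d2):
    """Absolute difference between mesial and distal canal movement."""
    return abs((m1 - m2) - (d1 - d2))

def centering_ratio(m1, m2, d1, d2):
    """min/max of the two movements; 1.0 means a perfectly centred preparation."""
    dm, dd = m1 - m2, d1 - d2
    if dm == dd:
        return 1.0
    return min(dm, dd) / max(dm, dd)

assert transportation(1.0, 0.8, 1.2, 1.0) == 0.0      # equal movement: none
assert round(centering_ratio(1.0, 0.7, 1.2, 1.1), 3) == 0.333
```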

  14. 77 FR 74839 - Combined Notice of Filings

    Science.gov (United States)

    2012-12-18

    ..., LP. Description: National Grid LNG, LP submits tariff filing per 154.203: Adoption of NAESB Version 2... with Order to Amend NAESB Version 2.0 Filing to be effective 12/1/2012. Filed Date: 12/11/12. Accession...: Refile to comply with Order on NAESB Version 2.0 Filing to be effective 12/1/2012. Filed Date: 12/11/12...

  15. Distributed data access in the sequential access model at the D0 experiment at Fermilab

    International Nuclear Information System (INIS)

    Terekhov, Igor; White, Victoria

    2000-01-01

    The authors present the Sequential Access Model (SAM), which is the data handling system for D0, one of the two primary high energy physics experiments at Fermilab. During the next several years, the D0 experiment will store a total of about 1 PByte of data, including raw detector data and data processed at various levels. The design of SAM is not specific to the D0 experiment and carries few assumptions about the underlying mass storage level; its ideas are applicable to any sequential data access. By definition, in the sequential access mode a user application needs to process a stream of data, accessing each data unit exactly once, the order of data units in the stream being irrelevant. The units of data are laid out sequentially in files. The adopted model allows for significant optimizations of system performance, a decrease in user file latency, and an increase in overall throughput. In particular, caching is done with knowledge of all the files needed in the near future, defined as all the files of already running or submitted jobs. The bulk of the data is stored in files on tape in the mass storage system (MSS) called Enstore[2], also developed at Fermilab. (The tape drives are served by an ADIC AML/2 Automated Tape Library.) At any given time, SAM has a small fraction of the data cached on disk for processing. In the present paper, the authors discuss how data is delivered onto disk and how it is accessed by user applications. They concentrate on data retrieval (consumption) from the MSS; when SAM is used for storing data, the mechanisms are rather symmetrical. All of the data managed by SAM is cataloged in great detail in a relational database (ORACLE). The database also serves as the persistency mechanism for the SAM servers described in this paper. Any client or server in the SAM system which needs to store or retrieve information from the database does so through the interfaces of a CORBA-based database server. The users (physicists) use the
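
    The lookahead caching policy described above — eviction informed by the full set of files needed by running and submitted jobs — can be illustrated minimally. The function and file names below are ours, not SAM's API.

```python
# Minimal sketch of lookahead-informed eviction: because the system knows
# every file that pending jobs will read, it can evict a cached file that
# no pending job needs. Names are illustrative, not SAM's.

def choose_victim(cached, needed_soon):
    """Return a cached file no pending job will read, or None if all are needed."""
    for f in cached:
        if f not in needed_soon:
            return f
    return None

cached = ["run1.raw", "run2.raw", "run3.raw"]
needed = {"run2.raw", "run3.raw", "run9.raw"}
assert choose_victim(cached, needed) == "run1.raw"
assert choose_victim(["run2.raw"], needed) is None
```

    Knowing the future access set is what distinguishes this from LRU-style heuristics: the cache never discards a file that a submitted job is about to consume.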

  16. 75 FR 66075 - Combined Notice of Filings #1

    Science.gov (United States)

    2010-10-27

    ....12: Baseline MBR Concurrence to be effective 10/8/2010. Filed Date: 10/19/2010. Accession Number... Company submits tariff filing per 35.12: Baseline MBR Concurrence to be effective 10/8/2010. Filed Date... Power Company submits tariff filing per 35.12: Baseline MBR Concurrence to be effective 10/8/2010. Filed...

  17. Computer Forensics Method in Analysis of Files Timestamps in Microsoft Windows Operating System and NTFS File System

    Directory of Open Access Journals (Sweden)

    Vesta Sergeevna Matveeva

    2013-02-01

    All existing file browsers display three timestamps for every file in the NTFS file system. Nowadays there are many utilities that can manipulate these temporal attributes to conceal the traces of file usage. However, every file in NTFS has eight timestamps, stored in the file record, that can be used to detect the substitution of attributes. The authors suggest a method for revealing the original timestamps after replacement, and an automated variant of it for sets of files.
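
    The comparison this method builds on can be sketched as follows. NTFS keeps four timestamps in the $STANDARD_INFORMATION attribute (what file browsers display) and four more in $FILE_NAME, and common timestomping utilities rewrite only the former, so an $SI value predating its $FN counterpart is a classic tampering signal. The field names and helper below are illustrative, not the paper's implementation.

```python
# Hedged sketch: flag NTFS timestamp fields whose $STANDARD_INFORMATION
# ($SI) value predates the matching $FILE_NAME ($FN) value -- a common
# sign that a timestomping tool rewrote $SI. Field names are illustrative.

from datetime import datetime

def suspicious_fields(si, fn):
    """Return the timestamp fields whose $SI value predates $FN."""
    return [k for k in si if k in fn and si[k] < fn[k]]

si = {"created": datetime(2001, 1, 1), "modified": datetime(2013, 2, 1)}
fn = {"created": datetime(2012, 5, 4), "modified": datetime(2012, 5, 4)}
assert suspicious_fields(si, fn) == ["created"]
```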

  18. 75 FR 62381 - Combined Notice of Filings #2

    Science.gov (United States)

    2010-10-08

    ... filing per 35.12: MeadWestvaco Virginia MBR Filing to be effective 9/ 28/2010. Filed Date: 09/29/2010... submits tariff filing per 35.12: City Power MBR Tariff to be effective 9/30/2010. Filed Date: 09/29/2010... Baseline MBR Tariff to be effective 9[sol]29[sol]2010. Filed Date: 09/29/2010. Accession Number: 20100929...

  19. Building Parts Inventory Files Using the AppleWorks Data Base Subprogram and Apple IIe or GS Computers.

    Science.gov (United States)

    Schlenker, Richard M.

    This manual is a "how to" training device for building database files using the AppleWorks program with an Apple IIe or Apple IIGS Computer with Duodisk or two disk drives and an 80-column card. The manual provides step-by-step directions, and includes 25 figures depicting the computer screen at the various stages of the database file…

  20. 76 FR 1416 - Combined Notice of Filings #1

    Science.gov (United States)

    2011-01-10

    ... Energy Services Subsidiary No. 5 LLC, Emera Energy Services, Inc., Bangor Hydro Electric Company, Emera... Hydro Electric Company, et al. Filed Date: 12/29/2010. Accession Number: 20101229-5135. Comment Date: 5... electricity markets within the footprint of PJM Interconnection, LLC etc. Filed Date: 12/29/2010. Accession...

  1. 75 FR 1761 - Combined Notice of Filings #1

    Science.gov (United States)

    2010-01-13

    .... Applicants: Cloud County Wind Farm, LLC, Pioneer Prairie Wind Farm I, LLC, Arlington Wind Power Project LLC... of the Localized Costs Sharing Agreement. Filed Date: 12/22/2009. Accession Number: 20091224-0002... Rate Schedule 1 effective 11/25/09. Filed Date: 12/22/2009. Accession Number: 20091224-0077. Comment...

  2. 76 FR 21722 - Combined Notice of Filings #1

    Science.gov (United States)

    2011-04-18

    ... Notice of Cancellation of Reactive Power Rate Schedule. Filed Date: 04/07/2011. Accession Number...)(2)(iii): Revisions to the Tariff and OA re Emerg. Load Response Program ``Reporting'' to be... Reactive Rate Schedule Notice of Succession to be effective 3/9/2011. Filed Date: 04/07/2011. Accession...

  3. A Secure and Verifiable Outsourced Access Control Scheme in Fog-Cloud Computing.

    Science.gov (United States)

    Fan, Kai; Wang, Junxiong; Wang, Xin; Li, Hui; Yang, Yintang

    2017-07-24

    With the rapid development of big data and the Internet of Things (IoT), the number of networking devices and the data volume are increasing dramatically. Fog computing, which extends cloud computing to the edge of the network, can effectively solve the bottleneck problems of data transmission and data storage. However, security and privacy challenges also arise in the fog-cloud computing environment. Ciphertext-policy attribute-based encryption (CP-ABE) can be adopted to realize data access control in fog-cloud computing systems. In this paper, we propose a verifiable outsourced multi-authority access control scheme, named VO-MAACS. In our construction, most encryption and decryption computations are outsourced to fog devices, and the computation results can be verified by using our verification method. Meanwhile, to address the revocation issue, we design an efficient user and attribute revocation method. Finally, analysis and simulation results show that our scheme is both secure and highly efficient.
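
    VO-MAACS's verification relies on specific attribute-based cryptographic machinery not reproduced here. As a loose illustration of the generic "verify the outsourced result" shape only — entirely our sketch, not the paper's construction — a hash-commitment check looks like this:

```python
# Generic sketch (NOT VO-MAACS's actual scheme): the data owner publishes a
# commitment to the plaintext; after a fog node performs the heavy partial
# decryption, the end user accepts the recovered plaintext only if it
# matches the commitment, detecting a lazy or malicious fog node.

import hashlib

def commit(message: bytes) -> bytes:
    """Commitment to the message (here simply its SHA-256 digest)."""
    return hashlib.sha256(message).digest()

def verify(message: bytes, commitment: bytes) -> bool:
    """Accept the recovered plaintext only if it matches the commitment."""
    return hashlib.sha256(message).digest() == commitment

c = commit(b"record 42")
assert verify(b"record 42", c)
assert not verify(b"tampered record", c)
```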

  4. A Secure and Verifiable Outsourced Access Control Scheme in Fog-Cloud Computing

    Science.gov (United States)

    Fan, Kai; Wang, Junxiong; Wang, Xin; Li, Hui; Yang, Yintang

    2017-01-01

    With the rapid development of big data and the Internet of Things (IoT), the number of networking devices and the data volume are increasing dramatically. Fog computing, which extends cloud computing to the edge of the network, can effectively solve the bottleneck problems of data transmission and data storage. However, security and privacy challenges also arise in the fog-cloud computing environment. Ciphertext-policy attribute-based encryption (CP-ABE) can be adopted to realize data access control in fog-cloud computing systems. In this paper, we propose a verifiable outsourced multi-authority access control scheme, named VO-MAACS. In our construction, most encryption and decryption computations are outsourced to fog devices, and the computation results can be verified by using our verification method. Meanwhile, to address the revocation issue, we design an efficient user and attribute revocation method. Finally, analysis and simulation results show that our scheme is both secure and highly efficient. PMID:28737733

  5. Accomplish the Application Area in Cloud Computing

    OpenAIRE

    Bansal, Nidhi; Awasthi, Amit

    2012-01-01

    In surveying the application areas that cloud computing covers, we find that this breadth is its main asset. At a top level, it is an approach to IT in which many users, some even from different companies, get access to shared IT resources such as servers, routers and various file extensions, instead of each having their own dedicated servers. This offers many advantages like lower costs and higher efficiency. Unfortunately there have been some high profile incidents whe...

  6. 78 FR 32383 - Combined Notice of Filings #1

    Science.gov (United States)

    2013-05-30

    .... Description: Inquiry Response to be effective 5/12/2013. Filed Date: 5/21/13. Accession Number: 20130521-5167....ferc.gov/docs-filing/efiling/filing-req.pdf . For other information, call (866) 208-3676 (toll free...

  7. Prescription for trouble: Medicare Part D and patterns of computer and internet access among the elderly.

    Science.gov (United States)

    Wright, David W; Hill, Twyla J

    2009-01-01

    The Medicare Prescription Drug, Improvement, and Modernization Act of 2003 specifically encourages Medicare enrollees to use the Internet to obtain information regarding the new prescription drug insurance plans and to enroll in a plan. This reliance on computer technology and the Internet leads to practical questions regarding implementation of the insurance coverage. For example, it seems unlikely that all Medicare enrollees have access to computers and the Internet or that they are all computer literate. This study uses the 2003 Current Population Survey to examine the effects of disability and income on computer access and Internet use among the elderly. Internet access declines with age and is exacerbated by disabilities. Also, decreases in income lead to decreases in computer ownership and use. Therefore, providing prescription drug coverage primarily through the Internet seems likely to maintain or increase stratification of access to health care, especially for low-income, disabled elderly, who are also a group most in need of health care access.

  8. 76 FR 58257 - Combined Notice of Filings #2

    Science.gov (United States)

    2011-09-20

    ... Hills Wind Farm, LLC MBR Tariff to be effective 10/31/2007. Filed Date: 09/12/2011. Accession Number... filing per 35.1: Smoky Hills Wind Project II, LLC MBR Tariff to be effective 10/20/2008. Filed Date: 09..., LLC submits tariff filing per 35.1: Enel Stillwater, LLC MBR Tariff to be effective 12/5/2008. Filed...

  9. 77 FR 28592 - Combined Notice of Filings #1

    Science.gov (United States)

    2012-05-15

    ...: Middletown MBR Application to be effective 5/8/2012. Filed Date: 5/7/12. Accession Number: 20120507-5128..., LLC. Description: Southern Energy Initial MBR Filing to be effective 5/ 7/2012. Filed Date: 5/8/12... Company submits tariff filing per 35.37: MBR Triennial Filing--1st Rev MBR to be effective 9/30/2010...

  10. 76 FR 59676 - Combined Notice of Filings #1

    Science.gov (United States)

    2011-09-27

    ... MBR Tariff to be effective 10/1/2011. Filed Date: 09/16/2011. Accession Number: 20110916-5146. Comment... 35.1: ONEOK Energy Services Company Baseline MBR Filing to be effective 9/16/2011. Filed Date: 09/16... Services Order No. 697 Compliance Filing of MBR Tariff to be effective 9/16/2011. Filed Date: 09/16/2011...

  11. No Special Equipment Required: The Accessibility Features Built into the Windows and Macintosh Operating Systems make Computers Accessible for Students with Special Needs

    Science.gov (United States)

    Kimball, Walter H.; Cohen, Libby G.; Dimmick, Deb; Mills, Rick

    2003-01-01

    The proliferation of computers and other electronic learning devices has made knowledge and communication accessible to people with a wide range of abilities. Both Windows and Macintosh computers have accessibility options to help with many different special needs. This document discusses solutions for: (1) visual impairments; (2) hearing…

  12. High-Performance Secure Database Access Technologies for HEP Grids

    Energy Technology Data Exchange (ETDEWEB)

    Matthew Vranicar; John Weicher

    2006-04-17

    The Large Hadron Collider (LHC) at the CERN Laboratory will become the largest scientific instrument in the world when it starts operations in 2007. Large Scale Analysis Computer Systems (computational grids) are required to extract rare signals of new physics from petabytes of LHC detector data. In addition to file-based event data, LHC data processing applications require access to large amounts of data in relational databases: detector conditions, calibrations, etc. U.S. high energy physicists demand efficient performance of grid computing applications in LHC physics research where world-wide remote participation is vital to their success. To empower physicists with data-intensive analysis capabilities, a whole hyperinfrastructure of distributed databases cross-cuts a multi-tier hierarchy of computational grids. The crosscutting allows separation of concerns across both the global environment of a federation of computational grids and the local environment of a physicist's computer used for analysis. Very few efforts are ongoing in the area of database and grid integration research. Most of these are outside of the U.S. and rely on traditional approaches to secure database access via an extraneous security layer separate from the database system core, preventing efficient data transfers. Our findings are shared by the Database Access and Integration Services Working Group of the Global Grid Forum, which states that "Research and development activities relating to the Grid have generally focused on applications where data is stored in files. However, in many scientific and commercial domains, database management systems have a central role in data storage, access, organization, authorization, etc, for numerous applications." There is a clear opportunity for a technological breakthrough, requiring innovative steps to provide high-performance secure database access technologies for grid computing. We believe that an innovative database architecture where the

  13. High-Performance Secure Database Access Technologies for HEP Grids

    International Nuclear Information System (INIS)

    Vranicar, Matthew; Weicher, John

    2006-01-01

    The Large Hadron Collider (LHC) at the CERN Laboratory will become the largest scientific instrument in the world when it starts operations in 2007. Large Scale Analysis Computer Systems (computational grids) are required to extract rare signals of new physics from petabytes of LHC detector data. In addition to file-based event data, LHC data processing applications require access to large amounts of data in relational databases: detector conditions, calibrations, etc. U.S. high energy physicists demand efficient performance of grid computing applications in LHC physics research where world-wide remote participation is vital to their success. To empower physicists with data-intensive analysis capabilities, a whole hyperinfrastructure of distributed databases cross-cuts a multi-tier hierarchy of computational grids. The crosscutting allows separation of concerns across both the global environment of a federation of computational grids and the local environment of a physicist's computer used for analysis. Very few efforts are ongoing in the area of database and grid integration research. Most of these are outside of the U.S. and rely on traditional approaches to secure database access via an extraneous security layer separate from the database system core, preventing efficient data transfers. Our findings are shared by the Database Access and Integration Services Working Group of the Global Grid Forum, which states that 'Research and development activities relating to the Grid have generally focused on applications where data is stored in files. However, in many scientific and commercial domains, database management systems have a central role in data storage, access, organization, authorization, etc, for numerous applications'. There is a clear opportunity for a technological breakthrough, requiring innovative steps to provide high-performance secure database access technologies for grid computing.
We believe that an innovative database architecture where the secure

  14. Secure Data Access Control for Fog Computing Based on Multi-Authority Attribute-Based Signcryption with Computation Outsourcing and Attribute Revocation.

    Science.gov (United States)

    Xu, Qian; Tan, Chengxiang; Fan, Zhijie; Zhu, Wenye; Xiao, Ya; Cheng, Fujia

    2018-05-17

    Nowadays, fog computing provides computation, storage, and application services to end users in the Internet of Things. One of the major concerns in fog computing systems is how fine-grained access control can be imposed. As a logical combination of attribute-based encryption and attribute-based signature, Attribute-based Signcryption (ABSC) can provide confidentiality and anonymous authentication for sensitive data and is more efficient than traditional "encrypt-then-sign" or "sign-then-encrypt" strategy. Thus, ABSC is suitable for fine-grained access control in a semi-trusted cloud environment and is gaining more and more attention recently. However, in many existing ABSC systems, the computation cost required for the end users in signcryption and designcryption is linear with the complexity of signing and encryption access policy. Moreover, only a single authority that is responsible for attribute management and key generation exists in the previous proposed ABSC schemes, whereas in reality, mostly, different authorities monitor different attributes of the user. In this paper, we propose OMDAC-ABSC, a novel data access control scheme based on Ciphertext-Policy ABSC, to provide data confidentiality, fine-grained control, and anonymous authentication in a multi-authority fog computing system. The signcryption and designcryption overhead for the user is significantly reduced by outsourcing the undesirable computation operations to fog nodes. The proposed scheme is proven to be secure in the standard model and can provide attribute revocation and public verifiability. The security analysis, asymptotic complexity comparison, and implementation results indicate that our construction can balance the security goals with practical efficiency in computation.

  15. 77 FR 3758 - Combined Notice of Filings #1

    Science.gov (United States)

    2012-01-25

    ...: Attachment H Schedule 7 Compliance Filing to be effective 6/1/2011. Filed Date: 1/13/12. Accession Number... filings: Docket Numbers: QF12-159-000. Applicants: City of Kinston, NC. Description: FERC Form 556 of City...

  16. Computer network access to scientific information systems for minority universities

    Science.gov (United States)

    Thomas, Valerie L.; Wakim, Nagi T.

    1993-08-01

    The evolution of computer networking technology has led to the establishment of a massive networking infrastructure which interconnects various types of computing resources at many government, academic, and corporate institutions. A large segment of this infrastructure has been developed to facilitate information exchange and resource sharing within the scientific community. The National Aeronautics and Space Administration (NASA) supports both the development and the application of computer networks which provide its community with access to many valuable multi-disciplinary scientific information systems and on-line databases. Recognizing the need to extend the benefits of this advanced networking technology to the under-represented community, the National Space Science Data Center (NSSDC) in the Space Data and Computing Division at the Goddard Space Flight Center has developed the Minority University-Space Interdisciplinary Network (MU-SPIN) Program: a major networking and education initiative for Historically Black Colleges and Universities (HBCUs) and Minority Universities (MUs). In this paper, we will briefly explain the various components of the MU-SPIN Program while highlighting how, by providing access to scientific information systems and on-line data, it promotes a higher level of collaboration among faculty and students and NASA scientists.

  17. 77 FR 27221 - Combined Notice of Filings

    Science.gov (United States)

    2012-05-09

    ... Generator Status of Minonk Wind, LLC. Filed Date: 4/19/12. Accession Number: 20120419-5196. Comments Due: 5... Self-Certification of Exempt Wholesale Generator Status of Senate Wind, LLC. Filed Date: 4/19/12... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission Combined Notice of Filings Take notice...

  18. 78 FR 49498 - Combined Notice of Filings #2

    Science.gov (United States)

    2013-08-14

    ...-000. Applicants: Buffalo Dunes Wind Project, LLC. Description: Self-Certification of EG of Buffalo Dunes Wind Project, LLC. Filed Date: 8/6/13. Accession Number: 20130806-5148. Comments Due: 5 p.m. ET 8...: Tie Line Name Changes to be effective 10/6/2013. Filed Date: 8/6/13. Accession Number: 20130806-5086...

  19. Experiment in data tagging in information-accessing services containing energy-related data. Final report 1975--78

    International Nuclear Information System (INIS)

    1978-08-01

    This report describes the results of an experiment conducted by Chemical Abstracts Service (CAS) on the use of 'data tags' in a machine-readable output file for incorporation into an on-line search service. 'Data tags' are codes which uniquely identify specific types of numerical data in the corresponding source documents referenced in the file. Editorial and processing procedures were established for the identification of data types; the recording, editing, verification, and correction of the data tags; and their compilation into a special version of ENERGY, a CAS computer-readable abstract text file. Possible data tagging plans are described, and criteria for extended studies in data tagging and accessing are outlined.

  20. 78 FR 49503 - Combined Notice of Filings #1

    Science.gov (United States)

    2013-08-14

    ... Notice of Succession and MBR Tariff Revisions to be effective 8/3/2013. Filed Date: 8/2/13. Accession...: ER13-2106-000. Applicants: NedPower Mount Storm, LLC. Description: New Baseline--NedPower MBR Tariff to... MBR Wholesale Power Sale Tariff to be effective 8/5/2013. Filed Date: 8/2/13. Accession Number...

  1. 76 FR 1418 - Combined Notice of Filings #1

    Science.gov (United States)

    2011-01-10

    ... Northeast MBR Sellers submit their Triennial Market Power Analysis. Filed Date: 12/30/2010. Accession Number... MBR Tariff--Seller Category Changes to be effective 3/4/2011. Filed Date: 01/03/2011. Accession Number... 35.13(a)(2)(iii: CalPeak El Cajon--Amendment to MBR Tariff--Seller Category Changes to be effective 3...

  2. 77 FR 43820 - Combined Notice of Filings #1

    Science.gov (United States)

    2012-07-26

    .... Docket Numbers: ER12-1946-001. Applicants: Duke Energy Beckjord, LLC. Description: Amendment to MBR...: Amendment to MBR Tariff Filing to be effective 10/1/ 2012. Filed Date: 7/18/12. Accession Number: 20120718... Creek, LLC. Description: Amendment to MBR Tariff Filing to be effective 10/1/ 2012. Filed Date: 7/18/12...

  3. MCBS Access to Care PUF

    Data.gov (United States)

    U.S. Department of Health & Human Services — The MCBS 2013 Access to Care public use file (MCBS PUF) provides the first publicly available MCBS file for researchers interested in the health, health care use,...

  4. 75 FR 62371 - Combined Notice of Filings #2

    Science.gov (United States)

    2010-10-08

    .... Description: Weyerhaeuser NR Company submits tariff filing per 35.12: Weyerhaeuser NR Company MBR Tariff to be..., Inc. MBR eTariff Filing to be effective 9/30/2010. Filed Date: 09/30/2010. Accession Number: 20100930... 35.12: Chandler Wind Partners, LLC MBR Tariff to be effective 9/30/ 2010. Filed Date: 09/30/2010...

  5. The SPAN cookbook: A practical guide to accessing SPAN

    Science.gov (United States)

    Mason, Stephanie; Tencati, Ronald D.; Stern, David M.; Capps, Kimberly D.; Dorman, Gary; Peters, David J.

    1990-01-01

    This is a manual for remote users who wish to send electronic mail messages from the Space Physics Analysis Network (SPAN) to scientific colleagues on other computer networks and vice versa. In several instances more than one gateway has been included for the same network. Users are provided with an introduction to each network listed with helpful details about accessing the system and mail syntax examples. Also included is information on file transfers, remote logins, and help telephone numbers.

  6. Computer network for electric power control systems. Chubu denryoku (kabu) denryoku keito seigyoyo computer network

    Energy Technology Data Exchange (ETDEWEB)

    Tsuneizumi, T. (Chubu Electric Power Co. Inc., Nagoya (Japan)); Shimomura, S.; Miyamura, N. (Fuji Electric Co. Ltd., Tokyo (Japan))

    1992-06-03

    A computer network for electric power control systems was developed that applies the Open Systems Interconnection (OSI) model, an international standard for communications protocols. In structuring the OSI network, the session layer is accessed directly from the operation functions when high-speed, small-capacity information is transmitted. File transfer, access and control, which provides a function for collectively transferring large-capacity data, is applied when low-speed, large-capacity information is transmitted. A verification test of the real-time computer network (RCN) implementation specification was conducted on a verification model using a minicomputer, and the results satisfied practical performance requirements. For the application interface, kernel, health-check, and two-route transmission functions were provided for connection control, as were a transmission verification function and a late-arrival discarding function. For the system implementation, a dualized communication server (CS) structure was adopted. The hardware structure may either contain the CS function within a host computer or install it as a separate system. 5 figs., 6 tabs.

  7. 76 FR 16404 - Combined Notice of Filings #1

    Science.gov (United States)

    2011-03-23

    ...: Amendment to MBR Tariff to be effective 5/13/2011. Filed Date: 03/14/2011. Accession Number: 20110314-5272... tariff filing per 35: Amendment to MBR Tariff to be effective 5/13/ 2011. Filed Date: 03/14/2011.... Subsidiary No. 2, Inc. submits tariff filing per 35: Amendment to MBR Tariff to be effective 5/13/ 2011...

  8. 77 FR 33206 - Combined Notice of Filings #2

    Science.gov (United States)

    2012-06-05

    ... tariff filing per 35: High Trail Wind Farm First Revised MBR to be effective 5/26/2012. Filed Date: 5/25... per 35: Old Trail Wind Farm First Revised MBR to be effective 5/26/2012. Filed Date: 5/25/12... First Revised MBR to be effective 6/1/2012. Filed Date: 5/25/12. Accession Number: 20120525-5088...

  9. Modification to the Monte Carlo N-Particle (MCNP) Visual Editor (MCNPVised) to read in Computer Aided Design (CAD) files

    International Nuclear Information System (INIS)

    Schwarz, Randy A.; Carter, Leeland L.

    2004-01-01

    Monte Carlo N-Particle Transport Code (MCNP) (Reference 1) is the code of choice for doing complex neutron/photon/electron transport calculations for the nuclear industry and research institutions. The Visual Editor for Monte Carlo N-Particle (References 2 to 11) is recognized internationally as the best code for visually creating and graphically displaying input files for MCNP. The work performed in this grant enhanced the capabilities of the MCNP Visual Editor to allow it to read in a 2D Computer Aided Design (CAD) file, allowing the user to modify and view the 2D CAD file and then electronically generate a valid MCNP input geometry with a user-specified axial extent.

  10. Shaping ability of the conventional nickel-titanium and reciprocating nickel-titanium file systems: a comparative study using micro-computed tomography.

    Science.gov (United States)

    Hwang, Young-Hye; Bae, Kwang-Shik; Baek, Seung-Ho; Kum, Kee-Yeon; Lee, WooCheol; Shon, Won-Jun; Chang, Seok Woo

    2014-08-01

    This study used micro-computed tomographic imaging to compare the shaping ability of Mtwo (VDW, Munich, Germany), a conventional nickel-titanium file system, and Reciproc (VDW), a reciprocating file system morphologically similar to Mtwo. Root canal shaping was performed on the mesiobuccal and distobuccal canals of extracted maxillary molars. In the RR group (n = 15), Reciproc was used in a reciprocating motion (150° counterclockwise/30° clockwise, 300 rpm); in the MR group, Mtwo was used in a reciprocating motion (150° clockwise/30° counterclockwise, 300 rpm); and in the MC group, Mtwo was used in a continuous rotating motion (300 rpm). Micro-computed tomographic images taken before and after canal shaping were used to analyze canal volume change and the degree of transportation at the cervical, middle, and apical levels. The time required for canal shaping was recorded. Afterward, each file was analyzed using scanning electron microscopy. No statistically significant differences were found among the 3 groups in the time for canal shaping or canal volume change (P > .05). Transportation values of the RR and MR groups were not significantly different at any level. However, the transportation value of the MC group was significantly higher than both the RR and MR groups at the cervical and apical levels (P < .05). File deformation was observed for 1 file in group RR (1/15), 3 files in group MR (3/15), and 5 files in group MC (5/15). In terms of shaping ability, Mtwo used in a reciprocating motion was not significantly different from the Reciproc system. Copyright © 2014 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.

  11. “Future Directions”: m-government computer systems accessed via cloud computing – advantages and possible implementations

    OpenAIRE

    Daniela LIŢAN

    2015-01-01

    In recent years, the activities of companies and Public Administration have been automated and adapted to current information systems. Therefore, in this paper, I present and exemplify the benefits of developing and implementing m-government computer systems (which can be accessed from mobile devices and which are specific to the workflow of Public Administrations), starting from the “experience” of e-government systems implementation in the context of their access and usage through ...

  12. Comparison of apical centring ability between incisal-shifted access and traditional lingual access for maxillary anterior teeth.

    Science.gov (United States)

    Yahata, Yoshio; Masuda, Yoshiko; Komabayashi, Takashi

    2017-12-01

    The aim of this study was to compare the apical centring ability of incisal-shifted access (ISA) with that of traditional lingual access (TLA). Fifteen three-dimensional printed resin models were prepared from the computed tomography data for a human maxillary central incisor and divided into ISA (n = 7), TLA (n = 7) and control (n = 1) groups. After access preparation, these models were shaped to the working length using K-files up to #40, followed by step-back procedures. An apical portion of the model was removed at 0.5 mm coronal to the working length. Microscopic images of each cutting surface were taken to measure the preparation area and the distance of transportation. TLA created a larger preparation area than ISA (P < 0.05). The distance of transportation (mean ± standard deviation) was 0.4 ± 0.1 mm for ISA and 0.7 ± 0.1 mm for TLA (P < 0.05). Access cavity preparation has a significant effect on apical centring ability. ISA is beneficial to maintaining apical configuration. © 2017 Australian Society of Endodontology Inc.

  13. 77 FR 37397 - Combined Notice of Filings #1

    Science.gov (United States)

    2012-06-21

    ..., Emera Energy U.S. Subsidiary No. 2, Inc., Bangor Hydro Electric Company. Description: Change in Status Filing of Bangor Hydro Electric Company, et al. Filed Date: 6/13/12. Accession Number: 20120613-5023... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission Combined Notice of Filings 1 Take notice...

  14. A secure file manager for UNIX

    Energy Technology Data Exchange (ETDEWEB)

    DeVries, R.G.

    1990-12-31

    The development of a secure file management system for a UNIX-based computer facility with supercomputers and workstations is described. Specifically, UNIX in its usual form does not address: (1) Operation which would satisfy rigorous security requirements. (2) Online space management in an environment where total data demands would be many times the actual online capacity. (3) Making the file management system part of a computer network in which users of any computer in the local network could retrieve data generated on any other computer in the network. The characteristics of UNIX can be exploited to develop a portable, secure file manager which would operate on computer systems ranging from workstations to supercomputers. Implementation considerations making unusual use of UNIX features, rather than requiring extensive internal system changes, are described, and implementation using the Cray Research Inc. UNICOS operating system is outlined.

  15. Cross-Cultural Study into the use of Text to Speech with Electronic Files to aid access to Textbooks

    OpenAIRE

    Draffan, E.A.; Wald, Mike; Iwabuchi, Mamoru; Takahashi, Maiko; Nakamura, Kenryu

    2011-01-01

    Objective. At present, pupils, teachers and parents struggle with the lack of textbooks and supporting materials in accessible formats that can be used by pupils with visual or print impairment including specific reading difficulties such as dyslexia. Independent projects in Japan and the UK were conceived to assess whether the provision of textbooks and teaching materials as electronic files, along with technologies to convert and ‘read’ them could provide a new and sustainable model and enh...

  16. 76 FR 80921 - Combined Notice of Filings

    Science.gov (United States)

    2011-12-27

    .... Comments Due: 5 p.m. ET 12/27/11. Docket Numbers: RP12-243-000. Applicants: Young Gas Storage Company, Ltd... considered, but intervention is necessary to become a party to the proceeding. The filings are accessible in... can be found at: http://www.ferc.gov/docs-filing/efiling/filing-req.pdf . For other information, call...

  17. 77 FR 37392 - Combined Notice of Filings #1

    Science.gov (United States)

    2012-06-21

    ... Requirements Depreciation Accrual Rates Filing to be effective 1/1/2012. Filed Date: 6/8/12. Accession Number...-000. Applicants: Southwestern Electric Power Company. Description: Accounting updates re CWIP...

  18. Preserving access to ALEPH computing environment via virtual machines

    International Nuclear Information System (INIS)

    Coscetti, Simone; Boccali, Tommaso; Arezzini, Silvia; Maggi, Marcello

    2014-01-01

    The ALEPH Collaboration [1] took data at the LEP (CERN) electron-positron collider in the period 1989-2000, producing more than 300 scientific papers. While most of the Collaboration's activities stopped in recent years, the data collected still have physics potential: new theoretical models are emerging that call for checks against data at the Z and WW production energies. An attempt to revive and preserve the ALEPH Computing Environment is presented; the aim is not only the preservation of the data files (usually called bit preservation), but of the full environment a physicist would need to perform brand new analyses. Technically, a Virtual Machine approach has been chosen, using the VirtualBox platform. Concerning simulated events, the full chain from event generators to physics plots is possible, and reprocessing of data events is also functioning. Interactive tools like the DALI event display can be used on both data and simulated events. The Virtual Machine approach is suited both for interactive usage and for massive computing using Cloud-like approaches.

  19. Parallel file system with metadata distributed across partitioned key-value store c

    Science.gov (United States)

    Bent, John M.; Faibish, Sorin; Grider, Gary; Torres, Aaron

    2017-09-19

    Improved techniques are provided for storing metadata associated with a plurality of sub-files associated with a single shared file in a parallel file system. The shared file is generated by a plurality of applications executing on a plurality of compute nodes. A compute node implements a Parallel Log Structured File System (PLFS) library to store at least one portion of the shared file generated by an application executing on the compute node and metadata for the at least one portion of the shared file on one or more object storage servers. The compute node is also configured to implement a partitioned data store for storing a partition of the metadata for the shared file, wherein the partitioned data store communicates with partitioned data stores on other compute nodes using a message passing interface. The partitioned data store can be implemented, for example, using Multidimensional Data Hashing Indexing Middleware (MDHIM).
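
    The partitioned metadata idea can be sketched without MPI: hash each sub-file's metadata key to decide which node's partition owns it, so any compute node can locate metadata with a single routed lookup instead of a broadcast. All names below (PartitionedMetadataStore, etc.) are illustrative, not the actual PLFS/MDHIM API.

```python
# Sketch of partitioning shared-file metadata across compute nodes by key hash.
# Illustrative only -- not the PLFS/MDHIM interface.
from dataclasses import dataclass, field

@dataclass
class MetadataPartition:
    """One node's slice of the key-value store holding sub-file metadata."""
    node_id: int
    entries: dict = field(default_factory=dict)

class PartitionedMetadataStore:
    def __init__(self, num_nodes: int):
        self.partitions = [MetadataPartition(i) for i in range(num_nodes)]

    def _route(self, key: str) -> MetadataPartition:
        # Deterministic hash routing: every node computes the same owner,
        # so a lookup is one message to the right partition, no broadcast.
        return self.partitions[hash(key) % len(self.partitions)]

    def put(self, key: str, offset: int, length: int) -> None:
        self._route(key).entries[key] = (offset, length)

    def get(self, key: str):
        return self._route(key).entries.get(key)

store = PartitionedMetadataStore(num_nodes=4)
store.put("shared.file/rank0/chunk0", offset=0, length=4096)
store.put("shared.file/rank1/chunk0", offset=4096, length=4096)
```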

  20. An accessibility solution of cloud computing by solar energy

    Directory of Open Access Journals (Sweden)

    Zuzana Priščáková

    2013-01-01

    Full Text Available Cloud computing is a modern, innovative technology for solving problems of data storage, data processing, company infrastructure building, and so on. Many companies worry about the changes that implementing this solution brings, because these changes could have a negative impact on the company; in the case of newly established companies, this worry results from an unfamiliar environment. Data accessibility, integrity and security are among the basic problems of cloud computing. The aim of this paper is to propose and scientifically confirm an accessibility solution for the cloud by implementing solar energy as a primary source. Accessibility problems arise from power failures, during which data may be stolen or lost. Since a cloud is usually run from a server, the server's dependence on power is strong. Modern conditions offer a newer, more innovative solution that is both ecological and economical for the company. The Sun, as a steady source of energy, offers the possibility of producing the necessary energy with solar panels. Connecting a solar panel as the primary source of energy for a server would remove its dependence on mains power, as well as possible failures; mains power would remain as a secondary source. Such an ecological solution would also influence the economics of the company, because energy consumption would be lower. Besides the proposed accessibility solution, this paper includes a physical and mathematical treatment of the solar energy incident on the Earth, a calculation of the panel size by the cosine method, and a simulation of these calculations in MATLAB.
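
    The sizing calculation behind such a proposal can be sketched as follows: given a server's power draw, a panel efficiency, and the cosine law for the angle of incidence, compute the panel area needed to cover the load. The default numbers are illustrative assumptions, not values from the paper.

```python
import math

def required_panel_area(server_load_w, irradiance_w_m2=1000.0,
                        panel_efficiency=0.18, incidence_deg=30.0):
    """Panel area (m^2) needed to cover a server's draw.

    Usable power per m^2 = irradiance * efficiency * cos(incidence angle);
    all defaults are illustrative (1000 W/m^2 peak sun, 18% panel)."""
    usable_w_m2 = (irradiance_w_m2 * panel_efficiency
                   * math.cos(math.radians(incidence_deg)))
    return server_load_w / usable_w_m2

# A hypothetical 500 W server at 30 degrees incidence needs ~3.2 m^2.
area = required_panel_area(500.0)
```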

  1. 78 FR 19475 - Combined Notice of Filings #1

    Science.gov (United States)

    2013-04-01

    ... Wholesale Generator Status of Alta Wind X, LLC. Filed Date: 3/21/13. Accession Number: 20130321-5127...: Notice of Self-Certification of Exempt Wholesale Generator Status of Alta Wind XI, LLC. Filed Date: 3/21... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission Combined Notice of Filings 1 Take notice...

  2. 78 FR 62345 - Combined Notice of Filings #2

    Science.gov (United States)

    2013-10-18

    ... Generator Status of Miami Wind I LLC. Filed Date: 10/9/13. Accession Number: 20131009-5053. Comments Due: 5... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission Combined Notice of Filings 2 Take notice that the Commission received the following exempt wholesale generator filings: Docket Numbers: EG14-3...

  3. Do Your School Policies Provide Equal Access to Computers? Are You Sure?

    Science.gov (United States)

    DuBois, Phyllis A.; Schubert, Jane G.

    1986-01-01

    Outlines how school policies can unintentionally perpetuate gender discrimination in student computer use and access. Describes four areas of administrative policies that can cause inequities and provides ways for administrators to counteract these policies. Includes discussion of a program to balance computer use, and an abstract of an article…

  4. Copyright and personal use of CERN’s computing infrastructure

    CERN Multimedia

    IT Department

    2009-01-01

    (The French version will be online shortly.) The rules covering the personal use of CERN’s computing infrastructure are defined in Operational Circular No. 5 and its Subsidiary Rules (see http://cern.ch/ComputingRules). All users of CERN’s computing infrastructure must comply with these rules, whether they access CERN’s computing facilities from within the Organization’s site or at another location. In particular, OC5 clause 17 requires that proprietary rights (the rights in software, music, video, etc.) must be respected. The user is liable for damages resulting from non-compliance. Recently, there have been several violations of OC5, where copyright material was discovered on public world-readable disk space. Please ensure that all material under your responsibility (in particular in files owned by your account) respects proprietary rights, including with respect to the restriction of access by third parties. CERN Security Team

  5. 77 FR 23708 - Combined Notice of Filings #2

    Science.gov (United States)

    2012-04-20

    ... submits tariff filing per 35.13(a)(2)(iii: 11--20120413 I&M Oper Co MBR Conc to be effective 1/1/2012... filing per 35.13(a)(2)(iii: 12--20120413 KPCo Oper Co MBR Conc to be effective 1/ 1/2012. Filed Date: 4... OPCo Oper Co MBR Conc to be effective 1/ 1/2012. Filed Date: 4/13/12. Accession Number: 20120413-5179...

  6. 76 FR 10890 - Combined Notice of Filings #1

    Science.gov (United States)

    2011-02-28

    ... Paris Amended and Restated Service Agreement to be effective 4/19/2011. Filed Date: 02/18/2011...: PacifiCorp submits tariff filing per 35.13(a)(2)(iii: PacifiCorp Energy Facilities Maintenance Agreement... Interconnection Agreement of Southwest Power Pool, Inc. Filed Date: 02/18/2011. Accession Number: 20110218-5086...

  7. LNS users primer for accessing government sites on the ARPA network [MIT to ANL, BNL, LBL, and New York Univ. Courant Inst.]

    Energy Technology Data Exchange (ETDEWEB)

    Kannel, M.

    1979-06-01

    This primer was developed as part of the study conducted by the Laboratory for Nuclear Science (LNS) on the feasibility of networks for computer resource sharing. The primer is an instructional guide for the LNS user who would like to access and use computers at other government sites on the ARPA network. The format is a series of scenarios of actual recorded on-line terminal sessions showing the novice user how to access the foreign site, obtain help documentation, run a simple program, and transfer files to and from the foreign site. Access to the ARPA network in these scenarios is via Multics or the Massachusetts Institute of Technology Terminal Interface Processor. The foreign government sites accessed are the computing facilities at Argonne National Laboratory, Brookhaven National Laboratory, Lawrence Livermore Laboratory, and New York University Courant Institute. This technique of auditing actual terminal sessions as a teaching aid can be extended to include other computing facilities as well as other networks.

  8. The Jade File System. Ph.D. Thesis

    Science.gov (United States)

    Rao, Herman Chung-Hwa

    1991-01-01

    File systems have long been the most important and most widely used form of shared permanent storage. File systems in traditional time-sharing systems, such as Unix, support a coherent sharing model for multiple users. Distributed file systems implement this sharing model in local area networks. However, most distributed file systems fail to scale from local area networks to an internet. Four characteristics of scalability were recognized: size, wide area, autonomy, and heterogeneity. Owing to size and wide area, techniques such as broadcasting, central control, and central resources, which are widely adopted by local area network file systems, are not adequate for an internet file system. An internet file system must also support the notion of autonomy because an internet is made up by a collection of independent organizations. Finally, heterogeneity is the nature of an internet file system, not only because of its size, but also because of the autonomy of the organizations in an internet. The Jade File System, which provides a uniform way to name and access files in the internet environment, is presented. Jade is a logical system that integrates a heterogeneous collection of existing file systems, where heterogeneous means that the underlying file systems support different file access protocols. Because of autonomy, Jade is designed under the restriction that the underlying file systems may not be modified. In order to avoid the complexity of maintaining an internet-wide, global name space, Jade permits each user to define a private name space. In Jade's design, we pay careful attention to avoiding unnecessary network messages between clients and file servers in order to achieve acceptable performance. Jade's name space supports two novel features: (1) it allows multiple file systems to be mounted under one directory; and (2) it permits one logical name space to mount other logical name spaces. A prototype of Jade was implemented to examine and validate its
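
    The two name-space features described above — multiple file systems mounted in one space, and one logical name space mounted inside another — can be sketched with a toy resolver. Every name, mount point, and backend URI below is invented for illustration; this is not Jade's actual interface.

```python
class NameSpace:
    """Toy per-user logical name space: mount points map either to a
    backend (here just a URI string) or to another NameSpace."""
    def __init__(self):
        self.mounts = {}  # mount path -> backend string or nested NameSpace

    def mount(self, path, target):
        self.mounts[path] = target

    def resolve(self, path):
        # Naive longest-prefix match over mount points, then recurse if the
        # target is itself a logical name space.
        best = max((m for m in self.mounts if path.startswith(m)),
                   key=len, default=None)
        if best is None:
            raise KeyError(path)
        target, rest = self.mounts[best], path[len(best):]
        if isinstance(target, NameSpace):
            return target.resolve(rest)
        return (target, rest)

group = NameSpace()
group.mount("/papers", "afs://cs.example.edu/papers")   # hypothetical backend

user = NameSpace()
user.mount("/home", "nfs://fileserver/home")            # hypothetical backend
user.mount("/shared", group)  # one logical name space mounted in another
```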

  9. 78 FR 52171 - Combined Notice of Filings #1

    Science.gov (United States)

    2013-08-22

    ...: ISO New England Inc. Description: Attachment A-1 to be effective 6/15/2013. Filed Date: 8/15/13... following PURPA 210(m)(3) filings: Docket Numbers: QM13-4-000. Applicants: City of Burlington, Vermont... of the City of Burlington, Vermont. Filed Date: 8/15/13. Accession Number: 20130815-5117. Comments...

  10. Implementing Journaling in a Linux Shared Disk File System

    Science.gov (United States)

    Preslan, Kenneth W.; Barry, Andrew; Brassow, Jonathan; Cattelan, Russell; Manthei, Adam; Nygaard, Erling; VanOort, Seth; Teigland, David; Tilstra, Mike; O'Keefe, Matthew

    2000-01-01

    In computer systems today, speed and responsiveness is often determined by network and storage subsystem performance. Faster, more scalable networking interfaces like Fibre Channel and Gigabit Ethernet provide the scaffolding from which higher performance computer systems implementations may be constructed, but new thinking is required about how machines interact with network-enabled storage devices. In this paper we describe how we implemented journaling in the Global File System (GFS), a shared-disk, cluster file system for Linux. Our previous three papers on GFS at the Mass Storage Symposium discussed our first three GFS implementations, their performance, and the lessons learned. Our fourth paper describes, appropriately enough, the evolution of GFS version 3 to version 4, which supports journaling and recovery from client failures. In addition, GFS scalability tests extending to 8 machines accessing 8 4-disk enclosures were conducted: these tests showed good scaling. We describe the GFS cluster infrastructure, which is necessary for proper recovery from machine and disk failures in a collection of machines sharing disks using GFS. Finally, we discuss the suitability of Linux for handling the big data requirements of supercomputing centers.
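
    The journaling-and-recovery idea can be sketched as a write-ahead log: metadata updates are logged before being applied in place, and replay after a failure applies only transactions whose commit record reached the log. This is a minimal illustration of the technique, not GFS's on-disk format.

```python
# Minimal write-ahead journaling sketch (illustrative, not GFS's format).
class Journal:
    def __init__(self):
        self.log = []   # append-only journal records
        self.disk = {}  # "in-place" metadata, updated only at replay here

    def commit(self, txn_id, updates):
        self.log.append(("begin", txn_id))
        for key, value in updates.items():
            self.log.append(("write", txn_id, key, value))
        self.log.append(("commit", txn_id))  # txn is durable once this lands

    def replay(self):
        # Recovery: apply writes only from transactions that committed.
        committed = {r[1] for r in self.log if r[0] == "commit"}
        for record in self.log:
            if record[0] == "write" and record[1] in committed:
                _, _, key, value = record
                self.disk[key] = value

j = Journal()
j.commit(1, {"inode/7/size": 4096})
j.log.append(("begin", 2))                       # txn 2 crashes mid-flight:
j.log.append(("write", 2, "inode/8/size", 512))  # no commit record written
j.replay()  # only txn 1 survives recovery
```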

  11. 78 FR 4141 - Combined Notice of Filings #2

    Science.gov (United States)

    2013-01-18

    .... Description: Notice of Succession to be effective 1/1/2013. Filed Date: 1/10/13. Accession Number: 20130110.... Description: PacifiCorp submits tariff filing per 35.15: Termination of BPA Umpqua Business Center...

  12. Reciprocating vs Rotary Instrumentation in Pediatric Endodontics: Cone Beam Computed Tomographic Analysis of Deciduous Root Canals using Two Single-file Systems.

    Science.gov (United States)

    Prabhakar, Attiguppe R; Yavagal, Chandrashekar; Dixit, Kratika; Naik, Saraswathi V

    2016-01-01

    Primary root canals are considered to be most challenging due to their complex anatomy. "Wave one" and "one shape" are single-file systems with reciprocating and rotary motion respectively. The aim of this study was to evaluate and compare dentin thickness, centering ability, canal transportation, and instrumentation time of wave one and one shape files in primary root canals using a cone beam computed tomographic (CBCT) analysis. This is an experimental, in vitro study comparing the two groups. A total of 24 extracted human primary teeth with minimum 7 mm root length were included in the study. Cone beam computed tomographic images were taken before and after the instrumentation for each group. Dentin thickness, centering ability, canal transportation, and instrumentation times were evaluated for each group. A significant difference was found in instrumentation time and canal transportation measures between the two groups. Wave one showed less canal transportation as compared with one shape, and the mean instrumentation time of wave one was significantly less than one shape. The reciprocating single-file system was found to be faster, with far fewer procedural errors, and can hence be recommended for shaping the root canals of primary teeth. How to cite this article: Prabhakar AR, Yavagal C, Dixit K, Naik SV. Reciprocating vs Rotary Instrumentation in Pediatric Endodontics: Cone Beam Computed Tomographic Analysis of Deciduous Root Canals using Two Single-File Systems. Int J Clin Pediatr Dent 2016;9(1):45-49.

  13. 75 FR 32937 - Combined Notice of Filings #1

    Science.gov (United States)

    2010-06-10

    ... Power, LLC. Description: CMS Generation Michigan Power, LLC submits tariff filing per 35: CMS Gen MI... Attachment X to the FERC Electric Tariff, Fourth Revised Volume No 1. Filed Date: 05/28/2010. Accession...

  14. 76 FR 35880 - Combined Notice of Filings #1

    Science.gov (United States)

    2011-06-20

    ... Hemingway Point to Point Agreements to be effective 6/1/ 2011. Filed Date: 06/13/2011. Accession Number.... Applicants: PacifiCorp. Description: PacifiCorp submits tariff filing per 35.15: Idaho Power Hemingway Point...

  15. 76 FR 59672 - Combined Notice of Filings (September 19, 2011)

    Science.gov (United States)

    2011-09-27

    ... Natural Gas Company, LLC. Description: Southern Natural Gas Company, L.L.C. submits tariff filing per 154... Policies. Filed Date: 09/14/2011. Accession Number: 20110914-5143. Comment Date: 5 p.m. Eastern Time on.... Description: Midcontinent Express Pipeline LLC submits tariff filing per 154.204: Filing to Remove Expired...

  16. 76 FR 60014 - Combined Notice of Filings #2

    Science.gov (United States)

    2011-09-28

    .... Description: Attachment A to Petition Amendment of ORNI 14 LLC. Filed Date: 09/20/2011. Accession Number...: PacifiCorp submits tariff filing per 35.13(a)(2)(iii: Bountiful City Amended and Restated Parrish Sub...)(2)(iii: TrAILCo submits revisions to PJM Tariff Attachment H- 18 to be effective 11/19/2011. Filed...

  17. 75 FR 54610 - Combined Notice of Filings #1

    Science.gov (United States)

    2010-09-08

    ... Mountain Power submits tariff filing per 35.12: Rocky Mountain Power MBR Tariff to be effective 8/27/2010... Rock Windpower LLC submits tariff filing per 35.12: Flat Rock Windpower LLC MBR Tariff to be effective... Farm, LLC MBR Tariff to be effective 8/27/ 2010. Filed Date: 08/27/2010. Accession Number: 20100827...

  18. An internet-based teaching file on clinical nuclear medicine

    International Nuclear Information System (INIS)

    Jiang Zhong; Wu Jinchang

    2001-01-01

    Objective: The goal of this project was to develop an internet-based interactive digital teaching file on nuclide imaging in clinical nuclear medicine, accessible over the internet. Methods: The teaching file was built from the academic teaching contents of the nuclear medicine textbook for undergraduates majoring in nuclear medicine, using Frontpage 2000 and HTML, with JavaScript in some parts of the contents. Results: A practical and comprehensive teaching file was accomplished that can be accessed over the internet at acceptable speed. Besides the basic teaching contents of nuclide imaging, the file includes a large number of typical and rare clinical cases, questionnaires with answers, and up-to-date data in the field of nuclear medicine. Conclusion: This teaching file meets its goal of providing an easy-to-use, internet-based digital teaching file with instant, enriched content and diversified, colorful modes of presentation.

  19. Paging memory from random access memory to backing storage in a parallel computer

    Science.gov (United States)

    Archer, Charles J; Blocksome, Michael A; Inglett, Todd A; Ratterman, Joseph D; Smith, Brian E

    2013-05-21

    Paging memory from random access memory (`RAM`) to backing storage in a parallel computer that includes a plurality of compute nodes, including: executing a data processing application on a virtual machine operating system in a virtual machine on a first compute node; providing, by a second compute node, backing storage for the contents of RAM on the first compute node; and swapping, by the virtual machine operating system in the virtual machine on the first compute node, a page of memory from RAM on the first compute node to the backing storage on the second compute node.
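
    The mechanism can be sketched in a few lines: a fixed-capacity resident set on the first node, with evicted pages sent to a dictionary standing in for the second node's backing storage, and faulted pages brought back on access. All names are illustrative, not the patent's interface.

```python
class RemotePager:
    """Sketch: swap pages between local RAM and a peer node's backing store."""
    def __init__(self, capacity_pages):
        self.capacity = capacity_pages
        self.ram = {}     # page id -> bytes, insertion-ordered (resident set)
        self.remote = {}  # stands in for the second compute node's storage

    def touch(self, page_id, data=None):
        if page_id not in self.ram:
            if page_id in self.remote:
                # Page-in: fetch the page back from the peer node.
                self.ram[page_id] = self.remote.pop(page_id)
            else:
                # First touch: allocate (or install supplied) page contents.
                self.ram[page_id] = data if data is not None else bytes(4096)
            while len(self.ram) > self.capacity:
                # Page-out: evict the oldest resident page to the peer.
                victim = next(iter(self.ram))
                self.remote[victim] = self.ram.pop(victim)
        return self.ram[page_id]

pager = RemotePager(capacity_pages=2)
pager.touch("p0", b"a")
pager.touch("p1", b"b")
pager.touch("p2", b"c")  # exceeds capacity: p0 is paged out to the peer
```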

  20. Development of improved methods for remote access of DIII-D data and data analysis

    International Nuclear Information System (INIS)

    Greene, K.L.; McHarg, B.B. Jr.

    1997-11-01

    The DIII-D tokamak is a national fusion research facility. There is an increasing need to access data from remote sites in order to facilitate data analysis by collaborative researchers at remote locations, both nationally and internationally. In the past, this has usually been done by remotely logging into computers at the DIII-D site. With the advent of faster networking and powerful computers at remote sites, it is becoming possible to access and analyze data from anywhere in the world as if the remote user were actually at the DIII-D site. The general mechanism for accessing DIII-D data has always been via the PTDATA subroutine. Substantial enhancements are being made to that routine to make it more useful in a non-local environment. In particular, a caching mechanism is being built into PTDATA to make network data access more efficient. Studies are also being made of using Distributed File System (DFS) disk storage in a Distributed Computing Environment (DCE). A data server has been created that will migrate, on request, shot data from the DIII-D environment into the DFS environment
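
    The benefit of building a cache into a data-access routine like PTDATA can be illustrated with a generic client-side cache: the first request for a (shot, pointname) pair crosses the network, and repeats are served locally. `fetch_remote` below is a stand-in for the real server call, not the actual PTDATA interface.

```python
# Generic client-side caching wrapper for remote data access (illustrative;
# not the PTDATA subroutine itself).
class CachingFetcher:
    def __init__(self, fetch_remote):
        self.fetch_remote = fetch_remote  # stand-in for the network call
        self.cache = {}
        self.network_calls = 0

    def get(self, shot, pointname):
        key = (shot, pointname)
        if key not in self.cache:
            self.network_calls += 1          # only a miss hits the network
            self.cache[key] = self.fetch_remote(shot, pointname)
        return self.cache[key]

# Hypothetical remote fetch that just labels its arguments.
fetcher = CachingFetcher(lambda shot, name: f"data:{shot}/{name}")
first = fetcher.get(92544, "ip")   # network round trip
again = fetcher.get(92544, "ip")   # served from the local cache
```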

  1. Operational facility-integrated computer system for safeguards

    International Nuclear Information System (INIS)

    Armento, W.J.; Brooksbank, R.E.; Krichinsky, A.M.

    1980-01-01

    A computer system for safeguards in an active, remotely operated, nuclear fuel processing pilot plant has been developed. This system maintains (1) comprehensive records of special nuclear materials, (2) automatically updated book inventory files, (3) material transfer catalogs, (4) timely inventory estimations, (5) sample transactions, (6) automatic, on-line volume balances and alarmings, and (7) terminal access and applications software monitoring and logging. Future development will include near-real-time SNM mass balancing as both a static, in-tank summation and a dynamic, in-line determination. It is planned to incorporate aspects of site security and physical protection into the computer monitoring

  2. 77 FR 20016 - Combined Notice of Filings #1

    Science.gov (United States)

    2012-04-03

    ... Resource Management, LLC, Enserco Energy LLC. Description: Change in Status Filing of Twin Eagle Resource Management, LLC, et al. Filed Date: 3/26/12. Accession Number: 20120326-5132. Comments Due: 5 p.m. ET 4/16/12.... Description: PJM Interconnection, L.L.C. submits tariff filing per 35.13(a)(2)(iii: Queue Position O50...

  3. 76 FR 21720 - Combined Notice of Filings #2

    Science.gov (United States)

    2011-04-18

    ... submits tariff filing per 35.13(a)(2)(iii): Revision to Sempra Generation FERC Electric MBR Tariff to be... LLC FERC Electric MBR Tariff to be effective 4/5/2011. Filed Date: 04/05/2011. Accession Number...): Revision to Mesquite Power LLC FERC Electric MBR Tariff to be effective 4/5/2011. Filed Date: 04/05/2011...

  4. 76 FR 63292 - Combined Notice Of Filings #2

    Science.gov (United States)

    2011-10-12

    ... I, LLC submits tariff filing per 35: White Creek Wind I, LLC MBR Tariff to be effective 9/8/2011...: Smoky Hills Wind Farm, LLC submits tariff filing per 35: Smoky Hills Wind Farm, LLC MBR Tariff to be... Project II, LLC MBR Tariff to be effective 9/ 12/2011. Filed Date: 10/04/2011. Accession Number: 20111004...

  5. Generation of Gaussian 09 Input Files for the Computation of 1H and 13C NMR Chemical Shifts of Structures from a Spartan’14 Conformational Search

    OpenAIRE

    sprotocols

    2014-01-01

    Authors: Spencer Reisbick & Patrick Willoughby. Abstract: This protocol describes an approach to preparing a series of Gaussian 09 computational input files for an ensemble of conformers generated in Spartan’14. The resulting input files are necessary for computing optimum geometries, relative conformer energies, and NMR shielding tensors using Gaussian. Using the conformational search feature within Spartan’14, an ensemble of conformational isomers was obtained. To convert the str...

  6. Publication and Retrieval of Computational Chemical-Physical Data Via the Semantic Web. Final Technical Report

    Energy Technology Data Exchange (ETDEWEB)

    Ostlund, Neil [Chemical Semantics, Inc., Gainesville, FL (United States)

    2017-07-20

    This research showed the feasibility of applying the concepts of the Semantic Web to Computational Chemistry. We have created the first web portal (www.chemsem.com) that allows data created in the calculations of quantum chemistry, and other such chemistry calculations, to be placed on the web in a way that makes the data accessible to scientists in a semantic form never before possible. The semantic web nature of the portal allows data to be searched, found, and used as an advance over the usual approach of a relational database. The semantic data on our portal has the nature of a Giant Global Graph (GGG) that can be easily merged with related data and searched globally via the SPARQL Protocol and RDF Query Language (SPARQL), which makes global searches for data easier than with traditional methods. Our Semantic Web Portal requires that the data be understood by a computer and hence defined by an ontology (vocabulary). This ontology is used by the computer in understanding the data. We have created such an ontology for computational chemistry (purl.org/gc) that encapsulates a broad knowledge of the field of computational chemistry. We refer to this ontology as the Gainesville Core. While it is perhaps the first ontology for computational chemistry and is used by our portal, it is only a start of what must be a long multi-partner effort to define computational chemistry. In conjunction with the above efforts we have defined a new potential file standard (Common Standard for eXchange - CSX) for computational chemistry data. This CSX file is the precursor of data in the Resource Description Framework (RDF) form that the semantic web requires. Our portal translates CSX files (as well as other computational chemistry data files) into RDF files that are part of the graph database that the semantic web employs. We propose the CSX file as a convenient way to encapsulate computational chemistry data.
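
    The translation step from a structured result file to RDF can be sketched as emitting N-Triples from a property map. The vocabulary URI echoes the Gainesville Core location (purl.org/gc), but the subject URI and property names below are invented for illustration and are not the actual CSX or Gainesville Core schema.

```python
# Sketch: serialise one computed result as RDF N-Triples, the line-oriented
# form a semantic-web store can merge and query with SPARQL. Property names
# and the subject URI are hypothetical.
def to_ntriples(subject_uri, properties, vocab="http://purl.org/gc/"):
    lines = []
    for name, value in sorted(properties.items()):
        # Each triple is one line: <subject> <predicate> "object" .
        lines.append(f'<{subject_uri}> <{vocab}{name}> "{value}" .')
    return "\n".join(lines)

triples = to_ntriples(
    "http://example.org/calc/42",          # hypothetical calculation URI
    {"method": "B3LYP", "basisSet": "6-31G*"},
)
```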

  7. Research of Performance Linux Kernel File Systems

    Directory of Open Access Journals (Sweden)

    Andrey Vladimirovich Ostroukh

    2015-10-01

    Full Text Available The article describes the most common Linux kernel file systems. The research was carried out on a personal computer, a typical workstation running GNU/Linux, whose characteristics are given in the article. The software necessary for measuring file system performance was installed on this computer. Based on the results, conclusions are drawn and recommendations are proposed for the use of each file system, identifying the best ways to store data.
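
    A minimal version of such a file system measurement — timing sequential block writes with an fsync so the data actually reaches the device rather than the page cache — might look like this. The sizes and block length are illustrative parameters, not the article's methodology.

```python
import os
import tempfile
import time

def time_write(path, total_bytes=1 << 20, block=4096):
    """Time sequential writes of total_bytes in block-sized chunks,
    flushing and fsync-ing so the result reflects the file system,
    not just the in-memory page cache."""
    buf = b"\0" * block
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(total_bytes // block):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())  # force dirty pages to stable storage
    return time.perf_counter() - start

# Benchmark a 1 MiB sequential write in a throwaway directory.
with tempfile.TemporaryDirectory() as d:
    elapsed = time_write(os.path.join(d, "bench.dat"))
```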

  8. Distributed Database Access in the LHC Computing Grid with CORAL

    CERN Document Server

    Molnár, Z; Düllmann, D; Giacomo, G; Kalkhof, A; Valassi, A; CERN. Geneva. IT Department

    2009-01-01

    The CORAL package is the LCG Persistency Framework foundation for accessing relational databases. From the start, CORAL has been designed to facilitate the deployment of the LHC experiment database applications in a distributed computing environment. In particular we cover: improvements to database service scalability by client connection management; platform-independent, multi-tier scalable database access by connection multiplexing and caching; and a secure authentication and authorisation scheme integrated with existing grid services. We summarize the deployment experience from several experiment productions using the distributed database infrastructure, which is now available in LCG. Finally, we present perspectives for future developments in this area.

  9. SmartVeh: Secure and Efficient Message Access Control and Authentication for Vehicular Cloud Computing.

    Science.gov (United States)

    Huang, Qinlong; Yang, Yixian; Shi, Yuxiang

    2018-02-24

    With the growing number of vehicles and popularity of various services in vehicular cloud computing (VCC), message exchanging among vehicles under traffic conditions and in emergency situations is one of the most pressing demands, and has attracted significant attention. However, it is an important challenge to authenticate the legitimate sources of broadcast messages and achieve fine-grained message access control. In this work, we propose SmartVeh, a secure and efficient message access control and authentication scheme in VCC. A hierarchical, attribute-based encryption technique is utilized to achieve fine-grained and flexible message sharing, which ensures that vehicles whose persistent or dynamic attributes satisfy the access policies can access the broadcast message with equipped on-board units (OBUs). Message authentication is enforced by integrating an attribute-based signature, which achieves message authentication and maintains the anonymity of the vehicles. In order to reduce the computations of the OBUs in the vehicles, we outsource the heavy computations of encryption, decryption and signing to a cloud server and road-side units. The theoretical analysis and simulation results reveal that our secure and efficient scheme is suitable for VCC.
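
The fine-grained access control in such a scheme rests on checking whether a vehicle's attributes satisfy a broadcast message's access policy. The following is only a plain-Python sketch of that policy check, not the paper's cryptographic attribute-based encryption; the nested AND/OR policy format is our own simplification.

```python
# Evaluate whether an attribute set satisfies an access policy -- the
# non-cryptographic core of what attribute-based encryption enforces.

def satisfies(attributes, policy):
    """policy is either an attribute string, or a tuple
    ('AND', p1, p2, ...) / ('OR', p1, p2, ...)."""
    if isinstance(policy, str):
        return policy in attributes
    op, *subpolicies = policy
    if op == "AND":
        return all(satisfies(attributes, p) for p in subpolicies)
    if op == "OR":
        return any(satisfies(attributes, p) for p in subpolicies)
    raise ValueError(f"unknown operator: {op}")

# Hypothetical policy: message readable by ambulances, or by police
# vehicles assigned to district 7.
policy = ("OR", "ambulance", ("AND", "police", "district-7"))

print(satisfies({"police", "district-7"}, policy))  # True
print(satisfies({"police", "district-3"}, policy))  # False
```

In an actual ABE scheme this check is enforced mathematically by the decryption algorithm rather than by trusted code.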

  10. 77 FR 62505 - Combined Notice of Filings #1

    Science.gov (United States)

    2012-10-15

    ...: Cancellation of First Revised Service Agreement No. 6--St. Cloud to be effective 12/31/2012. Filed Date: 10/3... Post Employment Benefits Costs for the year ending December 31, 2012. Filed Date: 10/3/12. Accession...

  11. 78 FR 35621 - Combined Notice of Filings #1

    Science.gov (United States)

    2013-06-13

    .... Applicants: City of Anaheim, California. Description: Compliance Report to be effective N/A. Filed Date: 6/5... Attachment C (6/5/13) to be effective 6/ 6/2013. Filed Date: 6/5/13. Accession Number: 20130605-5067...

  12. 76 FR 12724 - Combined Notice of Filings #1

    Science.gov (United States)

    2011-03-08

    ... ITO--RC to be effective 4/ 26/2011. Filed Date: 02/25/2011 Accession Number: 20110225-5145 Comment... Description: Pacific Gas and Electric Company submits tariff filing per 35.13(a)(2)(iii: Corrections to PG&E's...

  13. Exploiting volatile opportunistic computing resources with Lobster

    Science.gov (United States)

    Woodard, Anna; Wolf, Matthias; Mueller, Charles; Tovar, Ben; Donnelly, Patrick; Hurtado Anampa, Kenyi; Brenner, Paul; Lannon, Kevin; Hildreth, Mike; Thain, Douglas

    2015-12-01

    Analysis of high energy physics experiments using the Compact Muon Solenoid (CMS) at the Large Hadron Collider (LHC) can be limited by the availability of computing resources. As a joint effort involving computer scientists and CMS physicists at Notre Dame, we have developed an opportunistic workflow management tool, Lobster, to harvest available cycles from university campus computing pools. Lobster consists of a management server, a file server, and worker processes which can be submitted to any available computing resource without requiring root access. Lobster makes use of the Work Queue system to perform task management, while the CMS-specific software environment is provided via CVMFS and Parrot. Data are handled via Chirp and Hadoop for local data storage and via XrootD for access to the CMS wide-area data federation. An extensive set of monitoring and diagnostic tools has been developed to facilitate system optimisation. We have tested Lobster using the 20 000-core cluster at Notre Dame, achieving approximately 8-10k tasks running simultaneously, sustaining approximately 9 Gbit/s of input data and 340 Mbit/s of output data.

  14. PCE: web tools to compute protein continuum electrostatics

    Science.gov (United States)

    Miteva, Maria A.; Tufféry, Pierre; Villoutreix, Bruno O.

    2005-01-01

    PCE (protein continuum electrostatics) is an online service for protein electrostatic computations presently based on the MEAD (macroscopic electrostatics with atomic detail) package initially developed by D. Bashford [(2004) Front Biosci., 9, 1082–1099]. This computer method uses a macroscopic electrostatic model for the calculation of protein electrostatic properties, such as pKa values of titratable groups and electrostatic potentials. The MEAD package generates electrostatic energies via finite difference solution to the Poisson–Boltzmann equation. Users submit a PDB file and PCE returns potentials and pKa values as well as color (static or animated) figures displaying electrostatic potentials mapped on the molecular surface. This service is intended to facilitate electrostatics analyses of proteins and thereby broaden the accessibility to continuum electrostatics to the biological community. PCE can be accessed at . PMID:15980492

  15. COMPUTING SERVICES DURING THE ANNUAL CERN SHUTDOWN

    CERN Multimedia

    2001-01-01

    As in previous years, computing services run by IT division will be left running unattended during the annual shutdown. The following points should be noted. No interruptions are scheduled for local and wide area networking and the ACB, e-mail and unix interactive services. Unix batch services will be available but without access to manually mounted tapes. Dedicated Engineering services, general purpose database services and the Helpdesk will be closed during this period. An operator service will be maintained and can be reached at extension 75011 or by Email to computer.operations@cern.ch. Users should be aware that, except where there are special arrangements, any major problems that develop during this period will most likely be resolved only after CERN has reopened. In particular, we cannot guarantee backups for Home Directory files (for Unix or Windows) or for email folders. Any changes that you make to your files during this period may be lost in the event of a disk failure. Please note that all t...

  16. The Galley Parallel File System

    Science.gov (United States)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    Most current multiprocessor file systems are designed to use multiple disks in parallel, using the high aggregate bandwidth to meet the growing I/O requirements of parallel scientific applications. Many multiprocessor file systems provide applications with a conventional Unix-like interface, allowing the application to access multiple disks transparently. This interface conceals the parallelism within the file system, increasing the ease of programmability, but making it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. In addition to providing an insufficient interface, most current multiprocessor file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic scientific multiprocessor workloads. We discuss Galley's file structure and application interface, as well as the performance advantages offered by that interface.

  17. A brain computer interface-based explorer.

    Science.gov (United States)

    Bai, Lijuan; Yu, Tianyou; Li, Yuanqing

    2015-04-15

    In recent years, various applications of brain computer interfaces (BCIs) have been studied. In this paper, we present a hybrid BCI combining P300 and motor imagery to operate an explorer. Our system is mainly composed of a BCI mouse, a BCI speller and an explorer. Through this system, the user can access his computer and manipulate (open, close, copy, paste, and delete) files such as documents, pictures, music, movies and so on. The system has been tested with five subjects, and the experimental results show that the explorer can be successfully operated according to subjects' intentions. Copyright © 2014 Elsevier B.V. All rights reserved.

  18. SCEAPI: A unified Restful Web API for High-Performance Computing

    Science.gov (United States)

    Rongqiang, Cao; Haili, Xiao; Shasha, Lu; Yining, Zhao; Xiaoning, Wang; Xuebin, Chi

    2017-10-01

    The development of scientific computing is increasingly moving to collaborative web and mobile applications. All these applications need high-quality programming interfaces for accessing heterogeneous computing resources consisting of clusters, grid computing or cloud computing. In this paper, we introduce our high-performance computing environment, which integrates computing resources from 16 HPC centers across China. Then we present a bundle of web services called SCEAPI and describe how it can be used to access HPC resources with the HTTP or HTTPS protocols. We discuss SCEAPI from several aspects including architecture, implementation and security, and address specific challenges in designing compatible interfaces and protecting sensitive data. We describe the functions of SCEAPI, including authentication, file transfer, and job management for creating, submitting and monitoring jobs, and how to use SCEAPI in an easy-to-use way. Finally, we discuss how to quickly exploit more HPC resources for the ATLAS experiment by implementing a custom ARC compute element based on SCEAPI; our work shows that SCEAPI is an easy-to-use and effective solution for extending opportunistic HPC resources.
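
A RESTful job-submission call of the kind such an API accepts can be sketched with the standard library. The endpoint path, bearer-token header, and JSON fields below are assumptions for illustration, not SCEAPI's actual interface; the request is built but deliberately never sent.

```python
# Compose (but do not send) an authenticated HTTPS POST for submitting
# a job description to a hypothetical unified HPC web API.
import json
import urllib.request

def build_job_request(base_url, token, job):
    """Return a urllib Request carrying a JSON job description."""
    data = json.dumps(job).encode("utf-8")
    return urllib.request.Request(
        url=f"{base_url}/jobs",  # hypothetical endpoint path
        data=data,
        method="POST",
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",  # hypothetical auth scheme
        },
    )

req = build_job_request(
    "https://api.example.org/v1", "secret-token",
    {"name": "md-run", "cores": 64, "script": "run.sh"},
)
print(req.get_method(), req.full_url)  # POST https://api.example.org/v1/jobs
```

Submitting would then be a single `urllib.request.urlopen(req)` call against the real service.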

  19. Extracting the Data From the LCM vk4 Formatted Output File

    Energy Technology Data Exchange (ETDEWEB)

    Wendelberger, James G. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2018-01-29

    These are slides about extracting the data from the LCM vk4 formatted output file. The following is covered: vk4 file produced by Keyence VK Software, custom analysis, no off the shelf way to read the file, reading the binary data in a vk4 file, various offsets in decimal lines, finding the height image data, directly in MATLAB, binary output beginning of height image data, color image information, color image binary data, color image decimal and binary data, MATLAB code to read vk4 file (choose a file, read the file, compute offsets, read optical image, laser optical image, read and compute laser intensity image, read height image, timing, display height image, display laser intensity image, display RGB laser optical images, display RGB optical images, display beginning data and save images to workspace, gamma correction subroutine), reading intensity form the vk4 file, linear in the low range, linear in the high range, gamma correction for vk4 files, computing the gamma intensity correction, observations.
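
The slides describe locating image data by byte offsets within the binary vk4 file. The offset-based reading style can be sketched with Python's `struct` module; the layout below (a magic string, then a little-endian offset to a width/height-prefixed image block) is invented for illustration and does not reproduce the real vk4 field order or offsets.

```python
# Offset-based binary reading, sketched on a hypothetical layout:
# [4-byte magic][uint32 offset] ... [uint32 width][uint32 height][pixels]
import struct

def build_sample(width, height, pixels):
    """Construct a synthetic blob in the hypothetical layout."""
    header = b"VK4_"                       # hypothetical magic bytes
    image_block = struct.pack("<II", width, height) + bytes(pixels)
    offset = 4 + 4                         # magic + the offset field itself
    return header + struct.pack("<I", offset) + image_block

def read_image(blob):
    """Follow the stored offset to the image block and decode it."""
    assert blob[:4] == b"VK4_", "unexpected magic"
    (offset,) = struct.unpack_from("<I", blob, 4)
    width, height = struct.unpack_from("<II", blob, offset)
    pixels = blob[offset + 8 : offset + 8 + width * height]
    return width, height, list(pixels)

blob = build_sample(2, 2, [10, 20, 30, 40])
print(read_image(blob))  # (2, 2, [10, 20, 30, 40])
```

The MATLAB code described in the slides follows the same pattern: seek to a known offset, read a header field, then read the data block it points at.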

  20. Coral and artificial reef shape files, Broward County, Florida, (NODC Accession 0000244)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Coral reef and artificial reef location shape files and accompanying table files for reefs located off shore of Broward County, Florida. Accompanying "attribute"...

  1. The ENSDF radioactivity data base for IBM-PC and computer network access

    International Nuclear Information System (INIS)

    Ekstroem, P.; Spanier, L.

    1989-08-01

    A data base system for radioactivity gamma rays is described. A base with approximately 15000 gamma rays from 2777 decays is available for installation on the hard disk of a PC, and a complete system with approximately 73000 gamma rays is available for on-line access via the NORDic University computer NETwork (NORDUNET) and the Swedish University computer NETwork (SUNET)

  2. 76 FR 62801 - Combined Notice of Filings #1

    Science.gov (United States)

    2011-10-11

    ... Pseudo PGA with Mesquite Solar 1 to be effective 11/1/2011. Filed Date: 09/30/2011. Accession Number... Electric Company submits tariff filing per 35.13(a)(2)(iii: Western USBR TFA for Red Bluff Pumping Plant to...

  3. 76 FR 23577 - Combined Notice of Filings #1

    Science.gov (United States)

    2011-04-27

    ... Interconnection, L.L.C. submits tariff filing per 35.13(a)(2)(iii: Queue No. W3-124--Original Service Agreement No... Transmission Agreement with Auburndale Pwr Partners to be effective 5/1/2011. Filed Date: 04/21/2011. Accession... tariff filing per 35.13(a)(1): 04--21--11 Paris Rate Schedule 407 Settlement to be effective 3/31/2011...

  4. 76 FR 42704 - Sky River LLC; Notice of Filing

    Science.gov (United States)

    2011-07-19

    ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket Nos. ER11-3277-000; ER11-3277-001] Sky River LLC; Notice of Filing Take notice that, on July 8, 2011, Sky River LLC filed to amend its Open Access Transmission Tariff (OATT) filing, submitted on April 1, 2011 and amended on April 7...

  5. Grid Data Access on Widely Distributed Worker Nodes Using Scalla and SRM

    International Nuclear Information System (INIS)

    Jakl, Pavel; Lauret, Jerome; Hanushevsky, Andrew; Shoshani, Arie; Sim, Alex; Gu, Junmin

    2011-01-01

    Facing the reality of storage economics, NP experiments such as RHIC/STAR have been engaged in a shift of the analysis model, and now heavily rely on using cheap disks attached to processing nodes, as such a model is extremely beneficial over expensive centralized storage. Additionally, exploiting storage aggregates with enhanced distributed computing capabilities such as dynamic space allocation (lifetime of spaces), file management on shared storages (lifetime of files, pinning file), storage policies or a uniform access to heterogeneous storage solutions is not an easy task. The Xrootd/Scalla system allows for storage aggregation. We will present an overview of the largest deployment of Scalla (Structured Cluster Architecture for Low Latency Access) in the world spanning over 1000 CPUs co-sharing the 350 TB Storage Elements and the experience on how to make such a model work in the RHIC/STAR standard analysis framework. We will explain the key features and approach on how to make access to mass storage (HPSS) possible in such a large deployment context. Furthermore, we will give an overview of a fully 'gridified' solution using the plug-and-play features of Scalla architecture, replacing standard storage access with grid middleware SRM (Storage Resource Manager) components designed for space management and will compare the solution with the standard Scalla approach in use in STAR for the past 2 years. Integration details, future plans and status of development will be explained in the area of best transfer strategy between multiple-choice data pools and best placement with respect of load balancing and interoperability with other SRM aware tools or implementations.

  6. Grid data access on widely distributed worker nodes using scalla and SRM

    International Nuclear Information System (INIS)

    Jakl, P; Lauret, J; Hanushevsky, A; Shoshani, A; Sim, A; Gu, J

    2008-01-01

    Facing the reality of storage economics, NP experiments such as RHIC/STAR have been engaged in a shift of the analysis model, and now heavily rely on using cheap disks attached to processing nodes, as such a model is extremely beneficial over expensive centralized storage. Additionally, exploiting storage aggregates with enhanced distributed computing capabilities such as dynamic space allocation (lifetime of spaces), file management on shared storages (lifetime of files, pinning file), storage policies or a uniform access to heterogeneous storage solutions is not an easy task. The Xrootd/Scalla system allows for storage aggregation. We will present an overview of the largest deployment of Scalla (Structured Cluster Architecture for Low Latency Access) in the world spanning over 1000 CPUs co-sharing the 350 TB Storage Elements and the experience on how to make such a model work in the RHIC/STAR standard analysis framework. We will explain the key features and approach on how to make access to mass storage (HPSS) possible in such a large deployment context. Furthermore, we will give an overview of a fully 'gridified' solution using the plug-and-play features of Scalla architecture, replacing standard storage access with grid middleware SRM (Storage Resource Manager) components designed for space management and will compare the solution with the standard Scalla approach in use in STAR for the past 2 years. Integration details, future plans and status of development will be explained in the area of best transfer strategy between multiple-choice data pools and best placement with respect of load balancing and interoperability with other SRM aware tools or implementations

  7. Grid Data Access on Widely Distributed Worker Nodes Using Scalla and SRM

    Energy Technology Data Exchange (ETDEWEB)

    Jakl, Pavel; /Prague, Inst. Phys.; Lauret, Jerome; /Brookhaven; Hanushevsky, Andrew; /SLAC; Shoshani, Arie; /LBL, Berkeley; Sim, Alex; /LBL, Berkeley; Gu, Junmin; /LBL, Berkeley

    2011-11-10

    Facing the reality of storage economics, NP experiments such as RHIC/STAR have been engaged in a shift of the analysis model, and now heavily rely on using cheap disks attached to processing nodes, as such a model is extremely beneficial over expensive centralized storage. Additionally, exploiting storage aggregates with enhanced distributed computing capabilities such as dynamic space allocation (lifetime of spaces), file management on shared storages (lifetime of files, pinning file), storage policies or a uniform access to heterogeneous storage solutions is not an easy task. The Xrootd/Scalla system allows for storage aggregation. We will present an overview of the largest deployment of Scalla (Structured Cluster Architecture for Low Latency Access) in the world spanning over 1000 CPUs co-sharing the 350 TB Storage Elements and the experience on how to make such a model work in the RHIC/STAR standard analysis framework. We will explain the key features and approach on how to make access to mass storage (HPSS) possible in such a large deployment context. Furthermore, we will give an overview of a fully 'gridified' solution using the plug-and-play features of Scalla architecture, replacing standard storage access with grid middleware SRM (Storage Resource Manager) components designed for space management and will compare the solution with the standard Scalla approach in use in STAR for the past 2 years. Integration details, future plans and status of development will be explained in the area of best transfer strategy between multiple-choice data pools and best placement with respect of load balancing and interoperability with other SRM aware tools or implementations.

  8. Parallel file system performances in fusion data storage

    Energy Technology Data Exchange (ETDEWEB)

    Iannone, F., E-mail: francesco.iannone@enea.it [Associazione EURATOM-ENEA sulla Fusione, C.R.ENEA Frascati, via E.Fermi, 45 - 00044 Frascati, Rome (Italy); Podda, S.; Bracco, G. [ENEA Information Communication Tecnologies, Lungotevere Thaon di Revel, 76 - 00196 Rome (Italy); Manduchi, G. [Associazione EURATOM-ENEA sulla Fusione, Consorzio RFX, Corso Stati Uniti, 4 - 35127 Padua (Italy); Maslennikov, A. [CASPUR Inter-University Consortium for the Application of Super-Computing for Research, via dei Tizii, 6b - 00185 Rome (Italy); Migliori, S. [ENEA Information Communication Tecnologies, Lungotevere Thaon di Revel, 76 - 00196 Rome (Italy); Wolkersdorfer, K. [Juelich Supercomputing Centre-FZJ, D-52425 Juelich (Germany)

    2012-12-15

    High I/O flow rates, up to 10 GB/s, are required in large fusion Tokamak experiments like ITER, where hundreds of nodes simultaneously store large amounts of data acquired during the plasma discharges. Typical network topologies such as linear arrays (systolic), rings, meshes (2-D arrays), tori (3-D arrays), trees, butterflies and hypercubes, in combination with high-speed data transports like Infiniband or 10G-Ethernet, are the main areas on which the effort to overcome the so-called parallel I/O bottleneck is focused. The high I/O flow rates were modelled in an emulated testbed based on parallel file systems such as Lustre and GPFS, commonly used in High Performance Computing. The tests were run on the High Performance Computing for Fusion (8640 cores) and ENEA CRESCO (3392 cores) supercomputers. Message Passing Interface based applications were developed to emulate parallel I/O on Lustre and GPFS using data archival and access solutions like MDSPLUS and Universal Access Layer. These methods of data storage organization are widely used in nuclear fusion experiments and are being developed within the EFDA Integrated Tokamak Modelling Task Force; the authors evaluated their behaviour in a realistic emulation setup.
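
The serial kernel of such an I/O benchmark is a timed sequential write with a flush to stable storage. The sketch below assumes POSIX file semantics and shows only the single-writer measurement; the MPI-based emulators described above coordinate many such writers across nodes.

```python
# Minimal single-node sequential-write throughput probe.
import os
import tempfile
import time

def write_throughput_mb_s(path, total_mb=16, chunk_mb=1):
    """Write total_mb of zeros in chunk_mb pieces; return MB/s."""
    chunk = b"\0" * (chunk_mb * 1024 * 1024)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(total_mb // chunk_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())  # include the flush to disk in the timing
    elapsed = time.perf_counter() - start
    return total_mb / elapsed

with tempfile.TemporaryDirectory() as d:
    rate = write_throughput_mb_s(os.path.join(d, "probe.bin"))
    print(f"sequential write: {rate:.1f} MB/s")
```

Without the `fsync`, page-cache buffering would make the measured rate reflect memory bandwidth rather than the file system.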

  9. Strategies for Sharing Seismic Data Among Multiple Computer Platforms

    Science.gov (United States)

    Baker, L. M.; Fletcher, J. B.

    2001-12-01

    Seismic waveform data is readily available from a variety of sources, but it often comes in a distinct, instrument-specific data format. For example, data may be from portable seismographs, such as those made by Refraction Technology or Kinemetrics, from permanent seismograph arrays, such as the USGS Parkfield Dense Array, from public data centers, such as the IRIS Data Center, or from personal communication with other researchers through e-mail or ftp. A computer must be selected to import the data - usually whichever is the most suitable for reading the originating format. However, the computer best suited for a specific analysis may not be the same. When copies of the data are then made for analysis, a proliferation of copies of the same data results, in possibly incompatible, computer-specific formats. In addition, if an error is detected and corrected in one copy, or some other change is made, all the other copies must be updated to preserve their validity. Keeping track of what data is available, where it is located, and which copy is authoritative requires an effort that is easy to neglect. We solve this problem by importing waveform data to a shared network file server that is accessible to all our computers on our campus LAN. We use a Network Appliance file server running Sun's Network File System (NFS) software. Using an NFS client software package on each analysis computer, waveform data can then be read by our MatLab or Fortran applications without first copying the data. Since there is a single copy of the waveform data in a single location, the NFS file system hierarchy provides an implicit complete waveform data catalog and the single copy is inherently authoritative. 
Another part of our solution is to convert the original data into a blocked-binary format (known historically as USGS DR100 or VFBB format) that is interpreted by MatLab or Fortran library routines available on each computer so that the idiosyncrasies of each machine are not visible to

  10. 75 FR 33294 - Combined Notice of Filings No. 1

    Science.gov (United States)

    2010-06-11

    ...: Columbia Gulf Transmission Company submits tariff filing per 154.204: IFF Compliance to be effective 7/1.... The filings in the above proceedings are accessible in the Commission's eLibrary system by clicking on...

  11. Database Objects vs Files: Evaluation of alternative strategies for managing large remote sensing data

    Science.gov (United States)

    Baru, Chaitan; Nandigam, Viswanath; Krishnan, Sriram

    2010-05-01

    Increasingly, the geoscience user community expects modern IT capabilities to be available in service of their research and education activities, including the ability to easily access and process large remote sensing datasets via online portals such as GEON (www.geongrid.org) and OpenTopography (opentopography.org). However, serving such datasets via online data portals presents a number of challenges. In this talk, we will evaluate the pros and cons of alternative storage strategies for the management and processing of such datasets, using binary large objects (BLOBs) in database systems versus implementation as Hadoop files in the Hadoop Distributed File System (HDFS). The storage and I/O requirements for providing online access to large datasets dictate the need for declustering data across multiple disks, for capacity as well as for bandwidth and response-time performance. This requires partitioning larger files into sets of smaller files, and is accompanied by the concomitant requirement of managing large numbers of files. Storing these sub-files as BLOBs in a shared-nothing database implemented across a cluster provides the advantage that all the distributed storage management is done by the DBMS. Furthermore, subsetting and processing routines can be implemented as user-defined functions (UDFs) on these BLOBs and would run in parallel across the set of nodes in the cluster. On the other hand, there are both storage overheads and constraints, and software licensing dependencies created by such an implementation. Another approach is to store the files in an external filesystem with pointers to them from within database tables. The filesystem may be a regular UNIX filesystem, a parallel filesystem, or HDFS. In the HDFS case, HDFS would provide the file management capability, while the subsetting and processing routines would be implemented as Hadoop programs using the MapReduce model. Hadoop and its related software libraries are freely available
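
The two strategies contrasted above can be demonstrated in miniature with the standard library's sqlite3, standing in for a shared-nothing DBMS or HDFS; the table names and tile layout are our own illustration, not the systems compared in the talk.

```python
# Strategy 1: store a data tile as a BLOB inside the database.
# Strategy 2: store only a path in the database, bytes in a plain file.
import os
import sqlite3
import tempfile

tile = os.urandom(1024)  # one "tile" of a partitioned remote-sensing file

with tempfile.TemporaryDirectory() as d:
    con = sqlite3.connect(":memory:")

    # Strategy 1: the tile itself lives in the database as a BLOB.
    con.execute("CREATE TABLE blobs (tile_id INTEGER PRIMARY KEY, data BLOB)")
    con.execute("INSERT INTO blobs VALUES (?, ?)", (1, tile))

    # Strategy 2: the database stores only a pointer to an external file.
    path = os.path.join(d, "tile_0001.bin")
    with open(path, "wb") as f:
        f.write(tile)
    con.execute("CREATE TABLE files (tile_id INTEGER PRIMARY KEY, path TEXT)")
    con.execute("INSERT INTO files VALUES (?, ?)", (1, path))

    # Either route recovers the same bytes.
    (blob,) = con.execute("SELECT data FROM blobs WHERE tile_id=1").fetchone()
    (p,) = con.execute("SELECT path FROM files WHERE tile_id=1").fetchone()
    with open(p, "rb") as f:
        from_file = f.read()
    print(blob == tile == from_file)  # True
```

The trade-off described in the abstract is visible even here: strategy 1 keeps storage management inside the DBMS, while strategy 2 delegates it to the file system at the cost of dangling-pointer risk.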

  12. User's manual for SPLPLOT-2: a computer code for data plotting and editing in conversational mode

    International Nuclear Information System (INIS)

    Muramatsu, Ken; Matsumoto, Kiyoshi; Kohsaka, Atsuo; Maniwa, Masaki.

    1985-07-01

    The computer code SPLPLOT-2 for plotting and data editing has been developed as part of the code package SPLPACK-1. The SPLPLOT-2 code has capabilities for both conversational and batch processing. This report is the user's manual for SPLPLOT-2. The following improvements have been made in SPLPLOT-2: (1) it supports both conversational and batch processing; (2) a function for converting input SPL (Standard PLotter) files into internal work files has been implemented, reducing the number of time-consuming accesses to the input SPL files; (3) user-supplied subroutines can be assigned for data editing from the SPL files; (4) in addition to two-dimensional graphs, streamline graphs, contour line graphs and bird's-eye view graphs can be drawn. (author)

  13. Design and application of remote file management system

    International Nuclear Information System (INIS)

    Zhu Haijun; Liu Dekang; Shen liren

    2006-01-01

    The File Transfer Protocol (FTP) helps users transfer files between computers on the Internet. However, FTP cannot fulfil users' needs on special occasions, so programmers must define their own file transfer protocols based on user requirements. The method of realization and the application of a user-defined file transfer protocol are introduced. (authors)
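
The core of any user-defined transfer protocol is a framing rule that tells the receiver how many bytes to read for each field. The length-prefixed frame below is our own invented layout, sketched for illustration; the record does not specify the authors' actual wire format.

```python
# A minimal length-prefixed frame for sending one named file:
# [uint32 name length][name bytes][uint64 payload length][payload bytes]
import struct

def encode_frame(name, payload):
    """Serialize a file name and its contents into one frame."""
    name_bytes = name.encode("utf-8")
    return (struct.pack("<I", len(name_bytes)) + name_bytes +
            struct.pack("<Q", len(payload)) + payload)

def decode_frame(frame):
    """Recover (name, payload) by reading each length prefix in turn."""
    (name_len,) = struct.unpack_from("<I", frame, 0)
    name = frame[4:4 + name_len].decode("utf-8")
    (size,) = struct.unpack_from("<Q", frame, 4 + name_len)
    payload = frame[4 + name_len + 8 : 4 + name_len + 8 + size]
    return name, payload

frame = encode_frame("report.txt", b"hello world")
print(decode_frame(frame))  # ('report.txt', b'hello world')
```

Over a real socket the same frames would simply be written to and read from the stream, with the receiver looping until each prefixed length is satisfied.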

  14. Package for the BESM-6 computer for particles momenta measuring in nuclei emulsions by semiautomatic microscope

    International Nuclear Information System (INIS)

    Leskin, V.A.; Saltykov, A.I.; Shabratova, G.S.

    1980-01-01

    Computer codes for use on the BESM-6 computer have been developed. The information obtained by semiautomatic measurements in nuclear emulsions is processed; the information from paper tape is then checked, and diagnostics are printed if errors occur in the information. Data input to the BESM-6 computer is written to magnetic tape as direct-access files. The data not containing errors are used in calculations of particle momentum by the multiple-scattering method

  15. 77 FR 43585 - Combined Notice of Filings #1

    Science.gov (United States)

    2012-07-25

    ...-000. Applicants: Mehoopany Wind Energy LLC. Description: Notice of Self-Certification of Exempt Wholesale Generator Status of Mehoopany Wind Energy LLC. Filed Date: 7/17/12. Accession Number: 20120717... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission Combined Notice of Filings 1 Take notice...

  16. 78 FR 15361 - Combined Notice of Filings #1

    Science.gov (United States)

    2013-03-11

    ... electric reliability filings: Docket Numbers: RR13-2-000. Applicants: North American Electric Reliability Corporation, Description: Petition of North American Electric Reliability Corporation for Approval of Revisions to the NERC Standard Processes Manual. Filed Date: 2/28/13. Accession Number: 20130228-5212...

  17. Forecasting Model for Network Throughput of Remote Data Access in Computing Grids

    CERN Document Server

    Begy, Volodimir; The ATLAS collaboration

    2018-01-01

    Computing grids are one of the key enablers of eScience. Researchers from many fields (e.g. High Energy Physics, Bioinformatics, Climatology, etc.) employ grids to run computational jobs in a highly distributed manner. The current state of the art approach for data access in the grid is data placement: a job is scheduled to run at a specific data center, and its execution starts only when the complete input data has been transferred there. This approach has two major disadvantages: (1) the jobs are staying idle while waiting for the input data; (2) due to the limited infrastructure resources, the distributed data management system handling the data placement, may queue the transfers up to several days. An alternative approach is remote data access: a job may stream the input data directly from storage elements, which may be located at local or remote data centers. Remote data access brings two innovative benefits: (1) the jobs can be executed asynchronously with respect to the data transfer; (2) when combined...

  18. Neural Network Design on the SRC-6 Reconfigurable Computer

    Science.gov (United States)

    2006-12-01

    fingerprint identification. In this field, automatic identification methods are used to save time, especially for the purpose of fingerprint matching in...grid widths and lengths and therefore was useful in producing an accurate canvas with which to create sample training images. The added benefit of...tools available free of charge and readily accessible on the computer, it was simple to design bitmap data files visually on a canvas and then

  19. 75 FR 76721 - Combined Notice of Filings No. 1

    Science.gov (United States)

    2010-12-09

    ... Power III, LLC submits tariff filing per 35.12: MBR Application of Evergreen Wind Power III, LLC to be... MBR Tariff to be effective 10/30/2010. Filed Date: 11/30/2010. Accession Number: 20101130-5045...: Alta Wind IV, LLC. Description: Alta Wind IV, LLC submits tariff filing per 35.1: Alta Wind IV, LLC MBR...

  20. Lifelong mobile learning: Increasing accessibility and flexibility with tablet computers and ebooks

    OpenAIRE

    Kalz, Marco

    2011-01-01

    Kalz, M. (2011, 1 September). Lifelong mobile learning: Increasing accessibility and flexibility with tablet computers and ebooks. Presentation provided during the opening ceremony of the iPad pilot for schakelzone rechten, Utrecht, The Netherlands.

  1. Common data model access; a unified layer to access data from data analysis point of view

    International Nuclear Information System (INIS)

    Poirier, S.; Buteau, A.; Ounsy, M.; Rodriguez, C.; Hauser, N.; Lam, T.; Xiong, N.

    2012-01-01

    For almost 20 years, the scientific community of neutron and synchrotron institutes has been dreaming of a common data format for exchanging experimental results and applications for reducing and analyzing the data. Using HDF5 as a data container has become the standard in many facilities. The big issue is the standardization of the data organization (schema) within the HDF5 container. By introducing a new level of indirection for data access, the Common-Data-Model-Access (CDMA) framework proposes a solution and allows separation of responsibilities between data reduction developers and the institute. Data reduction developers are responsible for data reduction code; the institute provides a plug-in to access the data. The CDMA is a core API that accesses data through a data format plug-in mechanism and scientific application definitions (sets of keywords) arising from a consensus between scientists and institutes. Using an innovative 'mapping' system between application definitions and physical data organizations, the CDMA allows data reduction application development independent of the data file container AND schema. Each institute develops a data access plug-in for its own data file formats along with the mapping between application definitions and its data files. Thus data reduction applications can be developed from a strictly scientific point of view and are immediately able to process data acquired from several institutes. (authors)
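
The level of indirection described above can be sketched as a keyword-to-path mapping held by a per-institute plug-in; the class, keyword, and path names below are hypothetical illustrations, not the CDMA API.

```python
# Sketch of CDMA-style indirection: reduction code asks for a keyword
# from an application definition; a per-institute plug-in maps it onto
# that institute's physical data organization.

class InstitutePlugin:
    """One plug-in per facility: keyword -> internal path -> data."""
    def __init__(self, mapping, datasets):
        self.mapping = mapping      # application keyword -> internal path
        self.datasets = datasets    # internal path -> stored data

    def get(self, keyword):
        return self.datasets[self.mapping[keyword]]

# Two facilities store the same quantity under different schemas...
facility_a = InstitutePlugin({"detector_counts": "/scan/det0/counts"},
                             {"/scan/det0/counts": [5, 9, 7]})
facility_b = InstitutePlugin({"detector_counts": "/entry1/data/hmm"},
                             {"/entry1/data/hmm": [5, 9, 7]})

# ...but reduction code only ever speaks the common keyword.
for plugin in (facility_a, facility_b):
    print(plugin.get("detector_counts"))  # [5, 9, 7]
```

The reduction application never sees the internal paths, so it runs unchanged against either facility's files.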

  2. A Centralized Control and Dynamic Dispatch Architecture for File Integrity Analysis

    Directory of Open Access Journals (Sweden)

    Ronald DeMara

    2006-02-01

    Full Text Available The ability to monitor computer file systems for unauthorized changes is a powerful administrative tool. Ideally this task could be performed remotely under the direction of the administrator to allow on-demand checking, and use of tailorable reporting and exception policies targeted to adjustable groups of network elements. This paper introduces M-FICA, a Mobile File Integrity and Consistency Analyzer as a prototype to achieve this capability using mobile agents. The M-FICA file tampering detection approach uses MD5 message digests to identify file changes. Two agent types, Initiator and Examiner, are used to perform file integrity tasks. An Initiator travels to client systems, computes a file digest, then stores those digests in a database file located on write-once media. An Examiner agent computes a new digest to compare with the original digests in the database file. Changes in digest values indicate that the file contents have been modified. The design and evaluation results for a prototype developed in the Concordia agent framework are described.
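The two-pass digest comparison described in this record can be sketched in a few lines. This is a minimal illustration of the idea, not M-FICA code; the function names and sample files are invented:

```python
# Sketch of the Initiator/Examiner digest scheme: the "Initiator" pass
# records MD5 digests of file contents, the "Examiner" pass recomputes
# them and flags files whose contents changed.
import hashlib

def md5_digest(data: bytes) -> str:
    return hashlib.md5(data).hexdigest()

def initiator(files: dict) -> dict:
    """Baseline pass: filename -> digest (stored on write-once media)."""
    return {name: md5_digest(data) for name, data in files.items()}

def examiner(files: dict, baseline: dict) -> list:
    """Verification pass: return names whose digest no longer matches."""
    return [name for name, data in files.items()
            if md5_digest(data) != baseline[name]]

files = {"/etc/passwd": b"root:x:0:0", "/bin/ls": b"\x7fELF..."}
baseline = initiator(files)
files["/bin/ls"] = b"\x7fELF...tampered"   # simulate tampering
print(examiner(files, baseline))           # ['/bin/ls']
```

Storing the baseline on write-once media, as the paper does, is what prevents an intruder from updating the digests to match tampered files.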

  3. File: nuclear safety and transparency

    International Nuclear Information System (INIS)

    Martinez, J.P.; Etchegoyen, A.; Jeandron, C.

    2001-01-01

    Several experiences of nuclear safety and transparency are related in this file. Public information, access to documents, transparency in nuclear regulation are such subjects developed in this debate. (N.C.)

  4. CINDA 83 (1977-1983). The index to literature and computer files on microscopic neutron data

    International Nuclear Information System (INIS)

    1983-01-01

CINDA, the Computer Index of Neutron Data, contains bibliographical references to measurements, calculations, reviews and evaluations of neutron cross-sections and other microscopic neutron data; it also includes index references to computer libraries of numerical neutron data exchanged between four regional neutron data centres. The present issue, CINDA 83, is an index to the literature on neutron data published after 1976. The basic volume, CINDA-A, together with the present issue, contains the full CINDA file as of 1 April 1983. A supplement to CINDA 83 is foreseen for fall 1983. Next year's issue, which is envisaged to be published in June 1984, will again cover all relevant literature that has appeared after 1976.

  5. The Ability of implementing Cloud Computing in Higher Education - KRG

    Directory of Open Access Journals (Sweden)

    Zanyar Ali Ahmed

    2017-06-01

Full Text Available Cloud computing (CC) is a new technology: an online service that can store and retrieve information without requiring physical access to the files on hard drives. The information is available on a server where it can be accessed by clients whenever it is needed. Despite the limited ICT infrastructure of the universities of the Kurdistan Regional Government (KRG), this new technology can be adopted because of its economic advantages, enhanced data management, better maintenance, high performance, and improved availability and accessibility, thereby enabling easy maintenance of organizational institutes. The aim of this research is to assess the ability and possibility of implementing cloud computing in higher education in the KRG. This research will help the universities start establishing cloud computing in their services. A survey was conducted to evaluate the CC services that KRG universities have applied. The results showed that most KRG universities are using SaaS. MHE-KRG universities and institutions are confronting many challenges and concerns in terms of security, user privacy, lack of integration with current systems, and data and document ownership.

  6. Modification to the Monte Carlo N-Particle (MCNP) Visual Editor (MCNPVised) to Read in Computer Aided Design (CAD) Files

    International Nuclear Information System (INIS)

    Randolph Schwarz; Leland L. Carter; Alysia Schwarz

    2005-01-01

    Monte Carlo N-Particle Transport Code (MCNP) is the code of choice for doing complex neutron/photon/electron transport calculations for the nuclear industry and research institutions. The Visual Editor for Monte Carlo N-Particle is internationally recognized as the best code for visually creating and graphically displaying input files for MCNP. The work performed in this grant was used to enhance the capabilities of the MCNP Visual Editor to allow it to read in both 2D and 3D Computer Aided Design (CAD) files, allowing the user to electronically generate a valid MCNP input geometry

  7. NASA ARCH- A FILE ARCHIVAL SYSTEM FOR THE DEC VAX

    Science.gov (United States)

    Scott, P. J.

    1994-01-01

The function of the NASA ARCH system is to provide a permanent storage area for files that are infrequently accessed. The NASA ARCH routines were designed to provide a simple mechanism by which users can easily store and retrieve files. The user treats NASA ARCH as the interface to a black box where files are stored. There are only five NASA ARCH user commands, even though NASA ARCH employs standard VMS directives and the VAX BACKUP utility. Special care is taken to provide the security needed to ensure file integrity over a period of years. The archived files may exist in any of three storage areas: a temporary buffer, the main buffer, and a magnetic tape library. When the main buffer fills up, it is transferred to permanent magnetic tape storage and deleted from disk. Files may be restored from any of the three storage areas. A single file, multiple files, or entire directories can be stored and retrieved. Archived entities retain the same name, extension, version number, and VMS file protection scheme as they had in the user's account prior to archival. NASA ARCH is capable of handling up to 7 directory levels. Wildcards are supported. User commands include TEMPCOPY, DISKCOPY, DELETE, RESTORE, and DIRECTORY. The DIRECTORY command searches a directory of savesets covering all three archival areas, listing matches according to area, date, filename, or other criteria supplied by the user. The system manager commands include 1) ARCHIVE, to transfer the main buffer to duplicate magnetic tapes; 2) REPORT, to determine when the main buffer is full enough to archive; 3) INCREMENT, to back up the partially filled main buffer; and 4) FULLBACKUP, to back up the entire main buffer. On-line help files are provided for all NASA ARCH commands. NASA ARCH is written in DEC VAX DCL for interactive execution and has been implemented on a DEC VAX computer operating under VMS 4.X. This program was developed in 1985.

  8. 78 FR 28210 - Combined Notice of Filings #1

    Science.gov (United States)

    2013-05-14

    ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission Combined Notice of Filings 1 Take notice... Company of New Mexico. Description: City of Gallup Network Integration Transmission Service Agreement to..., Section III--Distribution of Revenues to be effective 7/1/2013. Filed Date: 4/30/13. Accession Number...

  9. 77 FR 39491 - Combined Notice of Filings #2

    Science.gov (United States)

    2012-07-03

    ...-000. Applicants: Flat Ridge 2 Wind Energy LLC. Description: Notice of Self-Certification of Exempt Wholesale Generator Status of Flat Ridge 2 Wind Energy LLC. Filed Date: 6/26/12. Accession Number: 20120626... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission Combined Notice of Filings 2 Take notice...

  10. 77 FR 15747 - Combined Notice of Filings #1

    Science.gov (United States)

    2012-03-16

    ... Wholesale Generator Status of Magic Valley Wind Farm I, LLC. Filed Date: 3/7/12. Accession Number: 20120307..., LLC. Description: Notice of Self-Certification of Exempt Wholesale Generator Status of Wildcat Wind... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission Combined Notice of Filings 1 Take notice...

  11. 75 FR 30810 - Combined Notice of Filings #1

    Science.gov (United States)

    2010-06-02

    ...: ER09-934-004; ER09-936-001. Applicants: Bangor Hydro Electric Company. Description: Offer of Settlement of Bangor Hydro Electric Company. Filed Date: 05/24/2010. Accession Number: 20100524-5035. Comment...: CMS Energy Resource Management Company submits tariff filing under Schedule No. 1 Electric Tariff, to...

  12. 77 FR 43070 - Combined Notice of Filings #2

    Science.gov (United States)

    2012-07-23

    ...; ER10-2752-003. Applicants: Bangor Hydro Electric Company. Description: Notice of Changes in Status of Bangor Hydro Electric Company, et al. Filed Date: 7/16/12. Accession Number: 20120716-5085. Comments Due... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission Combined Notice of Filings 2 Take notice...

  13. 76 FR 62790 - Combined Notice of Filings #1

    Science.gov (United States)

    2011-10-11

    ... Tuesday, October 18, 2011. Docket Numbers: ER11-4651-000. Applicants: Ford Motor Company. Description: Notice of Termination of Ford Motor Company. Filed Date: 09/27/2011. Accession Numbers: 20110927-5136... England Power Company. Description: New England Power Company submits tariff filing per 35.17(b...

  14. 76 FR 56191 - Combined Notice of Filings #2

    Science.gov (United States)

    2011-09-12

    ... on Friday, September 23, 2011. Docket Numbers: ER11-4431-000. Applicants: Lucky Lady Oil Company. Description: Lucky Lady Oil Company Cancellation Notice. Filed Date: 09/02/2011. Accession Number: 20110902... that the Commission received the following electric rate filings: Docket Numbers: ER11-2970-001. [[Page...

  15. CINDA 99, supplement 2 to CINDA 97 (1988-1999). The index to literature and computer files on microscopic neutron data

    International Nuclear Information System (INIS)

    1999-01-01

CINDA, the Computer Index of Neutron Data, contains bibliographical references to measurements, calculations, reviews and evaluations of neutron cross-sections and other microscopic neutron data; it also includes index references to computer libraries of numerical neutron data available from four regional neutron data centres. The present issue, CINDA 99, is the second supplement to CINDA 97, the index to the literature on neutron data published after 1987. It supersedes the first supplement, CINDA 98. The complete CINDA file as of 1 June 1999 is contained in: the archival issue CINDA-A (5 volumes, 1990), CINDA 97 and the current issue CINDA 99. The compilation and publication of CINDA are the result of worldwide co-operation involving the following four data centres. Each centre is responsible for compiling the CINDA entries from the literature published in a defined geographical area given in brackets below: the USA National Nuclear Data Center at the Brookhaven National Laboratory, USA (United States of America and Canada); the Russian Nuclear Data Centre at the Fiziko-Energeticheskij Institut, Obninsk, Russian Federation (former USSR countries); the NEA Data Bank in Paris, France (European OECD member countries in Western Europe and Japan); and the IAEA Nuclear Data Section in Vienna, Austria (all other countries in Eastern Europe, Asia, Australia, Africa, Central and South America; also IAEA publications and translation journals). Besides the published CINDA books, up-to-date computer retrievals for specified CINDA information are currently available on request from the responsible CINDA centres, or via direct access to the on-line services as described in this publication.

  16. Access to and use of computers among clinical dental students of ...

    African Journals Online (AJOL)

Access to and use of computers among clinical dental students of the University of Lagos. PO Ayanbadejo, OO Sofola, OG Uti. No abstract available.

  17. Dynamic Non-Hierarchical File Systems for Exascale Storage

    Energy Technology Data Exchange (ETDEWEB)

    Long, Darrell E. [Univ. of California, Santa Cruz, CA (United States); Miller, Ethan L [Univ. of California, Santa Cruz, CA (United States)

    2015-02-24

This constitutes the final report for “Dynamic Non-Hierarchical File Systems for Exascale Storage”. The ultimate goal of this project was to improve data management in scientific computing and high-end computing (HEC) applications, and to achieve this goal we proposed: to develop the first, HEC-targeted, file system featuring rich metadata and provenance collection, extreme scalability, and future storage hardware integration as core design goals, and to evaluate and develop a flexible non-hierarchical file system interface suitable for providing more powerful and intuitive data management interfaces to HEC and scientific computing users. Data management is swiftly becoming a serious problem in the scientific community – while copious amounts of data are good for obtaining results, finding the right data is often daunting and sometimes impossible. Scientists participating in a Department of Energy workshop noted that most of their time was spent “...finding, processing, organizing, and moving data and it’s going to get much worse”. Scientists should not be forced to become data mining experts in order to retrieve the data they want, nor should they be expected to remember the naming convention they used several years ago for a set of experiments they now wish to revisit. Ideally, locating the data you need would be as easy as browsing the web. Unfortunately, existing data management approaches are usually based on hierarchical naming, a 40 year-old technology designed to manage thousands of files, not exabytes of data. Today’s systems do not take advantage of the rich array of metadata that current high-end computing (HEC) file systems can gather, including content-based metadata and provenance information. As a result, current metadata search approaches are typically ad hoc and often work by providing a parallel management system to the “main” file system, as is done in Linux (the locate utility), personal computers, and enterprise search.

  18. 75 FR 74036 - Combined Notice of Filings #1

    Science.gov (United States)

    2010-11-30

    .... Description: McNees Wallace & Nurick LLC council to The Trustees of the University of PA, a PA Non-Profit Corp... Interconnection Financial Security of Calpine Corporation. Filed Date: 11/10/2010. Accession Number: 20101110-5186... on the Applicant. In reference to filings initiating a new proceeding, interventions or protests...

  19. 78 FR 17391 - Combined Notice of Filings #1

    Science.gov (United States)

    2013-03-21

    ... Electric Company, Emera Energy Services, Inc., Emera Energy U.S. Subsidiary No. 1, Inc., Emera Energy U.S... in Status of Bangor Hydro Electric Company, et al. Filed Date: 3/13/13. Accession Number: 20130313... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission Combined Notice of Filings 1 Take notice...

  20. 78 FR 6815 - Combined Notice of Filings #1

    Science.gov (United States)

    2013-01-31

    .... Applicants: Bangor Hydro Electric Company, Emera Energy U.S. Subsidiary No. 1, Inc, Emera Energy U.S... Status of Bangor Hydro Electric Company, et al. Filed Date: 1/22/13. Accession Number: 20130122-5373... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission Combined Notice of Filings 1 Take notice...

  1. 77 FR 13114 - Combined Notice of Filings #1

    Science.gov (United States)

    2012-03-05

    .../12 Docket Numbers: ER12-1013-001 Applicants: Physical Systems Integration, LLC Description: Physical Systems Integration, LLC--Amendment to MBR Application to be effective 3/1/2012. Filed Date: 2/24/12... Market-Based Rate Tariff of Hampton Lumber Mills-Washington, Inc. Filed Date: 2/27/12 Accession Number...

  2. Digital teaching file. Concept, implementation, and experiences in a university setting

    International Nuclear Information System (INIS)

    Trumm, C.; Wirth, S.; Treitl, M.; Lucke, A.; Kuettner, B.; Pander, E.; Clevert, D.-A.; Glaser, C.; Reiser, M.; Dugas, M.

    2005-01-01

Film-based teaching files require a substantial investment in human, logistic, and financial resources. The combination of computer and network technology facilitates the workflow integration of distributing radiologic teaching cases within an institution (intranet) or via the World Wide Web (Internet). A digital teaching file (DTF) should include the following basic functions: image import from different sources and of different formats, editing of imported images, uniform case classification, quality control (peer review), controlled access for different user groups (in-house and external), and an efficient retrieval strategy. The portable network graphics (PNG) image format is especially suitable for DTFs because of several features: pixel support, 2D interlacing, gamma correction, and lossless compression. The American College of Radiology (ACR) ''Index for Radiological Diagnoses'' is hierarchically organized and thus an ideal classification system for a DTF. Computer-based training (CBT) in radiology is described in numerous publications, ranging from supplementing traditional learning methods to certified education via the Internet. The attractiveness of a CBT application can be increased by the integration of graphical and interactive elements, but this makes workflow integration of daily case input more difficult. Our DTF was built with established Internet instruments and integrated into a heterogeneous PACS/RIS environment. It facilitates a quick transfer (DICOM Send) of selected images at the time of interpretation to the DTF, and access to the DTF application at any time anywhere within the university hospital intranet employing a standard web browser. A DTF is a small but important building block in an institutional strategy of knowledge management. (orig.) [de]

  3. Evaluation of External Memory Access Performance on a High-End FPGA Hybrid Computer

    Directory of Open Access Journals (Sweden)

    Konstantinos Kalaitzis

    2016-10-01

Full Text Available The motivation of this research was to evaluate the main memory performance of a hybrid supercomputer such as the Convey HC-x, and ascertain how the controller performs in several access scenarios, vis-à-vis hand-coded memory prefetches. Such memory patterns are very useful in stencil computations. The theoretical bandwidth of the memory of the Convey is compared with the results of our measurements. The accurate study of the memory subsystem is particularly useful for users when they are developing their application-specific personality. Experiments were performed to measure the bandwidth between the coprocessor and the memory subsystem. The experiments aimed mainly at measuring the read access speed of the memory from the Application Engines (FPGAs). Different ways of accessing data were used in order to find the most efficient way to access memory; this approach is proposed for future work on the Convey HC-x. When performing a series of accesses to memory, non-uniform latencies occur. The Memory Controller of the Convey HC-x in the coprocessor attempts to cover this latency. We measure memory efficiency as the ratio of the number of memory accesses to the number of execution cycles. The result of this measurement converges to one in most cases. In addition, we performed experiments with hand-coded memory accesses. The analysis of the experimental results shows how the memory subsystem and Memory Controllers work. From this work we conclude that the memory controllers do an excellent job, largely because (transparently to the user) they seem to cache large amounts of data, and hence hand-coding is not needed in most situations.

  4. Secure external access to CERN's services to replace VPN

    CERN Multimedia

    2005-01-01

    CERN has recently experienced several computer security incidents caused by people opening VPN connections and (unknown to them) allowing malicious software to enter CERN. VPN should be used to connect to CERN only in extreme and exceptional circumstances and it is formally discouraged as a general solution. If incidents continue, the availability of the service will need to be reviewed. Recommended methods of connecting to CERN from the Internet for common functionalities such as e-mail, access to CERN web or file servers and interactive sessions on CERN systems are described at http://cern.ch/security/vpn

  5. 76 FR 37111 - Access to Confidential Business Information by Computer Sciences Corporation and Its Identified...

    Science.gov (United States)

    2011-06-24

    ... Business Information by Computer Sciences Corporation and Its Identified Subcontractors AGENCY: Environmental Protection Agency (EPA). ACTION: Notice. SUMMARY: EPA has authorized its contractor, Computer Sciences Corporation of Chantilly, VA and Its Identified Subcontractors, to access information which has...

  6. Computational system to create an entry file for replicating I-125 seeds simulating brachytherapy case studies using the MCNPX code

    Directory of Open Access Journals (Sweden)

    Leonardo da Silva Boia

    2014-03-01

Full Text Available Purpose: A computational system was developed for this paper in the C++ programming language, to create a 125I radioactive seed entry file, based on the positioning of a virtual grid (template) in voxel geometries, with the purpose of performing prostate cancer treatment simulations using the MCNPX code. Methods: The system is fed with information from the planning system with regard to each seed's location and its depth, and an entry file is automatically created with all the cards (instructions) for each seed regarding their cell blocks and surfaces spread out spatially in the 3D environment. The system provides a precise reproduction of the clinical scenario for the MCNPX code's simulation environment, thereby allowing the technique's in-depth study. Results and Conclusion: In order to validate the computational system, an entry file was created with 88 125I seeds that were inserted in the phantom's MAX06 prostate region, with initial activity for the seeds set at 0.27 mCi. Isodose curves were obtained in all the prostate slices in 5 mm steps in the 7 to 10 cm interval, totaling 7 slices. Variance reduction techniques were applied in order to optimize computational time and reduce uncertainties, such as photon and electron energy cutoffs at 4 keV and forced collisions regarding cells of interest. Through the acquisition of isodose curves, the results obtained show that hot spots have values above 300 Gy, as anticipated in the literature, stressing the importance of the sources' correct positioning, which the computational system developed provides, in order not to release excessive doses in adjacent organs at risk. The 144 Gy prescription curve showed in the validation process that it covers perfectly a large percentage of the volume, at the same time that it demonstrates a large
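The card-generation step this record describes can be sketched generically: given each seed's position, emit one cell card and one surface card per seed. This is an invented illustration (the paper's system is in C++, and the material number, density, and card layout below are placeholders, not the authors' actual input deck):

```python
# Hypothetical sketch: turn a list of seed positions from a planning
# grid into MCNP-style cell and surface card strings. All numeric
# values (material 1, density, importance) are illustrative only.
def seed_cards(positions, start_cell=100, radius=0.04):
    """positions: list of (x, y, z) in cm -> list of card strings."""
    cards = []
    for i, (x, y, z) in enumerate(positions):
        n = start_cell + i
        # Cell card: cell number, material, density, bounding surface.
        cards.append(f"{n} 1 -7.86 -{n}  imp:p=1   $ seed cell {i}")
        # Surface card: a small sphere at the seed's grid position.
        cards.append(f"{n} sph {x:.3f} {y:.3f} {z:.3f} {radius}   $ seed surface {i}")
    return cards

cards = seed_cards([(0.0, 0.0, 1.0), (0.5, 0.5, 1.5)])
for c in cards:
    print(c)
```

Generating the cards programmatically, as the paper's system does for 88 seeds, avoids the transcription errors that hand-editing an input deck of that size would invite.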

  7. Schools (Students) Exchanging CAD/CAM Files over the Internet.

    Science.gov (United States)

    Mahoney, Gary S.; Smallwood, James E.

    This document discusses how students and schools can benefit from exchanging computer-aided design/computer-aided manufacturing (CAD/CAM) files over the Internet, explains how files are exchanged, and examines the problem of selected hardware/software incompatibility. Key terms associated with information search services are defined, and several…

  8. 78 FR 23969 - Ewan 1, INC. n/k/a AccessKey IP, Inc.; Order of Suspension of Trading

    Science.gov (United States)

    2013-04-23

    ... SECURITIES AND EXCHANGE COMMISSION [File No. 500-1] Ewan 1, INC. n/k/a AccessKey IP, Inc.; Order of Suspension of Trading April 19, 2013. It appears to the Securities and Exchange Commission that... AccessKey IP, Inc. (``AccessKey'') because it has not filed a periodic report since it filed its...

  9. 75 FR 71114 - Combined Notice of Filings #1

    Science.gov (United States)

    2010-11-22

    ... Numbers: ER11-2060-000. Applicants: Edison Mission Marketing & Trading, Inc., Exelon Generation Company... Edison Mission Marketing & Trading, Inc., et. al. Filed Date: 11/09/2010. Accession Number: 20101109-5194....13(a)(2)(iii: Submission of Changes to Pricing Zone Rates--OMPA to be effective 7/26/2010. Filed Date...

  10. 77 FR 33208 - Combined Notice of Filings #1

    Science.gov (United States)

    2012-06-05

    .... Comments Due: 5 p.m. ET 6/12/12. Docket Numbers: ER12-1832-000. Applicants: Lucky Corridor, LLC... that the Commission received the following exempt wholesale generator filings: Docket Numbers: EG12-69... Generator Status of Shooting Star Wind Project, LLC. Filed Date: 5/22/12. Accession Number: 20120522-5163...

  11. 77 FR 12276 - Combined Notice of Filings #1

    Science.gov (United States)

    2012-02-29

    .... Docket Numbers: ER12-610-001. Applicants: Shiloh III Lessee, LLC. Description: Shiloh III Lessee MBR...: Perrin Ranch Wind, LLC Second Amendment to MBR Application to be effective 1/1/2012. Filed Date: 2/17/12... Cimarron Renewable Energy Company, LLC's MBR Application. Filed Date: 2/9/12. Accession Number: 20120209...

  12. Computerized index for teaching files

    International Nuclear Information System (INIS)

    Bramble, J.M.

    1989-01-01

A computerized index can be used to retrieve cases from a teaching file that have radiographic findings similar to those of an unknown case. The probability that a user will review cases with a correct diagnosis was estimated using the radiographic findings of arthritis in hand radiographs of 110 cases from a teaching file. The nearest-neighbor classification algorithm was used as a computer index to the 110 cases of arthritis. Each case was treated as an unknown and input to the computer index. The accuracy of the computer index in retrieving cases with the same diagnosis (including rheumatoid arthritis, gout, psoriatic arthritis, inflammatory osteoarthritis, and pyrophosphate arthropathy) was measured. A Bayes classifier algorithm was also tested on the same database. The estimated accuracy of the nearest-neighbor algorithm was 83%; by comparison, the estimated accuracy of the Bayes classifier algorithm was 78%. Conclusions: A computerized index to a teaching file based on the nearest-neighbor algorithm should allow the user to review cases with the correct diagnosis of an unknown case by entering the findings of the unknown case.
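The nearest-neighbor retrieval idea behind this index can be sketched with toy data. The finding vectors and diagnoses below are invented for illustration; the record does not describe the actual feature encoding:

```python
# Toy sketch of nearest-neighbor case retrieval: each stored case is a
# binary vector of radiographic findings; an unknown case retrieves the
# stored case at minimum Hamming distance and reports its diagnosis.
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def nearest_case(unknown, teaching_file):
    """teaching_file: list of (findings_vector, diagnosis) pairs."""
    return min(teaching_file, key=lambda case: hamming(unknown, case[0]))[1]

teaching_file = [
    ((1, 1, 0, 0), "rheumatoid arthritis"),
    ((0, 0, 1, 1), "gout"),
    ((1, 0, 1, 0), "psoriatic arthritis"),
]
print(nearest_case((1, 1, 0, 1), teaching_file))  # rheumatoid arthritis
```

The study's leave-one-out style evaluation (treating each of the 110 cases as the unknown in turn) is the standard way to estimate the accuracy of exactly this kind of retrieval.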

  13. The Global File System

    Science.gov (United States)

    Soltis, Steven R.; Ruwart, Thomas M.; OKeefe, Matthew T.

    1996-01-01

    The global file system (GFS) is a prototype design for a distributed file system in which cluster nodes physically share storage devices connected via a network-like fiber channel. Networks and network-attached storage devices have advanced to a level of performance and extensibility so that the previous disadvantages of shared disk architectures are no longer valid. This shared storage architecture attempts to exploit the sophistication of storage device technologies whereas a server architecture diminishes a device's role to that of a simple component. GFS distributes the file system responsibilities across processing nodes, storage across the devices, and file system resources across the entire storage pool. GFS caches data on the storage devices instead of the main memories of the machines. Consistency is established by using a locking mechanism maintained by the storage devices to facilitate atomic read-modify-write operations. The locking mechanism is being prototyped in the Silicon Graphics IRIX operating system and is accessed using standard Unix commands and modules.
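The device-maintained locking that GFS uses for atomic read-modify-write can be illustrated in miniature. This is not GFS code: a Python class with an in-process lock stands in for a network-attached storage device, and the cluster nodes are simulated by threads:

```python
# Illustrative sketch of a device-side lock making a read-modify-write
# on shared storage atomic across competing "nodes" (threads here).
import threading

class StorageDevice:
    """Stands in for a shared device that maintains per-block locks."""
    def __init__(self):
        self.blocks = {0: 0}
        self._locks = {0: threading.Lock()}

    def read_modify_write(self, block, fn):
        with self._locks[block]:          # lock held by the device
            self.blocks[block] = fn(self.blocks[block])

dev = StorageDevice()
nodes = [threading.Thread(
            target=lambda: [dev.read_modify_write(0, lambda v: v + 1)
                            for _ in range(1000)])
         for _ in range(4)]
for t in nodes: t.start()
for t in nodes: t.join()
print(dev.blocks[0])  # 4000: no increments lost despite 4 writers
```

Without the lock, concurrent increments would interleave and lose updates; placing the lock at the device, as GFS does, lets nodes coordinate without a central file server.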

  14. Understanding and Improving Blind Students' Access to Visual Information in Computer Science Education

    Science.gov (United States)

    Baker, Catherine M.

    Teaching people with disabilities tech skills empowers them to create solutions to problems they encounter and prepares them for careers. However, computer science is typically taught in a highly visual manner which can present barriers for people who are blind. The goal of this dissertation is to understand and decrease those barriers. The first projects I present looked at the barriers that blind students face. I first present the results of my survey and interviews with blind students with degrees in computer science or related fields. This work highlighted the many barriers that these blind students faced. I then followed-up on one of the barriers mentioned, access to technology, by doing a preliminary accessibility evaluation of six popular integrated development environments (IDEs) and code editors. I found that half were unusable and all had some inaccessible portions. As access to visual information is a barrier in computer science education, I present three projects I have done to decrease this barrier. The first project is Tactile Graphics with a Voice (TGV). This project investigated an alternative to Braille labels for those who do not know Braille and showed that TGV was a potential alternative. The next project was StructJumper, which created a modified abstract syntax tree that blind programmers could use to navigate through code with their screen reader. The evaluation showed that users could navigate more quickly and easily determine the relationships of lines of code when they were using StructJumper compared to when they were not. Finally, I present a tool for dynamic graphs (the type with nodes and edges) which had two different modes for handling focus changes when moving between graphs. I found that the modes support different approaches for exploring the graphs and therefore preferences are mixed based on the user's preferred approach. However, both modes had similar accuracy in completing the tasks. These projects are a first step towards

  15. Zebra: A striped network file system

    Science.gov (United States)

    Hartman, John H.; Ousterhout, John K.

    1992-01-01

    The design of Zebra, a striped network file system, is presented. Zebra applies ideas from log-structured file system (LFS) and RAID research to network file systems, resulting in a network file system that has scalable performance, uses its servers efficiently even when its applications are using small files, and provides high availability. Zebra stripes file data across multiple servers, so that the file transfer rate is not limited by the performance of a single server. High availability is achieved by maintaining parity information for the file system. If a server fails its contents can be reconstructed using the contents of the remaining servers and the parity information. Zebra differs from existing striped file systems in the way it stripes file data: Zebra does not stripe on a per-file basis; instead it stripes the stream of bytes written by each client. Clients write to the servers in units called stripe fragments, which are analogous to segments in an LFS. Stripe fragments contain file blocks that were written recently, without regard to which file they belong. This method of striping has numerous advantages over per-file striping, including increased server efficiency, efficient parity computation, and elimination of parity update.
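The parity mechanism this record relies on for availability is plain XOR, as in RAID. A minimal sketch (fragment contents and sizes invented; Zebra's real fragments are log segments, not tiny byte strings):

```python
# Sketch of stripe parity: the parity fragment is the XOR of the data
# fragments, so any one lost fragment can be rebuilt from the others.
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def parity(fragments):
    p = bytes(len(fragments[0]))     # all-zero start
    for f in fragments:
        p = xor_bytes(p, f)
    return p

fragments = [b"frag0!", b"frag1!", b"frag2!"]   # one fragment per server
p = parity(fragments)

# Server 1 fails: rebuild its fragment from the survivors plus parity.
rebuilt = xor_bytes(xor_bytes(fragments[0], fragments[2]), p)
print(rebuilt)  # b'frag1!'
```

Zebra's striping of the client's write stream (rather than individual files) is what makes this parity cheap to compute: the parity fragment is produced over freshly written data, avoiding the read-old-data step of per-file parity updates.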

  16. Cone-beam Computed Tomographic Assessment of Canal Centering Ability and Transportation after Preparation with Twisted File and Bio RaCe Instrumentation.

    Directory of Open Access Journals (Sweden)

    Kiamars Honardar

    2014-08-01

    Use of rotary nickel-titanium (NiTi) instruments for endodontic preparation has introduced a new era in endodontic practice, but these instruments have undergone dramatic modifications in order to achieve improved shaping ability. Cone-beam computed tomography (CBCT) has made it possible to accurately evaluate geometrical changes following canal preparation. This study was carried out to compare the canal centering ability and transportation of the Twisted File and BioRaCe rotary systems by means of cone-beam computed tomography. Thirty root canals from freshly extracted mandibular and maxillary teeth were selected. Teeth were mounted and scanned before and after preparation by CBCT at different apical levels. Specimens were divided into 2 groups of 15. In the first group Twisted File and in the second BioRaCe was used for canal preparation. Canal transportation and centering ability after preparation were assessed with NNT Viewer and Photoshop CS4 software. Statistical analysis was performed using the t-test and two-way ANOVA. All samples showed deviations from the original axes of the canals. No significant differences were detected between the two rotary NiTi instruments for canal centering ability in all sections. Regarding canal transportation, however, a significant difference was seen in the BioRaCe group at 7.5 mm from the apex. Under the conditions of this in vitro study, Twisted File and BioRaCe rotary NiTi files retained the original canal geometry.

  17. Intelligent Access to Sequence and Structure Databases (IASSD) - an interface for accessing information from major web databases.

    Science.gov (United States)

    Ganguli, Sayak; Gupta, Manoj Kumar; Basu, Protip; Banik, Rahul; Singh, Pankaj Kumar; Vishal, Vineet; Bera, Abhisek Ranjan; Chakraborty, Hirak Jyoti; Das, Sasti Gopal

    2014-01-01

    With the advent of the age of big data and advances in high-throughput technology, accessing data has become one of the most important steps in the entire knowledge discovery process. Most users are not able to decipher the query result that is obtained when non-specific keywords or a combination of keywords are used. Intelligent Access to Sequence and Structure Databases (IASSD) is a desktop application for the Windows operating system. It is written in Java and utilizes the Web Service Description Language (WSDL) files and JAR files of the E-utilities of various databases, such as the National Center for Biotechnology Information (NCBI) and the Protein Data Bank (PDB). Apart from that, IASSD allows the user to view protein structures using a Jmol application that supports conditional editing. The JAR file is freely available through e-mail from the corresponding author.
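
    The E-utilities interface the abstract mentions is a documented NCBI web API; the kind of query such a tool issues can be sketched as below (the esearch endpoint and its db/term/retmax parameters are NCBI's documented interface, but the helper itself is illustrative, not IASSD's code).

```python
from urllib.parse import urlencode

# Illustrative sketch of an NCBI E-utilities search query of the kind a tool
# like IASSD would issue (esearch endpoint and parameters are documented by
# NCBI; this helper is an assumption, not IASSD's actual code).
EUTILS_BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def esearch_url(db, term, retmax=20):
    """Build an esearch URL for the given database and search term."""
    params = urlencode({"db": db, "term": term, "retmax": retmax})
    return f"{EUTILS_BASE}/esearch.fcgi?{params}"

url = esearch_url("protein", "hemoglobin AND human[organism]")
```

    Fetching the resulting URL returns an XML list of record IDs, which a client can then pass to the efetch endpoint to retrieve full records.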

  18. 77 FR 20813 - Combined Notice of Filings #2

    Science.gov (United States)

    2012-04-06

    ..., Inc. submits tariff filing per 35: 03-30-12 ATXI Attachment O and GG Compliance to be effective 3/1... City of Pella and MEC to be effective 4/1/2012. Filed Date: 3/30/12. Accession Number: 20120330-5080...., PPL Electric Utilities Corporation. Description: PPL Electric submits revisions to OATT Attachment H...

  19. 78 FR 76607 - Combined Notice of Filings #2

    Science.gov (United States)

    2013-12-18

    ... Splitter Wind Farm, LLC submits Second Revised MBR to be effective 12/10/2013. Filed Date: 12/9/13...: Sagebrush Power Partners, LLC. Description: Sagebrush Power Partners, LLC submits First Rev MBR to be... Solutions LLC submits First Revised MBR Tariff to be effective 12/10/2013. Filed Date: 12/9/13. Accession...

  20. 78 FR 76608 - Combined Notice of Filings #1

    Science.gov (United States)

    2013-12-18

    ... Windpower LLC. Description: Second Revised MBR to be effective 12/7/2013. Filed Date: 12/6/13. Accession... Canyon Windpower II LLC. Description: Second Revised MBR Tariff to be effective 12/7/2013. Filed Date: 12...-000. Applicants: High Trail Wind Farm, LLC. Description: Second Revised MBR Tariff to be effective 12...

  1. Photonic-assisted ultrafast THz wireless access

    DEFF Research Database (Denmark)

    Yu, Xianbin; Chen, Ying; Galili, Michael

    THz technology has been considered feasible for ultrafast wireless data communication, to meet the increasing demand for next-generation fast wireless access, e.g., huge data file transfers and fast mobile data stream access. This talk reviews recent progress in high-speed THz wireless...

  2. Evaluation of clinical data in childhood asthma. Application of a computer file system

    International Nuclear Information System (INIS)

    Fife, D.; Twarog, F.J.; Geha, R.S.

    1983-01-01

    A computer file system was used in our pediatric allergy clinic to assess the value of chest roentgenograms and hemoglobin determinations used in the examination of patients and to correlate exposure to pets and forced hot air with the severity of asthma. Among 889 children with asthma, 20.7% had abnormal chest roentgenographic findings, excluding hyperinflation and peribronchial thickening, and 0.7% had abnormal hemoglobin values. Environmental exposure to pets or forced hot air was not associated with increased severity of asthma, as assessed by five measures of outcome: number of medications administered, requirement for corticosteroids, frequency of clinic visits, frequency of emergency room visits, and frequency of hospitalizations

  3. Optimizing Input/Output Using Adaptive File System Policies

    Science.gov (United States)

    Madhyastha, Tara M.; Elford, Christopher L.; Reed, Daniel A.

    1996-01-01

    Parallel input/output characterization studies and experiments with flexible resource management algorithms indicate that adaptivity is crucial to file system performance. In this paper we propose an automatic technique for selecting and refining file system policies based on application access patterns and execution environment. An automatic classification framework allows the file system to select appropriate caching and pre-fetching policies, while performance sensors provide feedback used to tune policy parameters for specific system environments. To illustrate the potential performance improvements possible using adaptive file system policies, we present results from experiments involving classification-based and performance-based steering.
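
    The classification-based steering idea can be illustrated with a toy sketch (hypothetical code; the paper's framework is far more sophisticated): classify a stream of block offsets as sequential or random, then pick a prefetching policy accordingly.

```python
# Toy illustration of classification-based file system policy selection
# (hypothetical; thresholds and policy names are invented for illustration).

def classify(offsets):
    """Label an access stream 'sequential' if most steps advance by one block."""
    steps = [b - a for a, b in zip(offsets, offsets[1:])]
    sequential = sum(1 for s in steps if s == 1)
    return "sequential" if steps and sequential / len(steps) > 0.8 else "random"

def choose_policy(offsets):
    """Map the classified access pattern to a caching/prefetching policy."""
    return {"sequential": "readahead", "random": "demand-only"}[classify(offsets)]

assert choose_policy([0, 1, 2, 3, 4, 5]) == "readahead"
assert choose_policy([9, 2, 7, 1, 30, 4]) == "demand-only"
```

    In the paper's full scheme, performance sensors would additionally feed back measurements (e.g., cache hit rate) to tune parameters such as the readahead depth for the specific execution environment.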

  4. 75 FR 37789 - Orlando Utilities Commission; Notice of Filing

    Science.gov (United States)

    2010-06-30

    ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. NJ10-2-000] Orlando Utilities Commission; Notice of Filing June 23, 2010. Take notice that on June 11, 2010, the Orlando Utilities Commission filed, pro forma revised tariff sheets for inclusion in its open access transmission...

  5. Privacy authentication using key attribute-based encryption in mobile cloud computing

    Science.gov (United States)

    Mohan Kumar, M.; Vijayan, R.

    2017-11-01

    Mobile cloud computing is becoming more popular as the number of smartphone users increases, so the security level of cloud computing has to be raised. Privacy authentication using key-attribute-based encryption helps users with business development, where data is shared with an organization through the cloud in a secure manner. In privacy authentication, the sender of the data grants access to the chosen receivers; for all others, access is denied. In the sender application, the user chooses the file to be sent to the receivers, and that data is then encrypted using key-attribute-based encryption with the AES algorithm. The resulting ciphertext is stored in the Amazon cloud along with the key value and the receiver list.

  6. Remote I/O : fast access to distant storage.

    Energy Technology Data Exchange (ETDEWEB)

    Foster, I.; Kohr, D., Jr.; Krishnaiyer, R.; Mogill, J.

    1997-12-17

    As high-speed networks make it easier to use distributed resources, it becomes increasingly common that applications and their data are not colocated. Users have traditionally addressed this problem by manually staging data to and from remote computers. We argue instead for a new remote I/O paradigm in which programs use familiar parallel I/O interfaces to access remote file systems. In addition to simplifying remote execution, remote I/O can improve performance relative to staging by overlapping computation and data transfer or by reducing communication requirements. However, remote I/O also introduces new technical challenges in the areas of portability, performance, and integration with distributed computing systems. We propose techniques designed to address these challenges and describe a remote I/O library called RIO that we have developed to evaluate the effectiveness of these techniques. RIO addresses issues of portability by adopting the quasi-standard MPI-IO interface and by defining a RIO device and RIO server within the ADIO abstract I/O device architecture. It addresses performance issues by providing traditional I/O optimizations such as asynchronous operations and through implementation techniques such as buffering and message forwarding to offload communication overheads. RIO uses the Nexus communication library to obtain access to configuration and security mechanisms provided by the Globus wide-area computing toolkit. Microbenchmarks and application experiments demonstrate that our techniques achieve acceptable performance in most situations and can improve turnaround time relative to staging.
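
    The overlap of computation and data transfer that the abstract credits for remote I/O's advantage over staging can be sketched as follows (a hypothetical toy, not RIO's MPI-IO-based implementation): the next chunk is fetched in a background thread while the current one is processed.

```python
import concurrent.futures
import time

# Toy illustration of overlapping computation with data transfer, the idea
# behind remote I/O's advantage over staging. Chunk contents and timings are
# invented for illustration; RIO itself works through MPI-IO, not this API.
def fetch_chunk(i):
    time.sleep(0.05)            # stand-in for network transfer latency
    return bytes([i]) * 4       # stand-in for remote file data

def process(chunk):
    return sum(chunk)           # stand-in for the application's computation

results = []
with concurrent.futures.ThreadPoolExecutor(max_workers=2) as pool:
    future = pool.submit(fetch_chunk, 0)
    for i in range(1, 5):
        chunk = future.result()                # wait for chunk i-1
        future = pool.submit(fetch_chunk, i)   # fetch chunk i in background
        results.append(process(chunk))         # ...while processing chunk i-1
    results.append(process(future.result()))
```

    With staging, transfer and computation would run back to back; with the overlap above, most of the transfer latency hides behind the computation.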

  7. Survey of Canadian Myotonic Dystrophy Patients' Access to Computer Technology.

    Science.gov (United States)

    Climans, Seth A; Piechowicz, Christine; Koopman, Wilma J; Venance, Shannon L

    2017-09-01

    Myotonic dystrophy type 1 is an autosomal dominant condition affecting distal hand strength, energy, and cognition. Increasingly, patients and families are seeking information online. An online neuromuscular patient portal under development can help patients access resources and interact with each other regardless of location. It is unknown how individuals living with myotonic dystrophy interact with technology and whether barriers to access exist. We aimed to characterize technology use among participants with myotonic dystrophy and to determine whether there is interest in a patient portal. Surveys were mailed to 156 participants with myotonic dystrophy type 1 registered with the Canadian Neuromuscular Disease Registry. Seventy-five participants (60% female) responded; almost half were younger than 46 years. Most (84%) used the internet; almost half of the responders (47%) used social media. The complexity and cost of technology were commonly cited reasons not to use technology. The majority of responders (76%) were interested in a myotonic dystrophy patient portal. Patients in a Canada-wide registry of myotonic dystrophy have access to and use technology such as computers and mobile phones. These patients expressed interest in a portal that would provide them with an opportunity to network with others with myotonic dystrophy and to access information about the disease.

  8. A portable grid-enabled computing system for a nuclear material study

    International Nuclear Information System (INIS)

    Tsujita, Yuichi; Arima, Tatsumi; Takekawa, Takayuki; Suzuki, Yoshio

    2010-01-01

    We have built a portable grid-enabled computing system specialized for our molecular dynamics (MD) simulation program to study Pu material easily. Experimental approach to reveal properties of Pu materials is often accompanied by some difficulties such as radiotoxicity of actinides. Since a computational approach reveals new aspects to researchers without such radioactive facilities, we address an MD computation. In order to have more realistic results about e.g., melting point or thermal conductivity, we need a large scale of parallel computations. Most of application users who don't have supercomputers in their institutes should use a remote supercomputer. For such users, we have developed the portable and secured grid-enabled computing system to utilize a grid computing infrastructure provided by Information Technology Based Laboratory (ITBL). This system enables us to access remote supercomputers in the ITBL system seamlessly from a client PC through its graphical user interface (GUI). Typically it enables seamless file accesses on the GUI. Furthermore monitoring of standard output or standard error is available to see progress of an executed program. Since the system provides fruitful functionalities which are useful for parallel computing on a remote supercomputer, application users can concentrate on their researches. (author)

  9. 76 FR 16621 - Combined Notice of Filings #1

    Science.gov (United States)

    2011-03-24

    ... submits tariff filing per 35.13(a)(2)(iii: Crete Energy Venture, LLC Reactive Service Rate Schedule to be... Transfer Reactive Power Revenue Requirement to be effective 6/1/2011. Filed Date: 03/16/2011. Accession..., DC. There is an eSubscription link on the Web site that enables subscribers to receive e-mail...

  10. 78 FR 39722 - Combined Notice of Filings #1

    Science.gov (United States)

    2013-07-02

    ... Solar, LLC. Description: Notice of Change in Status of the EDF-RE MBR Companies. Filed Date: 6/21/13.... Docket Numbers: ER13-1747-000. Applicants: eBay Inc. Description: eBay Inc. MBR Application and Initial MBR Tariff to be effective 8/26/2013. Filed Date: 6/21/13. Accession Number: 20130621-5118. Comments...

  11. Android Access Control Extension

    Directory of Open Access Journals (Sweden)

    Anton Baláž

    2015-12-01

    The main objective of this work is to analyze and extend the security model of mobile devices running Android OS. The provided security extension is a Linux kernel security module that allows the system administrator to restrict a program's capabilities with per-program profiles. Profiles can allow capabilities like network access, raw socket access, and the permission to read, write, or execute files on matching paths. The module supplements the traditional Android capability access control model by providing mandatory access control (MAC) based on path. This extension increases the security of access to system objects in a device and allows creating security sandboxes per application.
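
    The per-program, path-based profile check described above can be sketched in user-space pseudocode (hypothetical profile format and names; the actual module enforces this inside the Linux kernel):

```python
from fnmatch import fnmatch

# Hypothetical sketch of per-program, path-based mandatory access control in
# the spirit of the module described above. Profile layout, program names,
# and paths are invented for illustration.
PROFILES = {
    "com.example.app": [
        ("/data/data/com.example.app/*", {"read", "write"}),
        ("/sdcard/Download/*", {"read"}),
    ],
}

def allowed(program, path, op):
    """Grant an operation only if a matching profile rule permits it."""
    for pattern, ops in PROFILES.get(program, []):
        if fnmatch(path, pattern):
            return op in ops
    return False  # mandatory access control: default deny

assert allowed("com.example.app", "/sdcard/Download/x.pdf", "read")
assert not allowed("com.example.app", "/sdcard/Download/x.pdf", "write")
assert not allowed("unknown.app", "/etc/passwd", "read")
```

    The default-deny fallthrough is what makes the control *mandatory*: an application cannot grant itself access that its profile does not list.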

  12. TRANSNET -- access to radioactive and hazardous materials transportation codes and databases

    International Nuclear Information System (INIS)

    Cashwell, J.W.

    1992-01-01

    TRANSNET has been developed and maintained by Sandia National Laboratories under the sponsorship of the United States Department of Energy (DOE) Office of Environmental Restoration and Waste Management to permit outside access to computerized routing, risk and systems analysis models, and associated databases. The goal of the TRANSNET system is to enable transfer of transportation analytical methods and data to qualified users by permitting direct, timely access to the up-to-date versions of the codes and data. The TRANSNET facility comprises a dedicated computer with telephone ports on which these codes and databases are adapted, modified, and maintained. To permit the widest spectrum of outside users, TRANSNET is designed to minimize hardware and documentation requirements. The user is thus required to have an IBM-compatible personal computer, a Hayes-compatible modem with communications software, and a telephone. Maintenance and operation of the TRANSNET facility are underwritten by the program sponsor(s), as are updates to the respective models and data; thus the only charges to the user of the system are telephone hookup charges. TRANSNET provides access to the most recent versions of the models and data developed by or for Sandia National Laboratories. Code modifications that have been made since the last published documentation are noted to the user on the introductory screens. User-friendly interfaces have been developed for each of the codes and databases on TRANSNET. In addition, users are provided with default input data sets for typical problems, which can either be used directly or edited. Direct transfers of analytical or data files between codes are provided to permit the user to perform complex analyses with a minimum of input. Recent developments to the TRANSNET system include use of the system to directly pass data files between both national and international users as well as development and integration of graphical depiction techniques.

  13. CRYPTOGRAPHIC SECURE CLOUD STORAGE MODEL WITH ANONYMOUS AUTHENTICATION AND AUTOMATIC FILE RECOVERY

    Directory of Open Access Journals (Sweden)

    Sowmiya Murthy

    2014-10-01

    We propose a secure cloud storage model that addresses security and storage issues for cloud computing environments. Security is achieved by anonymous authentication, which ensures that cloud users remain anonymous while being duly authenticated. To achieve this goal, we propose a digital-signature-based authentication scheme with a decentralized architecture for distributed key management with multiple Key Distribution Centers. A homomorphic encryption scheme using the Paillier public-key cryptosystem is used for encrypting the data that is stored in the cloud. We incorporate a query-driven approach for validating the access policies defined by an individual user for his/her data, i.e., access is granted to a requester only if his credentials match the hidden access policy. Further, since data is vulnerable to losses or damages due to the vagaries of the network, we propose an automatic retrieval mechanism where lost data is recovered by data replication and file replacement with a string-matching algorithm. We describe a prototype implementation of our proposed model.
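
    The replication-based automatic recovery idea can be sketched as below (a hypothetical toy: a SHA-256 digest stands in for the paper's string-matching corruption check, and the storage layout is invented):

```python
import hashlib

# Toy sketch of replica-based automatic file recovery in the spirit of the
# model above (hypothetical; a SHA-256 digest stands in for the paper's
# string-matching check, and the in-memory store is invented).
store = {"primary": b"important data", "replica": b"important data"}
checksum = hashlib.sha256(store["primary"]).hexdigest()

def read_with_recovery():
    """Return the primary copy, replacing it from the replica if corrupted."""
    if hashlib.sha256(store["primary"]).hexdigest() != checksum:
        store["primary"] = store["replica"]   # automatic file replacement
    return store["primary"]

store["primary"] = b"corrupted!!"             # simulate damage or loss
assert read_with_recovery() == b"important data"
```

    A production system would keep replicas on independent nodes and verify them periodically rather than only on read, but the detect-then-replace loop is the same.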

  14. CERN Confirms commitment to Open Access

    CERN Multimedia

    2005-01-01

    The CERN Library Information desk.

    At a meeting on the Wednesday before Easter, the Executive Committee endorsed a policy of open access to all the laboratory's results, as expressed in the document ‘Continuing CERN action on Open Access' (http://cds.cern.ch/record/828991/files/open-2005-006.pdf), released by its Scientific Information Policy Board (SIPB) earlier in the month. "This underlines CERN's commitment to sharing the excitement of fundamental research with as wide an audience as possible", said Guido Altarelli, current SIPB chairman. Open Access to scientific knowledge is today the goal of an increasing component of the worldwide scientific community. It is a concept, made possible by new electronic tools, which would bring enormous benefits to all readers by giving them free access to research results. CERN has implicitly supported such moves from its very beginning. Its Convention (http://cds.cern.ch/record/330625/files/cm-p00046871.pdf), adopted in 1953, requires openness, stipulating that "......

  15. ERX: a software for editing files containing X-ray spectra to be used in exposure computational models

    International Nuclear Information System (INIS)

    Cabral, Manuela O.M.; Vieira, Jose W.; Silva, Alysson G.; Leal Neto, Viriato; Oliveira, Alex C.H.; Lima, Fernando R.A.

    2011-01-01

    Exposure Computational Models (ECMs) are utilities that simulate situations in which irradiation occurs in a given environment. An ECM is composed primarily of an anthropomorphic model (phantom) and a Monte Carlo (MC) code. This paper presents a tutorial for the software Espectro de Raios-X (ERX). This software reads and performs numerical and graphical analysis of text files containing diagnostic X-ray spectra, for use in the radioactive-source algorithms of the ECMs of the Grupo de Dosimetria Numerica. ERX allows the user to select among several X-ray spectra in the energy range most commonly used in diagnostic radiology clinics. In the current version of ERX there are two types of input files: those contained in the mspectra.dat file and those resulting from MC simulations in Geant4. The software allows the construction of charts of the Probability Density Function (PDF) and Cumulative Distribution Function (CDF) of a selected spectrum, as well as a table with the values of these functions and of the spectrum. In addition, ERX allows the user to make comparative analyses between the PDF graphics of the two available spectrum catalogs, and it can also perform dosimetric evaluations with the selected spectrum. A software tool of this kind is important for researchers in numerical dosimetry because of the diversity of diagnostic-radiology X-ray machines, which implies highly diverse input data. ERX thus gives the group independence from the origin of the data contained in the catalogs it creates. (author)
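
    The PDF/CDF computation ERX performs on a spectrum reduces to normalizing per-bin counts and accumulating them, which can be sketched as follows (the energy bins and counts below are made up for illustration, not taken from ERX's catalogs):

```python
# Sketch of the PDF/CDF computation a tool like ERX performs on a spectrum
# (illustrative; the counts below are invented, not from ERX's catalogs).
def pdf_cdf(counts):
    """Normalize raw counts to a PDF and accumulate it into a CDF."""
    total = sum(counts)
    pdf = [c / total for c in counts]
    cdf = []
    running = 0.0
    for p in pdf:
        running += p
        cdf.append(running)
    return pdf, cdf

counts = [10, 30, 40, 20]          # photons per energy bin (made up)
pdf, cdf = pdf_cdf(counts)
assert abs(sum(pdf) - 1.0) < 1e-9  # a PDF integrates to 1
assert abs(cdf[-1] - 1.0) < 1e-9   # a CDF ends at 1
```

    The CDF form is what an MC code samples from: drawing a uniform random number in [0, 1) and finding the first bin whose CDF value exceeds it selects a photon energy with the spectrum's probability.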

  16. Migration Performance for Legacy Data Access

    Directory of Open Access Journals (Sweden)

    Kam Woods

    2008-12-01

    We present performance data relating to the use of migration in a system we are creating to provide web access to heterogeneous document collections in legacy formats. Our goal is to enable sustained access to collections such as these when faced with increasing obsolescence of the necessary supporting applications and operating systems. Our system allows searching and browsing of the original files within their original contexts utilizing binary images of the original media. The system uses static and dynamic file migration to enhance collection browsing, and emulation to support both the use of legacy programs to access data and long-term preservation of the migration software. While we provide an overview of the architectural issues in building such a system, the focus of this paper is an in-depth analysis of file migration using data gathered from testing our software on 1,885 CD-ROMs and DVDs. These media are among the thousands of collections of social and scientific data distributed by the United States Government Printing Office (GPO) on legacy media (CD-ROM, DVD, floppy disk) under the Federal Depository Library Program (FDLP) over the past 20 years.

  17. 78 FR 56223 - Combined Notice of Filings #2

    Science.gov (United States)

    2013-09-12

    ...-5229. Comments Due: 5 p.m. ET 9/26/13. Docket Numbers: ER13-1346-000. Applicants: Mesa Wind Power Corporation. Description: Mesa Wind Refund Report to be effective 9/4/2013. Filed Date: 9/5/13. Accession.... Applicants: Duke Energy Progress, Inc. Description: MBR Name Change to be effective 10/25/2013. Filed Date: 9...

  18. 78 FR 57146 - Combined Notice of Filings #1

    Science.gov (United States)

    2013-09-17

    ... Management, LLC, GenOn Mid-Atlantic, LLC, Green Mountain Energy Company, High Plains Ranch II, LLC, Huntley... Revised Service Agreement No. 3452; Queue No. Y1-020 to be effective 8/8/2013. Filed Date: 9/9/13... Agreement No. 3639--Queue Position W4-038 to be effective 8/8/2013. Filed Date: 9/9/13. Accession Number...

  19. 76 FR 69257 - Combined Notice of Filings #1

    Science.gov (United States)

    2011-11-08

    ..., Liberty Electric Power, LLC, Empire Generating Co, LLC, ECP Energy I, LLC, EquiPower Resources Management... Service Agreement No. 3085--Queue No. W3-156 to be effective 9/26/2011. Filed Date: 10/26/2011. Accession... tariff filing per 35.13(a)(2)(iii: Original Service Agreement No. 3089--Queue No. W3-029 to be effective...

  20. Geothermal-energy files in computer storage: sites, cities, and industries

    Energy Technology Data Exchange (ETDEWEB)

    O' Dea, P.L.

    1981-12-01

    The site, city, and industrial files are described. The data presented are from the hydrothermal site file containing about three thousand records which describe some of the principal physical features of hydrothermal resources in the United States. Data elements include: latitude, longitude, township, range, section, surface temperature, subsurface temperature, the field potential, and well depth for commercialization. (MHR)

  1. Internet Use and Access Among Pregnant Women via Computer and Mobile Phone: Implications for Delivery of Perinatal Care.

    Science.gov (United States)

    Peragallo Urrutia, Rachel; Berger, Alexander A; Ivins, Amber A; Beckham, A Jenna; Thorp, John M; Nicholson, Wanda K

    2015-03-30

    The use of Internet-based behavioral programs may be an efficient, flexible method to enhance prenatal care and improve pregnancy outcomes. There are few data about access to, and use of, the Internet via computers and mobile phones among pregnant women. We describe pregnant women's access to, and use of, computers, mobile phones, and computer technologies (eg, Internet, blogs, chat rooms) in a southern United States population. We describe the willingness of pregnant women to participate in Internet-supported weight-loss interventions delivered via computers or mobile phones. We conducted a cross-sectional survey among 100 pregnant women at a tertiary referral center ultrasound clinic in the southeast United States. Data were analyzed using Stata version 10 (StataCorp) and R (R Core Team 2013). Means and frequency procedures were used to describe demographic characteristics, access to computers and mobile phones, and use of specific Internet modalities. Chi-square testing was used to determine whether there were differences in technology access and Internet modality use according to age, race/ethnicity, income, or children in the home. Fisher's exact test was used to describe preferences to participate in Internet-based postpartum weight-loss interventions via computer versus mobile phone. Logistic regression was used to determine demographic characteristics associated with these preferences. The study sample was 61.0% white, 26.0% black, 6.0% Hispanic, and 7.0% Asian with a mean age of 31.0 (SD 5.1). Most participants had access to a computer (89/100, 89.0%) or mobile phone (88/100, 88.0%) for at least 8 hours per week. Access remained high (>74%) across age groups, racial/ethnic groups, income levels, and number of children in the home. Internet/Web (94/100, 94.0%), email (90/100, 90.0%), and Facebook (50/100, 50.0%) were the most commonly used Internet technologies. Women aged less than 30 years were more likely to report use of Twitter and chat rooms

  2. FileMaker Pro 11 The Missing Manual

    CERN Document Server

    Prosser, Susan

    2010-01-01

    This hands-on, friendly guide shows you how to harness FileMaker's power to make your information work for you. With a few mouse clicks, the FileMaker Pro 11 database helps you create and print corporate reports, manage a mailing list, or run your entire business. FileMaker Pro 11: The Missing Manual helps you get started, build your database, and produce results, whether you're running a business, pursuing a hobby, or planning your retirement. It's a thorough, accessible guide for new, non-technical users, as well as those with more experience. Start up: Get your first database up and running

  3. Analytics Platform for ATLAS Computing Services

    CERN Document Server

    Vukotic, Ilija; The ATLAS collaboration; Bryant, Lincoln

    2016-01-01

    Big Data technologies have proven to be very useful for storage, processing and visualization of derived metrics associated with ATLAS distributed computing (ADC) services. Log file data and database records, and metadata from a diversity of systems have been aggregated and indexed to create an analytics platform for ATLAS ADC operations analysis. Dashboards, wide area data access cost metrics, user analysis patterns, and resource utilization efficiency charts are produced flexibly through queries against a powerful analytics cluster. Here we explore whether these techniques and analytics ecosystem can be applied to add new modes of open, quick, and pervasive access to ATLAS event data so as to simplify access and broaden the reach of ATLAS public data to new communities of users. An ability to efficiently store, filter, search and deliver ATLAS data at the event and/or sub-event level in a widely supported format would enable or significantly simplify usage of machine learning tools like Spark, Jupyter, R, S...

  4. The NCAR Research Data Archive's Hybrid Approach for Data Discovery and Access

    Science.gov (United States)

    Schuster, D.; Worley, S. J.

    2013-12-01

    The NCAR Research Data Archive (RDA, http://rda.ucar.edu) maintains a variety of data discovery and access capabilities for its 600+ dataset collections to support the varying needs of a diverse user community. In-house developed and standards-based community tools offer services to more than 10,000 users annually. By number of users, the largest group is external and accesses the RDA through web-based protocols; the internal NCAR HPC users are fewer in number, but typically access more data volume. This paper will detail the data discovery and access services maintained by the RDA to support both user groups, and show metrics that illustrate how the community is using the services. The distributed search capability enabled by standards-based community tools, such as Geoportal and an OAI-PMH access point that serves multiple metadata standards, provides pathways for external users to initially discover RDA holdings. From here, in-house developed web interfaces leverage primary discovery-level metadata databases that support keyword and faceted searches. Internal NCAR HPC users, or those familiar with the RDA, may go directly to the dataset collection of interest and refine their search based on rich file collection metadata. Multiple levels of metadata have proven to be invaluable for discovery within terabyte-sized archives composed of many atmospheric or oceanic levels, hundreds of parameters, and often numerous grid and time resolutions. Once users find the data they want, their access needs may vary as well. A THREDDS data server running on targeted dataset collections enables remote file access through OPeNDAP and other web-based protocols, primarily for external users. In-house developed tools give all users the capability to submit data subset extraction and format conversion requests through scalable, HPC-based delayed-mode batch processing. Users can monitor their RDA-based data processing progress and receive instructions on how to access the data when it is

  5. MR-AFS: a global hierarchical file-system

    International Nuclear Information System (INIS)

    Reuter, H.

    2000-01-01

    The next generation of fusion experiments will use object-oriented technology, creating the need for worldwide sharing of an underlying hierarchical file system. The Andrew File System (AFS) is a well-known and widespread global distributed file system. Multiple-Resident AFS (MR-AFS) combines the features of AFS with hierarchical storage management systems. Files in MR-AFS may therefore be migrated to secondary storage, such as robotic tape libraries. MR-AFS is in use at IPP for the current experiments and for data originating from supercomputer applications. Experiences and scalability issues are discussed

  6. DMFS: A Data Migration File System for NetBSD

    Science.gov (United States)

    Studenmund, William

    2000-01-01

    I have recently developed DMFS, a Data Migration File System, for NetBSD. This file system provides kernel support for the data migration system being developed by my research group at NASA/Ames. The file system utilizes an underlying file store to provide the file backing, and coordinates user and system access to the files. It stores its internal metadata in a flat file, which resides on a separate file system. This paper will first describe our data migration system to provide a context for DMFS, then it will describe DMFS. It also will describe the changes to NetBSD needed to make DMFS work. Then it will give an overview of the file archival and restoration procedures, and describe how some typical user actions are modified by DMFS. Lastly, the paper will present simple performance measurements which indicate that there is little performance loss due to the use of the DMFS layer.

  7. Spin-transfer torque magnetoresistive random-access memory technologies for normally off computing (invited)

    International Nuclear Information System (INIS)

    Ando, K.; Yuasa, S.; Fujita, S.; Ito, J.; Yoda, H.; Suzuki, Y.; Nakatani, Y.; Miyazaki, T.

    2014-01-01

    Most parts of present computer systems are made of volatile devices, and the power supplied to them to avoid information loss causes huge energy waste. We can eliminate this meaningless energy loss by utilizing the non-volatile function of advanced spin-transfer torque magnetoresistive random-access memory (STT-MRAM) technology and create a new type of computer, i.e., normally off computers. Critical tasks in achieving normally off computers are implementations of STT-MRAM technologies in the main memory and low-level cache memories. STT-MRAM technology for applications to the main memory has been successfully developed by using perpendicular STT-MRAMs, and faster STT-MRAM technologies for applications to the cache memory are now being developed. The present status of STT-MRAMs and the challenges that remain for normally off computers are discussed.

  8. AliEnFS - a Linux File System for the AliEn Grid Services

    OpenAIRE

    Peters, Andreas J.; Saiz, P.; Buncic, P.

    2003-01-01

    Among the services offered by the AliEn (ALICE Environment http://alien.cern.ch) Grid framework there is a virtual file catalogue to allow transparent access to distributed data-sets using various file transfer protocols. $alienfs$ (AliEn File System) integrates the AliEn file catalogue as a new file system type into the Linux kernel using LUFS, a hybrid user space file system framework (Open Source http://lufs.sourceforge.net). LUFS uses a special kernel interface level called VFS (Virtual F...

  9. 75 FR 38803 - Combined Notice of Filings #1

    Science.gov (United States)

    2010-07-06

    ... Tinker Gen Co.; Algonquin Energy Services Inc.; Algonquin Northern Maine Gen Co. Description: Algonquin Tinker Gen Co et al. resubmits Substitute Second et al. to FERC Electric Tariff, Fourth Revised Volume 1... X of the Midwest ISO's Open Access Transmission etc. Filed Date: 06/23/2010. Accession Number...

  10. Perceptions of Cataloguers and End-Users towards Bilingual Authority Files.

    Science.gov (United States)

    Abdoulaye, Kaba

    2002-01-01

    Analyzes and describes bilingual authority files at the main library of the International Islamic University of Malaysia. Highlights include a review of multilingual research; perceptions of end users and catalogers; problems with bilingual files; and use of the OPAC (online public access catalog) by users. (Author/LRW)

  11. Computer Security: confidentiality is everybody’s business

    CERN Multimedia

    Stefan Lueders, Computer Security Team

    2015-01-01

    Recently, a zip file with confidential information was mistakenly made public on one of CERN’s websites. Although the file was only intended for members of an internal committee, when placing it onto the CERN website, someone made a mistake when setting the access permissions and, thus, made the file accessible to everyone visiting the site!   Unfortunately, this is but one example of such mistakes. We have seen other documents made accessible to a much wider audience than originally intended… CERN takes serious measures to ensure the confidentiality of data. Confidential or “sensitive” documents (following the nomenclature set out in the CERN Data Protection Policy) deserve professional handling and access protections given only to the people who really need to access them. As such, they must not be widely circulated as attachments in e-mails and, most definitely, must not be stored on random public websites for the sole purpose of shari...

  12. Optimizing Instruction Scheduling and Register Allocation for Register-File-Connected Clustered VLIW Architectures

    Science.gov (United States)

    Tang, Haijing; Wang, Siye; Zhang, Yanjun

    2013-01-01

    Clustering has become a common trend in very long instruction word (VLIW) architectures to solve the problems of area, energy consumption, and design complexity. Register-file-connected clustered (RFCC) VLIW architecture uses the mechanism of a global register file to accomplish inter-cluster data communication, thus eliminating the performance and energy-consumption penalty caused by explicit inter-cluster data move operations in traditional bus-connected clustered (BCC) VLIW architecture. However, the limited number of access ports to the global register file has become an issue which must be well addressed; otherwise performance and energy consumption will be harmed. In this paper, we present compiler optimization techniques for an RFCC VLIW architecture called Lily, which is designed for encryption systems. These techniques aim at optimizing performance and energy consumption for the Lily architecture through appropriate manipulation of the code generation process to better manage accesses to the global register file. All the techniques have been implemented and evaluated. The results show that our techniques can significantly reduce the penalty of performance and energy consumption due to the access-port limitation of the global register file. PMID:23970841

  13. Optimizing Instruction Scheduling and Register Allocation for Register-File-Connected Clustered VLIW Architectures

    Directory of Open Access Journals (Sweden)

    Haijing Tang

    2013-01-01

    Clustering has become a common trend in very long instruction word (VLIW) architectures to solve the problems of area, energy consumption, and design complexity. Register-file-connected clustered (RFCC) VLIW architecture uses the mechanism of a global register file to accomplish inter-cluster data communication, thus eliminating the performance and energy-consumption penalty caused by explicit inter-cluster data move operations in traditional bus-connected clustered (BCC) VLIW architecture. However, the limited number of access ports to the global register file has become an issue which must be well addressed; otherwise performance and energy consumption will be harmed. In this paper, we present compiler optimization techniques for an RFCC VLIW architecture called Lily, which is designed for encryption systems. These techniques aim at optimizing performance and energy consumption for the Lily architecture through appropriate manipulation of the code generation process to better manage accesses to the global register file. All the techniques have been implemented and evaluated. The results show that our techniques can significantly reduce the penalty of performance and energy consumption due to the access-port limitation of the global register file.

  14. Virtual file system for PSDS

    Science.gov (United States)

    Runnels, Tyson D.

    1993-01-01

    This is a case study. It deals with the use of a 'virtual file system' (VFS) for Boeing's UNIX-based Product Standards Data System (PSDS). One of the objectives of PSDS is to store digital standards documents. The file-storage requirements are that the files must be rapidly accessible, stored for long periods of time - as though they were paper - protected from disaster, and accumulating to about 80 billion characters (80 gigabytes). This volume of data will be approached in the first two years of the project's operation. The approach chosen is to install a hierarchical file-migration system using optical disk cartridges. Files are migrated from high-performance media to lower-performance optical media based on a least-frequently-used algorithm. The optical media are less expensive per character stored and are removable. Vital statistics about the removable optical disk cartridges are maintained in a database. The assembly of hardware and software acts as a single virtual file system, transparent to the PSDS user. The files are copied to 'backup-and-recover' media whose vital statistics are also stored in the database. Seventeen months into operation, PSDS is storing 49 gigabytes. A number of operational and performance problems were overcome. Costs are under control. New and/or alternative uses for the VFS are being considered.
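    The migration policy described above can be sketched in a few lines. The following is a minimal, hypothetical illustration of least-frequently-used candidate selection; the record fields, paths, and counts are invented for the example, since the actual PSDS database schema is not described here:

```python
from dataclasses import dataclass

@dataclass
class FileRecord:
    """Vital statistics tracked per file (hypothetical schema)."""
    path: str
    size: int          # bytes
    access_count: int  # references since last migration

def pick_migration_candidates(records, bytes_needed):
    """Select least-frequently-used files until enough space is freed."""
    freed, chosen = 0, []
    for rec in sorted(records, key=lambda r: r.access_count):
        if freed >= bytes_needed:
            break
        chosen.append(rec.path)
        freed += rec.size
    return chosen

files = [
    FileRecord("/pds/specs/a.doc", 40_000, access_count=12),
    FileRecord("/pds/specs/b.doc", 25_000, access_count=1),
    FileRecord("/pds/specs/c.doc", 60_000, access_count=3),
]
# The two least-used files are migrated first.
print(pick_migration_candidates(files, bytes_needed=70_000))
```

    A real migration system would also weigh file size and age, but the least-frequently-used ordering is the core of the policy.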

  15. FileMaker Pro 9

    CERN Document Server

    Coffey, Geoff

    2007-01-01

    FileMaker Pro 9: The Missing Manual is the clear, thorough and accessible guide to the latest version of this popular desktop database program. FileMaker Pro lets you do almost anything with the information you give it. You can print corporate reports, plan your retirement, or run a small country -- if you know what you're doing. This book helps non-technical folks like you get in, get your database built, and get the results you need. Pronto.The new edition gives novices and experienced users the scoop on versions 8.5 and 9. It offers complete coverage of timesaving new features such as the Q

  16. SuperB R&D computing program: HTTP direct access to distributed resources

    Science.gov (United States)

    Fella, A.; Bianchi, F.; Ciaschini, V.; Corvo, M.; Delprete, D.; Diacono, D.; Di Simone, A.; Franchini, P.; Donvito, G.; Giacomini, F.; Gianoli, A.; Longo, S.; Luitz, S.; Luppi, E.; Manzali, M.; Pardi, S.; Perez, A.; Rama, M.; Russo, G.; Santeramo, B.; Stroili, R.; Tomassetti, L.

    2012-12-01

    The SuperB asymmetric-energy e+e- collider and detector to be built at the newly founded Nicola Cabibbo Lab will provide a uniquely sensitive probe of New Physics in the flavor sector of the Standard Model. Studying minute effects in the heavy quark and heavy lepton sectors requires a data sample of 75 ab^-1 and a luminosity target of 10^36 cm^-2 s^-1. Increasing network performance, including in the Wide Area Network (WAN) environment, and the capability to read data remotely with good efficiency are providing new possibilities and opening new scenarios in the data access field. Subjects like data access and data availability in a distributed environment are key points in the definition of the computing model for an HEP experiment like SuperB. R&D efforts in this field have been carried out during the last year in order to release the Computing Technical Design Report within 2013. WAN direct access to data has been identified as one of the most interesting viable options; robust and reliable protocols such as HTTP/WebDAV and xrootd are the subjects of a specific R&D line in a mid-term scenario. In this work we present the R&D results obtained in the study of new data access technologies for typical HEP use cases, focusing on specific protocols such as HTTP and WebDAV in Wide Area Network scenarios. We report on efficiency, performance and reliability tests performed in a data analysis context. Future R&D plans include HTTP and xrootd protocol comparison tests in terms of performance, efficiency, security and available features.
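    HTTP's partial-read mechanism, which underlies this style of WAN direct access, is easy to illustrate. The sketch below builds an RFC 7233 `Range` header for a byte-range read and parses the `Content-Range` header of a `206 Partial Content` reply; it is a generic HTTP illustration, not code from the SuperB R&D effort:

```python
def byte_range_header(offset, length):
    """Build an HTTP/1.1 Range header for a partial read (RFC 7233)."""
    return {"Range": f"bytes={offset}-{offset + length - 1}"}

def parse_content_range(value):
    """Parse 'bytes start-end/total' from a 206 Partial Content reply."""
    unit, _, rng = value.partition(" ")
    span, _, total = rng.partition("/")
    start, _, end = span.partition("-")
    return int(start), int(end), int(total)

# Request 4 KiB starting at byte 1024 of a remote file.
hdr = byte_range_header(1024, 4096)
print(hdr["Range"])                                    # bytes=1024-5119
print(parse_content_range("bytes 1024-5119/1000000"))  # (1024, 5119, 1000000)
```

    Because a client can ask for exactly the bytes it needs, analysis jobs can read selected baskets of a remote file without transferring the whole dataset.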

  17. Workarounds to computer access in healthcare organizations: you want my password or a dead patient?

    Science.gov (United States)

    Koppel, Ross; Smith, Sean; Blythe, Jim; Kothari, Vijay

    2015-01-01

    Workarounds to computer access in healthcare are sufficiently common that they often go unnoticed. Clinicians focus on patient care, not cybersecurity. We argue and demonstrate that understanding workarounds to healthcare workers' computer access requires not only analyses of computer rules, but also interviews and observations with clinicians. In addition, we illustrate the value of shadowing clinicians and conducting focus groups to understand their motivations and tradeoffs for circumvention. Ethnographic investigation of the medical workplace emerges as a critical method of research because in the inevitable conflict between even well-intended people and the machines, it's the people who are the more creative, flexible, and motivated. We conducted interviews and observations with hundreds of medical workers and with 19 cybersecurity experts, CIOs, CMIOs, CTOs, and IT workers to obtain their perceptions of computer security. We also shadowed clinicians as they worked. We present dozens of ways workers ingeniously circumvent security rules. The clinicians we studied were not "black hat" hackers, but just professionals seeking to accomplish their work despite the security technologies and regulations.

  18. Combining sync&share functionality with filesystem-like access

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    In our presentation we will analyse approaches to combining the sync & share functionality with file-system-like access to data. While relatively small data volumes (GBs) can be distributed by a sync & share application across user devices such as PCs, laptops and mobiles, interacting with really large data volumes (TBs, PBs) may require an additional remote data access mechanism such as a filesystem-like interface. We will discuss several ways of offering filesystem-like access in addition to sync & share functionality. Today's sync & share solutions may employ various data organisations in the back-end, including local and distributed file systems and object stores. Therefore various approaches to providing the client with filesystem-like access are necessary in these systems. We will present possible options to integrate the filesystem-like access with sync & share functionality in a popular sync & share system. We will also show a NDS2 project solution where data backups and archives are kept sec...

  19. 76 FR 52650 - Federal Energy Regulatory Commission Combined Notice of Filings #1

    Science.gov (United States)

    2011-08-23

    ... Depreciation Rate Update) to be effective 1/1/2012. Filed Date: 08/11/2011. Accession Number: 20110811-5114...)(iii: JEA Scherer Unit 4 TSA Amendment Filing (SEGCO Depreciation Rate Update) to be effective 1/1/2012...

  20. Impact of the Digital Divide on Computer Use and Internet Access on the Poor in Nigeria

    Science.gov (United States)

    Tayo, Omolara; Thompson, Randall; Thompson, Elizabeth

    2016-01-01

    We recruited 20 community members in Ido Local Government Area, Oyo state and Yewa Local Government Area, Ogun state in Nigeria to explore experiences and perceptions of Internet access and computer use. Face-to-face interviews were conducted using open-ended questions to collect qualitative data regarding accessibility of information and…

  1. JNDC FP decay data file

    International Nuclear Information System (INIS)

    Yamamoto, Tohru; Akiyama, Masatsugu

    1981-02-01

    The decay data file for fission product nuclides (FP DECAY DATA FILE) has been prepared for summation calculations of the decay heat of fission products. The average energies released in β- and γ-transitions have been calculated with the computer code PROFP. The calculated results and necessary information have been arranged in tabular form, together with estimated results for 470 nuclides for which decay data are not available experimentally. (author)
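    The summation method referred to above computes total decay heat as a sum over nuclides of activity times average released energy per decay. The sketch below uses invented decay constants, inventories, and energies purely for illustration; a real calculation would take these from the evaluated decay data file:

```python
import math

# Toy nuclide table: (decay constant lambda [1/s], initial number N0,
# average energy per decay, E_beta + E_gamma, in MeV).
# All three values per nuclide are illustrative, not evaluated data.
nuclides = [
    (0.05,  1.0e10, 2.1),
    (0.01,  5.0e9,  1.4),
    (0.002, 2.0e10, 0.9),
]

MEV_TO_J = 1.602176634e-13

def decay_heat(t):
    """Summation method: P(t) = sum_i lambda_i * N0_i * exp(-lambda_i * t) * E_i."""
    return sum(lam * n0 * math.exp(-lam * t) * e * MEV_TO_J
               for lam, n0, e in nuclides)

for t in (0.0, 10.0, 100.0):
    print(f"t={t:6.1f} s  P={decay_heat(t):.3e} W")
```

    The decay heat falls monotonically as the short-lived nuclides burn out, which is why the average β and γ energies per nuclide are the key quantities tabulated in the file.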

  2. Internet Use and Access Among Pregnant Women via Computer and Mobile Phone: Implications for Delivery of Perinatal Care

    Science.gov (United States)

    Peragallo Urrutia, Rachel; Berger, Alexander A; Ivins, Amber A; Beckham, A Jenna; Thorp Jr, John M

    2015-01-01

    Background The use of Internet-based behavioral programs may be an efficient, flexible method to enhance prenatal care and improve pregnancy outcomes. There are few data about access to, and use of, the Internet via computers and mobile phones among pregnant women. Objective We describe pregnant women’s access to, and use of, computers, mobile phones, and computer technologies (eg, Internet, blogs, chat rooms) in a southern United States population. We describe the willingness of pregnant women to participate in Internet-supported weight-loss interventions delivered via computers or mobile phones. Methods We conducted a cross-sectional survey among 100 pregnant women at a tertiary referral center ultrasound clinic in the southeast United States. Data were analyzed using Stata version 10 (StataCorp) and R (R Core Team 2013). Means and frequency procedures were used to describe demographic characteristics, access to computers and mobile phones, and use of specific Internet modalities. Chi-square testing was used to determine whether there were differences in technology access and Internet modality use according to age, race/ethnicity, income, or children in the home. The Fisher’s exact test was used to describe preferences to participate in Internet-based postpartum weight-loss interventions via computer versus mobile phone. Logistic regression was used to determine demographic characteristics associated with these preferences. Results The study sample was 61.0% white, 26.0% black, 6.0% Hispanic, and 7.0% Asian with a mean age of 31.0 (SD 5.1). Most participants had access to a computer (89/100, 89.0%) or mobile phone (88/100, 88.0%) for at least 8 hours per week. Access remained high (>74%) across age groups, racial/ethnic groups, income levels, and number of children in the home. Internet/Web (94/100, 94.0%), email (90/100, 90.0%), and Facebook (50/100, 50.0%) were the most commonly used Internet technologies. Women aged less than 30 years were more likely to

  3. File-System Workload on a Scientific Multiprocessor

    Science.gov (United States)

    Kotz, David; Nieuwejaar, Nils

    1995-01-01

    Many scientific applications have intense computational and I/O requirements. Although multiprocessors have permitted astounding increases in computational performance, the formidable I/O needs of these applications cannot be met by current multiprocessors and their I/O subsystems. To prevent I/O subsystems from forever bottlenecking multiprocessors and limiting the range of feasible applications, new I/O subsystems must be designed. The successful design of computer systems (both hardware and software) depends on a thorough understanding of their intended use. A system designer optimizes the policies and mechanisms for the cases expected to be most common in the user's workload. In the case of multiprocessor file systems, however, designers have been forced to build file systems based only on speculation about how they would be used, extrapolating from file-system characterizations of general-purpose workloads on uniprocessor and distributed systems or scientific workloads on vector supercomputers (see sidebar on related work). To help these system designers, in June 1993 we began the Charisma Project, so named because the project sought to characterize I/O in scientific multiprocessor applications from a variety of production parallel computing platforms and sites. The Charisma project is unique in recording individual read and write requests in live, multiprogramming, parallel workloads (rather than from selected or non-parallel applications). In this article, we present the first results from the project: a characterization of the file-system workload on an iPSC/860 multiprocessor running production, parallel scientific applications at NASA's Ames Research Center.

  4. 75 FR 15429 - Dynegy Power Marketing, Inc;. Notice of Filing

    Science.gov (United States)

    2010-03-29

    ... Marketing, Inc;. Notice of Filing March 22, 2010. Take notice that on December 15, 2008, Dynegy Power Marketing, Inc., Dynegy Power Corp., El Segundo Power LLC, Long Beach Generation LLC, Cabrillo Power I LLC... Commission, 888 First Street, NE., Washington, DC 20426. This filing is accessible online at http://www.ferc...

  5. INFN-Pisa scientific computation environment (GRID, HPC and Interactive Analysis)

    International Nuclear Information System (INIS)

    Arezzini, S; Carboni, A; Caruso, G; Ciampa, A; Coscetti, S; Mazzoni, E; Piras, S

    2014-01-01

    The INFN-Pisa Tier2 infrastructure is described, optimized not only for GRID CPU and storage access, but also for more interactive use of the resources in order to provide good solutions for the final data-analysis step. The data center, equipped with about 6700 production cores, permits the use of modern analysis techniques realized via advanced statistical tools (like RooFit and RooStat) implemented in multicore systems. In particular, POSIX file-storage access integrated with standard SRM access is provided. The unified storage infrastructure is therefore described, based on GPFS and Xrootd, used both for the SRM data repository and for interactive POSIX access. Such a common infrastructure allows transparent access to the Tier2 data for users' interactive analysis. The organization of a specialized many-core CPU facility devoted to interactive analysis is also described, along with the login mechanism integrated with the INFN-AAI (National INFN Infrastructure) to extend site access and use to a geographically distributed community. This infrastructure is also used for a national computing facility serving the INFN theoretical community, enabling a synergic use of computing and storage resources. Our center, initially developed for the HEP community, is now growing and also includes fully integrated HPC resources. In recent years a cluster facility (1000 cores, parallel use via InfiniBand connection) has been installed and managed, and we are now upgrading this facility to provide resources for all the intermediate-level HPC computing needs of the INFN theoretical national community.

  6. XRootd, disk-based, caching proxy for optimization of data access, data placement and data replication

    International Nuclear Information System (INIS)

    Bauerdick, L A T; Bloom, K; Bockelman, B; Bradley, D C; Dasu, S; Dost, J M; Sfiligoi, I; Tadel, A; Tadel, M; Wuerthwein, F; Yagil, A

    2014-01-01

    Following the success of the XRootd-based US CMS data federation, the AAA project investigated extensions of the federation architecture by developing two sample implementations of an XRootd, disk-based, caching proxy. The first one simply starts fetching a whole file as soon as a file-open request is received, and is suitable when completely random file access is expected or it is already known that a whole file will be read. The second implementation supports on-demand downloading of partial files. Extensions to the Hadoop Distributed File System have been developed to allow for an immediate fallback to network access when local HDFS storage fails to provide the requested block. Both cache implementations are in pre-production testing at UCSD.
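    The two caching strategies can be contrasted with a toy model. The sketch below is not XRootd code; the class, the block size, and the in-memory "remote" store are all hypothetical, but it shows the trade-off between eager whole-file prefetch and lazy on-demand block fetch:

```python
class CachingProxy:
    """Toy disk-cache proxy: 'whole' mode prefetches the entire file on
    open; 'partial' mode fetches 64 KiB blocks on demand. Names and
    block size are illustrative, not XRootd's actual implementation."""

    BLOCK = 64 * 1024

    def __init__(self, remote, mode="partial"):
        self.remote = remote          # dict: path -> bytes ("origin server")
        self.mode = mode
        self.cache = {}               # (path, block number) -> bytes

    def open(self, path):
        if self.mode == "whole":      # eager: fetch every block now
            data = self.remote[path]
            for i in range(0, len(data), self.BLOCK):
                self.cache[(path, i // self.BLOCK)] = data[i:i + self.BLOCK]

    def read(self, path, offset, length):
        out = bytearray()
        first = offset // self.BLOCK
        last = (offset + length - 1) // self.BLOCK
        for blk in range(first, last + 1):
            if (path, blk) not in self.cache:   # lazy: fetch only on miss
                start = blk * self.BLOCK
                self.cache[(path, blk)] = self.remote[path][start:start + self.BLOCK]
            out += self.cache[(path, blk)]
        lo = offset - first * self.BLOCK
        return bytes(out[lo:lo + length])

remote = {"/store/file.root": bytes(range(256)) * 1024}   # 256 KiB test file

p = CachingProxy(remote, mode="partial")
p.open("/store/file.root")
print(p.read("/store/file.root", 100, 8))   # bytes 100..107 of the file
print(len(p.cache))                         # 1: only the touched block was fetched

p2 = CachingProxy(remote, mode="whole")
p2.open("/store/file.root")
print(len(p2.cache))                        # 4: whole-file prefetch cached every block
```

    The eager mode wins for sequential whole-file reads; the lazy mode avoids transferring blocks a sparse reader never touches.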

  7. Evaluation of Single File Systems Reciproc, Oneshape, and WaveOne using Cone Beam Computed Tomography -An In Vitro Study.

    Science.gov (United States)

    Dhingra, Annil; Ruhal, Nidhi; Miglani, Anjali

    2015-04-01

    Successful endodontic therapy depends on many factors; one of the most important steps in any root canal treatment is root canal preparation. In addition, respecting the original shape of the canal is of equal importance; otherwise, canal aberrations such as transportation will be created. The purpose of this study is to compare and evaluate the reciprocating WaveOne and Reciproc and the rotary OneShape single-file instrumentation systems with respect to cervical dentin thickness, cross-sectional area and canal transportation on mandibular first molars using cone beam computed tomography. Sixty mandibular first molars extracted for periodontal reasons were collected from the Department of Oral and Maxillofacial. Teeth were prepared using one rotary and two reciprocating single-file systems, and were divided into 3 groups of 20 teeth each. Pre-instrumentation and post-instrumentation scans were performed and evaluated for three parameters: canal transportation, cervical dentinal thickness and cross-sectional area. Results were analysed statistically using ANOVA and post-hoc Tukey analysis. The change in cross-sectional area after filing showed a significant difference at 0 mm, 1 mm, 2 mm and 7 mm. Evaluating each file system over a distance of 7 mm (starting from 0 mm, with evaluation at 1 mm, 2 mm, 3 mm, 5 mm and 7 mm), the results showed a significant difference among the file systems at various lengths (p = 0.014, 0.046, 0.004, 0.028, 0.005 and 0.029, respectively). The mean value of cervical dentinal removal was maximal at all levels for OneShape and minimal for WaveOne, showing the better quality of WaveOne and Reciproc over the OneShape file system. A significant difference was found at 9 mm, 11 mm and 12 mm between all three file systems (p<0.001, <0.001, <0.001). It was concluded that reciprocating motion is better than rotary motion for all three parameters: canal transportation, cross-sectional area and cervical dentinal thickness.

  8. Distributed Data Management and Distributed File Systems

    CERN Document Server

    Girone, Maria

    2015-01-01

    The LHC program has been successful in part due to the globally distributed computing resources used for collecting, serving, processing, and analyzing the large LHC datasets. The introduction of distributed computing early in the LHC program spawned the development of new technologies and techniques to synchronize information and data between physically separated computing centers. Two of the most challenging services are the distributed file systems and the distributed data management systems. In this paper I will discuss how we have evolved from local site services to more globally independent services in the areas of distributed file systems and data management, and how these capabilities may continue to evolve into the future. I will address the design choices, the motivations, and the future evolution of the computing systems used for High Energy Physics.

  9. Using speech recognition to enhance the Tongue Drive System functionality in computer access.

    Science.gov (United States)

    Huo, Xueliang; Ghovanloo, Maysam

    2011-01-01

    The Tongue Drive System (TDS) is a wireless tongue-operated assistive technology (AT) which can enable people with severe physical disabilities to access computers and drive powered wheelchairs using their volitional tongue movements. TDS offers six discrete commands, simultaneously available to the users, for pointing and typing as a substitute for mouse and keyboard in computer access, respectively. To enhance the TDS performance in typing, we have added a microphone, an audio codec, and a wireless audio link to its readily available 3-axial magnetic sensor array, and combined it with commercially available speech recognition software, Dragon NaturallySpeaking, which is regarded as one of the most efficient ways for text entry. Our preliminary evaluations indicate that the combined TDS and speech recognition technologies can provide end users with significantly higher performance than using each technology alone, particularly in completing tasks that require both pointing and text entry, such as web surfing.

  10. Gender Differences in Availability, Internet Access and Rate of Usage of Computers among Distance Education Learners.

    Science.gov (United States)

    Atan, Hanafi; Sulaiman, Fauziah; Rahman, Zuraidah Abd; Idrus, Rozhan Mohammed

    2002-01-01

    Explores the level of availability of computers, Internet accessibility, and the rate of usage of computers both at home and at the workplace between distance education learners according to gender. Results of questionnaires completed at the Universiti Sains Malaysia indicate that distance education reduces the gender gap. (Author/LRW)

  11. Methods and Algorithms for Detecting Objects in Video Files

    Directory of Open Access Journals (Sweden)

    Nguyen The Cuong

    2018-01-01

    Video files store motion pictures and sounds as they occur in real life. In today's world, the need for automated processing of information in video files is increasing. Automated processing of information has a wide range of applications including office/home surveillance cameras, traffic control, sports applications, remote object detection, and others. In particular, detection and tracking of object movement in video files plays an important role. This article describes methods of detecting objects in video files. Today, this problem in the field of computer vision is being studied worldwide.
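    One of the simplest detection methods in this family is frame differencing between consecutive grayscale frames. The sketch below is a generic illustration, not code from the article, and uses plain Python lists as frames to stay self-contained:

```python
def detect_motion(prev, curr, threshold=30):
    """Frame differencing: mark pixels whose intensity changed by more
    than `threshold` between two consecutive grayscale frames."""
    return [[abs(c - p) > threshold for p, c in zip(prow, crow)]
            for prow, crow in zip(prev, curr)]

def bounding_box(mask):
    """Smallest (row0, col0, row1, col1) box covering all changed pixels."""
    hits = [(r, c) for r, row in enumerate(mask)
            for c, v in enumerate(row) if v]
    if not hits:
        return None
    rows = [r for r, _ in hits]
    cols = [c for _, c in hits]
    return min(rows), min(cols), max(rows), max(cols)

# Two synthetic 6x6 frames: a bright 2x2 "object" appears in the second one.
prev = [[0] * 6 for _ in range(6)]
curr = [row[:] for row in prev]
for r in (2, 3):
    for c in (1, 2):
        curr[r][c] = 200

print(bounding_box(detect_motion(prev, curr)))   # (2, 1, 3, 2)
```

    Production systems refine this idea with background modelling, noise filtering, and tracking of the detected regions across frames.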

  12. Design and Implementation of Linux Access Control Model

    Institute of Scientific and Technical Information of China (English)

    Wei Xiaomeng; Wu Yongbin; Zhuo Jingchuan; Wang Jianyun; Haliqian Mayibula

    2017-01-01

    In this paper, the design and implementation of an access control model for the Linux system are discussed in detail. The design is based on the RBAC model, combined with the inherent characteristics of the Linux system, and adds support for process and role transitions. The core idea of the model is that files are divided into different categories, and the access authority for every category is distributed to several roles. Roles are then assigned to users of the system, and the role of a user can transit from one to another by running an executable file.
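    The core idea (file categories, per-category rights granted to roles, and role transitions triggered by executables) can be sketched as follows. All names, categories, and transition rules here are invented for illustration; the paper's actual kernel-level implementation is not shown:

```python
# Minimal RBAC sketch: files belong to categories, categories grant
# operations to roles, and running an executable may transit the role.
# Every path, role, and rule below is hypothetical.

CATEGORY_OF = {"/etc/passwd": "system", "/home/a/notes": "user"}
GRANTS = {  # role -> category -> allowed operations
    "admin":    {"system": {"read", "write"}, "user": {"read", "write"}},
    "operator": {"system": {"read"},          "user": {"read"}},
}
TRANSITIONS = {("operator", "/usr/bin/backup"): "admin"}  # role transition rule

def allowed(role, path, op):
    """Check whether `role` may perform `op` on the file's category."""
    cat = CATEGORY_OF.get(path)
    return op in GRANTS.get(role, {}).get(cat, set())

def run(role, executable):
    """Running an executable may transit the caller's role."""
    return TRANSITIONS.get((role, executable), role)

print(allowed("operator", "/etc/passwd", "write"))      # False
role = run("operator", "/usr/bin/backup")
print(role, allowed(role, "/etc/passwd", "write"))      # admin True
```

    Grouping files into categories keeps the policy table small: rights are assigned per category rather than per file, which is what makes the role-to-category mapping manageable.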

  13. A Novel Query Method for Spatial Data in Mobile Cloud Computing Environment

    Directory of Open Access Journals (Sweden)

    Guangsheng Chen

    2018-01-01

    With the development of network communication and a 1000-fold increase in traffic demand from 4G to 5G, it is critical to provide an efficient and fast spatial data access interface for applications in the mobile environment. In view of the low I/O efficiency and high latency of existing methods, this paper presents a memory-based spatial data query method that uses the distributed memory file system Alluxio to store data and builds a two-level index based on the Alluxio key-value structure, aiming to solve the low efficiency of traditional methods. In addition, according to the characteristics of the Spark computing framework, a data input format for spatial data queries is proposed, which can selectively read file data and reduce data I/O. Comparative experiments show that the memory-based file system Alluxio has better I/O performance than a disk file system; compared with the traditional distributed query method, the proposed method greatly reduces retrieval time.
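    The two-level index idea can be sketched with ordinary dictionaries standing in for the Alluxio key-value store. The cell size, key names, and class below are assumptions made for illustration, not the paper's actual design:

```python
# Level 1 maps a grid cell to a partition key; level 2 maps the
# partition key to the records stored in that cell. A range query
# then reads only the partitions whose cells intersect the box.

CELL = 10.0  # degrees per grid cell (illustrative choice)

def cell_key(lon, lat):
    return (int(lon // CELL), int(lat // CELL))

class TwoLevelIndex:
    def __init__(self):
        self.level1 = {}   # cell -> partition key
        self.level2 = {}   # partition key -> list of (lon, lat, payload)

    def insert(self, lon, lat, payload):
        cell = cell_key(lon, lat)
        part = self.level1.setdefault(cell, f"part-{len(self.level1)}")
        self.level2.setdefault(part, []).append((lon, lat, payload))

    def query(self, lon0, lat0, lon1, lat1):
        """Selective read: touch only partitions intersecting the box."""
        hits = []
        for cx in range(int(lon0 // CELL), int(lon1 // CELL) + 1):
            for cy in range(int(lat0 // CELL), int(lat1 // CELL) + 1):
                part = self.level1.get((cx, cy))
                for lon, lat, p in self.level2.get(part, []):
                    if lon0 <= lon <= lon1 and lat0 <= lat <= lat1:
                        hits.append(p)
        return hits

idx = TwoLevelIndex()
idx.insert(116.4, 39.9, "Beijing")
idx.insert(121.5, 31.2, "Shanghai")
idx.insert(2.35, 48.86, "Paris")
print(idx.query(110, 30, 125, 45))   # ['Beijing', 'Shanghai']
```

    The first level prunes partitions cheaply; the second level does the exact point-in-box test only on the few partitions that survive pruning, which is what cuts the data I/O.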

  14. Recent Evolution of the Offline Computing Model of the NOvA Experiment

    Science.gov (United States)

    Habig, Alec; Norman, A.

    2015-12-01

    The NOvA experiment at Fermilab is a long-baseline neutrino experiment designed to study νe appearance in a νμ beam. Over the last few years there has been intense work to streamline the computing infrastructure in preparation for data, which started to flow in from the far detector in Fall 2013. Major accomplishments for this effort include migration to the use of off-site resources through the use of the Open Science Grid and upgrading the file-handling framework from simple disk storage to a tiered system using a comprehensive data management and delivery system to find and access files on either disk or tape storage. NOvA has already produced more than 6.5 million files and more than 1 PB of raw data and Monte Carlo simulation files which are managed under this model. The current system has demonstrated sustained rates of up to 1 TB/hour of file transfer by the data handling system. NOvA pioneered the use of new tools and this paved the way for their use by other Intensity Frontier experiments at Fermilab. Most importantly, the new framework places the experiment's infrastructure on a firm foundation, and is ready to produce the files needed for first physics.

  15. Recent Evolution of the Offline Computing Model of the NOvA Experiment

    International Nuclear Information System (INIS)

    Habig, Alec; Group, Craig; Norman, A.

    2015-01-01

    The NOvA experiment at Fermilab is a long-baseline neutrino experiment designed to study νe appearance in a ν μ beam. Over the last few years there has been intense work to streamline the computing infrastructure in preparation for data, which started to flow in from the far detector in Fall 2013. Major accomplishments for this effort include migration to the use of off-site resources through the use of the Open Science Grid and upgrading the file-handling framework from simple disk storage to a tiered system using a comprehensive data management and delivery system to find and access files on either disk or tape storage. NOvA has already produced more than 6.5 million files and more than 1 PB of raw data and Monte Carlo simulation files which are managed under this model. The current system has demonstrated sustained rates of up to 1 TB/hour of file transfer by the data handling system. NOvA pioneered the use of new tools and this paved the way for their use by other Intensity Frontier experiments at Fermilab. Most importantly, the new framework places the experiment's infrastructure on a firm foundation, and is ready to produce the files needed for first physics. (paper)

  16. 75 FR 32341 - Import Administration IA ACCESS Pilot Program

    Science.gov (United States)

    2010-06-08

    ... submitted electronically need not also be submitted in hard copy. Persons wishing to submit written comments in hard copy should file one signed original and two copies of each set of comments to the address... . Any questions concerning file formatting, document conversion, access on the Internet, or other...

  17. Experiences of registered nurses with regard to accessing health information at the point-of-care via mobile computing devices

    Directory of Open Access Journals (Sweden)

    Esmeralda Ricks

    2015-11-01

    Full Text Available Background: The volume of health information necessary to provide competent health care today has become overwhelming. Mobile computing devices are fast becoming an essential clinical tool for accessing health information at the point-of-care of patients. Objectives: This study explored and described how registered nurses experienced accessing information at the point-of-care via mobile computing devices (MCDs). Method: A qualitative, exploratory, descriptive and contextual design was used. Ten in-depth interviews were conducted with purposively sampled registered nurses employed by a state hospital in the Nelson Mandela Bay Municipality (NMBM). Interviews were recorded, transcribed verbatim and analysed using Tesch’s data analysis technique. Ethical principles were adhered to throughout the study. Guba’s model of trustworthiness was used to confirm integrity of the study. Results: Four themes emerged which revealed that the registered nurses benefited from the training they received by enabling them to develop, and improve, their computer literacy levels. Emphasis was placed on the benefits that the accessed information had for educational purposes for patients and the public, for colleagues and students. Furthermore, the ability to access information at the point-of-care was considered by registered nurses as valuable to improve patient care because of the wide range of accurate and readily accessible information available via the mobile computing device. Conclusion: The registered nurses in this study felt that being able to access information at the point-of-care increased their confidence and facilitated the provision of quality care because it assisted them in being accurate and sure of what they were doing.

  18. Experiences of registered nurses with regard to accessing health information at the point-of-care via mobile computing devices.

    Science.gov (United States)

    Ricks, Esmeralda; Benjamin, Valencia; Williams, Margaret

    2015-11-19

    The volume of health information necessary to provide competent health care today has become overwhelming. Mobile computing devices are fast becoming an essential clinical tool for accessing health information at the point-of-care of patients. This study explored and described how registered nurses experienced accessing information at the point-of-care via mobile computing devices (MCDs). A qualitative, exploratory, descriptive and contextual design was used. Ten in-depth interviews were conducted with purposively sampled registered nurses employed by a state hospital in the Nelson Mandela Bay Municipality (NMBM). Interviews were recorded, transcribed verbatim and analysed using Tesch's data analysis technique. Ethical principles were adhered to throughout the study. Guba's model of trustworthiness was used to confirm integrity of the study. Four themes emerged which revealed that the registered nurses benefited from the training they received by enabling them to develop, and improve, their computer literacy levels. Emphasis was placed on the benefits that the accessed information had for educational purposes for patients and the public, for colleagues and students. Furthermore the ability to access information at the point-of-care was considered by registered nurses as valuable to improve patient care because of the wide range of accurate and readily accessible information available via the mobile computing device. The registered nurses in this study felt that being able to access information at the point-of-care increased their confidence and facilitated the provision of quality care because it assisted them in being accurate and sure of what they were doing.

  19. A Document-Based EHR System That Controls the Disclosure of Clinical Documents Using an Access Control List File Based on the HL7 CDA Header.

    Science.gov (United States)

    Takeda, Toshihiro; Ueda, Kanayo; Nakagawa, Akito; Manabe, Shirou; Okada, Katsuki; Mihara, Naoki; Matsumura, Yasushi

    2017-01-01

    Electronic health record (EHR) systems are necessary for the sharing of medical information between care delivery organizations (CDOs). We developed a document-based EHR system in which all of the PDF documents that are stored in our electronic medical record system can be disclosed to selected target CDOs. An access control list (ACL) file was designed based on the HL7 CDA header to manage the information that is disclosed.
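The disclosure mechanism described above can be illustrated with a minimal sketch: an access control list keyed on fields drawn from a document's HL7 CDA header decides which care delivery organizations (CDOs) may read it. This is not the authors' implementation; all field and class names here are hypothetical.

```python
# Hedged sketch of ACL-based disclosure of clinical documents to CDOs,
# keyed on HL7 CDA header fields. Field names are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class CdaHeader:
    document_id: str
    patient_id: str
    custodian: str  # CDO that produced and stores the document

@dataclass
class AclEntry:
    document_id: str
    allowed_cdos: set = field(default_factory=set)

class AclFile:
    def __init__(self):
        self._entries = {}

    def grant(self, header, cdo):
        entry = self._entries.setdefault(header.document_id,
                                         AclEntry(header.document_id))
        entry.allowed_cdos.add(cdo)

    def may_disclose(self, header, requesting_cdo):
        # The custodian can always read its own documents.
        if requesting_cdo == header.custodian:
            return True
        entry = self._entries.get(header.document_id)
        return entry is not None and requesting_cdo in entry.allowed_cdos

header = CdaHeader("doc-1", "patient-9", custodian="Hospital-A")
acl = AclFile()
acl.grant(header, "Clinic-B")
print(acl.may_disclose(header, "Clinic-B"))   # True
print(acl.may_disclose(header, "Clinic-C"))   # False
```

The key design point reflected from the abstract is that the ACL is derived from the document header itself, so disclosure decisions travel with the document rather than with the EHR application.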

  20. INOVASI MEDIA PEMBELAJARAN KEARSIPAN ELECTRONIK ARSIP (E-ARSIP BERBASIS MICROSOFT OFFICE ACCESS

    Directory of Open Access Journals (Sweden)

    Ahmad Saeroji

    2014-12-01

    Full Text Available Microsoft Office Access (Ms Access) is a relational database management system. The database in Microsoft Access is a set of objects consisting of tables, queries, forms and reports. An archival system basically stores an organization's various useful files under specific rules so that the files can be found quickly and easily. Microsoft Office Access is therefore an appropriate foundation on which to build archival application systems. It is not only easy to operate but is also included in the Microsoft Office package. One of the most important benefits of a database is to facilitate access to the data. This ease of access follows from well-ordered data, which is the prerequisite of a good database. The database of the archiving system is an application or system design which allows archives to be stored digitally. The objective of the e-archives application program using Microsoft Access is to facilitate the delivery of material practices of electronic filing (e-archives). The purposes of this scientific study are: (1) to determine the basic concept and scope of electronic archives (e-archives); (2) to know how to use electronic archives (e-archives) media aided by Microsoft Office Access in the learning activities of vocational students of the Office Administration program.

  1. INOVASI MEDIA PEMBELAJARAN KEARSIPAN ELECTRONIK ARSIP (E-ARSIP BERBASIS MICROSOFT OFFICE ACCESS

    Directory of Open Access Journals (Sweden)

    Ahmad Saeroji

    2016-01-01

    Full Text Available Microsoft Office Access (Ms Access) is a relational database management system. The database in Microsoft Access is a set of objects consisting of tables, queries, forms and reports. An archival system basically stores an organization's various useful files under specific rules so that the files can be found quickly and easily. Microsoft Office Access is therefore an appropriate foundation on which to build archival application systems. It is not only easy to operate but is also included in the Microsoft Office package. One of the most important benefits of a database is to facilitate access to the data. This ease of access follows from well-ordered data, which is the prerequisite of a good database. The database of the archiving system is an application or system design which allows archives to be stored digitally. The objective of the e-archives application program using Microsoft Access is to facilitate the delivery of material practices of electronic filing (e-archives). The purposes of this scientific study are: (1) to determine the basic concept and scope of electronic archives (e-archives); (2) to know how to use electronic archives (e-archives) media aided by Microsoft Office Access in the learning activities of vocational students of the Office Administration program.

  2. Investigating Access Performance of Long Time Series with Restructured Big Model Data

    Science.gov (United States)

    Shen, S.; Ostrenga, D.; Vollmer, B.; Meyer, D. J.

    2017-12-01

    Data sets generated by models are substantially increasing in volume, due to increases in spatial and temporal resolution, and the number of output variables. Many users wish to download subsetted data in preferred data formats and structures, as it is getting increasingly difficult to handle the original full-size data files. For example, application research users, such as those involved with wind or solar energy, or extreme weather events, are likely only interested in daily or hourly model data at a single point or for a small area for a long time period, and prefer to have the data downloaded in a single file. With native model file structures, such as hourly data from NASA Modern-Era Retrospective analysis for Research and Applications Version-2 (MERRA-2), it may take over 10 hours to extract the parameters of interest at a single point for 30 years. The NASA Goddard Earth Sciences Data and Information Services Center (GES DISC) is exploring methods to address this particular user need. One approach is to create value-added data by reconstructing the data files. Taking MERRA-2 data as an example, we have tested converting hourly data from one-day-per-file into different data cubes, such as one-month, one-year, or whole-mission. Performance is compared for reading local data files and accessing data through an interoperable service, such as OPeNDAP. Results show that, compared to the original file structure, the new data cubes offer much better performance for accessing long time series. We have noticed that performance is associated with the cube size and structure, the compression method, and how the data are accessed. An optimized data cube structure will not only improve data access, but also may enable better online analytic services.
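The restructuring idea above can be sketched in miniature: hourly fields stored one-day-per-file force a long point time series to touch every file, whereas a single time-major cube makes the same extraction one slice. Array shapes and the in-memory "files" below are invented for the example and stand in for MERRA-2 granules.

```python
# Illustrative sketch (not the GES DISC pipeline): compare extracting a
# point time series from per-day arrays vs. a restructured time-major cube.

import numpy as np

n_days, hours, nlat, nlon = 365, 24, 10, 20

# Stand-ins for 365 daily files, each holding one day of hourly fields.
daily_files = [np.random.rand(hours, nlat, nlon) for _ in range(n_days)]

# Native layout: the extraction must visit every "file".
point_series_slow = np.concatenate([day[:, 3, 7] for day in daily_files])

# Restructured cube of shape (n_days*hours, nlat, nlon):
# the same extraction is one contiguous slice.
cube = np.concatenate(daily_files, axis=0)
point_series_fast = cube[:, 3, 7]

assert point_series_fast.shape == (n_days * hours,)
assert np.array_equal(point_series_slow, point_series_fast)
```

On disk the same trade-off appears as chunking: a time-major chunk layout keeps a long series in few reads, which is consistent with the abstract's observation that cube structure and compression dominate access performance.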

  3. Endodontic management of a maxillary lateral incisor with an unusual root dilaceration diagnosed with cone beam computed tomography

    Directory of Open Access Journals (Sweden)

    Mahmoud Mohammed Eid Mahgoub

    2017-01-01

    Full Text Available Anterior teeth may have aberrant anatomical variations in the roots and root canals. Root dilaceration is an anomaly characterized by the displacement of the root of a tooth from its normal alignment with the crown which may be a consequence of injury during tooth development. This report aims to present a successful root canal treatment of a maxillary lateral incisor with unusual palatal root dilaceration (diagnosed with cone beam computed tomography in which the access cavity was prepared from the labial aspect of the tooth to provide a straight line access to the root canal system which was instrumented using OneShape rotary file system and precurved K-files up to size 50 under copious irrigation of 2.5% NaOCl using a side-vented irrigation tip. The canal was then obturated using the warm vertical compaction technique.

  4. 29 CFR 4000.28 - What if I send a computer disk?

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false What if I send a computer disk? 4000.28 Section 4000.28... I send a computer disk? (a) In general. We determine your filing or issuance date for a computer... paragraph (b) of this section. (1) Filings. For computer-disk filings, we may treat your submission as...

  5. Implementation of the Facility Integrated Inventory Computer System (FICS)

    International Nuclear Information System (INIS)

    McEvers, J.A.; Krichinsky, A.M.; Layman, L.R.; Dunnigan, T.H.; Tuft, R.M.; Murray, W.P.

    1980-01-01

    This paper describes a computer system which has been developed for nuclear material accountability and implemented in an active radiochemical processing plant involving remote operations. The system possesses the following features: comprehensive, timely records of the location and quantities of special nuclear materials; automatically updated book inventory files on the plant and sub-plant levels of detail; material transfer coordination and cataloging; automatic inventory estimation; sample transaction coordination and cataloging; automatic on-line volume determination, limit checking, and alarming; extensive information retrieval capabilities; and terminal access and application software monitoring and logging
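Two of the features listed above, automatically updated book inventory files and on-line limit checking with alarming, can be sketched together. The locations, quantities and limits below are invented for illustration and are not from the FICS system.

```python
# Hedged sketch of book-inventory updating with limit checking and alarming.
# All names and numbers are hypothetical.

book_inventory = {"TANK-1": 120.0}   # grams of material per location
limits = {"TANK-1": (100.0, 150.0)}  # (low, high) alarm limits per location

alarms = []

def check_limits(loc):
    # Alarm whenever a tracked location leaves its limit band.
    if loc not in limits:
        return
    low, high = limits[loc]
    qty = book_inventory.get(loc, 0.0)
    if not (low <= qty <= high):
        alarms.append((loc, qty))

def record_transfer(src, dst, grams):
    # The book inventory updates automatically as transfers are cataloged.
    book_inventory[src] = book_inventory.get(src, 0.0) - grams
    book_inventory[dst] = book_inventory.get(dst, 0.0) + grams
    for loc in (src, dst):
        check_limits(loc)

record_transfer("TANK-1", "TANK-2", 30.0)
print(book_inventory["TANK-1"])  # 90.0, now below the low limit
print(alarms)                    # [('TANK-1', 90.0)]
```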

  6. Physician Fee Schedule National Payment Amount File

    Data.gov (United States)

    U.S. Department of Health & Human Services — The significant size of the Physician Fee Schedule Payment Amount File-National requires that database programs (e.g., Access, dBase, FoxPro, etc.) be used to read...

  7. Cloud object store for checkpoints of high performance computing applications using decoupling middleware

    Science.gov (United States)

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2016-04-19

    Cloud object storage is enabled for checkpoints of high performance computing applications using a middleware process. A plurality of files, such as checkpoint files, generated by a plurality of processes in a parallel computing system are stored by obtaining said plurality of files from said parallel computing system; converting said plurality of files to objects using a log structured file system middleware process; and providing said objects for storage in a cloud object storage system. The plurality of processes may run, for example, on a plurality of compute nodes. The log structured file system middleware process may be embodied, for example, as a Parallel Log-Structured File System (PLFS). The log structured file system middleware process optionally executes on a burst buffer node.
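The log-structured conversion step can be sketched very simply: many small checkpoint files are appended into one large object, with an index mapping each file name to its byte range, so the object store sees a few large objects instead of many small files. This is a toy stand-in for the patent's middleware (PLFS itself is far more involved); the function names are invented.

```python
# Minimal sketch of packing checkpoint files into one log-structured
# object plus an index, in the spirit of the middleware described above.

def pack_files(files):
    """files: dict name -> bytes. Returns (object_bytes, index)."""
    log = bytearray()
    index = {}
    for name, data in files.items():
        index[name] = (len(log), len(data))  # (offset, length)
        log.extend(data)
    return bytes(log), index

def read_file(obj, index, name):
    offset, length = index[name]
    return obj[offset:offset + length]

ckpts = {"rank0.ckpt": b"state-A", "rank1.ckpt": b"state-BB"}
obj, idx = pack_files(ckpts)
assert read_file(obj, idx, "rank1.ckpt") == b"state-BB"
```

The real middleware additionally runs per compute or burst buffer node and streams the packed objects to the cloud store, but the offset-index idea is the core of the decoupling.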

  8. Characteristics of the TRISTAN control computer network

    International Nuclear Information System (INIS)

    Kurokawa, Shinichi; Akiyama, Atsuyoshi; Katoh, Tadahiko; Kikutani, Eiji; Koiso, Haruyo; Oide, Katsunobu; Shinomoto, Manabu; Kurihara, Michio; Abe, Kenichi

    1986-01-01

    Twenty-four minicomputers forming an N-to-N token-ring network control the TRISTAN accelerator complex. The computers are linked by optical fiber cables with 10 Mbps transmission speed. The software system is based on NODAL, a multicomputer interpretive language developed at the CERN SPS. The high-level services offered to the users of the network are remote execution by the EXEC, EXEC-P and IMEX commands of NODAL and uniform file access throughout the system. The network software was designed to achieve the fast response of the EXEC command. The performance of the network is also reported. Tasks that overload the minicomputers are processed on the KEK central computers. One minicomputer in the network serves as a gateway to KEKNET, which connects the minicomputer network and the central computers. The communication with the central computers is managed within the framework of the KEK NODAL system. NODAL programs communicate with the central computers calling NODAL functions; functions for exchanging data between a data set on the central computers and a NODAL variable, submitting a batch job to the central computers, checking the status of the submitted job, etc. are prepared. (orig.)

  9. CloudMC: a cloud computing application for Monte Carlo simulation

    International Nuclear Information System (INIS)

    Miras, H; Jiménez, R; Miras, C; Gomà, C

    2013-01-01

    This work presents CloudMC, a cloud computing application—developed in Windows Azure®, the platform of the Microsoft® cloud—for the parallelization of Monte Carlo simulations in a dynamic virtual cluster. CloudMC is a web application designed to be independent of the Monte Carlo code on which the simulations are based—the simulations just need to be of the form: input files → executable → output files. To study the performance of CloudMC in Windows Azure®, Monte Carlo simulations with penelope were performed on different instance (virtual machine) sizes, and for different numbers of instances. The instance size was found to have no effect on the simulation runtime. It was also found that the decrease in time with the number of instances followed Amdahl's law, with a slight deviation due to the increase in the fraction of non-parallelizable time with increasing number of instances. A simulation that would have required 30 h of CPU on a single instance was completed in 48.6 min when executed on 64 instances in parallel (speedup of 37 ×). Furthermore, the use of cloud computing for parallel computing offers some advantages over conventional clusters: high accessibility, scalability and pay per usage. Therefore, it is strongly believed that cloud computing will play an important role in making Monte Carlo dose calculation a reality in future clinical practice. (note)

  10. CloudMC: a cloud computing application for Monte Carlo simulation.

    Science.gov (United States)

    Miras, H; Jiménez, R; Miras, C; Gomà, C

    2013-04-21

    This work presents CloudMC, a cloud computing application-developed in Windows Azure®, the platform of the Microsoft® cloud-for the parallelization of Monte Carlo simulations in a dynamic virtual cluster. CloudMC is a web application designed to be independent of the Monte Carlo code on which the simulations are based-the simulations just need to be of the form: input files → executable → output files. To study the performance of CloudMC in Windows Azure®, Monte Carlo simulations with penelope were performed on different instance (virtual machine) sizes, and for different numbers of instances. The instance size was found to have no effect on the simulation runtime. It was also found that the decrease in time with the number of instances followed Amdahl's law, with a slight deviation due to the increase in the fraction of non-parallelizable time with increasing number of instances. A simulation that would have required 30 h of CPU on a single instance was completed in 48.6 min when executed on 64 instances in parallel (speedup of 37 ×). Furthermore, the use of cloud computing for parallel computing offers some advantages over conventional clusters: high accessibility, scalability and pay per usage. Therefore, it is strongly believed that cloud computing will play an important role in making Monte Carlo dose calculation a reality in future clinical practice.
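The reported numbers can be checked against Amdahl's law, S(n) = 1 / ((1 - p) + p / n), where p is the parallelizable fraction of the work. Inverting the law for the observed speedup of 37× on 64 instances gives p ≈ 0.99, consistent with the abstract's observation of a small but growing non-parallelizable fraction.

```python
# Amdahl's law and its inversion, applied to the figures quoted above.

def amdahl_speedup(p, n):
    """Speedup on n workers when fraction p of the work parallelizes."""
    return 1.0 / ((1.0 - p) + p / n)

def parallel_fraction(speedup, n):
    """Invert Amdahl's law: solve S = 1/((1-p) + p/n) for p."""
    return (1.0 - 1.0 / speedup) / (1.0 - 1.0 / n)

p = parallel_fraction(37.0, 64)
print(round(p, 3))                      # 0.988
print(round(amdahl_speedup(p, 64), 1))  # 37.0
```

Note that 30 h / 48.6 min ≈ 37.0, so the quoted runtime and speedup are mutually consistent.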

  11. beachmat: A Bioconductor C++ API for accessing high-throughput biological data from a variety of R matrix types.

    Directory of Open Access Journals (Sweden)

    Aaron T L Lun

    2018-05-01

    Full Text Available Biological experiments involving genomics or other high-throughput assays typically yield a data matrix that can be explored and analyzed using the R programming language with packages from the Bioconductor project. Improvements in the throughput of these assays have resulted in an explosion of data even from routine experiments, which poses a challenge to the existing computational infrastructure for statistical data analysis. For example, single-cell RNA sequencing (scRNA-seq experiments frequently generate large matrices containing expression values for each gene in each cell, requiring sparse or file-backed representations for memory-efficient manipulation in R. These alternative representations are not easily compatible with high-performance C++ code used for computationally intensive tasks in existing R/Bioconductor packages. Here, we describe a C++ interface named beachmat, which enables agnostic data access from various matrix representations. This allows package developers to write efficient C++ code that is interoperable with dense, sparse and file-backed matrices, amongst others. We evaluated the performance of beachmat for accessing data from each matrix representation using both simulated and real scRNA-seq data, and defined a clear memory/speed trade-off to motivate the choice of an appropriate representation. We also demonstrate how beachmat can be incorporated into the code of other packages to drive analyses of a very large scRNA-seq data set.

  12. beachmat: A Bioconductor C++ API for accessing high-throughput biological data from a variety of R matrix types.

    Science.gov (United States)

    Lun, Aaron T L; Pagès, Hervé; Smith, Mike L

    2018-05-01

    Biological experiments involving genomics or other high-throughput assays typically yield a data matrix that can be explored and analyzed using the R programming language with packages from the Bioconductor project. Improvements in the throughput of these assays have resulted in an explosion of data even from routine experiments, which poses a challenge to the existing computational infrastructure for statistical data analysis. For example, single-cell RNA sequencing (scRNA-seq) experiments frequently generate large matrices containing expression values for each gene in each cell, requiring sparse or file-backed representations for memory-efficient manipulation in R. These alternative representations are not easily compatible with high-performance C++ code used for computationally intensive tasks in existing R/Bioconductor packages. Here, we describe a C++ interface named beachmat, which enables agnostic data access from various matrix representations. This allows package developers to write efficient C++ code that is interoperable with dense, sparse and file-backed matrices, amongst others. We evaluated the performance of beachmat for accessing data from each matrix representation using both simulated and real scRNA-seq data, and defined a clear memory/speed trade-off to motivate the choice of an appropriate representation. We also demonstrate how beachmat can be incorporated into the code of other packages to drive analyses of a very large scRNA-seq data set.

  13. beachmat: A Bioconductor C++ API for accessing high-throughput biological data from a variety of R matrix types

    Science.gov (United States)

    Pagès, Hervé

    2018-01-01

    Biological experiments involving genomics or other high-throughput assays typically yield a data matrix that can be explored and analyzed using the R programming language with packages from the Bioconductor project. Improvements in the throughput of these assays have resulted in an explosion of data even from routine experiments, which poses a challenge to the existing computational infrastructure for statistical data analysis. For example, single-cell RNA sequencing (scRNA-seq) experiments frequently generate large matrices containing expression values for each gene in each cell, requiring sparse or file-backed representations for memory-efficient manipulation in R. These alternative representations are not easily compatible with high-performance C++ code used for computationally intensive tasks in existing R/Bioconductor packages. Here, we describe a C++ interface named beachmat, which enables agnostic data access from various matrix representations. This allows package developers to write efficient C++ code that is interoperable with dense, sparse and file-backed matrices, amongst others. We evaluated the performance of beachmat for accessing data from each matrix representation using both simulated and real scRNA-seq data, and defined a clear memory/speed trade-off to motivate the choice of an appropriate representation. We also demonstrate how beachmat can be incorporated into the code of other packages to drive analyses of a very large scRNA-seq data set. PMID:29723188
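The representation-agnostic access pattern that beachmat provides in C++ can be sketched in Python terms: one consumer function is written against a small row-access contract, and any backend that honours it (dense, sparse, or file-backed) works unchanged. The class and method names below are invented for the sketch and are not beachmat's API.

```python
# Sketch of backend-agnostic matrix access via a common get_row() contract.
# (beachmat itself is a C++ API over R matrix representations.)

class DenseMatrix:
    def __init__(self, rows):
        self.rows = rows
    def get_row(self, i):
        return list(self.rows[i])

class SparseMatrix:
    """Dict-of-dicts sparse storage: row index -> {col index: value}."""
    def __init__(self, ncol, entries):
        self.ncol = ncol
        self.entries = entries
    def get_row(self, i):
        row = self.entries.get(i, {})
        return [row.get(j, 0) for j in range(self.ncol)]

def row_sums(matrix, nrow):
    # Consumer code only uses the get_row() contract, so it is
    # interoperable with any backing representation.
    return [sum(matrix.get_row(i)) for i in range(nrow)]

dense = DenseMatrix([[1, 0, 2], [0, 3, 0]])
sparse = SparseMatrix(3, {0: {0: 1, 2: 2}, 1: {1: 3}})
assert row_sums(dense, 2) == row_sums(sparse, 2) == [3, 3]
```

The memory/speed trade-off the authors measure falls directly out of this design: the sparse backend trades slower row materialization for a much smaller footprint, while the consumer code is identical.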

  14. 77 FR 4558 - Combined Notice of Filings #1

    Science.gov (United States)

    2012-01-30

    ... Numbers: EC12-62-000. Applicants: La Paloma Generating Company, LLC, Merrill Lynch Credit Products, LLC..., LLC and La Paloma Generating Company, LLC. Filed Date: 1/20/12. Accession Number: 20120120-5257...

  15. 76 FR 54754 - Combined Notice of Filings #1

    Science.gov (United States)

    2011-09-02

    ...: Montgomery L'Energia Power Partners LP. Description: Notice of Cancellation of FERC Electric Rate Schedule Tariff of Montgomery L'Energia Power Partners LP. Filed Date: 08/24/2011. Accession Number: 20110824-5095...

  16. 78 FR 64486 - Combined Notice of Filings #1

    Science.gov (United States)

    2013-10-29

    ...: ER14-140-000. Applicants: Panther Creek Power Operating, LLC. Description: Panther Creek Power Operating, LLC submits Panther Tariff Revisions to be effective 12/20/2013. Filed Date: 10/21/13. Accession...

  17. 36 CFR 902.57 - Investigatory files compiled for law enforcement purposes.

    Science.gov (United States)

    2010-07-01

    ... 36 Parks, Forests, and Public Property 3 2010-07-01 2010-07-01 false Investigatory files compiled for law enforcement purposes. 902.57 Section 902.57 Parks, Forests, and Public Property PENNSYLVANIA AVENUE DEVELOPMENT CORPORATION FREEDOM OF INFORMATION ACT Exemptions From Public Access to Corporation Records § 902.57 Investigatory files compiled...

  18. Pseudo-interactive monitoring in distributed computing

    International Nuclear Information System (INIS)

    Sfiligoi, I.; Bradley, D.; Livny, M.

    2009-01-01

    Distributed computing, and in particular Grid computing, enables physicists to use thousands of CPU days worth of computing every day, by submitting thousands of compute jobs. Unfortunately, a small fraction of such jobs regularly fail; the reasons vary from disk and network problems to bugs in the user code. A subset of these failures result in jobs being stuck for long periods of time. In order to debug such failures, interactive monitoring is highly desirable; users need to browse through the job log files and check the status of the running processes. Batch systems typically don't provide such services; at best, users get job logs at job termination, and even this may not be possible if the job is stuck in an infinite loop. In this paper we present a novel approach of using regular batch system capabilities of Condor to enable users to access the logs and processes of any running job. This does not provide true interactive access, so commands like vi are not viable, but it does allow operations like ls, cat, top, ps, lsof, netstat and dumping the stack of any process owned by the user; we call this pseudo-interactive monitoring. It is worth noting that the same method can be used to monitor Grid jobs in a glidein-based environment. We further believe that the same mechanism could be applied to many other batch systems.

  19. Pseudo-interactive monitoring in distributed computing

    International Nuclear Information System (INIS)

    Sfiligoi, I; Bradley, D; Livny, M

    2010-01-01

    Distributed computing, and in particular Grid computing, enables physicists to use thousands of CPU days worth of computing every day, by submitting thousands of compute jobs. Unfortunately, a small fraction of such jobs regularly fail; the reasons vary from disk and network problems to bugs in the user code. A subset of these failures result in jobs being stuck for long periods of time. In order to debug such failures, interactive monitoring is highly desirable; users need to browse through the job log files and check the status of the running processes. Batch systems typically don't provide such services; at best, users get job logs at job termination, and even this may not be possible if the job is stuck in an infinite loop. In this paper we present a novel approach of using regular batch system capabilities of Condor to enable users to access the logs and processes of any running job. This does not provide true interactive access, so commands like vi are not viable, but it does allow operations like ls, cat, top, ps, lsof, netstat and dumping the stack of any process owned by the user; we call this pseudo-interactive monitoring. It is worth noting that the same method can be used to monitor Grid jobs in a glidein-based environment. We further believe that the same mechanism could be applied to many other batch systems.

  20. Pseudo-interactive monitoring in distributed computing

    Energy Technology Data Exchange (ETDEWEB)

    Sfiligoi, I.; /Fermilab; Bradley, D.; Livny, M.; /Wisconsin U., Madison

    2009-05-01

    Distributed computing, and in particular Grid computing, enables physicists to use thousands of CPU days worth of computing every day, by submitting thousands of compute jobs. Unfortunately, a small fraction of such jobs regularly fail; the reasons vary from disk and network problems to bugs in the user code. A subset of these failures result in jobs being stuck for long periods of time. In order to debug such failures, interactive monitoring is highly desirable; users need to browse through the job log files and check the status of the running processes. Batch systems typically don't provide such services; at best, users get job logs at job termination, and even this may not be possible if the job is stuck in an infinite loop. In this paper we present a novel approach of using regular batch system capabilities of Condor to enable users to access the logs and processes of any running job. This does not provide true interactive access, so commands like vi are not viable, but it does allow operations like ls, cat, top, ps, lsof, netstat and dumping the stack of any process owned by the user; we call this pseudo-interactive monitoring. It is worth noting that the same method can be used to monitor Grid jobs in a glidein-based environment. We further believe that the same mechanism could be applied to many other batch systems.

  1. Soft-error tolerance and energy consumption evaluation of embedded computer with magnetic random access memory in practical systems using computer simulations

    Science.gov (United States)

    Nebashi, Ryusuke; Sakimura, Noboru; Sugibayashi, Tadahiko

    2017-08-01

    We evaluated the soft-error tolerance and energy consumption of an embedded computer with magnetic random access memory (MRAM) using two computer simulators. One is a central processing unit (CPU) simulator of a typical embedded computer system. We simulated the radiation-induced single-event-upset (SEU) probability in a spin-transfer-torque MRAM cell and also the failure rate of a typical embedded computer due to its main memory SEU error. The other is a delay tolerant network (DTN) system simulator. It simulates the power dissipation of wireless sensor network nodes of the system using a revised CPU simulator and a network simulator. We demonstrated that the SEU effect on the embedded computer with 1 Gbit MRAM-based working memory is less than 1 failure in time (FIT). We also demonstrated that the energy consumption of the DTN sensor node with MRAM-based working memory can be reduced to 1/11. These results indicate that MRAM-based working memory enhances the disaster tolerance of embedded computers.
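The failure-rate unit used above is worth unpacking: 1 FIT (failure in time) is one failure per 10^9 device-hours, and a memory's total FIT scales the per-bit upset rate by its capacity. The per-bit rate below is invented purely to illustrate the arithmetic; the paper's result is that the 1 Gbit MRAM working memory stays under 1 FIT.

```python
# Unit arithmetic for FIT: failures per 1e9 device-hours.
# The per-bit upset rate here is hypothetical.

def fit_rate(upsets_per_bit_hour, n_bits):
    """Total FIT for a memory of n_bits at the given per-bit upset rate."""
    return upsets_per_bit_hour * n_bits * 1e9

n_bits = 2**30      # a 1 Gbit working memory
per_bit = 5e-19     # hypothetical upsets per bit-hour
print(fit_rate(per_bit, n_bits))  # ≈ 0.54 FIT, i.e. below 1 FIT
```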

  2. “… computer music is cool!” Theoretical Implications of Ambivalences in Contemporary Trends in Music Reception

    OpenAIRE

    Neumann-Braun, Klaus

    2006-01-01

    Technical innovations in the last years have decisively changed the ways in which we consume music. The use of the internet has led to a heretofore unknown expansion in the access to different kinds of music. Napster is the slogan which popularized the idea of searching the computer files of millions of computer users through a central server and of downloading a host of music titles in fairly good quality. Other “peer-to-peer” systems (e.g. iMesh) followed. This practice has led to an econ...

  3. Basic Stand Alone Medicare Claims Public Use Files

    Data.gov (United States)

    U.S. Department of Health & Human Services — CMS is committed to increasing access to its Medicare claims data through the release of de-identified data files available for public use. They contain...

  4. 76 FR 30934 - Combined Notice of Filings #1

    Science.gov (United States)

    2011-05-27

    ... 19, 2011. Docket Numbers: EC11-84-000. Applicants: Montgomery L'Energia Power Partners LP, Tanner... Montgomery L'Energia Power Partners LP, et. al. Filed Date: 05/23/2011. Accession Number: 20110523-5016...

  5. 78 FR 38706 - Combined Notice of Filings #2

    Science.gov (United States)

    2013-06-27

    ... National Grid and Synergy Biogas to be effective 8/14/2013. Filed Date: 6/14/13. Accession Number: 20130614... Company. Description: Notice of Cancellation of Service Agreement No. 129 with Salt River Project...

  6. Distributed Storage Algorithm for Geospatial Image Data Based on Data Access Patterns.

    Directory of Open Access Journals (Sweden)

    Shaoming Pan

Declustering techniques are widely used in distributed environments to reduce query response time through parallel I/O by splitting large files into several small blocks and then distributing those blocks among multiple storage nodes. Unfortunately, however, many small geospatial image data files cannot be further split for distributed storage. In this paper, we propose a complete theoretical system for the distributed storage of small geospatial image data files based on mining the access patterns of geospatial image data using their historical access log information. First, an algorithm is developed to construct an access correlation matrix based on the analysis of the log information, which reveals the patterns of access to the geospatial image data. Then, a practical heuristic algorithm is developed to determine a reasonable solution based on the access correlation matrix. Finally, a number of comparative experiments are presented, demonstrating that our algorithm displays a higher total parallel access probability than those of other algorithms by approximately 10-15% and that the performance can be further improved by more than 20% by simultaneously applying a copy storage strategy. These experiments show that the algorithm can be applied in distributed environments to help realize parallel I/O and thereby improve system performance.

  7. Distributed Storage Algorithm for Geospatial Image Data Based on Data Access Patterns.

    Science.gov (United States)

    Pan, Shaoming; Li, Yongkai; Xu, Zhengquan; Chong, Yanwen

    2015-01-01

    Declustering techniques are widely used in distributed environments to reduce query response time through parallel I/O by splitting large files into several small blocks and then distributing those blocks among multiple storage nodes. Unfortunately, however, many small geospatial image data files cannot be further split for distributed storage. In this paper, we propose a complete theoretical system for the distributed storage of small geospatial image data files based on mining the access patterns of geospatial image data using their historical access log information. First, an algorithm is developed to construct an access correlation matrix based on the analysis of the log information, which reveals the patterns of access to the geospatial image data. Then, a practical heuristic algorithm is developed to determine a reasonable solution based on the access correlation matrix. Finally, a number of comparative experiments are presented, demonstrating that our algorithm displays a higher total parallel access probability than those of other algorithms by approximately 10-15% and that the performance can be further improved by more than 20% by simultaneously applying a copy storage strategy. These experiments show that the algorithm can be applied in distributed environments to help realize parallel I/O and thereby improve system performance.
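The two-step approach described in these records can be illustrated with a toy sketch (my simplified reconstruction, not the authors' exact algorithm): count how often pairs of files are requested together in an access log, then greedily place strongly co-accessed files on different nodes so they can be fetched in parallel.

```python
from collections import defaultdict
from itertools import combinations

def correlation_matrix(sessions):
    """Count how often each pair of files is requested in the same session."""
    corr = defaultdict(int)
    for session in sessions:
        for a, b in combinations(sorted(set(session)), 2):
            corr[(a, b)] += 1
    return corr

def place(files, corr, num_nodes):
    """Greedy placement: put each file on the node where it has the least
    accumulated correlation with the files already stored there."""
    nodes = [set() for _ in range(num_nodes)]
    def cost(f, node):
        return sum(corr.get(tuple(sorted((f, g))), 0) for g in node)
    for f in files:
        nodes[min(range(num_nodes), key=lambda i: cost(f, nodes[i]))].add(f)
    return nodes

# "a" and "b" are frequently co-accessed, so they end up on different nodes.
log = [["a", "b"], ["a", "b"], ["a", "c"], ["b", "c"]]
corr = correlation_matrix(log)
layout = place(["a", "b", "c"], corr, 2)
```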

  8. Parallel file system performances in fusion data storage

    International Nuclear Information System (INIS)

    Iannone, F.; Podda, S.; Bracco, G.; Manduchi, G.; Maslennikov, A.; Migliori, S.; Wolkersdorfer, K.

    2012-01-01

    High I/O flow rates, up to 10 GB/s, are required in large fusion Tokamak experiments like ITER where hundreds of nodes store simultaneously large amounts of data acquired during the plasma discharges. Typical network topologies such as linear arrays (systolic), rings, meshes (2-D arrays), tori (3-D arrays), trees, butterfly, hypercube in combination with high speed data transports like Infiniband or 10G-Ethernet, are the main areas in which the effort to overcome the so-called parallel I/O bottlenecks is most focused. The high I/O flow rates were modelled in an emulated testbed based on the parallel file systems such as Lustre and GPFS, commonly used in High Performance Computing. The test runs on High Performance Computing–For Fusion (8640 cores) and ENEA CRESCO (3392 cores) supercomputers. Message Passing Interface based applications were developed to emulate parallel I/O on Lustre and GPFS using data archival and access solutions like MDSPLUS and Universal Access Layer. These methods of data storage organization are widely diffused in nuclear fusion experiments and are being developed within the EFDA Integrated Tokamak Modelling – Task Force; the authors tried to evaluate their behaviour in a realistic emulation setup.

  9. pcircle - A Suite of Scalable Parallel File System Tools

    Energy Technology Data Exchange (ETDEWEB)

    2015-10-01

Most software related to file systems is written for conventional local file systems; it is serialized and cannot take advantage of a large-scale parallel file system. The "pcircle" software builds on top of the ubiquitous MPI in cluster computing environments and the "work-stealing" pattern to provide a scalable, high-performance suite of file system tools. In particular, it implements parallel data copy and parallel data checksumming, with advanced features such as asynchronous progress reporting, checkpoint and restart, as well as integrity checking.
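The parallel-checksumming idea can be sketched in miniature (my own toy in the spirit of pcircle, not its actual implementation): hash fixed-size chunks concurrently, then combine the per-chunk digests in offset order so the final signature is deterministic regardless of which worker hashed what.

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

CHUNK = 4  # tiny chunk size for the demo; real tools use megabytes

def chunk_digest(data, offset):
    return offset, hashlib.sha1(data[offset:offset + CHUNK]).hexdigest()

def parallel_checksum(data, workers=4):
    offsets = range(0, len(data), CHUNK)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = list(pool.map(lambda o: chunk_digest(data, o), offsets))
    combined = hashlib.sha1()
    for _, digest in sorted(parts):           # offset order, not arrival order
        combined.update(digest.encode())
    return combined.hexdigest()

sig = parallel_checksum(b"parallel file system tools")
```

Because the combination step sorts by offset, the result is independent of the worker count, which is what makes a parallel checksum usable for integrity checking.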

  10. Micro computed tomography evaluation of the Self-adjusting file and ProTaper Universal system on curved mandibular molars.

    Science.gov (United States)

    Serefoglu, Burcu; Piskin, Beyser

    2017-09-26

The aim of this investigation was to compare the cleaning and shaping efficiency of the Self-adjusting file and ProTaper, and to assess the correlation between root canal curvature and working time in mandibular molars using micro-computed tomography. Twenty extracted mandibular molars were instrumented with ProTaper and the Self-adjusting file, and the total working time was measured in mesial canals. The changes in canal volume, surface area and structure model index, transportation, uninstrumented area and the correlation between working time and the curvature were analyzed. Although no statistically significant difference was observed between the two systems in distal canals (p>0.05), a significantly higher amount of removed dentin volume and lower uninstrumented area were provided by ProTaper in mesial canals (p<0.0001). A correlation between working time and canal curvature was also observed in mesial canals for both groups (SAF: r²=0.792, p<0.0004; PTU: r²=0.9098, p<0.0001).

  11. Non-POSIX File System for LHCb Online Event Handling

    CERN Document Server

    Garnier, J C; Cherukuwada, S S

    2011-01-01

LHCb aims to use its O(20000) CPU cores in the high level trigger (HLT) and its 120 TB Online storage system for data reprocessing during LHC shutdown periods. These periods can last a few days for technical maintenance or only a few hours during beam interfill gaps. These jobs run on files which are staged in from tape storage to the local storage buffer. The result is again one or more files. Efficient file writing and reading is essential for the performance of the system. Rather than using a traditional shared file system such as NFS or CIFS we have implemented a custom, lightweight, non-POSIX network file system for the handling of these files. Streaming this file system for data access allows us to obtain high performance while keeping resource consumption low, and adds features not found in NFS such as high availability and transparent fail-over of the read and write services. The writing part of this streaming service is in successful use for the Online, real-time writing of the d...

  12. FORTRAN data files transference from VAX/VMS to ALPHA/UNIX

    International Nuclear Information System (INIS)

    Sanchez, E.; Milligen, B.Ph. van

    1997-01-01

Several tools have been developed to access the TJ-I and TJ-IU databases, which currently reside on VAX/VMS servers, from the TJ-II Data Acquisition System DEC ALPHA 8400 server. The TJ-I/TJ-IU databases are not homogeneous and contain several types of data files, namely SADE, CAMAC and FORTRAN unformatted files. The tools presented in this report allow one to transfer CAMAC and those FORTRAN unformatted files defined herein, from a VAX/VMS server, for data manipulation on the ALPHA/Digital UNIX server. (Author) 5 refs

  13. 77 FR 6103 - Combined Notice of Filings #1

    Science.gov (United States)

    2012-02-07

    ...: Tenaska Power Services Co., Tenaska Washington Partners, L.P., Tenaska Power Management, LLC. Description...; Queue No. V2-025 to be effective 12/29/2011. Filed Date: 1/30/12. Accession Number: 20120130-5253...

  14. 78 FR 65634 - Combined Notice of Filings #2

    Science.gov (United States)

    2013-11-01

    ... Company. Description: Inquiry Response to be effective 5/12/2013. Filed Date: 10/24/13. Accession Number... other information, call (866) 208-3676 (toll free). For TTY, call (202) 502-8659. Dated: October 24...

  15. 78 FR 20907 - Combined Notice of Filings #2

    Science.gov (United States)

    2013-04-08

    ... Independent Transmission System Operator, Inc. submits 2013-03-29 MidAm Att O Depreciation Rates to be...-Retirement Benefits Other than Pensions of Public Service Company of Colorado. Filed Date: 3/29/13. Accession...

  16. 77 FR 74653 - Combined Notice of Filings #1

    Science.gov (United States)

    2012-12-17

    .... Applicants: Public Service Company of New Mexico. Description: Public Service Company of New Mexico submits... Market-Based Rate Tariff to be effective 2/1/2013. Filed Date: 12/7/12. Accession Number: 20121207-5219...

  17. Extending the Online Public Access Catalog into the Microcomputer Environment.

    Science.gov (United States)

    Sutton, Brett

    1990-01-01

    Describes PCBIS, a database program for MS-DOS microcomputers that features a utility for automatically converting online public access catalog search results stored as text files into structured database files that can be searched, sorted, edited, and printed. Topics covered include the general features of the program, record structure, record…

  18. Globus File Transfer Services | High-Performance Computing | NREL

    Science.gov (United States)

    installed on the systems at both ends of the data transfer. The NREL endpoint is nrel#globus. Click Login on the Globus web site. On the login page select "Globus ID" as the login method and click Login to the Globus website. From the Manage Data drop down menu, select Transfer Files. Then click Get

  19. Comparison of data file and storage configurations for efficient temporal access of satellite image data

    CSIR Research Space (South Africa)

    Bachoo, A

    2009-01-01

Traditional storage formats store such a series of images as a sequence of individual files, with each file internally storing the pixels in their spatial order. Consequently, the construction of a time series profile of a single pixel requires reading from...
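The layout problem this record describes can be made concrete with a toy example (my own construction, not the paper's format): with one file per date in spatial order, reading the time series of a single pixel touches every file, whereas a time-major ("pixel-interleaved") layout makes the same series one contiguous run.

```python
T, W, H = 4, 3, 2                      # 4 dates of a 3x2 image

# spatial-major: one array per date, pixels in row-major order
images = [[100 * t + y * W + x for y in range(H) for x in range(W)]
          for t in range(T)]

# time-major: all T values of each pixel stored contiguously in one array
series = [images[t][p] for p in range(W * H) for t in range(T)]

def pixel_series_spatial(x, y):
    return [images[t][y * W + x] for t in range(T)]   # T scattered reads

def pixel_series_temporal(x, y):
    p = (y * W + x) * T
    return series[p:p + T]                            # one contiguous read

assert pixel_series_spatial(2, 1) == pixel_series_temporal(2, 1)
```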

  20. Beyond a Terabyte File System

    Science.gov (United States)

    Powers, Alan K.

    1994-01-01

    The Numerical Aerodynamics Simulation Facility's (NAS) CRAY C916/1024 accesses a "virtual" on-line file system, which is expanding beyond a terabyte of information. This paper will present some options to fine tuning Data Migration Facility (DMF) to stretch the online disk capacity and explore the transitions to newer devices (STK 4490, ER90, RAID).

  1. Public census data on CD-ROM at Lawrence Berkeley Laboratory. Revision 4

    Energy Technology Data Exchange (ETDEWEB)

    Merrill, D.W.

    1993-03-12

The Comprehensive Epidemiologic Data Resource (CEDR) and Populations at Risk to Environmental Pollution (PAREP) projects, of the Information and Computing Sciences Division (ICSD) at Lawrence Berkeley Laboratory (LBL), are using public socioeconomic and geographic data files which are available to CEDR and PAREP collaborators via LBL's computing network. At this time 89 CD-ROM diskettes (approximately 45 gigabytes) are on line via the Unix file server cedrcd.lbl.gov. Most of the files are from the US Bureau of the Census, and many of these pertain to the 1990 Census of Population and Housing. All the CD-ROM diskettes contain documentation in the form of ASCII text files. In addition, printed documentation for most files is available for inspection at University of California Data and Technical Assistance (UC DATA), tel. (510) 642-6571, or the UC Documents Library, tel. (510) 642-2569, both located on the UC Berkeley campus. Many of the CD-ROM diskettes distributed by the Census Bureau contain software for PC-compatible computers, for easily accessing the data. Shared access to the data is maintained through a collaboration among the CEDR and PAREP projects at LBL, UC DATA, and the UC Documents Library. LBL is grateful to UC DATA and the UC Documents Library for the use of their CD-ROM diskettes. Shared access to LBL facilities may be restricted in the future if costs become prohibitive. Via the Sun Network File System (NFS), these data can be exported to Internet computers for direct access by the user's application program(s). Due to the size of the files, this access method is preferred over File Transfer Protocol (FTP) access.

  2. TimeSet: A computer program that accesses five atomic time services on two continents

    Science.gov (United States)

    Petrakis, P. L.

    1993-01-01

TimeSet is a shareware program for accessing digital time services by telephone. At its initial release, it was capable of capturing time signals only from the U.S. Naval Observatory to set a computer's clock. Later the ability to synchronize with the National Institute of Standards and Technology was added. Now, in Version 7.10, TimeSet is able to access three additional telephone time services in Europe - in Sweden, Austria, and Italy - making a total of five official services addressable by the program. A companion program, TimeGen, allows yet another source of telephone time data strings for callers equipped with TimeSet version 7.10. TimeGen synthesizes UTC time data strings in the Naval Observatory's format from an accurately set and maintained DOS computer clock, and transmits them to callers. This allows an unlimited number of 'freelance' time generating stations to be created. Timesetting from TimeGen is made feasible by the advent of Becker's RighTime, a shareware program that learns the drift characteristics of a computer's clock and continuously applies a correction to keep it accurate, and also brings 0.01-second resolution to the DOS clock. With clock regulation by RighTime and periodic update calls by the TimeGen station to an official time source via TimeSet, TimeGen offers the same degree of accuracy within the resolution of the computer clock as any official atomic time source.

  3. 75 FR 3721 - Combined Notice of Filings #1

    Science.gov (United States)

    2010-01-22

    ... Trading GP. Description: BNP Paribas Energy Trading GP submits a Tariff Amendment and Notice of Succession... Paragraph 36 of October 15 2009 Order on 2010 Business Plans Nad Budgets. Filed Date: 01/11/2010. Accession...

  4. 75 FR 19958 - Combined Notice of Filings #1

    Science.gov (United States)

    2010-04-16

    ..., Modification of Charges for Reactive Power Service. Filed Date: 03/31/2010. Accession Number: 20100401-0208... Reference Room in Washington, DC. There is an eSubscription link on the Web site that enables subscribers to...

  5. 77 FR 22566 - Combined Notice of Filings #1

    Science.gov (United States)

    2012-04-16

    .... 3265; Queue No. X1-042 to be effective 3/2/2012. Filed Date: 4/6/12. Accession Number: 20120406-5071..., ATC Management Inc. Description: Application under Section 204 of The Federal Power Act for...

  6. Nuclear decay data files of the Dosimetry Research Group

    International Nuclear Information System (INIS)

    Eckerman, K.F.; Westfall, R.J.; Ryman, J.C.; Cristy, M.

    1993-12-01

This report documents the nuclear decay data files used by the Dosimetry Research Group at Oak Ridge National Laboratory and the utility DEXRAX which provides access to the files. The files are accessed, by nuclide, to extract information on the intensities and energies of the radiations associated with spontaneous nuclear transformation of the radionuclides. In addition, beta spectral data are available for all beta-emitting nuclides. Two collections of nuclear decay data are discussed. The larger collection contains data for 838 radionuclides, which includes the 825 radionuclides assembled during the preparation of Publications 30 and 38 of the International Commission on Radiological Protection (ICRP) and 13 additional nuclides evaluated in preparing a monograph for the Medical Internal Radiation Dose (MIRD) Committee of the Society of Nuclear Medicine. The second collection is composed of data from the MIRD monograph and contains information for 242 radionuclides. Abridged tabulations of these data have been published by the ICRP in Publication 38 and by the Society of Nuclear Medicine in a monograph entitled "MIRD: Radionuclide Data and Decay Schemes." The beta spectral data reported here have not been published by either organization. Electronic copies of the files and the utility, along with this report, are available from the Radiation Shielding Information Center at Oak Ridge National Laboratory.

  7. Scientific workflow and support for high resolution global climate modeling at the Oak Ridge Leadership Computing Facility

    Science.gov (United States)

    Anantharaj, V.; Mayer, B.; Wang, F.; Hack, J.; McKenna, D.; Hartman-Baker, R.

    2012-04-01

    The Oak Ridge Leadership Computing Facility (OLCF) facilitates the execution of computational experiments that require tens of millions of CPU hours (typically using thousands of processors simultaneously) while generating hundreds of terabytes of data. A set of ultra high resolution climate experiments in progress, using the Community Earth System Model (CESM), will produce over 35,000 files, ranging in sizes from 21 MB to 110 GB each. The execution of the experiments will require nearly 70 Million CPU hours on the Jaguar and Titan supercomputers at OLCF. The total volume of the output from these climate modeling experiments will be in excess of 300 TB. This model output must then be archived, analyzed, distributed to the project partners in a timely manner, and also made available more broadly. Meeting this challenge would require efficient movement of the data, staging the simulation output to a large and fast file system that provides high volume access to other computational systems used to analyze the data and synthesize results. This file system also needs to be accessible via high speed networks to an archival system that can provide long term reliable storage. Ideally this archival system is itself directly available to other systems that can be used to host services making the data and analysis available to the participants in the distributed research project and to the broader climate community. The various resources available at the OLCF now support this workflow. The available systems include the new Jaguar Cray XK6 2.63 petaflops (estimated) supercomputer, the 10 PB Spider center-wide parallel file system, the Lens/EVEREST analysis and visualization system, the HPSS archival storage system, the Earth System Grid (ESG), and the ORNL Climate Data Server (CDS). The ESG features federated services, search & discovery, extensive data handling capabilities, deep storage access, and Live Access Server (LAS) integration. The scientific workflow enabled on

  8. 75 FR 30839 - Privacy Act of 1974; CMS Computer Match No. 2010-03, HHS Computer Match No. 1003, SSA Computer...

    Science.gov (United States)

    2010-06-02

    ... 1974; CMS Computer Match No. 2010-03, HHS Computer Match No. 1003, SSA Computer Match No. 1048, IRS... Services (CMS). ACTION: Notice of renewal of an existing computer matching program (CMP) that has an...'' section below for comment period. DATES: Effective Dates: CMS filed a report of the Computer Matching...

  9. Cloud object store for archive storage of high performance computing data using decoupling middleware

    Science.gov (United States)

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2015-06-30

    Cloud object storage is enabled for archived data, such as checkpoints and results, of high performance computing applications using a middleware process. A plurality of archived files, such as checkpoint files and results, generated by a plurality of processes in a parallel computing system are stored by obtaining the plurality of archived files from the parallel computing system; converting the plurality of archived files to objects using a log structured file system middleware process; and providing the objects for storage in a cloud object storage system. The plurality of processes may run, for example, on a plurality of compute nodes. The log structured file system middleware process may be embodied, for example, as a Parallel Log-Structured File System (PLFS). The log structured file system middleware process optionally executes on a burst buffer node.

  10. A data compression algorithm for nuclear spectrum files

    International Nuclear Information System (INIS)

    Mika, J.F.; Martin, L.J.; Johnston, P.N.

    1990-01-01

The total space occupied by computer files of spectra generated in nuclear spectroscopy systems can lead to problems of storage and transmission time. An algorithm is presented which significantly reduces the space required to store nuclear spectra, without loss of any information content. Testing indicates that spectrum files can be routinely compressed by a factor of 5. (orig.)
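The abstract does not give the algorithm, but a minimal lossless scheme in the same spirit can be sketched: channel counts in a spectrum vary smoothly, so delta-encoding followed by variable-length integers shrinks the file substantially while remaining exactly reversible.

```python
# Lossless spectrum compression sketch (illustrative, not the paper's
# algorithm): delta-encode channel counts, zigzag-map the signed deltas,
# then emit them as 7-bit varints.

def encode(spectrum):
    out = bytearray()
    prev = 0
    for count in spectrum:
        d = count - prev
        prev = count
        u = d * 2 if d >= 0 else -d * 2 - 1          # zigzag: sign into LSB
        while u >= 0x80:                             # 7-bit varint
            out.append((u & 0x7F) | 0x80)
            u >>= 7
        out.append(u)
    return bytes(out)

def decode(blob):
    spectrum, prev, u, shift = [], 0, 0, 0
    for byte in blob:
        u |= (byte & 0x7F) << shift
        shift += 7
        if not byte & 0x80:                          # last byte of a varint
            d = u // 2 if u % 2 == 0 else -(u + 1) // 2
            prev += d
            spectrum.append(prev)
            u, shift = 0, 0
    return spectrum

spec = [0, 2, 5, 130, 128, 131, 4000, 3990]
blob = encode(spec)
assert decode(blob) == spec                          # lossless round trip
```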

  11. Volumetric analysis of hand, reciprocating and rotary instrumentation techniques in primary molars using spiral computed tomography: An in vitro comparative study.

    Science.gov (United States)

    Jeevanandan, Ganesh; Thomas, Eapen

    2018-01-01

The present study was conducted to analyze the volumetric change in the root canal space and the instrumentation time between hand files, hand files in reciprocating motion, and three rotary file systems in primary molars. One hundred primary mandibular molars were randomly allotted to one of five groups. Instrumentation was done using, in Group I, nickel-titanium (Ni-Ti) hand files; Group II, Ni-Ti hand files in reciprocating motion; Group III, Race rotary files; Group IV, prodesign pediatric rotary files; and Group V, ProTaper rotary files. The mean volumetric changes were assessed using pre- and post-operative spiral computed tomography scans. Instrumentation time was recorded. Statistical analysis for intergroup comparison of mean canal volume and instrumentation time was done using the Bonferroni-adjusted Mann-Whitney test and the Mann-Whitney test, respectively. Intergroup comparison of mean canal volume showed a statistically significant difference between Groups II versus IV, Groups III versus V, and Groups IV versus V. Intergroup comparison of mean instrumentation time showed a statistically significant difference among all the groups except Groups IV versus V. Among the various instrumentation techniques available, rotary instrumentation is considered to be the better technique for canal preparation in primary teeth.

  12. 77 FR 71790 - Combined Notice of Filings #1

    Science.gov (United States)

    2012-12-04

    ... Numbers: ER13-459-000. Applicants: Southwest Power Pool, Inc. Description: 1911R2 Kansas City Power...: Joint OATT Attachment C-3 amendment to be effective 1/ 1/2013. Filed Date: 11/27/12. Accession Number...

  13. 78 FR 23243 - Combined Notice of Filings #1

    Science.gov (United States)

    2013-04-18

    ...-000 Applicants: BayWa r.e. Mozart, LLC Description: Notice of Self-Certification of Exempt Wholesale Generator Status of BayWa r.e. Mozart, LLC. Filed Date: 4/10/13 Accession Number: 20130410-5093 Comments Due...

  14. Characterizing Computer Access Using a One-Channel EEG Wireless Sensor.

    Science.gov (United States)

    Molina-Cantero, Alberto J; Guerrero-Cubero, Jaime; Gómez-González, Isabel M; Merino-Monge, Manuel; Silva-Silva, Juan I

    2017-06-29

This work studies the feasibility of using mental attention to access a computer. Brain activity was measured with an electrode placed at the Fp1 position and the reference on the left ear; seven normally developed people and three subjects with cerebral palsy (CP) took part in the experimentation. They were asked to keep their attention high and low for as long as possible during several trials. We recorded attention levels and power bands conveyed by the sensor, but only the first was used for feedback purposes. All of the information was statistically analyzed to find the most significant parameters and a classifier based on linear discriminant analysis (LDA) was also set up. In addition, 60% of the participants were potential users of this technology with an accuracy of over 70%. Including power bands in the classifier did not improve the accuracy in discriminating between the two attentional states. For most people, the best results were obtained by using only the attention indicator in classification. Tiredness was higher in the group with disabilities (2.7 on a scale of 3) than in the other (1.5 on the same scale); and modulating the attention to access a communication board requires that it does not contain many pictograms (between 4 and 7) on screen and has a relatively long scanning period of t_scan ≈ 10 s. The information transfer rate (ITR) is similar to that obtained by other brain-computer interfaces (BCI), like those based on sensorimotor rhythms (SMR) or slow cortical potentials (SCP), and makes it suitable as an eye-gaze-independent BCI.
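A stripped-down, one-feature analogue of the LDA classification the study describes can be sketched as follows (illustrative only; the attention scores are made up, and with a single feature, equal priors and pooled variance, Fisher's rule reduces to a threshold halfway between the class means).

```python
from statistics import mean

def fit(high, low):
    """1-D two-class linear discriminant under equal priors and variances:
    the decision boundary is the midpoint of the class means."""
    m1, m0 = mean(high), mean(low)
    return (m1 + m0) / 2, m1 > m0          # threshold and orientation

def predict(x, model):
    thr, high_is_greater = model
    return int((x > thr) == high_is_greater)   # 1 = "high attention"

high_trials = [72, 80, 68, 75]             # hypothetical attention scores
low_trials = [30, 42, 35, 38]
model = fit(high_trials, low_trials)
assert predict(77, model) == 1 and predict(33, model) == 0
```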

  15. The crystallographic information file (CIF): A new standard archive file for crystallography

    International Nuclear Information System (INIS)

    Hall, S.R.; Allen, F.H.; Brown, I.D.

    1991-01-01

The specification of a new standard Crystallographic Information File (CIF) is described. Its development is based on the Self-Defining Text Archive and Retrieval (STAR) procedure. The CIF is a general, flexible and easily extensible free-format archive file; it is human and machine readable and can be edited by a simple editor. The CIF is designed for the electronic transmission of crystallographic data between individual laboratories, journals and databases: it has been adopted by the International Union of Crystallography as the recommended medium for this purpose. The file consists of data names and data items, together with a loop facility for repeated items. The data names, constructed hierarchically so as to form data categories, are self-descriptive within a 32-character limit. The sorted list of data names, together with their precise definitions, constitutes the CIF dictionary (core version 1991). The CIF core dictionary is presented in full and covers the fundamental and most commonly used data items relevant to crystal structure analysis. The dictionary is also available as an electronic file suitable for CIF computer applications. Future extensions to the dictionary will include data items used in more specialized areas of crystallography. (orig.)
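The simplest CIF constructs the abstract mentions — `data_` block headers and underscore-prefixed data names followed by values — can be read with a few lines of code. This is a sketch for illustration only, not a conforming CIF parser (the loop facility and multi-line values are omitted).

```python
def parse_cif(text):
    """Read data_ blocks and single '_name value' items from CIF-like text."""
    blocks, current = {}, None
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):       # skip blanks and comments
            continue
        if line.lower().startswith("data_"):       # start of a data block
            current = line[5:]
            blocks[current] = {}
        elif line.startswith("_") and current is not None:
            name, _, value = line.partition(" ")
            blocks[current][name] = value.strip().strip("'")
    return blocks

sample = """\
data_quartz
_chemical_formula_sum 'Si O2'
_cell_length_a 4.913
"""
cif = parse_cif(sample)
assert cif["quartz"]["_cell_length_a"] == "4.913"
```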

  16. An information retrieval system for research file data

    Science.gov (United States)

    Joan E. Lengel; John W. Koning

    1978-01-01

    Research file data have been successfully retrieved at the Forest Products Laboratory through a high-speed cross-referencing system involving the computer program FAMULUS as modified by the Madison Academic Computing Center at the University of Wisconsin. The method of data input, transfer to computer storage, system utilization, and effectiveness are discussed....

  17. 76 FR 14964 - Combined Notice of Filings #1

    Science.gov (United States)

    2011-03-18

    ...: Rate Schedule 1 for Reactive Supply Service to be effective 6/1/2011. Filed Date: 03/08/2011. Accession... link on the Web site that enables subscribers to receive e-mail notification when a document is added...

  18. XRootd, disk-based, caching-proxy for optimization of data-access, data-placement and data-replication

    CERN Document Server

    Tadel, Matevz

    2013-01-01

Following the smashing success of the XRootd-based USCMS data federation, the AAA project investigated extensions of the federation architecture by developing two sample implementations of an XRootd, disk-based, caching proxy. The first one simply starts fetching a whole file as soon as a file-open request is received and is suitable when completely random file access is expected or when it is already known that the whole file will be read. The second implementation supports on-demand downloading of partial files. Extensions to the Hadoop file system have been developed to allow for an immediate fallback to network access when local HDFS storage fails to provide the requested block. Tools needed to analyze and to tweak block replication factors and to inject downloaded blocks into a running HDFS installation have also been developed. Both cache implementations are in operation at UCSD and several tests were also performed at UNL and UW-M. Operational experience and applications to automatic storage healing and opportunistic compu...

  19. Optimal Caching in Multicast 5G Networks with Opportunistic Spectrum Access

    KAUST Repository

    Emara, Mostafa

    2018-01-15

Cache-enabled small base station (SBS) densification is foreseen as a key component of 5G cellular networks. This architecture enables storing popular files at the network edge (i.e., SBS caches), which empowers local communication and alleviates traffic congestion at the core/backhaul network. This paper develops a mathematical framework, based on stochastic geometry, to characterize the hit probability of a cache-enabled multicast 5G network with SBS multi-channel capabilities and opportunistic spectrum access. To this end, we first derive the hit probability by characterizing opportunistic spectrum access success probabilities, service distance distributions, and coverage probabilities. The optimal caching distribution to maximize the hit probability is then computed. The performance and trade-offs of the derived optimal caching distributions are then assessed and compared with two widely employed caching distribution schemes, namely uniform and Zipf caching, through numerical results and extensive simulations. It is shown that Zipf caching is almost optimal only in scenarios with a large number of available channels and large cache sizes.
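The two baseline policies the paper compares against can be contrasted with a back-of-the-envelope calculation (the numbers below are illustrative, not from the paper): under Zipf file popularity, caching the C most popular files beats caching C files uniformly at random.

```python
def zipf_popularity(n, alpha):
    """Request probability of files 1..n under a Zipf law with exponent alpha."""
    w = [i ** -alpha for i in range(1, n + 1)]
    s = sum(w)
    return [x / s for x in w]

def hit_ratio_topC(pop, C):
    """Hit ratio when the C most popular files are cached."""
    return sum(sorted(pop, reverse=True)[:C])

def hit_ratio_uniform(pop, C):
    """Expected hit ratio when C of n files are cached uniformly at random."""
    return C / len(pop)

pop = zipf_popularity(n=1000, alpha=0.8)
print(hit_ratio_topC(pop, 50), hit_ratio_uniform(pop, 50))
```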

  20. LASIP-III, a generalized processor for standard interface files

    International Nuclear Information System (INIS)

    Bosler, G.E.; O'Dell, R.D.; Resnik, W.M.

    1976-03-01

    The LASIP-III code was developed for processing Version III standard interface data files which have been specified by the Committee on Computer Code Coordination. This processor performs two distinct tasks, namely, transforming free-field format, BCD data into well-defined binary files and providing for printing and punching data in the binary files. While LASIP-III is exported as a complete free-standing code package, techniques are described for easily separating the processor into two modules, viz., one for creating the binary files and one for printing the files. The two modules can be separated into free-standing codes or they can be incorporated into other codes. Also, the LASIP-III code can be easily expanded for processing additional files, and procedures are described for such an expansion. 2 figures, 8 tables

  1. CryptoCache: A Secure Sharable File Cache for Roaming Users

    DEFF Research Database (Denmark)

    Jensen, Christian D.

    2000-01-01

    Small mobile computers are now sufficiently powerful to run many applications, but storage capacity remains limited so working files cannot be cached or stored locally. Even if files can be stored locally, the mobile device is not powerful enough to act as server in collaborations with other users. Conventional distributed file systems cache everything locally or not at all; there is no possibility to cache files on nearby nodes. In this paper we present the design of a secure cache system called CryptoCache that allows roaming users to cache files on untrusted file hosting servers. The system allows flexible sharing of cached files among unauthenticated users, i.e. unlike most distributed file systems CryptoCache does not require a global authentication framework. Files are encrypted when they are transferred over the network and while stored on untrusted servers. The system uses public key…

  2. GEODOC: the GRID document file, record structure and data element description

    Energy Technology Data Exchange (ETDEWEB)

    Trippe, T.; White, V.; Henderson, F.; Phillips, S.

    1975-11-06

    The purpose of this report is to describe the information structure of the GEODOC file. GEODOC is a computer based file which contains the descriptive cataloging and indexing information for all documents processed by the National Geothermal Information Resource Group. This file (along with other GRID files) is managed by DBMS, the Berkeley Data Base Management System. Input for the system is prepared using the IRATE Text Editing System with its extended (12 bit) character set, or punched cards.

  3. Virus Alert: Ten Steps to Safe Computing.

    Science.gov (United States)

    Gunter, Glenda A.

    1997-01-01

    Discusses computer viruses and explains how to detect them; discusses virus protection and the need to update antivirus software; and offers 10 safe computing tips, including scanning floppy disks and commercial software, how to safely download files from the Internet, avoiding pirated software copies, and backing up files. (LRW)

  4. National Radiobiology Archives distributed access programmer's guide

    International Nuclear Information System (INIS)

    Prather, J.C.; Smith, S.K.; Watson, C.R.

    1991-12-01

    The National Radiobiology Archives is a comprehensive effort to gather, organize, and catalog original data, representative specimens, and supporting materials related to significant radiobiology studies. This provides researchers with information for analyses which compare or combine results of these and other studies and with materials for analysis by advanced molecular biology techniques. This Programmer's Guide describes the database access software, NRADEMO, and the subset loading script NRADEMO/MAINT/MAINTAIN, which comprise the National Radiobiology Archives Distributed Access Package. The guide is intended for use by an experienced database management specialist. It contains information about the physical and logical organization of the software and data files. It also contains printouts of all the scripts and associated batch processing files. It is part of a suite of documents published by the National Radiobiology Archives

  5. Distributed computing testbed for a remote experimental environment

    International Nuclear Information System (INIS)

    Butner, D.N.; Casper, T.A.; Howard, B.C.; Henline, P.A.; Davis, S.L.; Barnes, D.

    1995-01-01

    Collaboration is increasing as physics research becomes concentrated on a few large, expensive facilities, particularly in magnetic fusion energy research, with national and international participation. These facilities are designed for steady state operation and interactive, real-time experimentation. We are developing tools to provide for the establishment of geographically distant centers for interactive operations; such centers would allow scientists to participate in experiments from their home institutions. A testbed is being developed for a Remote Experimental Environment (REE), a ''Collaboratory.'' The testbed will be used to evaluate the ability of a remotely located group of scientists to conduct research on the DIII-D Tokamak at General Atomics. The REE will serve as a testing environment for advanced control and collaboration concepts applicable to future experiments. Process-to-process communications over high speed wide area networks provide real-time synchronization and exchange of data among multiple computer networks, while the ability to conduct research is enhanced by adding audio/video communication capabilities. The Open Software Foundation's Distributed Computing Environment is being used to test concepts in distributed control, security, naming, remote procedure calls and distributed file access using the Distributed File Services. We are exploring the technology and sociology of remotely participating in the operation of a large scale experimental facility

  6. MOLA: a bootable, self-configuring system for virtual screening using AutoDock4/Vina on computer clusters

    Directory of Open Access Journals (Sweden)

    Abreu Rui MV

    2010-10-01

    Full Text Available Abstract Background Virtual screening of small molecules using molecular docking has become an important tool in drug discovery. However, large-scale virtual screening is time demanding and usually requires dedicated computer clusters. There are a number of software tools that perform virtual screening using AutoDock4, but they require access to dedicated Linux computer clusters. Also, no software is available for performing virtual screening with Vina using computer clusters. In this paper we present MOLA, an easy-to-use graphical user interface tool that automates parallel virtual screening using AutoDock4 and/or Vina on bootable non-dedicated computer clusters. Implementation MOLA automates several tasks including: ligand preparation, distribution of parallel AutoDock4/Vina jobs and result analysis. When the virtual screening project finishes, an OpenOffice spreadsheet file opens with the ligands ranked by binding energy and distance to the active site. All result files can automatically be recorded on a USB flash drive or on the hard-disk drive using VirtualBox. MOLA works inside a customized Live CD GNU/Linux operating system, developed by us, that bypasses the original operating system installed on the computers used in the cluster. This operating system boots from a CD on the master node and then clusters other computers as slave nodes via Ethernet connections. Conclusion MOLA is an ideal virtual screening tool for non-experienced users with a limited number of multi-platform heterogeneous computers available and no access to dedicated Linux computer clusters. When a virtual screening project finishes, the computers can simply be restarted to their original operating system. The originality of MOLA lies in the fact that any platform-independent computer available can be added to the cluster, without ever using the computer's hard-disk drive and without interfering with the installed operating system. With a cluster of 10 processors, and a
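
    The result-analysis step described above, ranking ligands by binding energy and distance to the active site, can be sketched as follows. The field names and CSV layout are hypothetical; MOLA's actual spreadsheet columns may differ:

```python
import csv
import io

def rank_ligands(results):
    """Sort docking results: lowest (most negative) binding energy first,
    ties broken by distance to the active site."""
    return sorted(results, key=lambda r: (r["energy_kcal_mol"], r["distance_A"]))

def to_spreadsheet(results):
    """Write the ranked ligands as CSV, analogous to MOLA's spreadsheet output."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["ligand", "energy_kcal_mol", "distance_A"])
    writer.writeheader()
    writer.writerows(rank_ligands(results))
    return buf.getvalue()

hits = [
    {"ligand": "ZINC0002", "energy_kcal_mol": -7.1, "distance_A": 3.2},
    {"ligand": "ZINC0001", "energy_kcal_mol": -9.4, "distance_A": 1.8},
]
print(to_spreadsheet(hits))
```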

  7. A New Data Access Mechanism for HDFS

    Science.gov (United States)

    Li, Qiang; Sun, Zhenyu; Wei, Zhanchen; Sun, Gongxing

    2017-10-01

    With the era of big data emerging, Hadoop has become the de facto standard big data processing platform. However, it is still difficult to get legacy applications, such as High Energy Physics (HEP) applications, to run efficiently on the Hadoop platform. There are two reasons for this difficulty: firstly, random access is not supported on the Hadoop File System (HDFS); secondly, it is difficult to make legacy applications adapt to HDFS's streaming data processing mode. In order to address the two issues, a new read and write mechanism for HDFS is proposed. With this mechanism, data access is done on the local file system instead of through HDFS streaming interfaces. To allow users to modify files, three attributes, namely permissions, owner and group, are imposed on Block objects. Blocks stored on Datanodes have the same attributes as the file they belong to. Users can modify blocks while the Map task is running locally, and HDFS is responsible for updating the remaining replicas after the block modification finishes. To further improve the performance of the Hadoop system, a fully localized task execution mechanism is implemented for I/O-intensive jobs. Test results show that average CPU utilization is improved by 10% with the new task selection strategy, and data read and write performances are improved by about 10% and 30% respectively.
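
    The block-attribute idea described above, blocks carrying the permissions, owner and group of the file that owns them, can be sketched like this. It is a simplified model for intuition, not the actual HDFS implementation:

```python
from dataclasses import dataclass

@dataclass
class Block:
    block_id: int
    permissions: str  # e.g. "rw-r--r--", inherited from the owning file
    owner: str
    group: str

def blocks_for_file(file_meta, block_ids):
    """Give every block the same attributes as the file it belongs to."""
    return [Block(b, file_meta["permissions"], file_meta["owner"], file_meta["group"])
            for b in block_ids]

def can_modify(block, user, groups):
    """Owner needs the owner write bit; group members need the group write bit;
    everyone else needs the 'other' write bit."""
    if user == block.owner:
        return block.permissions[1] == "w"
    if block.group in groups:
        return block.permissions[4] == "w"
    return block.permissions[7] == "w"

blocks = blocks_for_file({"permissions": "rw-r--r--", "owner": "alice", "group": "hep"},
                         [101, 102])
```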

  8. Evaluation of Single File Systems Reciproc, Oneshape, and WaveOne using Cone Beam Computed Tomography –An In Vitro Study

    Science.gov (United States)

    Dhingra, Annil; Miglani, Anjali

    2015-01-01

    Background Successful endodontic therapy depends on many factors; one of the most important steps in any root canal treatment is root canal preparation. In addition, respecting the original shape of the canal is of equal importance; otherwise, canal aberrations such as transportation will be created. Aim The purpose of this study was to compare and evaluate the reciprocating WaveOne and Reciproc and the rotary OneShape single-file instrumentation systems with respect to cervical dentin thickness, cross-sectional area and canal transportation in mandibular first molars using cone-beam computed tomography. Materials and Methods Sixty mandibular first molars extracted for periodontal reasons were collected from the Department of Oral and Maxillofacial Surgery. Teeth were prepared using one rotary and two reciprocating single-file systems and were divided into three groups of 20 teeth each. Pre- and post-instrumentation scans were taken and evaluated for three parameters: canal transportation, cervical dentinal thickness and cross-sectional area. Results were analysed statistically using ANOVA with post-hoc Tukey analysis. Results The change in cross-sectional area after filing showed a significant difference at 0 mm, 1 mm, 2 mm and 7 mm. For each file system, over a distance of 7 mm (starting from 0 mm, with evaluation at 1 mm, 2 mm, 3 mm, 5 mm and 7 mm), the results showed a significant difference among the file systems at various lengths (p = 0.014, 0.046, 0.004, 0.028, 0.005 and 0.029, respectively). Mean cervical dentin removal was highest at all levels for OneShape and lowest for WaveOne, indicating the better performance of WaveOne and Reciproc over the OneShape file system. A significant difference was found at 9 mm, 11 mm and 12 mm between all three file systems (p < 0.001 at each level). Conclusion It was concluded that reciprocating motion is better than rotary motion for all three parameters: canal transportation, cross-sectional area and cervical dentinal thickness. PMID:26023639

  9. RRB / SSI Interface Checkwriting Integrated Computer Operation Extract File (CHICO)

    Data.gov (United States)

    Social Security Administration — This monthly file provides SSA with information about benefit payments made to railroad retirement beneficiaries. SSA uses this data to verify Supplemental Security...

  10. Exploiting Virtualization and Cloud Computing in ATLAS

    International Nuclear Information System (INIS)

    Harald Barreiro Megino, Fernando; Van der Ster, Daniel; Benjamin, Doug; De, Kaushik; Gable, Ian; Paterson, Michael; Taylor, Ryan; Hendrix, Val; Vitillo, Roberto A; Panitkin, Sergey; De Silva, Asoka; Walker, Rod

    2012-01-01

    The ATLAS Computing Model was designed around the concept of grid computing; since the start of data-taking, this model has proven very successful in the federated operation of more than one hundred Worldwide LHC Computing Grid (WLCG) sites for offline data distribution, storage, processing and analysis. However, new paradigms in computing, namely virtualization and cloud computing, present improved strategies for managing and provisioning IT resources that could allow ATLAS to more flexibly adapt and scale its storage and processing workloads on varied underlying resources. In particular, ATLAS is developing a “grid-of-clouds” infrastructure in order to utilize WLCG sites that make resources available via a cloud API. This work will present the current status of the Virtualization and Cloud Computing R and D project in ATLAS Distributed Computing. First, strategies for deploying PanDA queues on cloud sites will be discussed, including the introduction of a “cloud factory” for managing cloud VM instances. Next, performance results when running on virtualized/cloud resources at CERN LxCloud, StratusLab, and elsewhere will be presented. Finally, we will present the ATLAS strategies for exploiting cloud-based storage, including remote XROOTD access to input data, management of EC2-based files, and the deployment of cloud-resident LCG storage elements.

  11. 78 FR 64491 - Combined Notice of Filings #1

    Science.gov (United States)

    2013-10-29

    .... Comments Due: 5 p.m. ET 11/12/13. Docket Numbers: ER12-2570-002; ER10-3041-002. Applicants: Panther Creek... Panther Creek Operating, LLC, et al. Filed Date: 10/21/13. Accession Number: 20131021-5160. Comments Due...

  12. 78 FR 15359 - Combined Notice of Filings

    Science.gov (United States)

    2013-03-11

    ...: WBI Energy Transmission, Inc. Description: 2013 Annual Fuel and Electric Power Reimbursement to be.... Description: Storm Surcharge 2013 to be effective 4/1/2013. Filed Date: 3/1/13. Accession Number: 20130301... Numbers: RP13-668-000. Applicants: CF Industries Enterprises, Inc., CF Industries Nitrogen, LLC...

  13. Public census data on CD-ROM at Lawrence Berkeley Laboratory. Revision 3

    Energy Technology Data Exchange (ETDEWEB)

    Merrill, D.W.

    1993-01-16

    The Comprehensive Epidemiologic Data Resource (CEDR) and Populations at Risk to Environmental Pollution (PAREP) projects, of the Information and Computing Sciences Division (ICSD) at Lawrence Berkeley Laboratory (LBL), are using public socioeconomic and geographic data files which are available to CEDR and PAREP collaborators via LBL's computing network. At this time 72 CD-ROM diskettes (approximately 37 gigabytes) are on line via the Unix file server "cedrcd.lbl.gov". Most of the files are from the US Bureau of the Census, and many of these pertain to the 1990 Census of Population and Housing. All the CD-ROM diskettes contain documentation in the form of ASCII text files. In addition, printed documentation for most files is available for inspection at University of California Data and Technical Assistance (UC DATA), tel. (510) 642-6571, or the UC Documents Library, tel. (510) 642-2569, both located on the UC Berkeley campus. Many of the CD-ROM diskettes distributed by the Census Bureau contain software for PC-compatible computers, for easily accessing the data. Shared access to the data is maintained through a collaboration among the CEDR and PAREP projects at LBL, UC DATA, and the UC Documents Library. LBL is grateful to UC DATA and the UC Documents Library for the use of their CD-ROM diskettes. Shared access to LBL facilities may be restricted in the future if costs become prohibitive. Via the Sun Network File System (NFS), these data can be exported to Internet computers for direct access by the user's application program(s). Due to the size of the files, this access method is preferred over File Transfer Protocol (FTP) access. Please contact Deane Merrill (dwmerrill@lbl.gov) if you wish to make use of the data.

  14. Public census data on CD-ROM at Lawrence Berkeley Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Merrill, D.W.

    1993-01-16

    The Comprehensive Epidemiologic Data Resource (CEDR) and Populations at Risk to Environmental Pollution (PAREP) projects, of the Information and Computing Sciences Division (ICSD) at Lawrence Berkeley Laboratory (LBL), are using public socioeconomic and geographic data files which are available to CEDR and PAREP collaborators via LBL's computing network. At this time 72 CD-ROM diskettes (approximately 37 gigabytes) are on line via the Unix file server "cedrcd.lbl.gov". Most of the files are from the US Bureau of the Census, and many of these pertain to the 1990 Census of Population and Housing. All the CD-ROM diskettes contain documentation in the form of ASCII text files. In addition, printed documentation for most files is available for inspection at University of California Data and Technical Assistance (UC DATA), tel. (510) 642-6571, or the UC Documents Library, tel. (510) 642-2569, both located on the UC Berkeley campus. Many of the CD-ROM diskettes distributed by the Census Bureau contain software for PC-compatible computers, for easily accessing the data. Shared access to the data is maintained through a collaboration among the CEDR and PAREP projects at LBL, UC DATA, and the UC Documents Library. LBL is grateful to UC DATA and the UC Documents Library for the use of their CD-ROM diskettes. Shared access to LBL facilities may be restricted in the future if costs become prohibitive. Via the Sun Network File System (NFS), these data can be exported to Internet computers for direct access by the user's application program(s). Due to the size of the files, this access method is preferred over File Transfer Protocol (FTP) access. Please contact Deane Merrill (dwmerrill@lbl.gov) if you wish to make use of the data.

  15. The Improvement and Performance of Mobile Environment Using Both Cloud and Text Computing

    OpenAIRE

    S.Saravana Kumar; J.Lakshmi Priya; P.Hannah Jennifer; N.Jeff Monica; Fathima

    2013-01-01

    This research paper presents a design model for a file sharing system for ubiquitous mobile devices using both cloud and text computing. File sharing is one of the rationales for computer networks, with increasing demand for file sharing applications and technologies in small and large enterprise networks and on the Internet. File transfer is an important process in any form of computing, as we need to share the data across. ...

  16. The NILE system architecture: fault-tolerant, wide-area access to computing and data resources

    International Nuclear Information System (INIS)

    Ricciardi, Aleta; Ogg, Michael; Rothfus, Eric

    1996-01-01

    NILE is a multi-disciplinary project building a distributed computing environment for HEP. It provides wide-area, fault-tolerant, integrated access to processing and data resources for collaborators of the CLEO experiment, though the goals and principles are applicable to many domains. NILE has three main objectives: a realistic distributed system architecture design, the design of a robust data model, and a Fast-Track implementation providing a prototype design environment which will also be used by CLEO physicists. This paper focuses on the software and wide-area system architecture design and the computing issues involved in making NILE services highly-available. (author)

  17. Fixing Accessibility Issues in Open-Source Teaching Repositories

    Directory of Open Access Journals (Sweden)

    Francisco Javier Díaz

    2017-12-01

    Full Text Available At the LINTI, the New Information Technologies Research Laboratory of the Computer Science School at the National University of La Plata, a project is being developed that involves integrating the repository, implemented using DSpace, with different tools and platforms used in academic tasks. Accessibility is a process that cuts across all software development stages, so when using a free software product it is important to evaluate it in order to correct faults where necessary. This article describes an accessibility validation of a DSpace repository, using screen readers for manual tests, automatic validation with software tools, and experimental tests with users with and without disabilities. The evaluation covers both the basic functions and the implemented extensions. The original DSpace software was extended through integration with different tools and platforms, such as the Moodle LMS, the library management system Meran, the file management services Dropbox and Google Drive, and the social network Facebook. The tools used during the accessibility evaluation were Examinator, Google ChromeVox and SiMor, which was implemented entirely at the LINTI. The experimental tests were conducted with blind and deaf persons, most of them college students. All the validation results are detailed in tables and graphs showing the measured values. The article also describes the changes that were necessary in the repository to improve the user experience and ensure Web service accessibility.

  18. Tabulation of Fundamental Assembly Heat and Radiation Source Files

    International Nuclear Information System (INIS)

    T. deBues; J.C. Ryman

    2006-01-01

    The purpose of this calculation is to tabulate a set of computer files for use as input to the WPLOAD thermal loading software. These files contain details regarding heat and radiation from pressurized water reactor (PWR) assemblies and boiling water reactor (BWR) assemblies. The scope of this calculation is limited to rearranging and reducing the existing file information into a more streamlined set of tables for use as input to WPLOAD. The electronic source term files used as input to this calculation were generated from the output files of the SAS2H/ORIGEN-S sequence of the SCALE Version 4.3 modular code system, as documented in References 2.1.1 and 2.1.2, and are included in Attachment II

  19. A Python library for FAIRer access and deposition to the Metabolomics Workbench Data Repository.

    Science.gov (United States)

    Smelter, Andrey; Moseley, Hunter N B

    2018-01-01

    The Metabolomics Workbench Data Repository is a public repository of mass spectrometry and nuclear magnetic resonance data and metadata derived from a wide variety of metabolomics studies. The data and metadata for each study is deposited, stored, and accessed via files in the domain-specific 'mwTab' flat file format. In order to improve the accessibility, reusability, and interoperability of the data and metadata stored in 'mwTab' formatted files, we implemented a Python library and package. This Python package, named 'mwtab', is a parser for the domain-specific 'mwTab' flat file format, which provides facilities for reading, accessing, and writing 'mwTab' formatted files. Furthermore, the package provides facilities to validate both the format and required metadata elements of a given 'mwTab' formatted file. In order to develop the 'mwtab' package we used the official 'mwTab' format specification. We used Git version control along with Python unit-testing framework as well as continuous integration service to run those tests on multiple versions of Python. Package documentation was developed using sphinx documentation generator. The 'mwtab' package provides both Python programmatic library interfaces and command-line interfaces for reading, writing, and validating 'mwTab' formatted files. Data and associated metadata are stored within Python dictionary- and list-based data structures, enabling straightforward, 'pythonic' access and manipulation of data and metadata. Also, the package provides facilities to convert 'mwTab' files into a JSON formatted equivalent, enabling easy reusability of the data by all modern programming languages that implement JSON parsers. The 'mwtab' package implements its metadata validation functionality based on a pre-defined JSON schema that can be easily specialized for specific types of metabolomics studies. 
The library also provides a command-line interface for interconversion between 'mwTab' and JSONized formats in raw text and a
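
    As a rough illustration of the kind of parsing and JSON conversion the 'mwtab' package provides, the toy parser below handles a minimal mwTab-like layout ('#SECTION' headers followed by tab-separated key-value lines). This is an assumption-laden simplification; the real mwTab format specification is considerably richer:

```python
import json

def parse_mwtab_like(text):
    """Toy parser for an mwTab-style flat file: '#SECTION' header lines
    followed by KEY<TAB>VALUE lines, collected into nested dictionaries."""
    data, section = {}, None
    for line in text.splitlines():
        line = line.rstrip()
        if not line:
            continue
        if line.startswith("#"):
            section = line.lstrip("#").strip()
            data[section] = {}
        elif section is not None and "\t" in line:
            key, value = line.split("\t", 1)
            data[section][key.strip()] = value.strip()
    return data

sample = "#STUDY\nSTUDY_TITLE\tExample study\n#ANALYSIS\nANALYSIS_TYPE\tMS\n"
print(json.dumps(parse_mwtab_like(sample)))  # JSONized equivalent of the flat file
```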

  20. COMPUTING

    CERN Multimedia

    M. Kasemann

    CCRC’08 challenges and CSA08 During the February campaign of the Common Computing readiness challenges (CCRC’08), the CMS computing team had achieved very good results. The link between the detector site and the Tier0 was tested by gradually increasing the number of parallel transfer streams well beyond the target. Tests covered the global robustness at the Tier0, processing a massive number of very large files and with a high writing speed to tapes.  Other tests covered the links between the different Tiers of the distributed infrastructure and the pre-staging and reprocessing capacity of the Tier1’s: response time, data transfer rate and success rate for Tape to Buffer staging of files kept exclusively on Tape were measured. In all cases, coordination with the sites was efficient and no serious problem was found. These successful preparations prepared the ground for the second phase of the CCRC’08 campaign, in May. The Computing Software and Analysis challen...

  1. Scalable Strategies for Computing with Massive Data

    Directory of Open Access Journals (Sweden)

    Michael Kane

    2013-11-01

    Full Text Available This paper presents two complementary statistical computing frameworks that address challenges in parallel processing and the analysis of massive data. First, the foreach package allows users of the R programming environment to define parallel loops that may be run sequentially on a single machine, in parallel on a symmetric multiprocessing (SMP) machine, or in cluster environments without platform-specific code. Second, the bigmemory package implements memory- and file-mapped data structures that provide (a) access to arbitrarily large data while retaining a look and feel that is familiar to R users and (b) data structures that are shared across processor cores in order to support efficient parallel computing techniques. Although these packages may be used independently, this paper shows how they can be used in combination to address challenges that have effectively been beyond the reach of researchers who lack specialized software development skills or expensive hardware.
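
    The foreach idea above, one loop body with swappable sequential or parallel backends, has a close standard-library analogue in Python (used here instead of R so the examples in this section share one language):

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_apply(func, items, workers=1):
    """foreach-style loop: workers=1 runs the loop body sequentially, while a
    larger value fans the same body out over a thread pool, with no change
    to the body itself."""
    if workers == 1:
        return [func(x) for x in items]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(func, items))

print(parallel_apply(lambda x: x * x, range(6), workers=4))  # [0, 1, 4, 9, 16, 25]
```

    The bigmemory half of the pairing, file-backed data shared across workers, has a similar stdlib analogue in mmap or numpy.memmap, which lets multiple processes map the same on-disk array without copying it.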

  2. User's guide for the implementation of level one of the proposed American National Standard Specifications for an information interchange data descriptive file on control data 6000/7000 series computers

    CERN Document Server

    Wiley, R A

    1977-01-01

    User's guide for the implementation of level one of the proposed American National Standard Specifications for an information interchange data descriptive file on control data 6000/7000 series computers

  3. Computer Security: DNS to the rescue!

    CERN Multimedia

    Stefan Lueders, Computer Security Team

    2016-01-01

    Why you should be grateful to the Domain Name System at CERN.   Incidents involving so-called “drive-by” infections and “ransomware” are on the rise. Whilst an up-to-date and fully patched operating system is essential; whilst running anti-virus software with current virus signature files is a must; whilst “stop --- think --- don’t click” surely helps, we can still go one step further in better protecting your computers: DNS to the rescue. The DNS, short for Domain Name System, translates the web address you want to visit (like “http://cern.ch”) to a machine-readable format (the IP address, here: “188.184.9.234”). For years, we have automatically monitored the DNS translation requests made by your favourite web browser (actually by your operating system, but that doesn’t matter here), and we have automatically informed you if your computer tried to access a website known to hos...
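
    The protection described here hinges on matching requested hostnames against a list of known-bad domains. A minimal sketch of that check follows; the blocklist contents and matching policy are illustrative, not CERN's actual rules:

```python
def is_blocked(hostname, blocklist):
    """A hostname is blocked if it, or any parent domain, is on the blocklist,
    so 'evil.example.com' is caught by an 'example.com' entry."""
    labels = hostname.lower().rstrip(".").split(".")
    return any(".".join(labels[i:]) in blocklist for i in range(len(labels)))

bad = {"malware-site.example", "example.com"}
print(is_blocked("cdn.evil.example.com", bad))  # True
print(is_blocked("cern.ch", bad))               # False
```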

  4. 75 FR 35782 - Combined Notice of Filings No 2

    Science.gov (United States)

    2010-06-23

    ... Marketing, Inc. Filed Date: 06/04/2010. Accession Number: 20100604-0205. Comment Date: 5 p.m. Eastern Time... notification when a document is added to a subscribed docket(s). For assistance with any FERC Online service...

  5. A Hybrid Scheme for Fine-Grained Search and Access Authorization in Fog Computing Environment

    Science.gov (United States)

    Xiao, Min; Zhou, Jing; Liu, Xuejiao; Jiang, Mingda

    2017-01-01

    In the fog computing environment, the encrypted sensitive data may be transferred to multiple fog nodes on the edge of a network for low latency; thus, fog nodes need to implement a search over encrypted data as a cloud server. Since the fog nodes tend to provide service for IoT applications often running on resource-constrained end devices, it is necessary to design lightweight solutions. At present, there is little research on this issue. In this paper, we propose a fine-grained owner-forced data search and access authorization scheme spanning user-fog-cloud for resource constrained end users. Compared to existing schemes only supporting either index encryption with search ability or data encryption with fine-grained access control ability, the proposed hybrid scheme supports both abilities simultaneously, and index ciphertext and data ciphertext are constructed based on a single ciphertext-policy attribute based encryption (CP-ABE) primitive and share the same key pair, thus the data access efficiency is significantly improved and the cost of key management is greatly reduced. Moreover, in the proposed scheme, the resource constrained end devices are allowed to rapidly assemble ciphertexts online and securely outsource most of decryption task to fog nodes, and mediated encryption mechanism is also adopted to achieve instantaneous user revocation instead of re-encrypting ciphertexts with many copies in many fog nodes. The security and the performance analysis show that our scheme is suitable for a fog computing environment. PMID:28629131

  6. A Hybrid Scheme for Fine-Grained Search and Access Authorization in Fog Computing Environment.

    Science.gov (United States)

    Xiao, Min; Zhou, Jing; Liu, Xuejiao; Jiang, Mingda

    2017-06-17

    In the fog computing environment, the encrypted sensitive data may be transferred to multiple fog nodes on the edge of a network for low latency; thus, fog nodes need to implement a search over encrypted data as a cloud server. Since the fog nodes tend to provide service for IoT applications often running on resource-constrained end devices, it is necessary to design lightweight solutions. At present, there is little research on this issue. In this paper, we propose a fine-grained owner-forced data search and access authorization scheme spanning user-fog-cloud for resource constrained end users. Compared to existing schemes only supporting either index encryption with search ability or data encryption with fine-grained access control ability, the proposed hybrid scheme supports both abilities simultaneously, and index ciphertext and data ciphertext are constructed based on a single ciphertext-policy attribute based encryption (CP-ABE) primitive and share the same key pair, thus the data access efficiency is significantly improved and the cost of key management is greatly reduced. Moreover, in the proposed scheme, the resource constrained end devices are allowed to rapidly assemble ciphertexts online and securely outsource most of decryption task to fog nodes, and mediated encryption mechanism is also adopted to achieve instantaneous user revocation instead of re-encrypting ciphertexts with many copies in many fog nodes. The security and the performance analysis show that our scheme is suitable for a fog computing environment.
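
    One core idea in this abstract, the searchable index ciphertext and the data ciphertext sharing a single key so that key management shrinks, can be imitated very loosely with standard-library key derivation. This is a toy for intuition only: it uses none of the CP-ABE machinery, and the XOR cipher is deliberately insecure, so nothing here should be reused in practice:

```python
import hashlib
import hmac

def derive(master, label):
    """Derive per-purpose keys from one master secret, so index and data
    keys are managed as a single secret (loose analogy to the shared key pair)."""
    return hmac.new(master, label, hashlib.sha256).digest()

def index_tag(master, keyword):
    # Deterministic tags allow equality search over encrypted keywords.
    return hmac.new(derive(master, b"index"), keyword, hashlib.sha256).hexdigest()

def xor_encrypt(master, data):
    # XOR keystream toy cipher for illustration only -- never use in practice.
    key = derive(master, b"data")
    stream = hashlib.sha256(key).digest() * (len(data) // 32 + 1)
    return bytes(a ^ b for a, b in zip(data, stream))

master = b"demo-master-secret"
tag = index_tag(master, b"glucose")
ct = xor_encrypt(master, b"patient record")
```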

  7. A service-oriented data access control model

    Science.gov (United States)

    Meng, Wei; Li, Fengmin; Pan, Juchen; Song, Song; Bian, Jiali

    2017-01-01

    The development of mobile computing, cloud computing and distributed computing meets growing individual service needs. In the face of complex application systems, ensuring real-time, dynamic, and fine-grained data access control is an urgent problem. By analyzing common data access control models, the paper proposes a service-oriented access control model on the basis of the mandatory access control model. By regarding system services as subjects and database data as objects, the model defines access levels and access identification for subjects and objects, and ensures that system services access databases securely.
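
    The abstract builds on the mandatory access control model; the level check underlying such models can be sketched as below. The paper's actual service and data levels are not specified, so this is the textbook Bell-LaPadula variant, with system services as subjects and database objects as objects:

```python
def can_access(subject_level, object_level, mode):
    """Bell-LaPadula-style mandatory access control:
    read down (no read up) and write up (no write down)."""
    if mode == "read":
        return subject_level >= object_level
    if mode == "write":
        return subject_level <= object_level
    return False

print(can_access(3, 1, "read"))   # True: a level-3 service may read level-1 data
print(can_access(3, 1, "write"))  # False: writing down is forbidden
```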

  8. Guided Endodontic Access in Maxillary Molars Using Cone-beam Computed Tomography and Computer-aided Design/Computer-aided Manufacturing System: A Case Report.

    Science.gov (United States)

    Lara-Mendes, Sônia T de O; Barbosa, Camila de Freitas M; Santa-Rosa, Caroline C; Machado, Vinícius C

    2018-05-01

    The aim of this study was to describe a guided endodontic technique that facilitates access to root canals of molars presenting with pulp calcifications. A 61-year-old woman presented to our service with pain in the upper left molar region. The second and third left molars showed signs of apical periodontitis confirmed by the cone-beam computed tomographic (CBCT) scans brought to us by the patient at the initial appointment. Conventional endodontic treatment was discontinued given the difficulty in locating the root canals. Intraoral scanning and the CBCT scans were used to plan the access to the calcified canals by means of implant planning software. Guides were fabricated through rapid prototyping and allowed for the correct orientation of a cylindrical drill used to provide access through the calcifications. After that, the root canals were prepared with reciprocating endodontic instruments and dressed with intracanal medication for 2 weeks. Subsequently, the canals were packed with gutta-percha cones using the hydraulic compression technique, and permanent restorations of the access cavities were performed. By comparing the tomographic images, the authors observed a drastic reduction of the periapical lesions as well as the absence of pain symptoms after 3 months. This condition was maintained at the 1-year follow-up. The guided endodontic technique in maxillary molars was shown to be a fast, safe, and predictable therapy and can be regarded as an excellent option for the location of calcified root canals, avoiding failures in complex cases. Copyright © 2018 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.

  9. Use of WebDAV to Support a Virtual File System in a Coalition Environment

    National Research Council Canada - National Science Library

    Bradney, Jeremiah A

    2006-01-01

    .... By enabling the use of WebDAV in MYSEA, this thesis provides a means for fulfilling the above requirement for secure remote access by creating a virtual web-based file system accessible from the MYSEA MLS network...

  10. [Comparison of effectiveness and safety between Twisted File technique and ProTaper Universal rotary full sequence based on micro-computed tomography].

    Science.gov (United States)

    Chen, Xiao-bo; Chen, Chen; Liang, Yu-hong

    2016-02-18

    To evaluate the efficacy and safety of two types of rotary nickel-titanium system (Twisted File and ProTaper Universal) for root canal preparation based on micro-computed tomography (micro-CT). Twenty extracted molars (including 62 canals) were divided into two experimental groups and instrumented to #25/0.08 following the recommended protocols, using the Twisted File rotary nickel-titanium system (TF) and the ProTaper Universal rotary nickel-titanium system (PU) respectively. Time for root canal instrumentation (accumulated over every single file) was recorded. The 0-3 mm of root surface from the apex was observed under an optical stereomicroscope at 25 × magnification and the presence of crack lines was noted. The root canals were scanned with micro-CT before and after root canal preparation. Three-dimensional images of the canals were reconstructed, calculated and evaluated, and the amount of canal central transportation in the two groups was calculated and compared. A shorter preparation time [(0.53 ± 0.14) min] was observed in the TF group, while the preparation time of the PU group was (2.06 ± 0.39) min (P < 0.05); canal transportation was also lower in the TF group [… vs. (0.097 ± 0.084) mm, P < 0.05]. No instrument separation was observed in either group, and no cracks were found in either group, whether on micro-CT images or under the optical stereomicroscope at 25 × magnification. Compared with ProTaper Universal, Twisted File took less time in root canal preparation and exhibited better shaping ability with less canal transportation.

  11. Software development for on-line computation with PDP 15/76 computer

    Energy Technology Data Exchange (ETDEWEB)

    Viyogi, Y P; Bhattacharjee, T K; De, S K; Basu, A K; Ganguly, N K [Bhabha Atomic Research Centre, Bombay (India). Variable Energy Cyclotron Project

    1979-01-01

    Two important capabilities have been incorporated in the on-line data processing system for pulse height analysis at VEC, for processing data with the codes PHA1 and PHA2, the single-parameter and dual-parameter data acquisition programs. (1) RDMA, written in assembly language, randomly accesses any element of data from data filing devices. This increases the availability of core so that the capability of the processing programs can be enhanced. (2) PHARAD, also written in assembly language, converts data files created by the acquisition programs into FORTRAN-compatible files. These are essential for on-line processing of data. On-line data acquisition and processing by the PDP 15/76 using these facilities are discussed.

  12. Detection Of Alterations In Audio Files Using Spectrograph Analysis

    Directory of Open Access Journals (Sweden)

    Anandha Krishnan G

    2015-08-01

    Full Text Available The present study was carried out to detect changes in audio files using a spectrograph. An audio file format is a file format for storing digital audio data on a computer system. A sound spectrograph is a laboratory instrument that displays a graphical representation of the strengths of the various component frequencies of a sound as time passes. The objectives of the study were to find the changes in the spectrograph of audio files after altering them, to compare those changes with the spectrograph of the original files, and to check for similarities and differences between MP3 and WAV. Five different alterations were carried out on each audio file to analyse the differences between the original and the altered file. For altering an MP3 or WAV audio file by cut/copy, the file was opened in Audacity and a different audio clip was pasted into it; this new file was then analysed to view the differences. By adjusting the necessary parameters the noise was reduced, and the differences between the new file and the original file were analysed. By adjusting the parameters in the dialog box the necessary changes were made. Each edited audio file was opened in the software named Spek, which produces a graph of that particular file; the graph was saved for further analysis. The graph of the original audio was compared with the graph of the edited audio file to identify the alterations.
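
    The comparison the study performs, inspecting the frequency content of audio frames before and after alteration, reduces to computing a magnitude spectrogram for each file and diffing the frames. A stdlib-only sketch (this is illustrative, not the Spek implementation, and the signal is synthetic):

```python
import cmath
import math

def dft_magnitudes(frame):
    """Magnitude of each DFT bin for one frame of samples."""
    n = len(frame)
    return [abs(sum(frame[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]

def spectrogram(samples, frame_size=64, hop=32):
    """Per-frame magnitude spectra: one row per time frame."""
    return [dft_magnitudes(samples[i:i + frame_size])
            for i in range(0, len(samples) - frame_size + 1, hop)]

# A synthetic tone, then an "altered" copy with spliced-in silence.
sr = 8000
tone = [math.sin(2 * math.pi * 440 * t / sr) for t in range(512)]
altered = tone[:256] + [0.0] * 128 + tone[384:]

# Frames covering the splice differ; untouched frames match exactly.
diff = [max(abs(a - b) for a, b in zip(fo, fa))
        for fo, fa in zip(spectrogram(tone), spectrogram(altered))]
```

    Here the opening frames are identical, so their difference is zero, while the frames over the spliced region show large differences — exactly what a side-by-side spectrograph comparison makes visible.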

  13. U.S. EPA River Reach File Version 1.0

    Data.gov (United States)

    Kansas Data Access and Support Center — Reach File Version 1.0 (RF1) is a vector database of approximately 700,000 miles of streams and open waters in the conterminous United States. It is used extensively...

  14. NASA work unit system file maintenance manual

    Science.gov (United States)

    1972-01-01

    The NASA Work Unit System is a management information system for research tasks (i.e., work units) performed under NASA grants and contracts. It supplies profiles on research efforts and statistics on fund distribution. The file maintenance operator can add, delete and change records at a remote terminal or can submit punched cards to the computer room for batch update. The system is designed for file maintenance by a person with little or no knowledge of data processing techniques.

  15. 76 FR 12726 - Tropicana Manufacturing Company, Inc.; Supplemental Notice That Initial Market-Based Rate Filing...

    Science.gov (United States)

    2011-03-08

    ... . To facilitate electronic service, persons with Internet access who will eFile a document and/or be... Manufacturing Company, Inc.; Supplemental Notice That Initial Market-Based Rate Filing Includes Request for... Tropicana Manufacturing Company, Inc.'s application for market-based rate authority, with an accompanying...

  16. Sandia Data Archive (SDA) file specifications

    Energy Technology Data Exchange (ETDEWEB)

    Dolan, Daniel H. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ao, Tommy [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-02-01

    The Sandia Data Archive (SDA) format is a specific implementation of the HDF5 (Hierarchical Data Format version 5) standard. The format was developed for storing data in a universally accessible manner. SDA files may contain one or more data records, each associated with a distinct text label. Primitive records provide basic data storage, while compound records support more elaborate grouping. External records allow text/binary files to be carried inside an archive and later recovered. This report documents version 1.0 of the SDA standard. The information provided here is sufficient for reading from and writing to an archive. Although the format was originally designed for use in MATLAB, broader use is encouraged.

  17. Standard interface files and procedures for reactor physics codes. Version IV

    International Nuclear Information System (INIS)

    O'Dell, R.D.

    1977-09-01

    Standards, procedures, and recommendations of the Committee on Computer Code Coordination for promoting the exchange of reactor physics codes are updated to Version IV status. Standards and procedures covering general programming, program structure, standard interface files, and file management and handling subroutines are included

  18. Remote file inquiry (RFI) system

    Science.gov (United States)

    1975-01-01

    System interrogates and maintains user-definable data files from remote terminals, using English-like, free-form query language easily learned by persons not proficient in computer programming. System operates in asynchronous mode, allowing any number of inquiries within limitation of available core to be active concurrently.

  19. Knowledge management: Role of the the Radiation Safety Information Computational Center (RSICC)

    Science.gov (United States)

    Valentine, Timothy

    2017-09-01

    The Radiation Safety Information Computational Center (RSICC) at Oak Ridge National Laboratory (ORNL) is an information analysis center that collects, archives, evaluates, synthesizes and distributes information, data and codes that are used in various nuclear technology applications. RSICC retains more than 2,000 software packages that have been provided by code developers from various federal and international agencies. RSICC's customers (scientists, engineers, and students from around the world) obtain access to such computing codes (source and/or executable versions) and processed nuclear data files to promote on-going research, to ensure nuclear and radiological safety, and to advance nuclear technology. The role of such information analysis centers is critical for supporting and sustaining nuclear education and training programs both domestically and internationally, as the majority of RSICC's customers are students attending U.S. universities. Additionally, RSICC operates a secure CLOUD computing system to provide access to sensitive export-controlled modeling and simulation (M&S) tools that support both domestic and international activities. This presentation will provide a general review of RSICC's activities, services, and systems that support knowledge management and education and training in the nuclear field.

  20. Evaluation of the Self-Adjusting File system (SAF) for the instrumentation of primary molar root canals: a micro-computed tomographic study.

    Science.gov (United States)

    Kaya, E; Elbay, M; Yiğit, D

    2017-06-01

    The Self-Adjusting File (SAF) system has been recommended for use in permanent teeth since it offers more conservative and effective root-canal preparation when compared to traditional rotary systems. However, no study had evaluated the use of SAF in primary teeth. The aim of this study was to evaluate and compare the SAF, K file (manual instrumentation) and Profile (traditional rotary instrumentation) systems for primary-tooth root-canal preparation in terms of instrumentation time and amount of dentin removed, using micro-computed tomography (μCT) technology. Study Design: The study was conducted with 60 human primary mandibular second molar teeth divided into 3 groups according to instrumentation technique: Group I: SAF (n=20); Group II: K file (n=20); Group III: Profile (n=20). Teeth were embedded in acrylic blocks and scanned with a μCT scanner prior to instrumentation. All distal root canals were prepared up to size 30 for the K file, .04/30 for the Profile, and 2 mm thickness, size 25 for the SAF; instrumentation time was recorded for each tooth, and a second μCT scan was performed after instrumentation was complete. The amount of dentin removed was measured on the three-dimensional images by calculating the difference in root-canal volume before and after preparation. Data were statistically analysed using the Kolmogorov-Smirnov and Kruskal-Wallis tests. Manual instrumentation (K file) resulted in significantly more dentin removal when compared to rotary instrumentation (Profile and SAF), while the SAF system removed significantly less dentin than both manual instrumentation (K file) and traditional rotary instrumentation (Profile) (p < 0.05) when compared with the other systems. Within the experimental conditions of the present study, the SAF seems a useful system for root-canal instrumentation in primary molars because it removed less dentin than the other systems, which is especially important for the relatively thin-walled canals of primary teeth, and because it involves less

  1. Nuclear plant fire incident data file

    International Nuclear Information System (INIS)

    Sideris, A.G.; Hockenbury, R.W.; Yeater, M.L.; Vesely, W.E.

    1979-01-01

    A computerized nuclear plant fire incident data file was developed by American Nuclear Insurers and was further analyzed by Rensselaer Polytechnic Institute with technical and monetary support provided by the Nuclear Regulatory Commission. Data on 214 fires that occurred at nuclear facilities have been entered in the file. A computer program has been developed to sort the fire incidents according to various parameters. The parametric sorts that are presented in this article are significant since they are the most comprehensive statistics presently available on fires that have occurred at nuclear facilities

  2. Grid Computing at GSI for ALICE and FAIR - present and future

    International Nuclear Information System (INIS)

    Schwarz, Kilian; Uhlig, Florian; Karabowicz, Radoslaw; Montiel-Gonzalez, Almudena; Zynovyev, Mykhaylo; Preuss, Carsten

    2012-01-01

    The future FAIR experiments CBM and PANDA have computing requirements that fall in a category that could not currently be satisfied by a single computing centre. One needs a larger, distributed computing infrastructure to cope with the amount of data to be simulated and analysed. Since 2002, GSI has operated a tier2 centre for ALICE-CERN. The central component of the GSI computing facility, and hence the core of the ALICE tier2 centre, is an LSF/SGE batch farm, currently split into three subclusters with a total of 15000 CPU cores shared by the participating experiments, accessible both locally and soon also completely via Grid. In terms of data storage, a 5.5 PB Lustre file system, directly accessible from all worker nodes, is maintained, as well as a 300 TB xrootd-based Grid storage element. Based on this existing expertise, and utilising ALICE's middleware ‘AliEn’, the Grid infrastructure for PANDA and CBM is being built. Besides a tier0 centre at GSI, the computing Grids of the two FAIR collaborations now encompass more than 17 sites in 11 countries and are constantly expanding. The operation of the distributed FAIR computing infrastructure benefits significantly from the experience gained with the ALICE tier2 centre. A close collaboration between ALICE Offline and FAIR provides mutual advantages. The employment of a common Grid middleware as well as compatible simulation and analysis software frameworks ensures significant synergy effects.

  3. 78 FR 43196 - Combined Notice of Filings #2

    Science.gov (United States)

    2013-07-19

    ... Energy Marketing, Inc., Brookfield Energy Marketing LP, Brookfield Energy Marketing US LLC, Brookfield Renewable Energy Marketing US, Brookfield Smoky Mountain Hydropower LLC, Carr Street Generating Station, L.P...: Ameren Illinois Company. Description: Refund Report to be effective N/A. Filed Date: 7/10/13. Accession...

  4. 75 FR 70736 - Combined Notice of Filings #1

    Science.gov (United States)

    2010-11-18

    ...: EC10-98-000. Applicants: GDF Suez S.A., International Power PLC. Description: Supplemental Affidavit of...., International Power plc and its Indicated United States Subsidiaries. Filed Date: 10/29/2010. Accession Number... Generation Holdings, LLC. Description: TPF Generation Holdings, LLC submits an application for authorization...

  5. 75 FR 7577 - Combined Notice of Filings # 1

    Science.gov (United States)

    2010-02-22

    ..., LLC, Baltimore Gas and Electric Company, Constellation Pwr Source Generation LLC, Constellation New... Operating Companies FERC Electric Third Revised Volume No. 1 Open Access Transmission Tariff. Filed Date: 02... Numbers: EG10-20-000. Applicants: Northeastern Power Company. Description: Notice of Self-Certification of...

  6. 77 FR 1064 - Combined Notice of Filings #1

    Science.gov (United States)

    2012-01-09

    ... confirm timely development of new interface pricing software of New York Independent System Operator, Inc... Market Power Analysis of Northern Indiana Public Service Company. Filed Date: 12/28/11. Accession Number..., Duke Energy Indiana, Inc., St. Paul Cogeneration, LLC. Description: Updated market power analysis of...

  7. 75 FR 11528 - Combined Notice of Filings #1

    Science.gov (United States)

    2010-03-11

    ... Order Accepting Initial Market Based Rate Schedule and Granting Waivers and Blanket Authorizations...: Southwest Power Pool, Inc submits an executed service agreement for Network Integration Transmission Service... Integration Transmission Service. Filed Date: 03/02/2010. Accession Number: 20100303-0222. Comment Date: 5 p.m...

  8. 78 FR 7424 - Combined Notice of Filings #1

    Science.gov (United States)

    2013-02-01

    .... Applicants: Public Service Company of New Mexico, Delta Person GP, LLC, BHB Power, LLC, Delta Person, Limited...-003. Applicants: Southwestern Public Service Company. Description: Supplement to June 29, 2012 Triennial Market Power Analysis of Southwestern Public Service Company. Filed Date: 1/24/13. Accession...

  9. Extending DIRAC File Management with Erasure-Coding for efficient storage

    CERN Document Server

    Skipsey, Samuel Cadellin; Britton, David; Crooks, David; Roy, Gareth

    2015-01-01

    The state of the art in Grid-style data management is to achieve increased resilience of data via multiple complete replicas of data files across multiple storage endpoints. While this is effective, it is not the most space-efficient approach to resilience, especially when the reliability of individual storage endpoints is sufficiently high that only a few will be inactive at any point in time. We report on work performed as part of GridPP, extending the DIRAC File Catalogue and file management interface to allow the placement of erasure-coded files: each file is distributed as N identically-sized chunks of data striped across a vector of storage endpoints, encoded such that any M chunks can be lost and the original file can still be reconstructed. The tools developed are transparent to the user and, as well as allowing uploading and downloading of data to Grid storage, also provide the possibility of parallelising access across all of the distributed chunks at once, improving data transfer and IO performance. ...
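
    The erasure-coding idea above, N chunks striped across endpoints such that any M can be lost, can be illustrated with the simplest possible code: a single XOR parity chunk, which tolerates the loss of any one stripe. The GridPP work uses general (N, M) codes, so this is a sketch of the principle only:

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode(data: bytes, n: int):
    """Split data into n equal-size chunks plus one XOR parity chunk."""
    size = -(-len(data) // n)                  # ceiling division
    padded = data.ljust(size * n, b"\x00")     # pad to a whole stripe
    chunks = [padded[i * size:(i + 1) * size] for i in range(n)]
    parity = chunks[0]
    for c in chunks[1:]:
        parity = xor_bytes(parity, c)
    return chunks + [parity]                   # n + 1 stripes

def reconstruct(stripes, lost: int) -> bytes:
    """Rebuild the stripe at index `lost` by XORing all survivors."""
    survivors = [c for i, c in enumerate(stripes) if i != lost]
    rebuilt = survivors[0]
    for c in survivors[1:]:
        rebuilt = xor_bytes(rebuilt, c)
    return rebuilt
```

    Storing n + 1 stripes tolerates one endpoint outage at a cost of 1/n extra space, versus 100% extra for a full second replica; Reed-Solomon-style codes extend the same idea to any M losses.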

  10. The version control service for ATLAS data acquisition configuration files

    CERN Document Server

    Soloviev, Igor; The ATLAS collaboration

    2012-01-01

    To configure a data-taking session, the ATLAS systems and detectors store more than 160 MBytes of data acquisition related configuration information in OKS XML files [1]. The total number of files exceeds 1300 and they are updated by many system experts. In the past, such updates occasionally caused problems due to XML syntax errors or an inconsistent state of the files from the point of view of the overall ATLAS configuration. It was not always possible to know who made the modification causing a problem, or how to go back to a previous version of the modified file. A few years ago a special service addressing these issues was implemented and deployed on ATLAS Point-1. It excludes direct write access to the XML files stored in a central database repository. Instead, for an update, the files are copied into a user repository, validated after modification and committed using a version control system. The system's callback updates the central repository. Also, it keeps track of all modifications pro...

  11. Computerized management of radiology department: Installation and use of local area network(LAN) by personal computers

    International Nuclear Information System (INIS)

    Lee, Young Joon; Han, Kook Sang; Geon, Do Ig; Sol, Chang Hyo; Kim, Byung Soo

    1993-01-01

    There is increasing need for networks connecting personal computers (PC) together; thus the local area network (LAN) emerged, designed to allow multiple computers to access and share multiple files, programs and expensive peripheral devices, and to communicate with each other. We built a PC-LAN in our department that consisted of 1) hardware - 9 personal computers (IBM compatible 80386 DX, 1 set; 80286 AT, 8 sets), with cables and network interface cards (Ethernet compatible, 16 bits) connecting the PCs and peripheral devices, and 2) software - a network operating system and a database management system. We managed this network for 6 months. The benefits of the PC-LAN were 1) multiuser operation (sharing multiple files, programs and peripheral devices) 2) real data processing 3) excellent expandability, flexibility, compatibility and easy connectivity 4) single-cable networking 5) rapid data transmission 6) simple and easy installation and management 7) use of conventional PC software running under DOS (Disk Operating System) without transformation 8) low networking cost. In conclusion, a PC-LAN provides an easier and more effective way to manage the multiuser database system needed at hospital departments, instead of a more expensive and complex network of minicomputers or mainframes.

  12. Activity-based computing: computational management of activities reflecting human intention

    DEFF Research Database (Denmark)

    Bardram, Jakob E; Jeuris, Steven; Houben, Steven

    2015-01-01

    paradigm that has been applied in personal information management applications as well as in ubiquitous, multidevice, and interactive surface computing. ABC has emerged as a response to the traditional application- and file-centered computing paradigm, which is oblivious to a notion of a user’s activity...

  13. Radiology Teaching Files on the Internet

    International Nuclear Information System (INIS)

    Lim, Eun Chung; Kim, Eun Kyung

    1996-01-01

    There is increasing attention to radiology teaching files on the Internet in the field of diagnostic radiology. The purpose of this study was to aid in the creation of new radiology teaching files by analysing the present radiology teaching file sites on the Internet in many aspects and evaluating the images on those sites, using a Macintosh IIci computer, a 28.8 kbps TelePort Fax/Modem, and Netscape Navigator 2.0 software. The results were as follows: 1. Analysis of radiology teaching file sites. (1) Country distribution was highest for the USA (57.5%). (2) The average number of cases was 186, and 9 sites (22.5%) had a search engine. (3) As to the method of case arrangement, anatomic-area type and diagnosis type were each found at 10 sites (25%), and question-and-answer type was found at 9 sites (22.5%). (4) Radiology teaching file sites covering oro-maxillofacial disorders numbered 9 (22.5%). (5) As to image format, the GIF format was found at 14 sites (35%), and the JPEG format at 14 sites (35%). (6) The creation year was most often 1995 (43.7%). (7) Continuing case upload was found at 35 sites (87.5%). 2. Evaluation of images in the radiology teaching files. (1) The average file size of the GIF format (71 Kbyte) was greater than that of the JPEG format (24 Kbyte). (P<0.001) (2) The image quality of the GIF format was better than that of the JPEG format. (P<0.001)

  14. 14 CFR 221.102 - Accessibility of tariffs to the public.

    Science.gov (United States)

    2010-01-01

    ... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false Accessibility of tariffs to the public. 221.102 Section 221.102 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF TRANSPORTATION... Inspection § 221.102 Accessibility of tariffs to the public. Each file of tariffs shall be kept in complete...

  15. Simple re-instantiation of small databases using cloud computing.

    Science.gov (United States)

    Tan, Tin Wee; Xie, Chao; De Silva, Mark; Lim, Kuan Siong; Patro, C Pawan K; Lim, Shen Jean; Govindarajan, Kunde Ramamoorthy; Tong, Joo Chuan; Choo, Khar Heng; Ranganathan, Shoba; Khan, Asif M

    2013-01-01

    Small bioinformatics databases, unlike institutionally funded large databases, are vulnerable to discontinuation, and many reported in publications are no longer accessible. This leads to irreproducible scientific work and redundant effort, impeding the pace of scientific progress. We describe a Web-accessible system, available online at http://biodb100.apbionet.org, for archival and future on-demand re-instantiation of small databases within minutes. Depositors can rebuild their databases by downloading a Linux live operating system (http://www.bioslax.com), preinstalled with bioinformatics and UNIX tools. The database and its dependencies can be compressed into an ".lzm" file for deposition. End-users can search for archived databases and activate them on dynamically re-instantiated BioSlax instances, run as virtual machines over the two popular full-virtualization standard cloud-computing platforms, Xen Hypervisor or vSphere. The system is adaptable to increasing demand for disk storage or computational load and allows database developers to use the re-instantiated databases for integration and development of new databases. Herein, we demonstrate that a relatively inexpensive solution can be implemented for the archival of bioinformatics databases and their rapid re-instantiation should the live databases disappear.

  16. Configuration Management File Manager Developed for Numerical Propulsion System Simulation

    Science.gov (United States)

    Follen, Gregory J.

    1997-01-01

    One of the objectives of the High Performance Computing and Communication Project's (HPCCP) Numerical Propulsion System Simulation (NPSS) is to provide a common and consistent way to manage applications, data, and engine simulations. The NPSS Configuration Management (CM) File Manager integrated with the Common Desktop Environment (CDE) window management system provides a common look and feel for the configuration management of data, applications, and engine simulations for U.S. engine companies. In addition, CM File Manager provides tools to manage a simulation. Features include managing input files, output files, textual notes, and any other material normally associated with simulation. The CM File Manager includes a generic configuration management Application Program Interface (API) that can be adapted for the configuration management repositories of any U.S. engine company.

  17. Federating LHCb datasets using the DIRAC File catalog

    CERN Document Server

    Haen, Christophe; Frank, Markus; Tsaregorodtsev, Andrei

    2015-01-01

    In the distributed computing model of LHCb, the File Catalog (FC) is a central component that keeps track of each file and replica stored on the Grid. It federates the LHCb data files in a logical namespace used by all LHCb applications. As a replica catalog, it is used for brokering jobs to sites where their input data is meant to be present, but also by jobs for finding alternative replicas if necessary. The LCG File Catalog (LFC), used originally by LHCb and other experiments, is now being retired and needs to be replaced. The DIRAC File Catalog (DFC) was developed within the framework of the DIRAC Project and presented during CHEP 2012. From the technical point of view, the code powering the DFC follows aspect-oriented programming (AOP): each type of entity that is manipulated by the DFC (Users, Files, Replicas, etc.) is treated as a separate 'concern' in the AOP terminology. Hence, the database schema can also be adapted to the needs of a Virtual Organization. LHCb opted for a highly tuned MySQL datab...
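
    The replica-catalog role described here, mapping a logical file name to the set of sites holding a physical copy and brokering jobs to sites that hold all of their input data, can be sketched as follows (a toy model with hypothetical method names, not the DFC API):

```python
from collections import defaultdict

class ReplicaCatalog:
    """Toy catalog: logical file name (LFN) -> sites holding a replica."""

    def __init__(self):
        self._replicas = defaultdict(set)

    def add_replica(self, lfn: str, site: str) -> None:
        self._replicas[lfn].add(site)

    def get_replicas(self, lfn: str) -> set:
        return set(self._replicas[lfn])

    def broker(self, input_lfns) -> set:
        """Sites eligible to run a job: those holding ALL input files."""
        sites = None
        for lfn in input_lfns:
            held = self.get_replicas(lfn)
            sites = held if sites is None else sites & held
        return sites or set()
```

    If the intersection is empty, a workload manager would fall back to alternative replicas or trigger a transfer, which is the "finding alternative replicas" case mentioned above.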

  18. 77 FR 37393 - Combined Notice of Filings #1

    Science.gov (United States)

    2012-06-21

    ... acquire from Interstate Power and Light Company certain batteries, switches, related equipment and structures etc pursuant to section 203. Filed Date: 6/11/12. Accession Number: 20120611-5183. Comments Due: 5... Numbers: ER11-4657-001. Applicants: Apple Group. Description: Apple Group Baseline Tariff to be effective...

  19. 78 FR 68833 - Combined Notice of Filings #1

    Science.gov (United States)

    2013-11-15

    ... Installed Capacity Requirement, Hydro Quebec Interconnection Capability Credits and Related Values for the... Company. Description: Southern California Edison Company submits LGIA with Portal Ridge Solar A, Portal Ridge Solar B, Portal Ridge Solar C to be effective 11/7/2013. Filed Date: 11/6/13. Accession Number...

  20. 78 FR 2381 - Combined Notice of Filings #1

    Science.gov (United States)

    2013-01-11

    ...: Public Service Company of New Mexico. Description: Public Service Company of New Mexico submits its Triennial Market Power Update pursuant to Order No. 697. Filed Date: 12/21/12. Accession Number: 20121226... Pacific Power Company. Description: Updated Market Power Analysis for Southwest Region of Sierra Pacific...

  1. Agent-Mining of Grid Log-Files: A Case Study

    NARCIS (Netherlands)

    Stoter, A.; Dalmolen, Simon; Mulder, .W.

    2013-01-01

    Grid monitoring requires analysis of large amounts of log files across multiple domains. An approach is described for automated extraction of job-flow information from large computer grids, using software agents and genetic computation. A prototype was created as a first step towards communities of

  2. Jefferson Lab mass storage and file replication services

    International Nuclear Information System (INIS)

    Bird, I.; Chen, Y.; Hess, B.; Kowalski, A.; Watson, C.

    2001-01-01

    Jefferson Lab has implemented a scalable, distributed, high performance mass storage system, JASMine. The system is entirely implemented in Java, provides access to robotic tape storage, and includes disk cache and stage manager components. The disk manager subsystem may be used independently to manage stand-alone disk pools. The system includes a scheduler to provide policy-based access to the storage systems. Security is provided by pluggable authentication modules and is implemented at the network socket level. The tape and disk cache systems have well-defined interfaces in order to provide integration with grid-based services. The system is in production and being used to archive 1 TB per day from the experiments, and currently moves over 2 TB per day in total. The authors will describe the architecture of JASMine, discuss the rationale for building the system, and present a transparent third-party file replication service that moves data to collaborating institutes using JASMine, XML, and servlet technology interfacing to grid-based file transfer mechanisms

  3. Further computer appreciation

    CERN Document Server

    Fry, T F

    2014-01-01

    Further Computer Appreciation is a comprehensive cover of the principles and aspects in computer appreciation. The book starts by describing the development of computers from the first to the third computer generations, to the development of processors and storage systems, up to the present position of computers and future trends. The text tackles the basic elements, concepts and functions of digital computers, computer arithmetic, input media and devices, and computer output. The basic central processor functions, data storage and the organization of data by classification of computer files,

  4. Grammar-Based Specification and Parsing of Binary File Formats

    Directory of Open Access Journals (Sweden)

    William Underwood

    2012-03-01

    Full Text Available The capability to validate and view or play binary file formats, as well as to convert binary file formats to standard or current file formats, is critically important to the preservation of digital data and records. This paper describes the extension of context-free grammars from strings to binary files. Binary files are arrays of data types, such as long and short integers, floating-point numbers and pointers, as well as characters. The concept of an attribute grammar is extended to these context-free array grammars. This attribute grammar has been used to define a number of chunk-based and directory-based binary file formats. A parser generator has been used with some of these grammars to generate syntax checkers (recognizers) for validating binary file formats. Among the potential benefits of an attribute grammar-based approach to specification and parsing of binary file formats is that attribute grammars not only support format validation, but also support generation of error messages during validation of format, validation of semantic constraints, attribute value extraction (characterization), generation of viewers or players for file formats, and conversion to current or standard file formats. The significance of these results is that with these extensions to core computer science concepts, traditional parser/compiler technologies can potentially be used as part of a general, cost-effective curation strategy for binary file formats.
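
The chunk-based layouts that such array grammars target can also be recognized by a small hand-written validator. A minimal sketch, assuming a hypothetical RIFF-like layout (4-byte ASCII tag, 4-byte little-endian length, payload) rather than any grammar actually generated by the authors' tool:

```python
import struct

def parse_chunks(data: bytes):
    """Validate and list chunks in a RIFF-like binary layout:
    each chunk is a 4-byte ASCII tag, a 4-byte little-endian
    payload length, then the payload itself."""
    chunks = []
    offset = 0
    while offset < len(data):
        if offset + 8 > len(data):
            raise ValueError(f"truncated chunk header at offset {offset}")
        tag, length = struct.unpack_from("<4sI", data, offset)
        if not all(0x20 <= b <= 0x7E for b in tag):
            raise ValueError(f"non-ASCII chunk tag at offset {offset}")
        offset += 8
        if offset + length > len(data):
            raise ValueError(f"chunk {tag!r} overruns end of file")
        chunks.append((tag.decode("ascii"), data[offset:offset + length]))
        offset += length
    return chunks

# Build a tiny two-chunk file in memory and validate it.
blob = b"HDR " + struct.pack("<I", 4) + b"\x01\x02\x03\x04"
blob += b"DATA" + struct.pack("<I", 2) + b"\xff\xee"
print(parse_chunks(blob))
```

A grammar-based recognizer generalizes this: the tag/length/payload structure becomes a production rule, and the bounds checks become semantic constraints attached to the grammar's attributes.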

  5. A micro-computed tomographic evaluation of dentinal microcrack alterations during root canal preparation using single-file Ni-Ti systems.

    Science.gov (United States)

    Li, Mei-Lin; Liao, Wei-Li; Cai, Hua-Xiong

    2018-01-01

    The aim of the present study was to evaluate the length of dentinal microcracks observed prior to and following root canal preparation with different single-file nickel-titanium (Ni-Ti) systems using micro-computed tomography (micro-CT) analysis. A total of 80 mesial roots of mandibular first molars presenting with type II Vertucci canal configurations were scanned at an isotropic resolution of 7.4 µm. The samples were randomly assigned into four groups (n=20 per group) according to the system used for root canal preparation, including the WaveOne (WO), OneShape (OS), Reciproc (RE) and control groups. A second micro-CT scan was conducted after the root canals were prepared with size 25 instruments. Pre- and postoperative cross-section images of the roots (n=237,760) were then screened to identify the lengths of the microcracks. The results indicated that the microcrack lengths were notably increased following root canal preparation (P<0.05) with the single files. Among the single-file Ni-Ti systems, WO and RE were not observed to cause notable microcracks, while the OS system resulted in evident microcracks.

  6. The Role of the Radiation Safety Information Computational Center (RSICC) in Knowledge Management

    International Nuclear Information System (INIS)

    Valentine, T.

    2016-01-01

    Full text: The Radiation Safety Information Computational Center (RSICC) is an information analysis center that collects, archives, evaluates, synthesizes and distributes information, data and codes that are used in various nuclear technology applications. RSICC retains more than 2,000 packages that have been provided by contributors from various agencies. RSICC’s customers obtain access to such computing codes (source and/or executable versions) and processed nuclear data files to promote on-going research, to help ensure nuclear and radiological safety, and to advance nuclear technology. The role of such information analysis centers is critical for supporting and sustaining nuclear education and training programmes both domestically and internationally, as the majority of RSICC’s customers are students attending U.S. universities. RSICC also supports and promotes workshops and seminars in nuclear science and technology to further the use and/or development of computational tools and data. Additionally, RSICC operates a secure CLOUD computing system to provide access to sensitive export-controlled modeling and simulation (M&S) tools that support both domestic and international activities. This presentation will provide a general review of RSICC’s activities, services, and systems that support knowledge management and education and training in the nuclear field. (author)

  7. BIBLIO: A Reprint File Management Algorithm

    Science.gov (United States)

    Zelnio, Robert N.; And Others

    1977-01-01

    The development of a simple computer algorithm designed for use by the individual educator or researcher in maintaining and searching reprint files is reported. Called BIBLIO, the system is inexpensive and easy to operate and maintain without sacrificing flexibility and utility. (LBH)

  8. COMPUTING SERVICES DURING THE ANNUAL CERN SHUTDOWN

    CERN Multimedia

    2000-01-01

    As in previous years, computing services run by IT division will be left running unattended during the annual shutdown. The following points should be noted. No interruptions are scheduled for local and wide area networking and the ACB, e-mail and unix interactive services. Maintenance work is scheduled for the NICE home directory servers and the central Web servers. Users must, therefore, expect service interruptions. Unix batch services will be available but without access to HPSS or to manually mounted tapes. Dedicated Engineering services, general purpose database services and the Helpdesk will be closed during this period. An operator service will be maintained and can be reached at extension 75011 or by email to: computer.operations@cern.ch. Users should be aware that, except where there are special arrangements, any major problems that develop during this period will most likely be resolved only after CERN has reopened. In particular, we cannot guarantee backups for Home Directory files for eithe...

  9. The NEA computer program library: a possible GDMS application

    International Nuclear Information System (INIS)

    Schuler, W.

    1978-01-01

    The NEA Computer Program Library maintains a series of eleven sequential computer files, used for linked applications in managing its stock of computer codes for nuclear reactor calculations, storing index and program abstract information, and administering its service to requesters. The high data redundancy between the files suggests that a database approach would be valid, and this paper suggests a possible 'schema' for a CODASYL GDMS.

  10. Development of a script for converting DICOM files to .TXT

    International Nuclear Information System (INIS)

    Abrantes, Marcos E.S.; Oliveira, A.H. de

    2014-01-01

    Background: with the increased use of computer simulation techniques for diagnosis or therapy in patients, the MCNP and SCMS software packages are being widely used. To use SCMS as a data-entry interface for MCNP, it is necessary to transform DICOM images into text files. Objective: to produce a semi-automatic script for converting DICOM images generated by computed tomography or magnetic resonance into .txt files in the ImageJ software. Methodology: this study was developed on the ImageJ software platform, using an Intel Core 2 Duo computer with a 2.00 GHz CPU and 2.00 GB of RAM running a 32-bit system. The script was written in a text editor using the Java language and installed in ImageJ through that software's plug-in tool. Once installed, the script opens a window asking for the path of the files to be read, the first and last names of the DICOM files to be converted, and where the new files will be stored. Results: manual conversion of a cerebral computed tomography study with 600 DICOM images to .txt requires about 8 hours; the script reduces the conversion time to about 12 minutes. Conclusion: the script demonstrates the ability to convert DICOM images to .txt with a significant saving of processing time.
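
The conversion step itself, flattening decoded pixel data into plain text, is simple once the DICOM decoding is done. A minimal stand-alone sketch (the script described above is a Java plug-in for ImageJ; the output file name and the 2x3 pixel slice here are hypothetical):

```python
from pathlib import Path

def pixels_to_txt(pixels, out_path):
    """Write a 2-D array of pixel intensities (e.g. one DICOM slice
    already decoded into integers) as tab-separated plain text,
    one image row per line -- the kind of .txt input that a
    downstream code such as MCNP/SCMS could consume."""
    lines = ("\t".join(str(v) for v in row) for row in pixels)
    Path(out_path).write_text("\n".join(lines) + "\n")

# Hypothetical 2x3 slice of intensity values.
slice_ = [[0, 128, 255], [64, 32, 16]]
pixels_to_txt(slice_, "slice_0001.txt")
print(Path("slice_0001.txt").read_text())
```

Batch conversion of a 600-image study then reduces to looping this function over the decoded slices.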

  11. Public census data on CD-ROM at Lawrence Berkeley Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Merrill, D.W.

    1992-10-01

    The Comprehensive Epidemiologic Data Resource (CEDR) and Populations at Risk to Environmental Pollution (PAREP) projects, of the Information and Computing Sciences Division (ICSD) at Lawrence Berkeley Laboratory (LBL), are using public socio-economic and geographic data files which are available to CEDR and PAREP collaborators via LBL's computing network. At this time 70 CD-ROM diskettes (approximately 36 gigabytes) are online via the Unix file server cedrcd.lbl.gov. Most of the files are from the US Bureau of the Census, and most pertain to the 1990 Census of Population and Housing. All the CD-ROM diskettes contain documentation in the form of ASCII text files. Printed documentation for most files is available for inspection at University of California Data and Technical Assistance (UC DATA), or the UC Documents Library. Many of the CD-ROM diskettes distributed by the Census Bureau contain software for PC compatible computers, for easily accessing the data. Shared access to the data is maintained through a collaboration among the CEDR and PAREP projects at LBL, and UC DATA, and the UC Documents Library. Via the Sun Network File System (NFS), these data can be exported to Internet computers for direct access by the user's application program(s).

  13. 75 FR 61713 - Combined Notice of Filings #1

    Science.gov (United States)

    2010-10-06

    ... Numbers: EC10-98-000. Applicants: GDF SUEZ S.A., INTERNATIONAL POWER PLC. Description: Joint Application... Plc. Filed Date: 09/21/2010. Accession Number: 20100921-5079. Comment Date: 5 p.m. Eastern Time on... Energy Company, LLC submits application for authorization to make wholesale sales of energy and capacity...

  14. 77 FR 38044 - Combined Notice of Filings #1

    Science.gov (United States)

    2012-06-26

    ...-000. Applicants: NaturEner Rim Rock Wind Energy, LLC. Description: Notice of Self Certification of Exempt Wholesale Generator Status of NaturEner Rim Rock Wind Energy, LLC. Filed Date: 6/18/12. Accession... Glacier Wind Energy 1, LLC. Description: Notice of Self-Certification of Exempt Wholesale Generator Status...

  15. 78 FR 9682 - Combined Notice of Filings #2

    Science.gov (United States)

    2013-02-11

    ... Wholesale Generator Status of Niagara Wind Power, LLC. Filed Date: 1/31/13. Accession Number: 20130131-5139... Facility, LLC, Blackwell Wind, LLC, Butler Ridge Wind Energy Center, LLC, Cimarron Wind Energy, LLC....P., Florida Power & Light Co., FPL Energy Burleigh County Wind, LLC, FPL Energy Cabazon Wind, LLC...

  16. 78 FR 61942 - Combined Notice of Filings #1

    Science.gov (United States)

    2013-10-07

    ...-Certification Of Exempt Wholesale Generator Status Of Mountain Wind Power, LLC. Filed Date: 9/26/13. Accession...: Mountain Wind Power, LLC. Description: Notice Of Self-Certification Of Exempt Wholesale Generator Status Of... Winds, LLC, FPL Energy Cabazon Wind, LLC, FPL Energy Green Power Wind, LLC, FPL Energy Montezuma Wind...

  17. 77 FR 56638 - Combined Notice of Filings #1

    Science.gov (United States)

    2012-09-13

    .... Applicants: New York Independent System Operator, Inc. Description: Amendment to NYISO OATT, Attachment Y....m. e.t. 9/17/12. Docket Numbers: ER12-2368-000. Applicants: Denver City Energy Associates, LP... Denver City Energy Associates, L.P. tariff. Filed Date: 9/5/12. Accession Number: 20120905-5122. Comments...

  18. 76 FR 67165 - Combined Notice of Filings #1

    Science.gov (United States)

    2011-10-31

    ... Thursday, November 10, 2011. Docket Numbers: ER12-141-000. Applicants: NAP Trading and Marketing, Inc. Description: NAP Trading and Marketing, Inc submits Notice of Cancellation of Market-Based Rate Tariff to be... of Affiliate Restrictions to be effective 10/20/2011. Filed Date: 10/20/2011. Accession Number...

  19. 77 FR 34376 - Combined Notice of Filings #1

    Science.gov (United States)

    2012-06-11

    ...: Copper Mountain Solar 2, LLC. Description: Copper Mountain Solar 2, LLC MBR Tariff Revision to be.... Docket Numbers: ER12-1792-001. Applicants: Community Energy, Inc. Description: Amendment to MBR... MBR Application to be effective 6/30/2012. Filed Date: 6/1/12. Accession Number: 20120601-5291. [[Page...

  20. 78 FR 23760 - Combined Notice of Filings #1

    Science.gov (United States)

    2013-04-22

    ...-1266-000. Applicants: CalEnergy, LLC. Description: CalEnergy FERC MBR Tariff Application to be.... Docket Numbers: ER13-1267-000. Applicants: CE Leathers Company. Description: CE Leathers FERC MBR Tariff... Company MBR Tariff Application to be effective 6/3/2013. Filed Date: 4/12/13. Accession Number: 20130412...