WorldWideScience

Sample records for unit computer file

  1. Computer files.

    Science.gov (United States)

    Malik, M

    1995-02-01

    From what has been said, several recommendations can be made for users of small personal computers, regardless of which operating system they use. If your computer has a large hard disk not specially required by any single application, organize the disk into a small number of volumes. You will then be using the computer as if it had several smaller disks, which will help you to create a logical file structure. The size of individual volumes has to be selected carefully with respect to the files kept in each volume; otherwise, you may have too much space in one volume and not enough in another. In each volume, organize the structure of directories and subdirectories logically so that it corresponds to the logic of your file content. Be aware that the directories suggested as defaults when installing new software are often not optimal. For instance, it is better to put different graphics packages under a common subdirectory than to install them at the same level as all other packages, including statistics, text processors, etc. Create a special directory for each task for which you use the computer. Note that it is bad practice to keep many different and logically unsorted files in the root directory of any of your volumes; only system and important service files should be kept there. Although any file may be written all over the disk, access to it will be faster if it is written over the minimum number of cylinders. From time to time, use special programs that reorganize your files in this way. (ABSTRACT TRUNCATED AT 250 WORDS)

  2. Permanent-File-Validation Utility Computer Program

    Science.gov (United States)

    Derry, Stephen D.

    1988-01-01

    Errors in files detected and corrected during operation. Permanent File Validation (PFVAL) utility computer program provides CDC CYBER NOS sites with mechanism to verify integrity of permanent file base. Locates and identifies permanent file errors in Mass Storage Table (MST) and Track Reservation Table (TRT), in permanent file catalog entries (PFC's) in permit sectors, and in disk sector linkage. All detected errors written to listing file and system and job day files. Program operates by reading system tables, catalog track, permit sectors, and disk linkage bytes to validate expected and actual file linkages. Used extensively to identify and locate errors in permanent files and enable online correction, reducing computer-system downtime.

  3. Small file aggregation in a parallel computing system

    Science.gov (United States)

    Faibish, Sorin; Bent, John M.; Tzelnic, Percy; Grider, Gary; Zhang, Jingwang

    2014-09-02

    Techniques are provided for small file aggregation in a parallel computing system. An exemplary method for storing a plurality of files generated by a plurality of processes in a parallel computing system comprises aggregating the plurality of files into a single aggregated file; and generating metadata for the single aggregated file. The metadata comprises an offset and a length of each of the plurality of files in the single aggregated file. The metadata can be used to unpack one or more of the files from the single aggregated file.

  4. Unit 03 - Introduction to Computers

    OpenAIRE

    Unit 74, CC in GIS; National Center for Geographic Information and Analysis

    1990-01-01

    This unit provides a brief introduction to computer hardware and software. It discusses binary notation, the ASCII coding system, and hardware components including the central processing unit (CPU), memory, peripherals, and storage media. Software, including operating systems, word processors, database packages, spreadsheets, and statistical packages, is briefly described.

  5. Administration of Library-Owned Computer Files. SPEC Kit 159.

    Science.gov (United States)

    Shaw, Suzanne J.

    This document reports the results of a follow-up survey of 34 Association of Research Libraries member libraries which was conducted in 1989 to measure changes that had taken place in the administration of computer files (CF)--previously referred to as machine readable data files--since the original survey in 1984. It is noted that this survey…

  6. Distributing an executable job load file to compute nodes in a parallel computer

    Science.gov (United States)

    Gooding, Thomas M.

    2016-08-09

    Distributing an executable job load file to compute nodes in a parallel computer, the parallel computer comprising a plurality of compute nodes, including: determining, by a compute node in the parallel computer, whether the compute node is participating in a job; determining, by the compute node in the parallel computer, whether a descendant compute node is participating in the job; responsive to determining that the compute node is participating in the job or that the descendant compute node is participating in the job, communicating, by the compute node to a parent compute node, an identification of a data communications link over which the compute node receives data from the parent compute node; constructing a class route for the job, wherein the class route identifies all compute nodes participating in the job; and broadcasting the executable load file for the job along the class route for the job.
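
    The participation-reporting step can be illustrated with a small recursive sketch (illustrative only; the tree representation and function name are assumptions, not the patented mechanism): a node joins the class route if it participates in the job or if any descendant does, so the broadcast of the load file can flow through it.

```python
def build_class_route(tree, participating, node=0):
    """Determine which compute nodes join the class route for a job.
    `tree` maps a node id to its list of child (descendant) node ids;
    `participating` is the set of nodes running the job. A node joins
    the route if it participates itself or any descendant does."""
    route = set()
    for child in tree.get(node, []):
        route |= build_class_route(tree, participating, child)
    if node in participating or route:
        route.add(node)  # forwarders join even if not participating
    return route
```

    Note that an interior node with a participating descendant is kept on the route even when it does not run the job itself, since the executable must pass through it.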

  7. Distributing an executable job load file to compute nodes in a parallel computer

    Energy Technology Data Exchange (ETDEWEB)

    Gooding, Thomas M.

    2016-09-13

    Distributing an executable job load file to compute nodes in a parallel computer, the parallel computer comprising a plurality of compute nodes, including: determining, by a compute node in the parallel computer, whether the compute node is participating in a job; determining, by the compute node in the parallel computer, whether a descendant compute node is participating in the job; responsive to determining that the compute node is participating in the job or that the descendant compute node is participating in the job, communicating, by the compute node to a parent compute node, an identification of a data communications link over which the compute node receives data from the parent compute node; constructing a class route for the job, wherein the class route identifies all compute nodes participating in the job; and broadcasting the executable load file for the job along the class route for the job.

  8. Sharing digital micrographs and other data files between computers.

    Science.gov (United States)

    Entwistle, A

    2004-01-01

    It ought to be easy to exchange digital micrographs and other computer data files with a colleague, even one on another continent. In practice, this often is not the case. The advantages and disadvantages of various methods that are available for exchanging data files between computers are discussed. When possible, data should be transferred through computer networking. When data are to be exchanged locally between computers with similar operating systems, the use of a local area network is recommended. For computers in commercial or academic environments that have dissimilar operating systems or are more widely spaced, the use of FTP is recommended. Failing this, posting the data on a website and transferring by hypertext transfer protocol is suggested. If peer-to-peer exchange between computers in domestic environments is needed, the use of messenger services such as Microsoft Messenger or Yahoo Messenger is the method of choice. When it is not possible to transfer the data files over the internet, single-use writable CD-ROMs are the best media for transferring data. If for some reason this is not possible, DVD-R/RW, DVD+R/RW, 100 MB ZIP disks, and USB flash media are potentially useful media for exchanging data files.

  9. 29 CFR 459.1 - Computation of time for filing papers.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 2 2010-07-01 2010-07-01 false Computation of time for filing papers. 459.1 Section 459.1... OF CONDUCT MISCELLANEOUS § 459.1 Computation of time for filing papers. In computing any period of... computations. When these regulations require the filing of any paper, such document must be received by...

  10. NET: an inter-computer file transfer command

    Energy Technology Data Exchange (ETDEWEB)

    Burris, R.D.

    1978-05-01

    The NET command was defined and supported in order to facilitate file transfer between computers. Among the goals of the implementation were greatest possible ease of use, maximum power (i.e., support of a diversity of equipment and operations), and protection of the operating system.

  11. 5 CFR 2429.21 - Computation of time for filing papers.

    Science.gov (United States)

    2010-01-01

    ... 5 Administrative Personnel 3 2010-01-01 2010-01-01 false Computation of time for filing papers... REQUIREMENTS General Requirements § 2429.21 Computation of time for filing papers. (a) In computing any period... § 2429.23(a) of this part, when this subchapter requires the filing of any paper with the Authority,...

  12. Citizens unite for computational immunology!

    Science.gov (United States)

    Belden, Orrin S; Baker, Sarah Catherine; Baker, Brian M

    2015-07-01

    Recruiting volunteers who can provide computational time, programming expertise, or puzzle-solving talent has emerged as a powerful tool for biomedical research. Recent projects demonstrate the potential for such 'crowdsourcing' efforts in immunology. Tools for developing applications, new funding opportunities, and an eager public make crowdsourcing a serious option for creative solutions for computationally-challenging problems. Expanded uses of crowdsourcing in immunology will allow for more efficient large-scale data collection and analysis. It will also involve, inspire, educate, and engage the public in a variety of meaningful ways. The benefits are real - it is time to jump in!

  13. Development of CONSER Cataloging Policies for Remote Access Computer File Serials.

    Science.gov (United States)

    Anderson, Bill; Hawkins, Les

    1996-01-01

    Describes the development of CONSER (Cooperative Online Serials) policies and practices for cataloging remote access computer file serials. Topics include electronic serials on the Internet, cataloging standards for computer files, OCLC and Internet resources, networked resources as published documents, multiple file formats, sources of…

  14. Storing files in a parallel computing system based on user or application specification

    Science.gov (United States)

    Faibish, Sorin; Bent, John M.; Nick, Jeffrey M.; Grider, Gary; Torres, Aaron

    2016-03-29

    Techniques are provided for storing files in a parallel computing system based on a user-specification. A plurality of files generated by a distributed application in a parallel computing system are stored by obtaining a specification from the distributed application indicating how the plurality of files should be stored; and storing one or more of the plurality of files in one or more storage nodes of a multi-tier storage system based on the specification. The plurality of files comprise a plurality of complete files and/or a plurality of sub-files. The specification can optionally be processed by a daemon executing on one or more nodes in a multi-tier storage system. The specification indicates how the plurality of files should be stored, for example, identifying one or more storage nodes where the plurality of files should be stored.
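
    A minimal stand-in for the specification-driven placement might look like the following (hypothetical: the patent does not define a spec format, so the pattern-to-tier dictionary used here is an assumption):

```python
import fnmatch

def place_files(files, spec):
    """Assign each file name to a storage tier of a multi-tier system
    according to an application-supplied specification. `spec` holds
    shell-style filename patterns mapped to tier names, plus a default."""
    placement = {}
    for name in files:
        tier = spec.get("default", "disk")
        for pattern, t in spec.get("rules", {}).items():
            if fnmatch.fnmatch(name, pattern):
                tier = t  # first matching rule wins
                break
        placement.setdefault(tier, []).append(name)
    return placement
```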

  15. Converting Between PLY and Ballistic Research Laboratory-Computer-Aided Design (BRL-CAD) File Formats

    Science.gov (United States)

    2015-02-01

    Converting Between PLY and Ballistic Research Laboratory-Computer-Aided Design (BRL-CAD) File Formats, by Rishub Jain. US Army Research Laboratory report ARL-CR-0760, February 2015 (contract W911NF-10-2-0076).

  16. Methods and apparatus for multi-resolution replication of files in a parallel computing system using semantic information

    Energy Technology Data Exchange (ETDEWEB)

    Faibish, Sorin; Bent, John M.; Tzelnic, Percy; Grider, Gary; Torres, Aaron

    2015-10-20

    Techniques are provided for storing files in a parallel computing system using different resolutions. A method is provided for storing at least one file generated by a distributed application in a parallel computing system. The file comprises one or more of a complete file and a sub-file. The method comprises the steps of obtaining semantic information related to the file; generating a plurality of replicas of the file with different resolutions based on the semantic information; and storing the file and the plurality of replicas of the file in one or more storage nodes of the parallel computing system. The different resolutions comprise, for example, a variable number of bits and/or a different sub-set of data elements from the file. A plurality of the sub-files can be merged to reproduce the file.
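
    One of the reduced-resolution schemes mentioned, keeping a different sub-set of data elements, can be sketched as follows (illustrative; the simple striding scheme and function name are assumptions, not the patented method):

```python
def make_replicas(samples, strides):
    """Generate lower-resolution replicas of a sequence of data elements
    by keeping every k-th element for each stride k. Stride 1 reproduces
    the full-resolution data; larger strides give coarser replicas."""
    return {k: samples[::k] for k in strides}
```

    A coarse replica can serve quick previews or analysis while the full-resolution file stays on slower storage.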

  17. 12 CFR 269b.720 - Computation of time for filing papers.

    Science.gov (United States)

    2010-01-01

    ... 12 Banks and Banking 3 2010-01-01 2010-01-01 false Computation of time for filing papers. 269b.720... papers. In computing any period of time prescribed by or allowed by the panel, the day of the act, event... regulations in this subchapter require the filing of any paper, such document must be received by the panel...

  18. Analyzing Log Files to Predict Students' Problem Solving Performance in a Computer-Based Physics Tutor

    Science.gov (United States)

    Lee, Young-Jin

    2015-01-01

    This study investigates whether information saved in the log files of a computer-based tutor can be used to predict the problem solving performance of students. The log files of a computer-based physics tutoring environment called Andes Physics Tutor were analyzed to build a logistic regression model that predicted success and failure of students'…

  19. Arranging and finding folders and files on your Windows 7 computer

    CERN Document Server

    Steps, Studio Visual

    2014-01-01

    If you have lots of documents on your desk, it may prove to be impossible to find the document you are looking for. In order to easily find certain documents, they are often stored in a filing cabinet and arranged in a logical order. The folders on your computer serve the same purpose. They do not just contain files; they can also contain other folders. You can create an unlimited number of folders, and each folder can contain any number of subfolders and files. You can use Windows Explorer, also called the folder window, to work with the files and folders on your computer. You can copy, delete, move, find, and sort files, among other things. Or you can transfer files and folders to a USB stick, an external hard drive, a CD, DVD or Blu-Ray disk. In this practical guide we will show you how to use the folder window, and help you arrange your own files.

  20. Comparison of canal transportation and centering ability of hand Protaper files and rotary Protaper files by using micro computed tomography

    OpenAIRE

    Amit Gandhi; Taru Gandhi

    2011-01-01

    Introduction and objective: The aim of the present study was to compare root canal preparation with rotary ProTaper files and hand ProTaper files to find a better instrumentation technique for maintaining root canal geometry with the aid of computed tomography. Material and methods: Twenty curved root canals with at least 10 degree of curvature were divided into 2 groups of 10 teeth each. In group I the canals were prepared with hand ProTaper files and in group II the canals were prepared wit...

  1. Heuristic file sorted assignment algorithm of parallel I/O on cluster computing system

    Institute of Scientific and Technical Information of China (English)

    CHEN Zhi-gang; ZENG Bi-qing; XIONG Ce; DENG Xiao-heng; ZENG Zhi-wen; LIU An-feng

    2005-01-01

    A new file assignment strategy of parallel I/O, named the heuristic file sorted assignment algorithm, was proposed on a cluster computing system. Based on load balancing, it assigns files with similar service times to the same disk. Firstly, the files were sorted and stored in set I in descending order of their service time; then one disk of a cluster node was selected randomly when the files were to be assigned; and at last the continuous files were taken in order from set I to the disk until the disk reached its load maximum. The experimental results show that the new strategy improves the performance by 20.2% when the load of the system is light and by 31.6% when the load is heavy. And the higher the data access rate, the more evident the improvement in performance obtained by the heuristic file sorted assignment algorithm.
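
    The strategy described above can be sketched as follows (an illustrative reading of the abstract, not the authors' code; the exact load-maximum check is an assumption):

```python
import random

def heuristic_sorted_assignment(service_times, n_disks, load_max):
    """Sort file indices by service time in descending order (set I),
    then fill one randomly chosen disk at a time with consecutive files
    from that order until the disk would exceed its load maximum."""
    I = sorted(range(len(service_times)), key=lambda i: -service_times[i])
    disks = [[] for _ in range(n_disks)]
    loads = [0.0] * n_disks
    order = list(range(n_disks))
    random.shuffle(order)              # disks are selected randomly
    it = iter(I)
    pending = next(it, None)
    for d in order:
        # take continuous files while they still fit on this disk
        while pending is not None and loads[d] + service_times[pending] <= load_max:
            disks[d].append(pending)
            loads[d] += service_times[pending]
            pending = next(it, None)
    return disks
```

    Because files with similar service times are adjacent in the sorted order, each disk ends up holding files with similar service demands, which is the load-balancing intuition of the algorithm.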

  2. Uniform Tests of File Converters Using Unit Cubes

    Science.gov (United States)

    2015-03-01

    The converters allow the user to convert between different geometry file types (among them FASTGEN, the Fast Shotline Generator format, and STL, STereoLithography). This allows BRL-CAD to act as a hub for conversion and, in turn, increases the number of users of BRL-CAD. The goal of this project is to create a uniform test for every file converter.

  3. Geothermal-energy files in computer storage: sites, cities, and industries

    Energy Technology Data Exchange (ETDEWEB)

    O'Dea, P.L.

    1981-12-01

    The site, city, and industrial files are described. The data presented are from the hydrothermal site file containing about three thousand records which describe some of the principal physical features of hydrothermal resources in the United States. Data elements include: latitude, longitude, township, range, section, surface temperature, subsurface temperature, the field potential, and well depth for commercialization. (MHR)

  4. RRB / SSI Interface Checkwriting Integrated Computer Operation Extract File (CHICO)

    Data.gov (United States)

    Social Security Administration — This monthly file provides SSA with information about benefit payments made to railroad retirement beneficiaries. SSA uses this data to verify Supplemental Security...

  5. Computational unit for non-contact photonic system

    Science.gov (United States)

    Kochetov, Alexander V.; Skrylev, Pavel A.

    2005-06-01

    Requirements for a unified computational unit for a non-contact photonic system have been formulated. Estimates of the required central processing unit performance and memory size are calculated. A specialized microcontroller optimal for use as the central processing unit has been selected, and memory chip types are determined for the system. The computational unit consists of a central processing unit based on the selected microcontroller, NVRAM memory, a receiving circuit, SDRAM memory, and control and power circuits. It functions as a processing unit that calculates the required parameters of rail track.

  6. Project Integration Architecture (PIA) and Computational Analysis Programming Interface (CAPRI) for Accessing Geometry Data from CAD Files

    Science.gov (United States)

    Benyo, Theresa L.

    2002-01-01

    Integration of a supersonic inlet simulation with a computer aided design (CAD) system is demonstrated. The integration is performed using the Project Integration Architecture (PIA). PIA provides a common environment for wrapping many types of applications. Accessing geometry data from CAD files is accomplished by incorporating appropriate function calls from the Computational Analysis Programming Interface (CAPRI). CAPRI is a CAD-vendor-neutral programming interface that aids in acquiring geometry data directly from CAD files. The benefits of wrapping a supersonic inlet simulation into PIA using CAPRI are: direct access to geometry data, accurate capture of geometry data, automatic conversion of data units, CAD-vendor-neutral operation, and on-line interactive history capture. This paper describes the PIA and the CAPRI wrapper and details the supersonic inlet simulation demonstration.

  7. 43 CFR 44.56 - How does a unit of general local government file a protest?

    Science.gov (United States)

    2010-10-01

    ... FINANCIAL ASSISTANCE, LOCAL GOVERNMENTS State and Local Governments' Responsibilities After the Department Distributes Payments § 44.56 How does a unit of general local government file a protest? The protesting local... 43 Public Lands: Interior 1 2010-10-01 2010-10-01 false How does a unit of general...

  8. Fast and Easy Searching of Files in Unisys 2200 Computers

    Science.gov (United States)

    Snook, Bryan E.

    2010-01-01

    A program has been written to enable (1) fast and easy searching of symbolic files for one or more strings of characters, dates, or numerical values in specific fields or columns and (2) summarizing results of searching other fields or columns.

  9. Synchronizing files or images among several computers or removable devices. A utility to avoid frequent back-ups.

    Science.gov (United States)

    Leonardi, Rosalia; Maiorana, Francesco; Giordano, Daniela

    2008-06-01

    Many of us use and maintain files on more than 1 computer--a desktop part of the time, and a notebook, a palmtop, or removable devices at other times. It can be easy to forget which device contains the latest version of a particular file, and time-consuming searches often ensue. One way to solve this problem is to use software that synchronizes the files. This allows users to maintain updated versions of the same file in several locations.

  10. Data file, Continental Margin Program, Atlantic Coast of the United States: vol. 2 sample collection and analytical data

    Science.gov (United States)

    Hathaway, John C.

    1971-01-01

    The purpose of the data file presented below is twofold: the first purpose is to make available in printed form the basic data relating to the samples collected as part of the joint U.S. Geological Survey - Woods Hole Oceanographic Institution program of study of the Atlantic continental margin of the United States; the second purpose is to maintain these data in a form that is easily retrievable by modern computer methods. With the data in such form, repeated manual transcription for statistical or similar mathematical treatment becomes unnecessary. Manual plotting of information or derivatives from the information may also be eliminated. Not only is handling of data by the computer considerably faster than manual techniques, but a fruitful source of errors, transcription mistakes, is eliminated.

  11. Accelerating Computation of the Unit Commitment Problem (Presentation)

    Energy Technology Data Exchange (ETDEWEB)

    Hummon, M.; Barrows, C.; Jones, W.

    2013-10-01

    Production cost models (PCMs) simulate power system operation at hourly (or higher) resolution. While computation times often extend into multiple days, the sequential nature of PCMs makes parallelism difficult. We exploit the persistence of unit commitment decisions to select partition boundaries for simulation horizon decomposition and parallel computation. Partitioned simulations are benchmarked against sequential solutions for optimality and computation time.
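
    The idea of cutting the horizon where commitment decisions persist can be sketched as follows (illustrative only, not the NREL implementation; the minimum-gap rule and data layout are assumptions):

```python
def partition_boundaries(commitment, min_gap):
    """Pick decomposition points for a simulation horizon at hours where
    the unit-commitment state persists (is unchanged from the previous
    hour), so parallel partitions can be solved with little coupling.
    `commitment` is a list of per-hour commitment vectors; boundaries are
    kept at least `min_gap` hours apart."""
    boundaries = []
    last = 0
    for t in range(1, len(commitment)):
        if commitment[t] == commitment[t - 1] and t - last >= min_gap:
            boundaries.append(t)
            last = t
    return boundaries
```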

  12. Free Oscilloscope Web App Using a Computer Mic, Built-In Sound Library, or Your Own Files

    Science.gov (United States)

    Ball, Edward; Ruiz, Frances; Ruiz, Michael J.

    2017-01-01

    We have developed an online oscilloscope program which allows users to see waveforms by utilizing their computer microphones, selecting from our library of over 30 audio files, and opening any *.mp3 or *.wav file on their computers. The oscilloscope displays real-time signals against time. The oscilloscope has been calibrated so one can make…

  13. Survey on Security Issues in File Management in Cloud Computing Environment

    Science.gov (United States)

    Gupta, Udit

    2015-06-01

    Cloud computing has pervaded every aspect of information technology in the past decade. It has become easier to process the plethora of data generated by various devices in real time with the advent of cloud networks. The privacy of users' data is maintained by data centers around the world, and hence it has become feasible to operate on that data from lightweight portable devices. But with ease of processing comes the security aspect of the data. One such security aspect is secure file transfer, either internally within a cloud or externally from one cloud network to another. File management is central to cloud computing, and it is paramount to address the security concerns which arise out of it. This survey paper aims to elucidate the various protocols which can be used for secure file transfer and analyze the ramifications of using each protocol.

  14. r.maxent.lambdas - Computes raw and/or logistic prediction maps from MaxEnt lambdas files

    OpenAIRE

    Blumentrath, Stefan

    2016-01-01

    The script is intended to compute raw and/or logistic prediction maps from a lambdas file produced with MaxEnt 3.3.3e. It will parse the specified lambdas file from MaxEnt 3.3.3e and translate it into an r.mapcalc expression, which is then stored in a temporary file and finally piped to r.mapcalc. If alias names have been used in MaxEnt, these alias names can be automatically replaced according to a CSV-like file provided by the user. This file should contain alias names in the first column and...

  15. Dimensional quality control of Ti-Ni dental file by optical coordinate metrology and computed tomography

    DEFF Research Database (Denmark)

    Yagüe-Fabra, J.A.; Tosello, Guido; Ontiveros, S.

    2014-01-01

    Endodontic dental files usually present complex 3D geometries, which make the complete measurement of the component very challenging with conventional micro metrology tools. Computed Tomography (CT) can represent a suitable alternative solution to micro metrology tools based on optical and tactil...

  16. Participation of VAX VMS computers in IBM file-transfer networks

    Energy Technology Data Exchange (ETDEWEB)

    Raffenetti, R.C.

    1983-01-01

    Communications software written at Argonne National Laboratory enables VAX VMS computer systems to participate as end nodes in a standard IBM file-transfer network. The software, which emulates the IBM Network Job Entry (NJE) protocol, has been in use at Argonne for over two years, and is in use at other installations. The basic NJE services include transfer of print and punch files, job submittal, execution of remote commands, and transmission of user-to-user messages. The transmit services are asynchronous to the user's VMS session and received files are automatically routed to a designated user directory. Access to files is validated according to the VMS protection mechanism. New features which were added recently include application level software to transfer general, sequential files and to bridge the electronic mail systems of VMS and VM/CMS. This paper will review the NJE emulator and describe the design and implementation of the sequential file transfer service. The performance of the emulator will be described. Another paper at this symposium will describe the mail bridge.

  17. Energy efficiency of computer power supply units - Final report

    Energy Technology Data Exchange (ETDEWEB)

    Aebischer, B. [cepe - Centre for Energy Policy and Economics, Swiss Federal Institute of Technology Zuerich, Zuerich (Switzerland); Huser, H. [Encontrol GmbH, Niederrohrdorf (Switzerland)

    2002-11-15

    This final report for the Swiss Federal Office of Energy (SFOE) takes a look at the efficiency of computer power supply units, which decreases rapidly during average computer use. The background and the purpose of the project are examined. The power supplies for personal computers are discussed and the testing arrangement used is described. Efficiency, power-factor and operating points of the units are examined. Potentials for improvement and measures to be taken are discussed. Also, action to be taken by those involved in the design and operation of such power units is proposed. Finally, recommendations for further work are made.

  18. 78 FR 24199 - Streak Products, Inc. v. UTi, United States, Inc.; Notice of Filing of Complaint and Assignment

    Science.gov (United States)

    2013-04-24

    ... From the Federal Register Online via the Government Publishing Office FEDERAL MARITIME COMMISSION Streak Products, Inc. v. UTi, United States, Inc.; Notice of Filing of Complaint and Assignment Notice is given that a complaint has been filed with the Federal Maritime Commission (Commission) by Streak...

  19. Advanced Algebra and Trigonometry: Supplemental Computer Units.

    Science.gov (United States)

    Dotseth, Karen

    A set of computer-oriented, supplemental activities is offered which can be used with a course in advanced algebra and trigonometry. The activities involve use of the BASIC programming language; it is assumed that the teacher is familiar with programming in BASIC. Students will learn some BASIC; however, the intent is not to develop proficient…

  20. Free oscilloscope web app using a computer mic, built-in sound library, or your own files

    Science.gov (United States)

    Ball, Edward; Ruiz, Frances; Ruiz, Michael J.

    2017-07-01

    We have developed an online oscilloscope program which allows users to see waveforms by utilizing their computer microphones, selecting from our library of over 30 audio files, and opening any *.mp3 or *.wav file on their computers. The oscilloscope displays real-time signals against time. The oscilloscope has been calibrated so one can make accurate frequency measurements of periodic waves to within 1%. The web app is ideal for computer projection in class.

  1. DOC-a file system cache to support mobile computers

    Science.gov (United States)

    Huizinga, D. M.; Heflinger, K.

    1995-09-01

    This paper identifies design requirements of system-level support for mobile computing in small form-factor battery-powered portable computers and describes their implementation in DOC (Disconnected Operation Cache). DOC is a three-level client caching system designed and implemented to allow mobile clients to transition between connected, partially disconnected and fully disconnected modes of operation with minimal user involvement. Implemented for notebook computers, DOC addresses not only typical issues of mobile elements such as resource scarcity and fluctuations in service quality but also deals with the pitfalls of MS-DOS, the operating system which prevails in the commercial notebook market. Our experiments performed in the software engineering environment of AST Research indicate not only considerable performance gains for connected and partially disconnected modes of DOC, but also the successful operation of the disconnected mode.
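
    The disconnected-operation idea can be sketched as a read-through cache that keeps serving local copies once the connection drops (a minimal illustration; DOC's actual three-level design and its MS-DOS implementation are far more elaborate):

```python
class DisconnectedCache:
    """Client-side file cache that serves reads while disconnected, in the
    spirit of DOC's connected / disconnected modes of operation."""

    def __init__(self, fetch):
        self.fetch = fetch      # callable that reads a file from the server
        self.cache = {}         # local copies survive disconnection
        self.connected = True

    def read(self, name):
        if self.connected:
            try:
                self.cache[name] = self.fetch(name)   # refresh local copy
            except ConnectionError:
                self.connected = False                # fall back to cache
        if name in self.cache:
            return self.cache[name]
        raise FileNotFoundError(name)
```

    The transition to disconnected mode needs no user involvement: a failed fetch simply flips the mode and the cached copy is returned instead.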

  2. Definitions of database files and fields of the Personal Computer-Based Water Data Sources Directory

    Science.gov (United States)

    Green, J. Wayne

    1991-01-01

    This report describes the data-base files and fields of the personal computer-based Water Data Sources Directory (WDSD). The personal computer-based WDSD was derived from the U.S. Geological Survey (USGS) mainframe computer version. The mainframe version of the WDSD is a hierarchical data-base design. The personal computer-based WDSD is a relational data-base design. This report describes the data-base files and fields of the relational data-base design in dBASE IV (the use of brand names in this abstract is for identification purposes only and does not constitute endorsement by the U.S. Geological Survey) for the personal computer. The WDSD contains information on (1) the type of organization, (2) the major orientation of water-data activities conducted by each organization, (3) the names, addresses, and telephone numbers of offices within each organization from which water data may be obtained, (4) the types of data held by each organization and the geographic locations within which these data have been collected, (5) alternative sources of an organization's data, (6) the designation of liaison personnel in matters related to water-data acquisition and indexing, (7) the volume of water data indexed for the organization, and (8) information about other types of data and services available from the organization that are pertinent to water-resources activities.

  3. A Newer User Authentication, File encryption and Distributed Server Based Cloud Computing security architecture

    Directory of Open Access Journals (Sweden)

    Kawser Wazed Nafi

    2012-10-01

Full Text Available The cloud computing platform gives people the opportunity to share resources, services and information with people all over the world. In a private cloud system, information is shared among the persons who are in that cloud; because of this sharing, securing or hiding personal information is hampered. In this paper we propose a new security architecture for the cloud computing platform, which ensures secure communication and hides information from others. An AES-based file encryption system and an asynchronous key system for exchanging information or data are included in this model. The structure can easily be applied with the main cloud computing features, e.g. PaaS, SaaS and IaaS. The model also includes a one-time password system for the user authentication process. Our work mainly deals with the security system of the whole cloud computing platform.
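
The abstract does not specify the one-time password construction; a standard HMAC-based OTP (RFC 4226), shown here purely as an assumed stand-in for the paper's scheme, illustrates how client and server can derive matching codes from a shared secret and a counter (the AES file-encryption part is omitted to keep the sketch standard-library only):

```python
import hmac
import hashlib
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password.

    HMAC-SHA-1 over the 8-byte big-endian counter, followed by the
    standard dynamic-truncation step to a fixed number of digits.
    """
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # low nibble selects a window
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Both sides hold `secret`; the server accepts a login if the submitted code matches `hotp(secret, counter)` and then advances the counter, so each code is valid only once.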

  4. Exploiting graphics processing units for computational biology and bioinformatics.

    Science.gov (United States)

    Payne, Joshua L; Sinnott-Armstrong, Nicholas A; Moore, Jason H

    2010-09-01

Advances in the video gaming industry have led to the production of low-cost, high-performance graphics processing units (GPUs) that possess more memory bandwidth and computational capability than central processing units (CPUs), the standard workhorses of scientific computing. With the recent release of general-purpose GPUs and NVIDIA's GPU programming language, CUDA, graphics engines are being adopted widely in scientific computing applications, particularly in the fields of computational biology and bioinformatics. The goal of this article is to concisely present an introduction to GPU hardware and programming, aimed at the computational biologist or bioinformaticist. To this end, we discuss the primary differences between GPU and CPU architecture, introduce the basics of the CUDA programming language, and discuss important CUDA programming practices, such as the proper use of coalesced reads, data types, and memory hierarchies. We highlight each of these topics in the context of computing the all-pairs distance between instances in a dataset, a common procedure in numerous disciplines of scientific computing. We conclude with a runtime analysis of the GPU and CPU implementations of the all-pairs distance calculation. We show our final GPU implementation to outperform the CPU implementation by a factor of 1700.
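
The all-pairs distance computation the article uses as its running example is straightforward on the CPU; the GPU version distributes the (i, j) pairs across threads. A plain-Python CPU reference (a sketch, not the article's implementation):

```python
import math

def all_pairs_distances(points):
    """Dense all-pairs Euclidean distance matrix (CPU reference).

    Exploits symmetry: each pair (i, j) is computed once and mirrored.
    A GPU kernel would instead assign one thread per (i, j) entry.
    """
    n = len(points)
    dist = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            d = math.sqrt(sum((a - b) ** 2 for a, b in zip(points[i], points[j])))
            dist[i][j] = dist[j][i] = d
    return dist
```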

  5. Fast high-resolution computer-generated hologram computation using multiple graphics processing unit cluster system.

    Science.gov (United States)

    Takada, Naoki; Shimobaba, Tomoyoshi; Nakayama, Hirotaka; Shiraki, Atsushi; Okada, Naohisa; Oikawa, Minoru; Masuda, Nobuyuki; Ito, Tomoyoshi

    2012-10-20

    To overcome the computational complexity of a computer-generated hologram (CGH), we implement an optimized CGH computation in our multi-graphics processing unit cluster system. Our system can calculate a CGH of 6,400×3,072 pixels from a three-dimensional (3D) object composed of 2,048 points in 55 ms. Furthermore, in the case of a 3D object composed of 4096 points, our system is 553 times faster than a conventional central processing unit (using eight threads).
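
The abstract does not give the CGH formula; a common point-source model (assumed here, with unit amplitudes and hypothetical wavelength and pixel-pitch defaults) superposes a spherical wave from each 3D object point at every hologram pixel, which is the per-pixel work the GPU cluster parallelizes:

```python
import math

def cgh_pixel(x, y, points, wavelength=532e-9, pitch=8e-6):
    """Real part of the superposed object wave at one hologram pixel.

    Point-source model: each 3D point (px, py, pz) contributes a
    cosine term whose phase is set by its distance to the pixel.
    Pixel indices (x, y) are converted to metres via the pixel pitch.
    """
    total = 0.0
    for (px, py, pz) in points:
        dx = (x - px) * pitch
        dy = (y - py) * pitch
        r = math.sqrt(dx * dx + dy * dy + pz * pz)
        total += math.cos(2.0 * math.pi * r / wavelength)
    return total
```

The cost is (pixels x points) cosine evaluations, which is why 6,400x3,072 pixels times thousands of points motivates a multi-GPU cluster.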

  6. Fast crustal deformation computing method for multiple computations accelerated by a graphics processing unit cluster

    Science.gov (United States)

    Yamaguchi, Takuma; Ichimura, Tsuyoshi; Yagi, Yuji; Agata, Ryoichiro; Hori, Takane; Hori, Muneo

    2017-08-01

As high-resolution observational data become more common, the demand for numerical simulations of crustal deformation using 3-D high-fidelity modelling is increasing. To increase the efficiency of performing numerical simulations with high computation costs, we developed a fast solver using heterogeneous computing, with graphics processing units (GPUs) and central processing units, and then used the solver in crustal deformation computations. The solver was based on an iterative solver and was devised so that a large proportion of the computation was calculated more quickly using GPUs. To confirm the utility of the proposed solver, we demonstrated a numerical simulation of the coseismic slip distribution estimation, which requires 360,000 crustal deformation computations with 82,196,106 degrees of freedom.
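
The abstract names only "an iterative solver"; a conjugate gradient iteration, one standard choice for the symmetric systems arising in such finite-element problems and assumed here for illustration, shows the kernel (repeated matrix-vector products and vector updates) that the GPUs accelerate:

```python
def conjugate_gradient(matvec, b, tol=1e-10, max_iter=1000):
    """Plain conjugate gradient for a symmetric positive-definite system.

    `matvec` applies the system matrix to a vector; in a GPU solver this
    product dominates the runtime and is the part offloaded to the GPUs.
    """
    n = len(b)
    x = [0.0] * n
    r = b[:]                                  # residual b - A x, with x = 0
    p = r[:]
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs / sum(pi * Api for pi, Api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * Api for ri, Api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x
```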

  7. Evaluation of clinical data in childhood asthma. Application of a computer file system

    Energy Technology Data Exchange (ETDEWEB)

    Fife, D.; Twarog, F.J.; Geha, R.S.

    1983-10-01

    A computer file system was used in our pediatric allergy clinic to assess the value of chest roentgenograms and hemoglobin determinations used in the examination of patients and to correlate exposure to pets and forced hot air with the severity of asthma. Among 889 children with asthma, 20.7% had abnormal chest roentgenographic findings, excluding hyperinflation and peribronchial thickening, and 0.7% had abnormal hemoglobin values. Environmental exposure to pets or forced hot air was not associated with increased severity of asthma, as assessed by five measures of outcome: number of medications administered, requirement for corticosteroids, frequency of clinic visits, frequency of emergency room visits, and frequency of hospitalizations.

  8. Graphics processing units in bioinformatics, computational biology and systems biology.

    Science.gov (United States)

    Nobile, Marco S; Cazzaniga, Paolo; Tangherloni, Andrea; Besozzi, Daniela

    2016-07-08

Several studies in Bioinformatics, Computational Biology and Systems Biology rely on the definition of physico-chemical or mathematical models of biological systems at different scales and levels of complexity, ranging from the interaction of atoms in single molecules up to genome-wide interaction networks. Traditional computational methods and software tools developed in these research fields share a common trait: they can be computationally demanding on Central Processing Units (CPUs), therefore limiting their applicability in many circumstances. To overcome this issue, general-purpose Graphics Processing Units (GPUs) are gaining increasing attention from the scientific community, as they can considerably reduce the running time required by standard CPU-based software, and allow more intensive investigations of biological systems. In this review, we present a collection of GPU tools recently developed to perform computational analyses in life science disciplines, emphasizing the advantages and the drawbacks in the use of these parallel architectures. The complete list of GPU-powered tools here reviewed is available at http://bit.ly/gputools. © The Author 2016. Published by Oxford University Press.

  9. Trust in social computing. The case of peer-to-peer file sharing networks

    Directory of Open Access Journals (Sweden)

    Heng Xu

    2011-09-01

Full Text Available Social computing and online communities are changing the fundamental way people share information and communicate with each other. Social computing focuses on how users may have more autonomy to express their ideas and participate in social exchanges in various ways, one of which may be peer-to-peer (P2P) file sharing. Given the greater risk of opportunistic behavior by malicious or criminal communities in P2P networks, it is crucial to understand the factors that affect individuals' use of P2P file sharing software. In this paper, we develop and empirically test a research model that includes trust beliefs and perceived risks as two major antecedent beliefs to usage intention. Six trust antecedents are assessed, including knowledge-based trust, cognitive trust, and both organizational and peer-network factors of institutional trust. Our preliminary results show general support for the model and offer some important implications for software vendors in the P2P file-sharing industry and for regulatory bodies.

  10. 7 CFR 47.25 - Filing; extensions of time; effective date of filing; computations of time; official notice.

    Science.gov (United States)

    2010-01-01

    ... Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE MARKETING OF PERISHABLE AGRICULTURAL COMMODITIES RULES OF PRACTICE UNDER THE PERISHABLE AGRICULTURAL COMMODITIES ACT Rules Applicable to Reparation Proceedings § 47.25 Filing; extensions of...

  11. Implicit Theories of Creativity in Computer Science in the United States and China

    Science.gov (United States)

    Tang, Chaoying; Baer, John; Kaufman, James C.

    2015-01-01

    To study implicit concepts of creativity in computer science in the United States and mainland China, we first asked 308 Chinese computer scientists for adjectives that would describe a creative computer scientist. Computer scientists and non-computer scientists from China (N = 1069) and the United States (N = 971) then rated how well those…

  13. For what reasons do patients file a complaint? A retrospective study on patient rights units' registries.

    Science.gov (United States)

    Önal, Gülsüm; Civaner, M Murat

    2015-01-01

In 2004, Patient Rights Units were established in all public hospitals in Turkey to allow patients to voice their complaints about services. To determine which violations are reflected in the complaint mechanism, the pattern over time, and patients' expectations of the services. Descriptive study. A retrospective study performed using the complaint database of the Istanbul Health Directorate, from 2005 to 2011. The results indicate that people older than 40 years, women, and those with less than a high school education are the most frequent applicants to these units. A total of 218,186 complaints were filed. The number of complaints increased each year over the previous year, and nearly half of the applications were made in 2010 and 2011 (48.9%). The three most frequent complaints were "not benefiting from services in general" (35.4%), "not being treated in a respectable manner and in comfortable conditions" (17.8%), and "not being properly informed" (13.5%). Two-thirds of the overall applications were decided in favour of the patients (63.3%), but this rate has decreased over the years. Patients would like to be treated in a manner that respects their human dignity. Educating healthcare workers in communication skills might be a useful initiative. More importantly, health policies and the organisation of services should prioritise patient rights. Only then would it be possible to exercise patient rights in practice.

  14. Superscalar pipelined inner product computation unit for signed unsigned number

    Directory of Open Access Journals (Sweden)

    Ravindra P. Rajput

    2016-09-01

Full Text Available In this paper, we propose a superscalar pipelined inner product computation unit for signed-unsigned numbers operating at 16 GHz. It is designed as a five-stage pipeline with four 8 × 8 multipliers operating in parallel. The superscalar pipeline computes the four 8 × 8 products in parallel in three clock cycles. In the fourth clock cycle, two inner products are computed using two adders in parallel. The fifth stage computes the final product by adding the two inner partial products. Once the pipeline is filled, a new 16 × 16-bit signed-unsigned product is obtained every clock cycle. The worst delay measured among the pipeline stages is 0.062 ns, and this delay is taken as the clock cycle period; with a period of 0.062 ns, the pipeline can be driven by a 16 GHz synchronous clock signal. Each superscalar pipeline stage is implemented in a 45 nm CMOS process technology, and the comparison of results shows that delay decreases by 38%, area by 45%, and power dissipation by 32%.
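
The arithmetic behind the four parallel 8 × 8 multipliers and the two adder stages can be checked in software. This sketch covers only the unsigned case (the paper's signed-unsigned handling is omitted); the grouping of partial products into two parallel sums mirrors stages four and five of the pipeline:

```python
def mul16_from_8x8(a: int, b: int) -> int:
    """Compose a 16x16-bit unsigned product from four 8x8 partial products.

    Split each operand into high and low bytes; the four byte products
    correspond to the four multipliers that run in parallel in hardware.
    """
    a_hi, a_lo = a >> 8, a & 0xFF
    b_hi, b_lo = b >> 8, b & 0xFF
    # Pipeline stages 1-3: four 8x8 multiplications, in parallel in hardware.
    p0 = a_lo * b_lo
    p1 = a_lo * b_hi
    p2 = a_hi * b_lo
    p3 = a_hi * b_hi
    # Stage 4: two inner sums computed by two adders in parallel.
    inner = (p1 << 8) + (p2 << 8)
    outer = (p3 << 16) + p0
    # Stage 5: final addition producing the 32-bit product.
    return outer + inner
```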

  15. Modification to the Monte Carlo N-Particle (MCNP) Visual Editor (MCNPVised) to Read in Computer Aided Design (CAD) Files

    Energy Technology Data Exchange (ETDEWEB)

    Randolph Schwarz; Leland L. Carter; Alysia Schwarz

    2005-08-23

    Monte Carlo N-Particle Transport Code (MCNP) is the code of choice for doing complex neutron/photon/electron transport calculations for the nuclear industry and research institutions. The Visual Editor for Monte Carlo N-Particle is internationally recognized as the best code for visually creating and graphically displaying input files for MCNP. The work performed in this grant was used to enhance the capabilities of the MCNP Visual Editor to allow it to read in both 2D and 3D Computer Aided Design (CAD) files, allowing the user to electronically generate a valid MCNP input geometry.

  17. 77 FR 2492 - United States Pharmacopeial Convention; Filing of Food Additive Petition; Amendment

    Science.gov (United States)

    2012-01-18

    ... Pharmacopeial Convention; Filing of Food Additive Petition; Amendment AGENCY: Food and Drug Administration, HHS... for a food additive petition filed by the U.S. Pharmacopeial Convention requesting that the food additive regulations that incorporate by reference food-grade specifications from prior editions of...

  18. Sandia's computer support units: The first three years

    Energy Technology Data Exchange (ETDEWEB)

    Harris, R.N. [Sandia National Labs., Albuquerque, NM (United States). Labs. Computing Dept.

    1997-11-01

This paper describes the method by which Sandia National Laboratories has deployed information technology to the line organizations and to the desktop as part of the integrated information services organization under the direction of the Chief Information Officer. This deployment has been done by the Computer Support Unit (CSU) Department. The CSU approach is based on the principle of providing local customer service with a corporate perspective. Success required an approach that was customer-compelled at times and market- or corporate-focused in most cases. Above all, a complete solution was required that included a comprehensive method of technology choices and development, process development, technology implementation, and support. It is the authors' hope that this information will be useful in the development of a customer-focused business strategy for information technology deployment and support. Descriptions of current status reflect the status as of May 1997.

  19. Influence of the glide path on various parameters of root canal prepared with WaveOne reciprocating file using cone beam computed tomography

    Directory of Open Access Journals (Sweden)

    Anil Dhingra

    2015-01-01

Full Text Available Background: Nickel-titanium (NiTi) rotary instrumentation carries a risk of fracture, mainly as a result of flexural (fatigue) and torsional (shear) failure stresses. This risk might be reduced by creating a glide path before NiTi rotary instrumentation. The aim of this study was to compare various root canal parameters with the new WaveOne single-file reciprocating system in mesial canals of mandibular molars with and without a glide path, using cone beam computed tomography (CBCT). Materials and Methods: One hundred mandibular molar teeth with canal curvature between 20° and 30° were divided into two groups of 50 teeth each. In Group 1, no glide path was created, whereas in Group 2, a glide path was created with PathFiles at working length (WL). In both groups, canals were shaped with WaveOne primary reciprocating files to the WL. Canals were scanned in a CBCT unit before and after instrumentation. Postinstrumentation changes in canal curvature, cross-sectional area, centric ability, residual dentin thickness, and the extent of canal transportation were calculated using image analysis software and subjected to statistical analysis. Data were analyzed using Student's t-test and the Mann-Whitney U-test (P < 0.05). Results: The mean differences in root canal curvature, cross-sectional area, centric ability, and residual dentin thickness increased, whereas canal transportation decreased significantly in Group 2. Conclusion: WaveOne NiTi files appeared to maintain the original canal anatomy; creating a glide path further improved their performance and was beneficial for all the parameters tested in this study.
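
The abstract does not state how canal transportation and centring were computed from the CBCT sections; the Gambill-style formulas commonly used in such studies (an assumption here, not taken from this paper) work from pre- and post-instrumentation dentin thickness on the two sides of the canal:

```python
def canal_transportation(m_pre, m_post, d_pre, d_post):
    """Gambill-style canal transportation (mm) from dentin thickness
    measured before/after instrumentation on the mesial (m) and
    distal (d) sides of the canal. Zero means no transportation."""
    return abs((m_pre - m_post) - (d_pre - d_post))

def centering_ratio(m_pre, m_post, d_pre, d_post):
    """Ratio of the smaller to the larger side-wise dentin removal.
    A value of 1.0 indicates a perfectly centred preparation."""
    dm = m_pre - m_post
    dd = d_pre - d_post
    if max(dm, dd) == 0:
        return 1.0
    return min(dm, dd) / max(dm, dd)
```

For example, removing 0.2 mm mesially and 0.1 mm distally gives a transportation of 0.1 mm and a centering ratio of 0.5.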

  20. 2014 Cartographic Boundary File, United States, 1:20,000,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2014 cartographic boundary KMLs are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File / Topologically...

  1. 2015 Cartographic Boundary File, Division for United States, 1:500,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2015 cartographic boundary KMLs are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File / Topologically...

  2. 2015 Cartographic Boundary File, Combined Statistical Area for United States, 1:5,000,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2015 cartographic boundary KMLs are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File / Topologically...

  3. 2014 Cartographic Boundary File, Combined Statistical Area for United States, 1:5,000,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2014 cartographic boundary KMLs are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File / Topologically...

  4. 2016 KML Boundary File, United States, 1:20,000,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2016 cartographic boundary KMLs are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File / Topologically...

  5. 2015 Cartographic Boundary File, State-Congressional District for United States, 1:20,000,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2015 cartographic boundary KMLs are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File / Topologically...

  6. 2016 Cartographic Boundary File, United States, 1:5,000,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2016 cartographic boundary KMLs are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File / Topologically...

  7. 2015 Cartographic Boundary File, Region for United States, 1:500,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2015 cartographic boundary KMLs are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File / Topologically...

  8. 2014 Cartographic Boundary File, Region for United States, 1:500,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2014 cartographic boundary KMLs are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File / Topologically...

  9. 2016 Cartographic Boundary File, Region for United States, 1:20,000,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2016 cartographic boundary KMLs are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File / Topologically...

  10. 2014 Cartographic Boundary File, Division for United States, 1:500,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2014 cartographic boundary KMLs are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File / Topologically...

  11. 2016 Cartographic Boundary File, Region for United States, 1:500,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2016 cartographic boundary KMLs are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File / Topologically...

  12. 2016 Cartographic Boundary File, Division for United States, 1:500,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2016 cartographic boundary KMLs are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File / Topologically...

  13. 2014 Cartographic Boundary File, State for United States, 1:500,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2014 cartographic boundary KMLs are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File / Topologically...

  14. 2014 Cartographic Boundary File, Division for United States, 1:20,000,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2014 cartographic boundary KMLs are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File / Topologically...

  15. 2015 Cartographic Boundary File, State-Congressional District for United States, 1:5,000,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2015 cartographic boundary KMLs are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File / Topologically...

  16. 2016 Cartographic Boundary File, Current County and Equivalent for United States, 1:5,000,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2016 cartographic boundary KMLs are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File / Topologically...

  17. 2014 Cartographic Boundary File, Division for United States, 1:5,000,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2014 cartographic boundary KMLs are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File / Topologically...

  18. 2014 Cartographic Boundary File, Division for United States, 1:5,000,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2014 cartographic boundary shapefiles are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File /...

  19. 2016 Cartographic Boundary File, Division for United States, 1:500,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2016 cartographic boundary shapefiles are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File /...

  20. 2015 Cartographic Boundary File, Division for United States, 1:5,000,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2015 cartographic boundary KMLs are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File / Topologically...

  1. 2016 Cartographic Boundary File, Division for United States, 1:5,000,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2016 cartographic boundary shapefiles are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File /...

  2. 2016 Cartographic Boundary File, Division for United States, 1:5,000,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2016 cartographic boundary KMLs are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File / Topologically...

  3. 2014 Cartographic Boundary File, State-Congressional District for United States, 1:5,000,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2014 cartographic boundary shapefiles are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File /...

  4. 2014 Cartographic Boundary File, State for United States, 1:20,000,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2014 cartographic boundary shapefiles are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File /...

  5. 2014 Cartographic Boundary File, State-County for United States, 1:500,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2014 cartographic boundary KMLs are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File / Topologically...

  6. 2014 Cartographic Boundary File, State for United States, 1:5,000,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2014 cartographic boundary shapefiles are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File /...

  7. 2015 Cartographic Boundary File, United States, 1:20,000,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2015 cartographic boundary KMLs are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File / Topologically...

  8. 2014 Cartographic Boundary File, State-Congressional District for United States, 1:20,000,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2014 cartographic boundary KMLs are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File / Topologically...

  9. 2014 Cartographic Boundary File, State-Congressional District for United States, 1:500,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2014 cartographic boundary KMLs are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File / Topologically...

  10. 2015 Cartographic Boundary File, Region for United States, 1:20,000,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2015 cartographic boundary KMLs are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File / Topologically...

  11. 2015 Cartographic Boundary File, Division for United States, 1:20,000,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2015 cartographic boundary KMLs are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File / Topologically...

  12. 2014 Cartographic Boundary File, Urban Area for United States, 1:500,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2014 cartographic boundary KMLs are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File / Topologically...

  13. 2014 Cartographic Boundary File, State-County for United States, 1:20,000,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2014 cartographic boundary KMLs are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File / Topologically...

  14. 2014 Cartographic Boundary File, Region for United States, 1:5,000,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2014 cartographic boundary KMLs are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File / Topologically...

  15. 2015 Cartographic Boundary File, United States, 1:5,000,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2015 cartographic boundary KMLs are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File / Topologically...

  16. 2015 Cartographic Boundary File, Combined Statistical Area for United States, 1:500,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2015 cartographic boundary KMLs are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File / Topologically...

  17. 2014 Cartographic Boundary File, State for United States, 1:20,000,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2014 cartographic boundary KMLs are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File / Topologically...

  18. 2014 Cartographic Boundary File, Combined Statistical Area for United States, 1:500,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2014 cartographic boundary KMLs are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File / Topologically...

  19. 2014 Cartographic Boundary File, State-Congressional District for United States, 1:5,000,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2014 cartographic boundary KMLs are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File / Topologically...

  20. 2015 Cartographic Boundary File, Combined Statistical Area for United States, 1:20,000,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2015 cartographic boundary KMLs are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File / Topologically...

  1. 2014 Cartographic Boundary File, Region for United States, 1:20,000,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2014 cartographic boundary KMLs are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File / Topologically...

  2. 2014 Cartographic Boundary File, Combined Statistical Area for United States, 1:20,000,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2014 cartographic boundary KMLs are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File / Topologically...

  3. 2015 Cartographic Boundary File, Division for United States, 1:500,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2015 cartographic boundary shapefiles are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File /...

  4. 2016 Cartographic Boundary File, 115th Congressional Districts for United States, 1:20,000,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2016 cartographic boundary KMLs are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File / Topologically...

  5. 2016 Cartographic Boundary File, Current County and Equivalent for United States, 1:5,000,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2016 cartographic boundary shapefiles are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File /...

  6. 2015 Cartographic Boundary File, State-Congressional District for United States, 1:5,000,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2015 cartographic boundary shapefiles are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File /...

  7. 2014 Cartographic Boundary File, State-Congressional District for United States, 1:20,000,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2014 cartographic boundary shapefiles are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File /...

  8. 2015 Cartographic Boundary File, State-Congressional District for United States, 1:20,000,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2015 cartographic boundary shapefiles are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File /...

  9. 2016 Cartographic Boundary File, Current Combined Statistical Area for United States, 1:20,000,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2016 cartographic boundary KMLs are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File / Topologically...

  10. 2015 Cartographic Boundary File, Division for United States, 1:20,000,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2015 cartographic boundary shapefiles are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File /...

  11. 2014 Cartographic Boundary File, United States, 1:20,000,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2014 cartographic boundary shapefiles are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File /...

  12. 2014 Cartographic Boundary File, State-County for United States, 1:5,000,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2014 cartographic boundary shapefiles are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File /...

  13. 2014 Cartographic Boundary File, State-Congressional District for United States, 1:500,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2014 cartographic boundary shapefiles are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File /...

  14. 2015 Cartographic Boundary File, Combined Statistical Area for United States, 1:20,000,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2015 cartographic boundary shapefiles are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File /...

  15. 2015 Cartographic Boundary File, Region for United States, 1:5,000,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2015 cartographic boundary KMLs are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File / Topologically...

  16. 2015 Cartographic Boundary File, State-Congressional District for United States, 1:500,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2015 cartographic boundary KMLs are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File / Topologically...

  17. 2014 Cartographic Boundary File, United States, 1:5,000,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2014 cartographic boundary shapefiles are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File /...

  18. 2014 Cartographic Boundary File, Urban Area for United States, 1:500,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2014 cartographic boundary shapefiles are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File /...

  19. 2014 Cartographic Boundary File, Combined Statistical Area for United States, 1:20,000,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2014 cartographic boundary shapefiles are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File /...

  20. 2014 Cartographic Boundary File, Region for United States, 1:5,000,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2014 cartographic boundary shapefiles are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File /...

  1. 2015 Cartographic Boundary File, Combined Statistical Area for United States, 1:500,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2015 cartographic boundary shapefiles are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File /...

  2. 2014 Cartographic Boundary File, Region for United States, 1:500,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2014 cartographic boundary shapefiles are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File /...

  3. 2014 Cartographic Boundary File, Region for United States, 1:20,000,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2014 cartographic boundary shapefiles are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File /...

  4. 2014 Cartographic Boundary File, Division for United States, 1:20,000,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2014 cartographic boundary shapefiles are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File /...

  5. 2015 Cartographic Boundary File, Combined Statistical Area for United States, 1:5,000,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2015 cartographic boundary shapefiles are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File /...

  6. 2015 Cartographic Boundary File, Division for United States, 1:5,000,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2015 cartographic boundary shapefiles are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File /...

  7. 2016 Cartographic Boundary File, 115th Congressional Districts for United States, 1:500,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2016 cartographic boundary KMLs are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File / Topologically...

  8. 2016 Cartographic Boundary File, Division for United States, 1:20,000,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2016 cartographic boundary KMLs are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File / Topologically...

  9. 2015 Cartographic Boundary File, State-County for United States, 1:5,000,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2015 cartographic boundary shapefiles are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File /...

  10. 2014 Cartographic Boundary File, State-County for United States, 1:20,000,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2014 cartographic boundary shapefiles are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File /...

  11. 2016 Cartographic Boundary File, 115th Congressional Districts for United States, 1:20,000,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2016 cartographic boundary shapefiles are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File /...

  12. 2015 Cartographic Boundary File, State-County for United States, 1:500,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2015 cartographic boundary shapefiles are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File /...

  13. 2016 Cartographic Boundary File, Region for United States, 1:500,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2016 cartographic boundary shapefiles are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File /...

  14. 2016 Cartographic Boundary File, Region for United States, 1:5,000,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2016 cartographic boundary shapefiles are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File /...

  15. 2014 Cartographic Boundary File, State-County for United States, 1:500,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2014 cartographic boundary shapefiles are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File /...

  16. 2016 Cartographic Boundary File, Region for United States, 1:20,000,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2016 cartographic boundary shapefiles are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File /...

  17. 2015 Cartographic Boundary File, State-County for United States, 1:20,000,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2015 cartographic boundary shapefiles are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File /...

  18. 2016 Cartographic Boundary File, Current County and Equivalent for United States, 1:500,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2016 cartographic boundary shapefiles are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File /...

  19. Adolescent Fertility: National File [Machine-Readable Data File].

    Science.gov (United States)

    Moore, Kristin A.; And Others

    This computer file contains recent cross sectional data on adolescent fertility in the United States for 1960, 1965, 1970, 1975 and 1980-85. The following variables are included: (1) births; (2) birth rates; (3) abortions; (4) non-marital childbearing; (5) infant mortality; and (6) low birth weight. Data for both teenagers and women aged 20-24 are…

  20. Novel Framework for Hidden Data in the Image Page within Executable File Using Computation between Advanced Encryption Standard and Distortion Techniques

    CERN Document Server

    Naji, A W; Zaidan, B B; Al-Khateeb, Wajdi F; Khalifa, Othman O; Zaidan, A A; Gunawan, Teddy S

    2009-01-01

    The rapid development of multimedia and the internet allows for wide distribution of digital media data, which makes digital information much easier to edit, modify, and duplicate. Digital documents are also easy to copy and distribute, and therefore face many threats. The significance, accuracy, and sensitivity of such information make appropriate protection necessary; furthermore, there is no formal method for discovering hidden data. In this paper, a new information hiding framework is presented. The proposed framework implements a computation between the Advanced Encryption Standard (AES) and a distortion technique (DT) that embeds information in the image page within an executable file (EXE file), in order to find a secure solution for a cover file without changing its size. The framework includes two main functions; the first is the hiding of the information in the image page of the EXE file, through the execution of four processes (specify the cover file, spec...

  1. 7 CFR 900.15 - Filing; extensions of time; effective date of filing; and computation of time.

    Science.gov (United States)

    2010-01-01

    ...; and computation of time. 900.15 Section 900.15 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE GENERAL REGULATIONS Rules of Practice and Procedure...

  2. 37 CFR 5.11 - License for filing in a foreign country an application on an invention made in the United States...

    Science.gov (United States)

    2010-07-01

    ... foreign country an application on an invention made in the United States or for transmitting an... TRADEMARK OFFICE, DEPARTMENT OF COMMERCE GENERAL SECRECY OF CERTAIN INVENTIONS AND LICENSES TO EXPORT AND... filing in a foreign country an application on an invention made in the United States or for transmitting...

  3. United States Files Complaint Against Volkswagen, Audi and Porsche for Alleged Clean Air Act Violations

    Science.gov (United States)

    WASHINGTON - The U.S. Department of Justice, on behalf of the U.S. Environmental Protection Agency, today filed a civil complaint in federal court in Detroit, Michigan against Volkswagen AG, Audi AG, Volkswagen Group of America, Inc., Volkswagen Gro

  4. User's guide for MODTOOLS: Computer programs for translating data of MODFLOW and MODPATH into geographic information system files

    Science.gov (United States)

    Orzol, Leonard L.

    1997-01-01

    MODTOOLS is a set of computer programs for translating data of the ground-water model, MODFLOW, and the particle-tracker, MODPATH, into a Geographic Information System (GIS). MODTOOLS translates data into a GIS software called ARC/INFO. MODFLOW is the recognized name for the U.S. Geological Survey Modular Three-Dimensional Finite-Difference Ground-Water Model. MODTOOLS uses the data arrays input to or output by MODFLOW during a ground-water flow simulation to construct several types of GIS output files. MODTOOLS can also be used to translate data from MODPATH into GIS files. MODPATH and its companion program, MODPATH-PLOT, are collectively called the U.S. Geological Survey Three-Dimensional Particle Tracking Post-Processing Programs. MODPATH is used to calculate ground-water flow paths using the results of MODFLOW, and MODPATH-PLOT can be used to display the flow paths in various ways.

  5. Social Studies: Application Units. Course II, Teachers. Computer-Oriented Curriculum. REACT (Relevant Educational Applications of Computer Technology).

    Science.gov (United States)

    Tecnica Education Corp., San Carlos, CA.

    This book is one of a series in Course II of the Relevant Educational Applications of Computer Technology (REACT) Project. It is designed to point out to teachers two of the major applications of computers in the social sciences: simulation and data analysis. The first section contains a variety of simulation units organized under the following…

  6. Computational system to create an entry file for replicating I-125 seeds simulating brachytherapy case studies using the MCNPX code

    Directory of Open Access Journals (Sweden)

    Leonardo da Silva Boia

    2014-03-01

    Full Text Available Purpose: A computational system was developed for this paper in the C++ programming language, to create a 125I radioactive seed entry file based on the positioning of a virtual grid (template) in voxel geometries, with the purpose of performing prostate cancer treatment simulations using the MCNPX code. Methods: The system is fed with information from the planning system with regard to each seed's location and depth, and an entry file is automatically created with all the cards (instructions) for each seed regarding their cell blocks and surfaces spread out spatially in the 3D environment. The system reproduces the clinical scenario with precision in the MCNPX code's simulation environment, thereby allowing in-depth study of the technique. Results and Conclusion: In order to validate the computational system, an entry file was created with 88 125I seeds inserted in the phantom's MAX06 prostate region, with the initial activity of the seeds set to 0.27 mCi. Isodose curves were obtained in all the prostate slices in 5 mm steps over the 7 to 10 cm interval, totaling 7 slices. Variance reduction techniques were applied in order to optimize computational time and reduce uncertainties, such as photon and electron energy cutoffs at 4 keV and forced collisions in cells of interest. The isodose curves obtained show that hot spots have values above 300 Gy, as anticipated in the literature, stressing the importance of correct positioning of the sources, which the computational system developed provides, so as not to deliver excessive doses to adjacent organs at risk. The 144 Gy prescription curve showed in the validation process that it covers perfectly a large percentage of the volume, at the same time that it demonstrates a large

  7. The efficacy of the Self-Adjusting File versus WaveOne in removal of root filling residue that remains in oval canals after the use of ProTaper retreatment files: A cone-beam computed tomography study

    Directory of Open Access Journals (Sweden)

    Ajinkya M Pawar

    2016-01-01

    Full Text Available Aim: The current ex vivo study compared the efficacy of removing root fillings using ProTaper retreatment files followed by either the WaveOne reciprocating file or the Self-Adjusting File (SAF). Materials and Methods: Forty maxillary canines with a single oval root canal were selected and sectioned to obtain 18-mm root segments. The root canals were instrumented with WaveOne primary files, followed by obturation using warm lateral compaction, and the sealer was allowed to fully set. The teeth were then divided into two equal groups (N = 20). Initial removal of the bulk of root filling material was performed with ProTaper retreatment files, followed by either WaveOne files (Group 1) or SAF (Group 2). Endosolv R was used as a gutta-percha softener. Preoperative and postoperative high-resolution cone-beam computed tomography (CBCT) was used to measure the volume of the root filling residue that was left after the procedure. Statistical analysis was performed using the t-test. Results: The mean volume of root filling residue in Group 1 was 9.4 (±0.5) mm³, whereas in Group 2 the residue volume was 2.6 (±0.4) mm³ (P < 0.001; t-test). Conclusions: When SAF was used after ProTaper retreatment files, significantly less root filling residue was left in the canals compared to when WaveOne was used.

  8. A Fault—Tolerant File Management Algorithm in Distributed Computer System “THUDS”

    Institute of Scientific and Technical Information of China (English)

    廖先Shi; 金兰

    1989-01-01

    A concurrency control that keeps independent processes from simultaneously accessing a critical section is discussed for the case where there are two distinct classes of processes, known as readers and writers. The readers can share the file with one another, but interleaved execution of readers and writers may produce undesirable conflicts. The file management algorithm proposed in this paper avoids such conflicts. This algorithm not only guarantees the consistency and integrity of the shared file, but also supports optimal parallelism. The concept of a dynamic virtual queue is introduced and serves as the foundation for this algorithm. The algorithm's implicit redundancy allows software fault-tolerant techniques.
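
    The readers-writers pattern described in the abstract can be sketched with a classic lock that lets readers share the file while writers get exclusive access. This is a generic textbook illustration only, not the paper's dynamic-virtual-queue algorithm (which is not specified here); the class name is an assumption.

```python
import threading

class ReadersWriterLock:
    """Classic readers-writers lock: many concurrent readers,
    writers exclusive. First reader blocks writers; last reader
    readmits them."""

    def __init__(self):
        self._readers = 0
        self._mutex = threading.Lock()       # guards the reader count
        self._write_lock = threading.Lock()  # held by a writer or the reader group

    def acquire_read(self):
        with self._mutex:
            self._readers += 1
            if self._readers == 1:
                self._write_lock.acquire()   # first reader locks out writers

    def release_read(self):
        with self._mutex:
            self._readers -= 1
            if self._readers == 0:
                self._write_lock.release()   # last reader admits writers

    def acquire_write(self):
        self._write_lock.acquire()

    def release_write(self):
        self._write_lock.release()
```

This readers-preference variant guarantees consistency (no reader ever observes a half-finished write) but, unlike the paper's algorithm, can starve writers under a steady stream of readers.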

  9. OK, Computer: File Sharing, the Music Industry, and Why We Need the Pirate Party

    Directory of Open Access Journals (Sweden)

    Adrian Cosstick

    2009-03-01

    Full Text Available The Pirate Party believes the state and big business are in the process of protecting stale and inefficient business models for their own monetary benefit by limiting our right to share information. The Pirate Party suggests that they are achieving this goal through the amendment of intellectual property legislation. In the dawn of the digital era, the Pirate Party advocates that governments and multinational corporations are using intellectual property to: crack down on file sharing, which limits the ability to share knowledge and information; increase the terms and length of copyright to raise profits; and build code into music files which limits their ability to be shared (Pirate Party, 2009). There are a number of 'copyright industries' affected by these issues, none more so than the music industry. Its relationship with file sharing is topical and makes an excellent case study of the impact big business has had on intellectual property and of the need for the Pirate Party's legislative input. The essay will then examine the central issues raised by illegal file sharing, in particular the future for record companies in an environment that increasingly demands flexibility, and whether the Pirate Party's proposal is a viable solution to the music industry's problems.

  10. A 1.5 GFLOPS Reciprocal Unit for Computer Graphics

    DEFF Research Database (Denmark)

    Nannarelli, Alberto; Rasmussen, Morten Sleth; Stuart, Matthias Bo

    2006-01-01

    The reciprocal operation 1/d is a frequent operation performed in graphics processors (GPUs). In this work, we present the design of a radix-16 reciprocal unit based on the algorithm combining the traditional digit-by-digit algorithm and the approximation of the reciprocal by one Newton...
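
    The combination the abstract describes, a coarse table or digit-by-digit approximation refined by a Newton step, can be sketched in software. This illustrates only the numerics, not the radix-16 hardware design; the function name and seed width are assumptions.

```python
def reciprocal(d, seed_bits=8):
    """Approximate 1/d for a normalized significand d in [1, 2).
    A truncated fixed-point value stands in for the digit-by-digit seed;
    each Newton step x <- x * (2 - d*x) roughly doubles the correct bits."""
    assert 1.0 <= d < 2.0
    # Seed accurate to about seed_bits bits (truncated fixed-point d)
    x = 1.0 / (int(d * (1 << seed_bits)) / (1 << seed_bits))
    for _ in range(2):
        x = x * (2.0 - d * x)  # Newton-Raphson refinement
    return x
```

With an 8-bit seed, two Newton steps give roughly 32 correct bits; a hardware unit would typically fuse the stages into one wider step.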

  11. F2AC: A Lightweight, Fine-Grained, and Flexible Access Control Scheme for File Storage in Mobile Cloud Computing

    Directory of Open Access Journals (Sweden)

    Wei Ren

    2016-01-01

    Full Text Available Current file storage service models for cloud servers assume that users either belong to a single layer with different privileges or cannot authorize privileges iteratively. Thus, the access control is neither fine-grained nor flexible. Besides, most access control methods at cloud servers rely mainly on computationally intensive cryptographic algorithms and, especially, may not be able to support highly dynamic ad hoc groups with addition and removal of group members. In this paper, we propose a scheme called F2AC, a lightweight, fine-grained, and flexible access control scheme for file storage in mobile cloud computing. F2AC can not only achieve iterative authorization, authentication with tailored policies, and access control for dynamically changing accessing groups, but also provide access privilege transition and revocation. A new access control model called the directed tree with linked leaf model is proposed for further implementation in data structures and algorithms. An extensive analysis is given to justify the soundness and completeness of F2AC.

  12. Thermal/Heat Transfer Analysis Using a Graphic Processing Unit (GPU) Enabled Computing Environment Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The objective of this project was to use GPU enabled computing to accelerate the analyses of heat transfer and thermal effects. Graphical processing unit (GPU)...

  13. USGS Small-scale Dataset - 1:1,000,000-Scale Ferries of the United States 201406 FileGDB 10.1

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer portrays the ferries on major roads in the United States and Puerto Rico. The file was produced by extracting ferries from the 1:1,000,000-scale Major...

  14. 77 FR 50722 - Software Unit Testing for Digital Computer Software Used in Safety Systems of Nuclear Power Plants

    Science.gov (United States)

    2012-08-22

    ... COMMISSION Software Unit Testing for Digital Computer Software Used in Safety Systems of Nuclear Power Plants... regulatory guide (DG), DG-1208, ``Software Unit Testing for Digital Computer Software used in Safety Systems... entitled ``Software Unit Testing for Digital Computer Software Used in Safety Systems of Nuclear...

  15. The Global Energy Situation on Earth, Student Guide. Computer Technology Program Environmental Education Units.

    Science.gov (United States)

    Northwest Regional Educational Lab., Portland, OR.

    This is the student guide in a set of five computer-oriented environmental/energy education units. Contents of this guide are: (1) Introduction to the unit; (2) The "EARTH" program; (3) Exercises; and (4) Sources of information on the energy crisis. This guide supplements a simulation which allows students to analyze different aspects of…

  16. A User’s Index to CRREL Land Treatment Computer Programs and Data Files.

    Science.gov (United States)

    1982-11-01

    …file in raw form. It reads from DATAMETL.
    NUMERO (BASIC, 2) — Program to print out a user-specifiable section of the heavy metals data in a tabular form. It reads from METAL and DATAMETL.
    NUMERO (BASIC, 20) — Same as NUMERO, except that there is a printing limit of 11 samples.
    UPETAL (BASIC, 16) — Program to update… DATA EXPER>STATS>NTNSOCT 19
    NUMBER (BASIC) EXPER>METALS>DART>NUMBER 13
    NUMERO (BASIC) EXPER>METALS>DART>NUMERO 13
    NUMEROL (BASIC) EXPER>METALS>DART

  17. Parallel computing for simultaneous iterative tomographic imaging by graphics processing units

    Science.gov (United States)

    Bello-Maldonado, Pedro D.; López, Ricardo; Rogers, Colleen; Jin, Yuanwei; Lu, Enyue

    2016-05-01

    In this paper, we address the problem of accelerating inversion algorithms for nonlinear acoustic tomographic imaging by parallel computing on graphics processing units (GPUs). Nonlinear inversion algorithms for tomographic imaging often rely on iterative algorithms for solving an inverse problem and are thus computationally intensive. We study the simultaneous iterative reconstruction technique (SIRT) for the multiple-input-multiple-output (MIMO) tomography algorithm, which enables parallel computation over the grid points as well as parallel execution of multiple source excitations. Using graphics processing units (GPUs) and the Compute Unified Device Architecture (CUDA) programming model, an overall speedup of 26.33x was achieved when combining both approaches, compared with sequential algorithms. Furthermore, we propose an adaptive iterative relaxation factor and the use of non-uniform weights to improve the overall convergence of the algorithm. Using these techniques, fast computations can be performed in parallel without loss of image quality during the reconstruction process.
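
    The SIRT update referred to in the abstract can be sketched in a few lines of NumPy; every component of the image estimate is updated from all measurements at once, which is what makes the method map so naturally onto GPU threads. This is a textbook serial sketch under assumed names, not the authors' MIMO/CUDA implementation.

```python
import numpy as np

def sirt(A, b, iters=200, relax=1.0):
    """Simultaneous Iterative Reconstruction Technique for A @ x ~= b,
    with a nonnegative system matrix A. Each iteration applies the
    classic row/column-sum normalized update
        x <- x + relax * C * A.T @ (R * (b - A @ x)),
    where R and C are inverse row and column sums of A."""
    R = 1.0 / np.maximum(A.sum(axis=1), 1e-12)  # inverse row sums
    C = 1.0 / np.maximum(A.sum(axis=0), 1e-12)  # inverse column sums
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + relax * C * (A.T @ (R * (b - A @ x)))
    return x
```

Because each component of `x` is updated independently within an iteration, the inner products parallelize over grid points; multiple source excitations simply add rows to `A` and `b`.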

  18. Comp Plan: A computer program to generate dose and radiobiological metrics from dose-volume histogram files.

    Science.gov (United States)

    Holloway, Lois Charlotte; Miller, Julie-Anne; Kumar, Shivani; Whelan, Brendan M; Vinod, Shalini K

    2012-01-01

    Treatment planning studies often require the calculation of a large number of dose and radiobiological metrics. To streamline these calculations, a computer program called Comp Plan was developed using MATLAB. Comp Plan calculates common metrics, including equivalent uniform dose, tumor control probability, and normal tissue complication probability from dose-volume histogram data. The dose and radiobiological metrics can be calculated for the original data or for an adjusted fraction size using the linear quadratic model. A homogeneous boost dose can be added to a given structure if desired. The final output is written to an Excel file in a format convenient for further statistical analysis. Comp Plan was verified by independent calculations. A lung treatment planning study comparing 45 plans for 7 structures using up to 6 metrics for each structure was successfully analyzed within approximately 5 minutes with Comp Plan. The code is freely available from the authors on request.
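
    Equivalent uniform dose, one of the metrics mentioned, illustrates the kind of calculation such a tool automates. Below is a minimal sketch of the standard gEUD formula applied to differential dose-volume histogram bins; this is the textbook formula only, not Comp Plan's actual code, and the names are assumptions.

```python
def gEUD(doses, volumes, a):
    """Generalized equivalent uniform dose from a differential DVH:
        gEUD = (sum_i v_i * D_i**a) ** (1/a),
    where v_i are fractional volumes and a is the tissue-specific
    parameter (a = 1 gives mean dose; large negative a emphasizes
    cold spots in a target). Doses must be positive when a < 0."""
    total = sum(volumes)
    return sum((v / total) * d**a for d, v in zip(doses, volumes)) ** (1.0 / a)
```

For a uniform dose distribution, gEUD equals that dose for any `a`, which gives a quick sanity check when wiring DVH data into such a calculation.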

  19. PCF File Format.

    Energy Technology Data Exchange (ETDEWEB)

    Thoreson, Gregory G [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-08-01

    PCF files are binary files designed to contain gamma spectra and neutron count rates from radiation sensors. It is the native format for the GAmma Detector Response and Analysis Software (GADRAS) package [1]. It can contain multiple spectra and information about each spectrum, such as energy calibration. This document outlines the file format in enough detail to allow one to write a computer program that parses and writes such files.

  20. Verification of data files of TREF-computer program; TREF-ohjelmiston ohjaustiedostojen soveltuvuustutkimus

    Energy Technology Data Exchange (ETDEWEB)

    Ruottu, S.; Halme, A.; Ruottu, A. [Einco Oy, Karhula (Finland)

    1996-12-01

    Originally, the aim of the Y43 project was to verify TREF data files for several different processes. However, it appeared that deficient or missing coordination between experimental and theoretical work made meaningful verification impossible in some cases. Verification calculations were therefore focused on a catalytic cracking reactor developed by Neste. The studied reactor consisted of prefluidisation and reaction zones. Verification calculations concentrated mainly on physical phenomena such as vaporization near the oil injection zone. The main steps of the cracking process can be described as follows: oil (liquid) -> oil (gas) -> oil (catal) -> product (catal) + char (catal) -> product (gas). The catalytic nature of the cracking reaction was accounted for by defining the cracking pseudoreaction in the catalyst phase. This simplified reaction model was valid only for the vaporization zone. The applied fluid dynamic theory was based on the results of EINCO's earlier LIEKKI projects. (author)

  1. COMPUTING

    CERN Multimedia

    M. Kasemann

    CCRC’08 challenges and CSA08 During the February campaign of the Common Computing Readiness Challenges (CCRC’08), the CMS computing team achieved very good results. The link between the detector site and the Tier0 was tested by gradually increasing the number of parallel transfer streams well beyond the target. Tests covered the global robustness at the Tier0, processing a massive number of very large files with a high writing speed to tapes.  Other tests covered the links between the different Tiers of the distributed infrastructure and the pre-staging and reprocessing capacity of the Tier1s: response time, data transfer rate and success rate for tape-to-buffer staging of files kept exclusively on tape were measured. In all cases, coordination with the sites was efficient and no serious problem was found. These successful tests prepared the ground for the second phase of the CCRC’08 campaign, in May. The Computing Software and Analysis challen...

  2. A Massively Parallel Computational Method of Reading Index Files for SOAPsnv.

    Science.gov (United States)

    Zhu, Xiaoqian; Peng, Shaoliang; Liu, Shaojie; Cui, Yingbo; Gu, Xiang; Gao, Ming; Fang, Lin; Fang, Xiaodong

    2015-12-01

    SOAPsnv is software used for identifying single nucleotide variations in cancer genes. However, its performance has yet to match the massive amount of data to be processed. Experiments reveal that the main performance bottleneck of the SOAPsnv software is the pileup algorithm, whose I/O process is time-consuming and reads input files inefficiently. Moreover, the scalability of the original pileup algorithm is poor. Therefore, we designed a new algorithm, named BamPileup, which improves on sequential reading by implementing an index-based parallel read mode. Using this method, each thread can read data directly starting from a specific position. The results of experiments on the Tianhe-2 supercomputer show that, when reading data in a multi-threaded parallel I/O way, the processing time of the algorithm is reduced to 3.9 s and the application can achieve a speedup of up to 100×. Moreover, the scalability of the new algorithm is satisfying.
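The index-based parallel read described above can be sketched as follows. This is an illustrative assumption of how such a scheme works, not the actual BamPileup or SOAPsnv API: an index of (byte offset, length) entries lets each worker thread seek straight to its region instead of scanning the file sequentially.

```python
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

def read_region(path, offset, length):
    """Each worker opens its own handle and seeks directly to its region."""
    with open(path, "rb") as f:
        f.seek(offset)
        return f.read(length)

def parallel_read(path, index, workers=4):
    """Read all indexed regions concurrently; results keep index order."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda entry: read_region(path, *entry), index))

# demo: a file of three fixed-size records and an index over them
records = [b"AAAA", b"CCCC", b"GGGG"]
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"".join(records))
index = [(i * 4, 4) for i in range(3)]
chunks = parallel_read(tmp.name, index)
assert chunks == records
os.unlink(tmp.name)
```

Because every thread holds its own file handle, there is no shared seek position to contend for, which is what makes the read phase embarrassingly parallel.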

  3. A computational simulation study on the acoustic pressure generated by a dental endosonic file: effects of intensity, file shape and volume.

    Science.gov (United States)

    Tiong, T Joyce; Price, Gareth J; Kanagasingam, Shalini

    2014-09-01

    One of the uses of ultrasound in dentistry is in the field of endodontics (i.e. root canal treatment), where it enhances cleaning efficiency during treatment. The acoustic pressures generated by the oscillation of files in narrow channels have been calculated using the COMSOL simulation package. Acoustic pressures in excess of the cavitation threshold can be generated, and higher values were found in narrower channels. This parallels experimental observations of sonochemiluminescence. The effects of varying the channel width and length and the dimensions and shape of the file are reported. As well as explaining experimental observations, the work provides a basis for the further development and optimisation of the design of endosonic files. Copyright © 2014 Elsevier B.V. All rights reserved.

  4. All-optical quantum computing with a hybrid solid-state processing unit

    CERN Document Server

    Pei, Pei; Li, Chong

    2011-01-01

    We develop an architecture for a hybrid solid-state quantum processing unit for universal quantum computing. The architecture allows distant and nonidentical solid-state qubits in distinct physical systems to interact and work collaboratively. All the quantum computing procedures are controlled by optical methods using classical fields and cavity QED. Our methods have the prominent advantage of insensitivity to dissipation processes, owing to the virtual excitation of the subsystems. Moreover, QND measurements and state transfer for the solid-state qubits are proposed. The architecture opens promising perspectives for implementing scalable quantum computation in the broader sense that different solid-state systems can be merged and integrated into one quantum processor.

  5. Students' Beliefs about Mobile Devices vs. Desktop Computers in South Korea and the United States

    Science.gov (United States)

    Sung, Eunmo; Mayer, Richard E.

    2012-01-01

    College students in the United States and in South Korea completed a 28-item multidimensional scaling (MDS) questionnaire in which they rated the similarity of 28 pairs of multimedia learning materials on a 10-point scale (e.g., narrated animation on a mobile device vs. movie clip on a desktop computer) and a 56-item semantic differential…

  7. Shifting forest value orientations in the United States, 1980-2001: A computer content analysis

    Science.gov (United States)

    David N. Bengston; Trevor J. Webb; David P. Fan

    2004-01-01

    This paper examines three forest value orientations - clusters of interrelated values and basic beliefs about forests - that emerged from an analysis of the public discourse about forest planning, management, and policy in the United States. The value orientations include anthropocentric, biocentric, and moral/spiritual/aesthetic orientations toward forests. Computer...

  8. Computer Phobia in Higher Education: A Comparative Analysis of United Kingdom and Turkish University Students

    Directory of Open Access Journals (Sweden)

    Ömer Faruk Ursavaş

    2011-12-01

    The possession or acquisition of a range of computer skills is an implicit assumption related to many undergraduate study programmes, and use of university computer facilities may impact overall academic performance and employability beyond graduation. This study therefore tested levels of computer anxiety (CARS) and computer thoughts (CTS) in Turkish and United Kingdom undergraduates with reference to culture group difference, regularity of use (or home use) and use of university computer facilities. A substantial minority of students (32-33%) reported computer anxiety in both groups, but more UK (41%) than Turkish students (21%) were deficient in positive self-concept (CTS). Reference to the subscales in the two measures pinpointed cultural differences disguised at scale level, and gender differences were evident across rather than within culture groups. As expected, positive self-concept was associated with use of computer facilities (r’s = 0 to 0.25, p < .001), and anxiety was associated more weakly with avoidance (r’s = 0 to -0.18, p < .001). Results suggest that computer confidence (implying motivation and engagement) should not be assumed to exist in the agenda for wider participation. Within- and between-group differences also indicate that there is no typical or stereotypical student profile in approach to computer activity.

  9. On the Computational Complexity of Degenerate Unit Distance Representations of Graphs

    Science.gov (United States)

    Horvat, Boris; Kratochvíl, Jan; Pisanski, Tomaž

    Some graphs admit drawings in the Euclidean k-space in such a (natural) way that edges are represented as line segments of unit length. Such embeddings are called k-dimensional unit distance representations. The embedding is strict if the distances of points representing nonadjacent pairs of vertices are different from 1. When two non-adjacent vertices are drawn at the same point, we say that the representation is degenerate. The computational complexity of nondegenerate embeddings has been studied before. We initiate the study of the computational complexity of (possibly) degenerate embeddings. In particular, we prove that for every k ≥ 2, deciding if an input graph has a (possibly) degenerate k-dimensional unit distance representation is NP-hard.
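While deciding whether such a representation exists is NP-hard, verifying a given drawing is straightforward. A small sketch (with illustrative names, not taken from the paper) that checks both the unit-length condition on edges and the strictness condition on non-adjacent pairs:

```python
import math
from itertools import combinations

def dist(p, q):
    """Euclidean distance between two points of any dimension k."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def is_unit_distance(coords, edges, tol=1e-9):
    """Return (edges_ok, strict): edges_ok means every edge has length 1;
    strict means every non-adjacent pair has distance different from 1."""
    edge_set = {frozenset(e) for e in edges}
    edges_ok = all(abs(dist(coords[u], coords[v]) - 1) <= tol
                   for u, v in edges)
    strict = all(abs(dist(coords[u], coords[v]) - 1) > tol
                 for u, v in combinations(coords, 2)
                 if frozenset((u, v)) not in edge_set)
    return edges_ok, strict

# A unit square in the plane: the 4-cycle has a strict 2-dimensional
# unit distance representation (the diagonals have length sqrt(2)).
square = {0: (0, 0), 1: (1, 0), 2: (1, 1), 3: (0, 1)}
cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
assert is_unit_distance(square, cycle) == (True, True)
```

Note that under these definitions a degenerate drawing (two non-adjacent vertices at the same point, distance 0) can still pass the strictness check, which is exactly why the degenerate case is treated separately in the paper.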

  10. Computer Science Teacher Professional Development in the United States: A Review of Studies Published between 2004 and 2014

    Science.gov (United States)

    Menekse, Muhsin

    2015-01-01

    While there has been a remarkable interest to make computer science a core K-12 academic subject in the United States, there is a shortage of K-12 computer science teachers to successfully implement computer sciences courses in schools. In order to enhance computer science teacher capacity, training programs have been offered through teacher…

  11. Computer Science Teacher Professional Development in the United States: A Review of Studies Published between 2004 and 2014

    Science.gov (United States)

    Menekse, Muhsin

    2015-01-01

    While there has been a remarkable interest to make computer science a core K-12 academic subject in the United States, there is a shortage of K-12 computer science teachers to successfully implement computer sciences courses in schools. In order to enhance computer science teacher capacity, training programs have been offered through teacher…

  12. Cone-beam Computed Tomographic Assessment of Canal Centering Ability and Transportation after Preparation with Twisted File and Bio RaCe Instrumentation.

    Directory of Open Access Journals (Sweden)

    Kiamars Honardar

    2014-08-01

    The use of rotary nickel-titanium (NiTi) instruments for endodontic preparation has introduced a new era in endodontic practice, but these instruments have undergone dramatic modifications in order to achieve improved shaping ability. Cone-beam computed tomography (CBCT) has made it possible to accurately evaluate geometrical changes following canal preparation. This study was carried out to compare the canal centering ability and transportation of the Twisted File and BioRaCe rotary systems by means of cone-beam computed tomography. Thirty root canals from freshly extracted mandibular and maxillary teeth were selected. Teeth were mounted and scanned by CBCT before and after preparation at different apical levels. Specimens were divided into 2 groups of 15. In the first group Twisted File and in the second BioRaCe was used for canal preparation. Canal transportation and centering ability after preparation were assessed with NNT Viewer and Photoshop CS4 software. Statistical analysis was performed using the t-test and two-way ANOVA. All samples showed deviations from the original axes of the canals. No significant differences were detected between the two rotary NiTi instruments for canal centering ability in any section. Regarding canal transportation, however, a significant difference was seen in the BioRaCe group at 7.5 mm from the apex. Under the conditions of this in vitro study, both Twisted File and BioRaCe rotary NiTi files retained the original canal geometry.

  13. 2015 Cartographic Boundary File, Metropolitan Statistical Area/Micropolitan Statistical Area for United States, 1:500,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2015 cartographic boundary shapefiles are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File /...

  14. 2014 Cartographic Boundary File, Metropolitan Statistical Area/Micropolitan Statistical Area for United States, 1:5,000,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2014 cartographic boundary shapefiles are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File /...

  15. 2014 Cartographic Boundary File, Metropolitan Statistical Area/Micropolitan Statistical Area for United States, 1:20,000,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2014 cartographic boundary shapefiles are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File /...

  16. 2014 Cartographic Boundary File, Metropolitan Statistical Area/Micropolitan Statistical Area for United States, 1:500,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2014 cartographic boundary shapefiles are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File /...

  17. 2015 Cartographic Boundary File, Metropolitan Statistical Area/Micropolitan Statistical Area for United States, 1:20,000,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2015 cartographic boundary KMLs are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File / Topologically...

  18. 2015 Cartographic Boundary File, Metropolitan Statistical Area/Micropolitan Statistical Area for United States, 1:5,000,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2015 cartographic boundary KMLs are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File / Topologically...

  19. 2014 Cartographic Boundary File, 5-Digit ZIP Code Tabulation Area for United States, 1:500,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2014 cartographic boundary KMLs are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File / Topologically...

  20. 2015 Cartographic Boundary File, Metropolitan Statistical Area/Micropolitan Statistical Area for United States, 1:20,000,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2015 cartographic boundary shapefiles are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File /...

  1. 2014 Cartographic Boundary File, New England City and Town Area for United States, 1:500,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2014 cartographic boundary shapefiles are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File /...

  2. 2016 Cartographic Boundary File, Current Metropolitan/Micropolitan Statistical Area for United States, 1:500,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2016 cartographic boundary KMLs are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File / Topologically...

  3. 2014 Cartographic Boundary File, Metropolitan Statistical Area/Micropolitan Statistical Area for United States, 1:20,000,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2014 cartographic boundary KMLs are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File / Topologically...

  4. 2015 Cartographic Boundary File, 5-Digit ZIP Code Tabulation Area for United States, 1:500,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2015 cartographic boundary shapefiles are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File /...

  5. 2014 Cartographic Boundary File, 5-Digit ZIP Code Tabulation Area for United States, 1:500,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2014 cartographic boundary shapefiles are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File /...

  6. 2015 Cartographic Boundary File, Metropolitan Statistical Area/Micropolitan Statistical Area for United States, 1:500,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2015 cartographic boundary KMLs are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File / Topologically...

  7. 2015 Cartographic Boundary File, New England City and Town Area for United States, 1:500,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2015 cartographic boundary KMLs are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File / Topologically...

  8. 2014 Cartographic Boundary File, Metropolitan Statistical Area/Micropolitan Statistical Area for United States, 1:500,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2014 cartographic boundary KMLs are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File / Topologically...

  9. 2015 Cartographic Boundary File, State-Congressional District-County for United States, 1:500,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2015 cartographic boundary shapefiles are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File /...

  10. 2016 Cartographic Boundary File, Current Metropolitan/Micropolitan Statistical Area for United States, 1:20,000,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2016 cartographic boundary KMLs are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File / Topologically...

  11. 2016 Cartographic Boundary File, Current Metropolitan/Micropolitan Statistical Area for United States, 1:500,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2016 cartographic boundary shapefiles are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File /...

  12. 2016 Cartographic Boundary File, Current Metropolitan/Micropolitan Statistical Area for United States, 1:5,000,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2016 cartographic boundary shapefiles are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File /...

  13. 2016 Cartographic Boundary File, Current Metropolitan/Micropolitan Statistical Area for United States,1:20,000,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2016 cartographic boundary shapefiles are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File /...

  14. 2014 Cartographic Boundary File, State-Congressional District-County for United States, 1:500,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2014 cartographic boundary shapefiles are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File /...

  15. 2016 Cartographic Boundary File, Current Metropolitan/Micropolitan Statistical Area for United States, 1:5,000,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2016 cartographic boundary KMLs are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File / Topologically...

  16. 2016 Cartographic Boundary File, Current New England City and Town Area for United States, 1:500,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2016 cartographic boundary shapefiles are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File /...

  17. 2016 Cartographic Boundary File, Current New England City and Town Area for United States, 1:500,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2016 cartographic boundary KMLs are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File / Topologically...

  18. 2014 Cartographic Boundary File, State-Congressional District-County for United States, 1:500,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2014 cartographic boundary KMLs are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File / Topologically...

  19. 2015 Cartographic Boundary File, Metropolitan Statistical Area/Micropolitan Statistical Area for United States, 1:5,000,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2015 cartographic boundary shapefiles are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File /...

  20. 2014 Cartographic Boundary File, New England City and Town Area for United States, 1:500,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2014 cartographic boundary KMLs are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File / Topologically...

  1. 2016 Cartographic Boundary File, 115th Congressional Districts within Current County and Equivalent for United States, 1:500,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2016 cartographic boundary shapefiles are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File /...

  2. 2014 Cartographic Boundary File, Metropolitan Statistical Area/Micropolitan Statistical Area for United States, 1:5,000,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2014 cartographic boundary KMLs are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File / Topologically...

  3. Computer availability and students' science achievement in Taiwan and the United States

    Science.gov (United States)

    Wen, Meichun Lydia

    The purpose of the study was to examine the differences associated with nationality, computer availability at school, and computer availability at home on eighth-grade students' science achievement. Achievement scores were obtained from the Third International Mathematics and Science Study-Repeat dataset for Taiwan and the United States (U.S.). One hundred thirty-seven schools in Taiwan and 152 schools in the U.S. were selected, with 5270 Taiwanese students and 6236 American students. A three-way analysis of variance was conducted using house weight to weight the selected sample. The dependent variable was the TIMSS 1999 science overall score, and the independent variables were nationality, four levels of number of students per computer, and two levels of computer availability at home. An omega squared (ω²) was calculated for each of the significant main effects. Follow-up analyses were included for statistically significant interactions. Descriptive statistics revealed that the average class size in Taiwan was significantly larger than the class size in the U.S. The statistical analysis found a difference in mean science achievement score between Taiwan and the United States, among the four levels of number of students per computer, and between the two levels of computer availability at home. Taiwanese students performed significantly better than American students (ω² = 5.8%). Students in the group with the fewest students per computer performed significantly better than the other three groups (ω² = 0.3%). The statistically significant difference among the levels of computer availability at school might be due to the large sample size rather than true differences among groups, because of the small amount of variance accounted for. Furthermore, students who had a computer at home had significantly higher achievement in science than those without a computer at home (ω² = 4.8%).
Statistically significant interactions were found between (1) nationality and
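The effect sizes reported in this record are omega squared values. A small sketch of the usual one-way ANOVA formula, assuming the standard definition ω² = (SS_effect − df_effect·MS_error) / (SS_total + MS_error); the function name and the toy numbers are illustrative, not from the study:

```python
def omega_squared(ss_effect, df_effect, ss_error, df_error):
    """Omega squared for a one-way design, where SS_total is the sum of
    the effect and error sums of squares."""
    ms_error = ss_error / df_error
    ss_total = ss_effect + ss_error
    return (ss_effect - df_effect * ms_error) / (ss_total + ms_error)

# toy numbers: a sizeable effect relative to error variance
assert round(omega_squared(ss_effect=100.0, df_effect=1,
                           ss_error=300.0, df_error=150), 3) == 0.244
```

Unlike eta squared, ω² subtracts the error expected by chance, which is why it is preferred as a less biased population effect-size estimate in large-sample ANOVA designs like this one.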

  4. Computer-delivered patient simulations in the United States Medical Licensing Examination (USMLE).

    Science.gov (United States)

    Dillon, Gerard F; Clauser, Brian E

    2009-01-01

    To obtain a full and unrestricted license to practice medicine in the United States, students and graduates of MD-granting US medical schools and of medical schools located outside the United States must take and pass the United States Medical Licensing Examination (USMLE). The USMLE began as a series of paper-and-pencil examinations in the early 1990s and converted to computer delivery in 1999. With this change to the computerized format came the opportunity to introduce computer-simulated patients, which had been under development at the National Board of Medical Examiners for a number of years. This testing format, called a computer-based case simulation, requires the examinee to manage a simulated patient in simulated time. The examinee can select options for history-taking and physical examination. Diagnostic studies and treatment are ordered via free-text entry, and the examinee controls the advance of simulated time and the location of the patient in the health care setting. Although the inclusion of this format has brought a number of practical, psychometric, and security challenges, its addition has allowed a significant expansion in ways to assess examinees on their diagnostic decision making and therapeutic intervention skills and on developing and implementing a reasonable patient management plan.

  5. 78 FR 47011 - Software Unit Testing for Digital Computer Software Used in Safety Systems of Nuclear Power Plants

    Science.gov (United States)

    2013-08-02

    ... COMMISSION Software Unit Testing for Digital Computer Software Used in Safety Systems of Nuclear Power Plants..., ``Software Unit Testing for Digital Computer Software Used in Safety Systems of Nuclear Power Plants.'' This... software elements if those systems include software. This RG is one of six RG revisions addressing...

  6. Evaluation of the Efficacy of TRUShape and Reciproc File Systems in the Removal of Root Filling Material: An Ex Vivo Micro-Computed Tomographic Study.

    Science.gov (United States)

    de Siqueira Zuolo, Arthur; Zuolo, Mario Luis; da Silveira Bueno, Carlos Eduardo; Chu, Rene; Cunha, Rodrigo Sanches

    2016-02-01

    The purpose of this study was to evaluate the efficacy of TRUShape (Dentsply Tulsa Dental Specialties, Tulsa, OK) compared with the Reciproc file (VDW, Munich, Germany) in the removal of filling material from oval canals filled with 2 different sealers, and to compare differences in working time. Sixty-four mandibular canines with oval canals were prepared and divided into 4 groups (n = 16). Half of the specimens were filled with gutta-percha and pulp canal sealer (PCS), and the remainder were filled with gutta-percha and bioceramic sealer (BCS). The specimens were retreated using either the Reciproc or TRUShape files. A micro-computed tomographic scanner was used to assess filling material removal, and the time taken for removal was also recorded. Data were analyzed using the Kruskal-Wallis and Mann-Whitney U tests. The mean volume of the remaining filling material was similar when comparing both files (P ≥ .05). However, in the groups filled with BCS, the percentage of remaining filling material was higher than in the groups filled with PCS (P < .05). There was no difference in the removal of filling material when comparing the two file systems; however, Reciproc was faster than TRUShape. The BCS groups exhibited significantly more remaining filling material in the canals and required more time for retreatment. Remaining filling material was observed in all samples regardless of the technique or sealer used. Copyright © 2016 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.

  7. Unit cell-based computer-aided manufacturing system for tissue engineering.

    Science.gov (United States)

    Kang, Hyun-Wook; Park, Jeong Hun; Kang, Tae-Yun; Seol, Young-Joon; Cho, Dong-Woo

    2012-03-01

    Scaffolds play an important role in the regeneration of artificial tissues or organs. A scaffold is a porous structure with a micro-scale inner architecture in the range of several to several hundred micrometers. Therefore, computer-aided construction of scaffolds should provide sophisticated functionality for porous structure design and a tool-path generation strategy that can achieve micro-scale architecture. In this study, a new unit cell-based computer-aided manufacturing (CAM) system was developed for the automated design and fabrication of a porous structure with micro-scale inner architecture that can be applied to composite tissue regeneration. The CAM system was developed by first defining a data structure for computing with a unit cell that represents a single pore. Next, an algorithm and software were developed and applied to construct porous structures with single or multiple pore designs, using solid freeform fabrication technology and a 3D tooth/spine computer-aided design model. We showed that this system is quite feasible for the design and fabrication of scaffolds for tissue engineering.
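A minimal sketch of what such a unit-cell data structure might look like. The class, its fields, and the tiling helper are hypothetical illustrations of the approach (one cell describes a single pore; replication yields the porous block), not the CAM system's actual design:

```python
from dataclasses import dataclass

@dataclass
class UnitCell:
    size_um: float   # outer edge length of the cubic cell, in micrometers
    pore_um: float   # edge length of the cubic inner pore

    @property
    def porosity(self):
        """Void fraction of a single cell (pore volume / cell volume)."""
        return (self.pore_um / self.size_um) ** 3

def tile(cell, nx, ny, nz):
    """Replicate one cell into an nx x ny x nz block; returns the origin
    of each cell, which a tool-path generator could then traverse."""
    s = cell.size_um
    return [(i * s, j * s, k * s)
            for i in range(nx) for j in range(ny) for k in range(nz)]

# a 300-um cell with a 150-um pore, tiled into a 2 x 2 x 2 block
cell = UnitCell(size_um=300.0, pore_um=150.0)
origins = tile(cell, 2, 2, 2)
assert len(origins) == 8
assert abs(cell.porosity - 0.125) < 1e-12
```

Keeping the pore geometry in one reusable cell is what makes the design automatable: changing a single field changes every pore in the scaffold consistently.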

  8. Accelerated multidimensional radiofrequency pulse design for parallel transmission using concurrent computation on multiple graphics processing units.

    Science.gov (United States)

    Deng, Weiran; Yang, Cungeng; Stenger, V Andrew

    2011-02-01

    Multidimensional radiofrequency (RF) pulses are of current interest because of their promise for improving high-field imaging and for optimizing parallel transmission methods. One major drawback is that the computation time of numerically designed multidimensional RF pulses increases rapidly with their resolution and number of transmitters. This is critical because the construction of multidimensional RF pulses often needs to be in real time. The use of graphics processing units for computations is a recent approach for accelerating image reconstruction applications. We propose the use of graphics processing units for the design of multidimensional RF pulses including the utilization of parallel transmitters. Using a desktop computer with four NVIDIA Tesla C1060 computing processors, we found acceleration factors on the order of 20 for standard eight-transmitter two-dimensional spiral RF pulses with a 64 × 64 excitation resolution and a 10-μsec dwell time. We also show that even greater acceleration factors can be achieved for more complex RF pulses. Copyright © 2010 Wiley-Liss, Inc.

  9. Structure and Application of WAV File

    Institute of Scientific and Technical Information of China (English)

    Guo,Xingji

    2005-01-01

    With regard to audio digitization, the researchers introduce several means by which computers process audio information and then, after a thorough analysis of the structure of the widely used WAV file format, present application patterns based on WAV files.
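The WAV structure discussed in this record can be inspected directly with standard tools. As a sketch, this writes a short PCM file with Python's stdlib `wave` module and reads the canonical 44-byte RIFF header back with `struct`: the "RIFF"/"WAVE" identifiers, then the "fmt " subchunk carrying the audio format, channel count, and sample rate.

```python
import struct
import wave
import io

# write one second of silent 16-bit mono PCM at 8 kHz into memory
buf = io.BytesIO()
with wave.open(buf, "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)      # 2 bytes = 16-bit samples
    w.setframerate(8000)
    w.writeframes(b"\x00\x00" * 8000)

# the canonical PCM WAV header occupies the first 44 bytes
header = buf.getvalue()[:44]
riff, _riff_size, wave_id = struct.unpack("<4sI4s", header[:12])
fmt_id, _fmt_size, audio_fmt, channels, rate = struct.unpack(
    "<4sIHHI", header[12:28])

assert riff == b"RIFF" and wave_id == b"WAVE" and fmt_id == b"fmt "
assert audio_fmt == 1          # 1 = uncompressed linear PCM
assert channels == 1 and rate == 8000
```

All fields are little-endian (the `<` in the format strings), which is part of what made the RIFF container so convenient on the PC platforms where WAV originated.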

  10. Computing the Density Matrix in Electronic Structure Theory on Graphics Processing Units.

    Science.gov (United States)

    Cawkwell, M J; Sanville, E J; Mniszewski, S M; Niklasson, Anders M N

    2012-11-13

    The self-consistent solution of a Schrödinger-like equation for the density matrix is a critical and computationally demanding step in quantum-based models of interatomic bonding. This step was tackled historically via the diagonalization of the Hamiltonian. We have investigated the performance and accuracy of the second-order spectral projection (SP2) algorithm for the computation of the density matrix via a recursive expansion of the Fermi operator in a series of generalized matrix-matrix multiplications. We demonstrate that owing to its simplicity, the SP2 algorithm [Niklasson, A. M. N. Phys. Rev. B 2002, 66, 155115] is exceptionally well suited to implementation on graphics processing units (GPUs). The performance in double and single precision arithmetic of hybrid GPU/central processing unit (CPU) and full GPU implementations of the SP2 algorithm exceeds that of a CPU-only implementation of the SP2 algorithm and of traditional matrix diagonalization when the matrix dimensions exceed about 2000 × 2000. Padding schemes for arrays allocated in the GPU memory that optimize the performance of the CUBLAS implementations of the level 3 BLAS DGEMM and SGEMM subroutines for generalized matrix-matrix multiplications are described in detail. The analysis of the relative performance of the hybrid CPU/GPU and full GPU implementations indicates that the transfer of arrays between the GPU and CPU constitutes only a small fraction of the total computation time. The errors measured in the self-consistent density matrices computed using the SP2 algorithm are generally smaller than those measured in matrices computed via diagonalization. Furthermore, the errors in the density matrices computed using the SP2 algorithm do not exhibit any dependence on system size, whereas the errors increase linearly with the number of orbitals when diagonalization is employed.

  11. Computer use and vision-related problems among university students in Ajman, United Arab Emirates.

    Science.gov (United States)

    Shantakumari, N; Eldeeb, R; Sreedharan, J; Gopal, K

    2014-03-01

    The extensive use of computers as a medium of teaching and learning in universities necessitates introspection into the extent of computer-related health disorders among the student population. This study was undertaken to assess the pattern of computer usage and related visual problems among university students in Ajman, United Arab Emirates. A total of 500 students studying in Gulf Medical University, Ajman and Ajman University of Science and Technology were recruited into this study. Demographic characteristics, pattern of computer usage, and associated visual symptoms were recorded in a validated self-administered questionnaire. The chi-square test was used to determine the significance of the observed differences between the variables; the level of statistical significance was set at P < 0.05. The visual problems reported among computer users were headache - 53.3% (251/471), burning sensation in the eyes - 54.8% (258/471), and tired eyes - 48% (226/471). Female students were found to be at a higher risk. Nearly 72% of students reported frequent interruption of computer work. Headache caused interruption of work in 43.85% (110/168) of the students, while tired eyes caused interruption of work in 43.5% (98/168) of the students. When the screen was viewed at a distance of more than 50 cm, the prevalence of headaches decreased by 38% (50-100 cm - OR: 0.62, 95% confidence interval [CI]: 0.42-0.92). The prevalence of tired eyes increased by 89% when screen filters were not used (OR: 1.894, 95% CI: 1.065-3.368). A high prevalence of vision-related problems was noted among university students. Sustained periods of close screen work without screen filters were found to be associated with the occurrence of the symptoms and increased interruptions of the students' work. There is a need to increase ergonomic awareness among students, and corrective measures need to be implemented to reduce the impact of computer-related vision problems.
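
    The odds ratios quoted above follow the standard 2×2-table computation with a Wald confidence interval on the log scale. A minimal sketch, using hypothetical counts rather than the study's actual data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table [[a, b], [c, d]] (exposed/unexposed vs
    cases/non-cases) with a Wald 95% CI computed on the log scale."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)      # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts for illustration only
or_, lo, hi = odds_ratio_ci(10, 20, 15, 12)
```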

  12. 76 FR 39757 - Filing Procedures

    Science.gov (United States)

    2011-07-06

    ... an accurate filing history can be maintained. (b) Changes to user information other than the Firm... computer or internet resources. If the request is granted, the Secretary will promptly inform you and... applicable hardware and software requirements for electronic filing. (2) To file or submit a document in...

  13. Sensitivity Data File Formats

    Energy Technology Data Exchange (ETDEWEB)

    Rearden, Bradley T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-04-01

    The format of the TSUNAMI-A sensitivity data file produced by SAMS for cases with deterministic transport solutions is given in Table 6.3.A.1. The occurrence of each entry in the data file is followed by an identification of the data contained on each line of the file and the FORTRAN edit descriptor denoting the format of each line. A brief description of each line is also presented. A sample of the TSUNAMI-A data file for the Flattop-25 sample problem is provided in Figure 6.3.A.1. Here, only two profiles out of the 130 computed are shown.
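
    Fixed-format records of this kind can be read by slicing on field widths that mirror the FORTRAN edit descriptors. The snippet below is a generic sketch; the field layout shown is hypothetical, not the actual TSUNAMI-A descriptor.

```python
def parse_fixed(line, widths):
    """Split a fixed-width FORTRAN-style record into stripped fields.
    `widths` mirrors an edit descriptor such as (I6, I6, E15.8)."""
    fields, pos = [], 0
    for w in widths:
        fields.append(line[pos:pos + w].strip())
        pos += w
    return fields

# Hypothetical record laid out as (2I6, E15.8)
line = "     1     3 1.23456789E+00"
fields = parse_fixed(line, [6, 6, 15])
```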

  14. The emergence of computer science instructional units in American colleges and universities (1950--1975): A history

    Science.gov (United States)

    Conners, Susan Elaine

    The purpose and scope of this dissertation is to investigate the origins and development of academic computer science units in American higher education and examine the intent and structure of their curricula. Specifically, the study examines selected undergraduate and graduate curricula that developed from 1950 to 1975. This dissertation examines several of the earliest academic units formed and the issues surrounding their formation. This study examines the variety of courses and programs that existed among the early computer science programs. The actual titles of the units varied, but they shared a common overarching goal: to study computers. The departments formed through various methods, and some units were subsets of other departments. Faculties of these new units were often composed of faculty members from various other disciplines. This dissertation is an exploration of the connections between a variety of diverse institutions and the new computer science discipline that formed from these early academic roots. While much has been written about the history of hardware and software development and the individual pioneers in the relatively new computer science discipline, the history of the academic units was documented primarily on the basis of individual institutions. This study uses a wider lens to examine the patterns of these early academic units as they formed and became computer science units. The successes of these early pioneers resulted in a proliferation of academic computer programs in the following decades. The curricular debates continue as the number and purposes of these programs continue to expand. This dissertation seeks to provide useful information for future curricular decisions by examining the roots of the academic computer science units.

  15. Quality control and dosimetry in computed tomography units; Controle de qualidade e dosimetria em equipamentos de tomografia computadorizada

    Energy Technology Data Exchange (ETDEWEB)

    Pina, Diana Rodrigues de; Ribeiro, Sergio Marrone [UNESP, Botucatu, SP (Brazil). Faculdade de Medicina], e-mail: drpina@fmb.unesp.br; Duarte, Sergio Barbosa [Centro Brasileiro e Pesquisas Fisicas (CBPF), Rio de Janeiro, RJ (Brazil); Netto, Thomaz Ghilardi [Universidade de Sao Paulo (USP), Ribeirao Preto, SP (Brazil). Hospital das Clinicas. Centro de Ciencias das Imagens e Fisica Medica; Morceli, Jose [UNESP, Botucatu, SP (Brazil). Faculdade de Medicina. Secao de Diagnostico por Imagem; Carbi, Eros Duarte Ortigoso; Costa Neto, Andre; Souza, Rafael Toledo Fernandes de [UNESP, Botucatu, SP (Brazil). Inst. de Biociencias

    2009-05-15

    Objective: To evaluate equipment conditions and dosimetry in computed tomography services, utilizing protocols for head, abdomen, and lumbar spine in adult patients (in three different units) and pediatric patients up to 18 months of age (in one of the units evaluated). Materials and methods: The computed tomography dose index and multiple-scan average dose were estimated in studies of adult patients with three different units. Additionally, entrance surface doses as well as absorbed doses were estimated in head studies for both adult and pediatric patients in a single computed tomography unit. Results: Mechanical quality control tests were performed, demonstrating that the computed tomography units comply with the equipment-use specifications established by the current standards. Dosimetry results demonstrated that the multiple-scan average dose values exceeded the reference levels by up to 109.0%, presenting considerable variation amongst the computed tomography units evaluated in the present study. Absorbed doses obtained with pediatric protocols are lower than those obtained with adult protocols, presenting a reduction of up to 51.0% in the thyroid gland. Conclusion: The present study analyzed the operational conditions of three computed tomography units, establishing which parameters should be set for the deployment of a quality control program in the institutions where this study was developed. (author)
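
    The dose indices above derive from standard definitions: the weighted CT dose index combines centre and periphery measurements, and the volume index divides by the helical pitch. A minimal sketch with illustrative values:

```python
def ctdi_w(ctdi_center, ctdi_periphery):
    """Weighted CT dose index (IEC definition):
    one third centre plus two thirds periphery."""
    return ctdi_center / 3.0 + 2.0 * ctdi_periphery / 3.0

def ctdi_vol(ctdi_w_value, pitch):
    """Volume CTDI for helical scanning: CTDI_w divided by the pitch."""
    return ctdi_w_value / pitch

# Illustrative measurements in mGy
w = ctdi_w(30.0, 60.0)       # 50.0 mGy
v = ctdi_vol(w, 1.25)        # 40.0 mGy
```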

  16. Real-Time Computation of Parameter Fitting and Image Reconstruction Using Graphical Processing Units

    CERN Document Server

    Locans, Uldis; Suter, Andreas; Fischer, Jannis; Lustermann, Werner; Dissertori, Gunther; Wang, Qiulin

    2016-01-01

    In recent years graphical processing units (GPUs) have become a powerful tool in scientific computing. Their potential to speed up highly parallel applications brings the power of high performance computing to a wider range of users. However, programming these devices and integrating their use in existing applications is still a challenging task. In this paper we examined the potential of GPUs for two different applications. The first application, created at Paul Scherrer Institut (PSI), is used for parameter fitting during data analysis of muSR (muon spin rotation, relaxation and resonance) experiments. The second application, developed at ETH, is used for PET (Positron Emission Tomography) image reconstruction and analysis. Applications currently in use were examined to identify parts of the algorithms in need of optimization. Efficient GPU kernels were created in order to allow applications to use a GPU, to speed up the previously identified parts. Benchmarking tests were performed in order to measure the ...
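
    Parameter fitting of this kind is embarrassingly parallel: the chi-squared of every candidate parameter set can be evaluated independently, which is what maps well onto a GPU. Below is a serial NumPy sketch of a grid-based fit to a synthetic muSR-style asymmetry signal; the model and grids are illustrative assumptions, not PSI's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic muSR-style asymmetry signal: exp(-lam*t) * cos(w*t) plus noise
t = np.linspace(0.0, 10.0, 400)
true_lam, true_w = 0.3, 2.0
data = np.exp(-true_lam * t) * np.cos(true_w * t) + 0.01 * rng.normal(size=t.size)

# Brute-force chi^2 over a (lam, w) grid; every grid point is independent,
# which is exactly the structure a GPU evaluates in parallel
lams = np.linspace(0.1, 0.6, 51)
ws = np.linspace(1.5, 2.5, 51)
L, W = np.meshgrid(lams, ws, indexing="ij")
model = np.exp(-L[..., None] * t) * np.cos(W[..., None] * t)   # (51, 51, 400)
chi2 = ((model - data) ** 2).sum(axis=-1)
i, j = np.unravel_index(np.argmin(chi2), chi2.shape)
best_lam, best_w = lams[i], ws[j]
```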

  17. Computer-aided modeling of aluminophosphate zeolites as packings of building units

    KAUST Repository

    Peskov, Maxim

    2012-03-22

    New building schemes of aluminophosphate molecular sieves from packing units (PUs) are proposed. We have investigated 61 framework types discovered in zeolite-like aluminophosphates and have identified important PU combinations using a recently implemented computational algorithm of the TOPOS package. All PUs whose packing completely determines the overall topology of the aluminophosphate framework were described and catalogued. We have enumerated 235 building models for the aluminophosphates belonging to 61 zeolite framework types, from ring- or cage-like PU clusters. It is indicated that PUs can be considered as precursor species in the zeolite synthesis processes. © 2012 American Chemical Society.

  18. Hounsfield Unit inaccuracy in computed tomography lesion size and density, diagnostic quality vs attenuation correction

    Science.gov (United States)

    Szczepura, Katy; Thompson, John; Manning, David

    2017-03-01

    In computed tomography the Hounsfield Units (HU) are used as an indicator of the tissue type based on the linear attenuation coefficients of the tissue. HU accuracy is essential when this metric is used in any form to support diagnosis. In hybrid imaging, such as SPECT/CT and PET/CT, the information is used for attenuation correction (AC) of the emission images. This work investigates the HU accuracy of nodules of known size and HU, comparing diagnostic quality (DQ) images with images used for AC.
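
    For reference, the HU scale is defined directly from linear attenuation coefficients, normalized so that water is 0 HU and air is approximately -1000 HU:

```python
def hounsfield(mu, mu_water):
    """CT number in Hounsfield Units from linear attenuation coefficients:
    HU = 1000 * (mu - mu_water) / mu_water."""
    return 1000.0 * (mu - mu_water) / mu_water

# Water maps to 0 HU; a vanishing coefficient (air) maps to -1000 HU
hu_water = hounsfield(0.2, 0.2)
hu_air = hounsfield(0.0, 0.2)
```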

  19. Energy- and cost-efficient lattice-QCD computations using graphics processing units

    Energy Technology Data Exchange (ETDEWEB)

    Bach, Matthias

    2014-07-01

    Quarks and gluons are the building blocks of all hadronic matter, like protons and neutrons. Their interaction is described by Quantum Chromodynamics (QCD), a theory under test by large-scale experiments like the Large Hadron Collider (LHC) at CERN and, in the future, at the Facility for Antiproton and Ion Research (FAIR) at GSI. However, perturbative methods can only be applied to QCD at high energies. Studies from first principles are possible via a discretization onto a Euclidean space-time grid. This discretization of QCD is called Lattice QCD (LQCD) and is the only ab-initio option outside of the high-energy regime. LQCD is extremely compute and memory intensive. In particular, it is by definition always bandwidth-limited. Thus - despite the complexity of LQCD applications - it led to the development of several specialized compute platforms and influenced the development of others. However, in recent years General-Purpose computation on Graphics Processing Units (GPGPU) came up as a new means for parallel computing. Contrary to machines traditionally used for LQCD, graphics processing units (GPUs) are a mass-market product. This promises advantages in both the pace at which higher-performing hardware becomes available and its price. CL2QCD is an OpenCL-based implementation of LQCD using Wilson fermions that was developed within this thesis. It operates on GPUs by all major vendors as well as on central processing units (CPUs). On the AMD Radeon HD 7970 it provides the fastest double-precision D kernel for a single GPU, achieving 120 GFLOPS. D - the most compute-intensive kernel in LQCD simulations - is commonly used to compare LQCD platforms. This performance is enabled by an in-depth analysis of optimization techniques for bandwidth-limited codes on GPUs. Further, analysis of the communication between GPU and CPU, as well as between multiple GPUs, enables high-performance Krylov space solvers and linear scaling to multiple GPUs within a single system. LQCD
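
    The bandwidth-limited character of LQCD kernels is captured by the naive roofline bound: attainable throughput is the smaller of the peak arithmetic rate and the rate at which memory bandwidth can deliver operands. The numbers below (peak, bandwidth, arithmetic intensity) are illustrative assumptions, not measured values from the thesis.

```python
def attainable_gflops(peak_gflops, bandwidth_gbs, flops_per_byte):
    """Naive roofline bound: a kernel can exceed neither the peak FLOP
    rate nor (arithmetic intensity) x (memory bandwidth)."""
    return min(peak_gflops, bandwidth_gbs * flops_per_byte)

# Illustrative HD 7970-class numbers (assumed): ~947 DP GFLOPS peak,
# ~264 GB/s bandwidth, and a low intensity typical of stencil kernels
bound = attainable_gflops(947.0, 264.0, 0.9)
```

Real kernels typically reach only a fraction of this bound, which is why the padding and memory-layout optimizations discussed in the thesis matter.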

  20. USGS Small-scale Dataset - Global Map: Ports of the United States 201406 FileGDB 10.1

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer includes Global Map data showing ferry ports in the United States and Puerto Rico. The data are a modified version of the National Atlas of the United...

  1. Introduction to Hadoop Distributed File System

    Directory of Open Access Journals (Sweden)

    Vaibhav Gopal korat

    2012-04-01

    HDFS is a distributed file system designed to hold very large amounts of data (terabytes or even petabytes) and provide high-throughput access to this information. Files are stored in a redundant fashion across multiple machines to ensure their durability under failure and high availability to highly parallel applications. This paper provides a step-by-step introduction from the conventional file system to the distributed file system and on to the Hadoop Distributed File System. Section I introduces what a file system is, the need for a file system, the conventional file system and its advantages, the need for a distributed file system, what a distributed file system is, and the benefits of a distributed file system. It also presents an analysis of large datasets and a comparison of MapReduce with the RDBMS, HPC, and Grid Computing communities, which have been doing large-scale data processing for years. Section II introduces the concept of the Hadoop Distributed File System. Lastly, Section III contains the conclusion, followed by the references.
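
    The redundancy described above can be caricatured as a toy block-placement routine: split the file into fixed-size blocks and store each block on several distinct nodes. This is only a sketch of the idea; HDFS's real placement policy is rack-aware and considerably more involved.

```python
import hashlib

def place_blocks(file_size, block_size, datanodes, replication=3):
    """Toy HDFS-style placement: split a file into fixed-size blocks and
    assign each block to `replication` distinct datanodes (requires
    replication <= len(datanodes))."""
    n_blocks = -(-file_size // block_size)           # ceiling division
    placement = {}
    for b in range(n_blocks):
        # Deterministic pseudo-random spread of replicas across nodes
        start = int(hashlib.md5(str(b).encode()).hexdigest(), 16) % len(datanodes)
        placement[b] = [datanodes[(start + r) % len(datanodes)]
                        for r in range(replication)]
    return placement
```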

  2. Chemical Equilibrium, Unit 2: Le Chatelier's Principle. A Computer-Enriched Module for Introductory Chemistry. Student's Guide and Teacher's Guide.

    Science.gov (United States)

    Jameson, A. Keith

    Presented are the teacher's guide and student materials for one of a series of self-instructional, computer-based learning modules for an introductory, undergraduate chemistry course. The student manual for this unit on Le Chatelier's principle includes objectives, prerequisites, pretest, instructions for executing the computer program, and…

  3. Improved computational performance of MFA using elementary metabolite units and flux coupling.

    Science.gov (United States)

    Suthers, Patrick F; Chang, Young J; Maranas, Costas D

    2010-03-01

    Extending the scope of isotope mapping models becomes increasingly important in order to analyze strains and drive improved product yields as more complex pathways are engineered into strains and as secondary metabolites are used as starting points for new products. Here we present how the elementary metabolite unit (EMU) framework and flux coupling significantly decrease the computational burden of metabolic flux analysis (MFA) when applied to large-scale metabolic models. We applied these techniques to a previously published isotope mapping model of Escherichia coli accounting for 238 reactions. We find that the combined use of EMU and flux coupling analysis leads to a ten-fold decrease in the number of variables in comparison to the original isotope distribution vector (IDV) version of the model. In addition, using OptMeas, the task of identifying additional measurement choices to fully specify the flows in the metabolic network required only 2% of the computation time needed when using IDVs. The observed computational savings reveal the rapid progress in performing MFA with increasingly larger isotope models, with the ultimate goal of handling genome-scale models of metabolism. (c) 2009 Elsevier Inc. All rights reserved.
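
    Flux coupling reduces variables because fully coupled reactions carry proportional fluxes in every steady state, so one flux can stand in for the whole group. A small NumPy sketch of detecting such pairs from the nullspace of the stoichiometric matrix, on a toy network rather than the E. coli model:

```python
import numpy as np

def fully_coupled_pairs(S, tol=1e-9):
    """Sketch of full flux-coupling detection: reactions i, j are fully
    coupled when v_i / v_j is fixed in every steady-state flux vector
    (S @ v = 0), i.e. their rows in a nullspace basis are parallel."""
    _, s, vt = np.linalg.svd(S)
    rank = int((s > tol).sum())
    null = vt[rank:].T                  # shape (n_reactions, dim_nullspace)
    pairs = []
    for i in range(null.shape[0]):
        for j in range(i + 1, null.shape[0]):
            ni, nj = np.linalg.norm(null[i]), np.linalg.norm(null[j])
            if ni > tol and nj > tol:
                cos = abs(null[i] @ null[j]) / (ni * nj)
                if abs(cos - 1.0) < 1e-8:
                    pairs.append((i, j))
    return pairs
```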

  4. Computer-assisted learning and simulation lab with 40 DentSim units.

    Science.gov (United States)

    Welk, A; Maggio, M P; Simon, J F; Scarbecz, M; Harrison, J A; Wicks, R A; Gilpatrick, R O

    2008-01-01

    There is an increasing number of studies about the computer-assisted dental patient simulator DentSim (DenX, Israel), with which dental students can acquire cognitive motor skills in a multimedia environment. However, only very few studies have been published dealing with efficient ways to use and manage a computer-assisted dental simulation lab with 40 DentSim units. The current approach and optimization steps of the College of Dentistry at the University of Tennessee Health Science Center were evaluated based on theoretical and practical tests and by questionnaires (partial 5-point Likert scale). Half of the D1 (first-year) students (2004/05) already had experience with computer-assisted learning at their undergraduate college, and most of the students even expected to be taught via computer-assisted learning systems (83.5%) at the dental school. 87.3% of the students working with DentSim found the experience to be very interesting or interesting. Before the students carried out the preparation exercises, they were trained in the skills they needed to work with the sophisticated technology, e.g., system-specific operation skills (66.6% attained the maximal reachable points) and information searching skills (79.5% attained the maximal reachable points). The indirect knowledge retention rate / incidental learning rate of the preparation exercises, in the sense of computer-assisted problem-oriented learning regarding anatomy, preparation procedures, and cavity design, was promising. The wide-ranging number of prepared teeth needed to acquire the necessary skills shows the varied individual learning curves of the students. The acceptance of, and response to, additional elective training time in the computer-assisted simulation lab were very high. Integrating the DentSim technology into the existing curriculum is a way to improve dental education, but it is also a challenge for both teachers and students. It requires a shift in both curriculum and instructional goals that

  5. Computational and Pharmacological Target of Neurovascular Unit for Drug Design and Delivery

    Directory of Open Access Journals (Sweden)

    Md. Mirazul Islam

    2015-01-01

    The blood-brain barrier (BBB) is a dynamic and highly selective permeable interface between the central nervous system (CNS) and the periphery that regulates brain homeostasis. Increasing evidence of neurological disorders and the restricted drug delivery process in the brain make the BBB a special target for further study. At present, the neurovascular unit (NVU) is of great interest to pharmaceutical companies and a highlighted topic for CNS drug design and delivery approaches. Recent advances in pharmacology and computational biology make it convenient to develop drugs within limited time and at affordable cost. In this review, we briefly introduce the current understanding of the NVU, including its molecular and cellular composition, physiology, and regulatory function. We also discuss recent technology and the interaction of pharmacogenomics and bioinformatics for drug design and steps towards personalized medicine. Additionally, we develop a gene network to understand NVU-associated transporter protein interactions, which might be effective for understanding the aetiology of neurological disorders and for the development and delivery of new target-based protective therapies.

  6. Computational and Pharmacological Target of Neurovascular Unit for Drug Design and Delivery.

    Science.gov (United States)

    Islam, Md Mirazul; Mohamed, Zahurin

    2015-01-01

    The blood-brain barrier (BBB) is a dynamic and highly selective permeable interface between the central nervous system (CNS) and the periphery that regulates brain homeostasis. Increasing evidence of neurological disorders and the restricted drug delivery process in the brain make the BBB a special target for further study. At present, the neurovascular unit (NVU) is of great interest to pharmaceutical companies and a highlighted topic for CNS drug design and delivery approaches. Recent advances in pharmacology and computational biology make it convenient to develop drugs within limited time and at affordable cost. In this review, we briefly introduce the current understanding of the NVU, including its molecular and cellular composition, physiology, and regulatory function. We also discuss recent technology and the interaction of pharmacogenomics and bioinformatics for drug design and steps towards personalized medicine. Additionally, we develop a gene network to understand NVU-associated transporter protein interactions, which might be effective for understanding the aetiology of neurological disorders and for the development and delivery of new target-based protective therapies.

  7. Utero-fetal unit and pregnant woman modeling using a computer graphics approach for dosimetry studies.

    Science.gov (United States)

    Anquez, Jérémie; Boubekeur, Tamy; Bibin, Lazar; Angelini, Elsa; Bloch, Isabelle

    2009-01-01

    Potential health effects related to electromagnetic field exposure raise public concern, especially for fetuses during pregnancy. Human fetus exposure can only be assessed through simulated dosimetry studies, performed on anthropomorphic models of pregnant women. In this paper, we propose a new methodology to generate a set of detailed utero-fetal unit (UFU) 3D models during the first and third trimesters of pregnancy, based on segmented 3D ultrasound and MRI data. UFU models are built using recent geometry processing methods derived from mesh-based computer graphics techniques and embedded in a synthetic woman body. Nine pregnant woman models have been generated using this approach and validated by obstetricians for anatomical accuracy and representativeness.

  8. USGS Small-scale Dataset - 1:1,000,000-Scale Hydrographic Geodatabase of the United States - Conterminous United States 201403 FileGDB 10.1

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This geodatabase contains streams, waterbodies and wetlands, streamflow gaging stations, and coastlines for the conterminous United States. The streams are...

  9. The contribution of motor unit pairs to the correlation functions computed from surface myoelectric signals.

    Science.gov (United States)

    González-Cueto, José A; Erim, Zeynep

    2005-11-01

    The contribution of motor unit action potential trains (MUAPTs) of distinct motor units (MUs) to the cross-correlation function between myoelectric signals (MES) recorded at the skin surface is studied. Specifically, the significance of the correlation between the firing activity of concurrently active MUs (which results in cross-terms in the overall correlation function) is compared to the representation obtained using the contributions of single MUs at each recording site (auto-terms). A model for the generation of surface MUAPs is combined with the generation of MU firing statistics in order to obtain surface MUAPTs. MU firing statistics are simulated to incorporate MU synchronization levels reported in the literature. Alternatively, experimental firing statistics are fed to the model generating the MUAPTs. The contribution of individual MU pairs to the global myoelectric signal correlation function is assessed. Results indicate that the cross-terms from different MUs decrease steadily, contributing very little to the overall correlation for record lengths as short as 30 s. Thus, the error expected when computing the cross-correlation function between two channels of MES as the superposition of the auto-terms contributed by single MUs (i.e., ignoring the cross-terms from different MUs) is shown to be very small.
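
    The auto-term versus cross-term decomposition can be illustrated with white-noise stand-ins for the MUAPTs. This is a statistical toy showing why cross-terms from independently firing units average out, not the physiological surface-MUAP model used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two surface channels, each a weighted sum of the same two independent
# motor-unit trains (independent firing => cross-terms average toward zero)
n = 30_000
mu1 = rng.normal(size=n)        # stand-in for the MUAPT of motor unit 1
mu2 = rng.normal(size=n)        # stand-in for the MUAPT of motor unit 2
ch_a = 1.0 * mu1 + 0.3 * mu2
ch_b = 0.4 * mu1 + 1.0 * mu2

def xcorr0(x, y):
    """Zero-lag cross-correlation, normalized by record length."""
    return float(np.dot(x, y)) / len(x)

total = xcorr0(ch_a, ch_b)
# Auto-terms: each unit correlated with itself across the two channels
auto_terms = 1.0 * 0.4 * xcorr0(mu1, mu1) + 0.3 * 1.0 * xcorr0(mu2, mu2)
cross_terms = total - auto_terms        # small for independent trains
```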

  10. Learner Use of Holistic Language Units in Multimodal, Task-Based Synchronous Computer-Mediated Communication

    Directory of Open Access Journals (Sweden)

    Karina Collentine

    2009-06-01

    Second language acquisition (SLA) researchers strive to understand the language and exchanges that learners generate in synchronous computer-mediated communication (SCMC). Doughty and Long (2003) advocate replacing open-ended SCMC with task-based language teaching (TBLT) design principles. Since most task-based SCMC (TB-SCMC) research addresses an interactionist view (e.g., whether uptake occurs), we know little about holistic language units generated by learners, even though research suggests that task demands make TB-SCMC communication notably different from general SCMC communication. This study documents and accounts for discourse-pragmatic and sociocultural behaviors learners exhibit in TB-SCMC. To capture a variety of such behaviors, it documents holistic language units produced by intermediate and advanced learners of Spanish during two multimodal, TB-SCMC activities. The study found that simple assertions were most prevalent (a) with dyads at the lower level of instruction and (b) when dyads had a relatively short amount of time to chat. Additionally, interpersonal, sociocultural behaviors (e.g., joking, off-task discussions) were more likely to occur (a) amongst dyads at the advanced level and (b) when they had relatively more time to chat. Implications explain how tasks might mitigate the potential processing overload that multimodal materials could incur.

  11. Optical diagnostics of a single evaporating droplet using fast parallel computing on graphics processing units

    Science.gov (United States)

    Jakubczyk, D.; Migacz, S.; Derkachov, G.; Woźniak, M.; Archer, J.; Kolwas, K.

    2016-09-01

    We report on the first application of graphics processing unit (GPU) accelerated computing technology to improve the performance of numerical methods used for the optical characterization of evaporating microdroplets. Single microdroplets of various liquids with different volatility and molecular weight (glycerine, glycols, water, etc.), as well as mixtures of liquids and diverse suspensions, evaporate inside the electrodynamic trap under the chosen temperature and composition of atmosphere. The series of scattering patterns recorded from the evaporating microdroplets are processed by fitting complete Mie theory predictions with a gradientless lookup-table method. We showed that computations on GPUs can be effectively applied to inverse scattering problems. In particular, our technique accelerated calculations of the Mie scattering theory more than 800 times compared with a single-core processor in a Matlab environment and almost 100 times compared with the corresponding code in C. Additionally, we overcame problems of time-consuming data post-processing when some of the parameters (particularly the refractive index) of an investigated liquid are uncertain. Our program allows us to track the parameters characterizing the evaporating droplet nearly simultaneously with the progress of evaporation.
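
    A gradientless lookup-table fit works by precomputing patterns over a parameter grid (the step a GPU fills quickly) and then selecting the nearest table entry in the least-squares sense. The sketch below uses a cheap oscillatory toy function in place of real Mie theory; everything here is an illustrative assumption.

```python
import numpy as np

# Toy stand-in for a scattering pattern: NOT real Mie theory, just an
# oscillatory function of "radius" and scattering angle
angles = np.linspace(0.1, 1.0, 200)

def pattern(radius):
    return np.sin(radius * angles) / angles

# Precomputed lookup table over a radius grid
radii = np.linspace(1.0, 20.0, 2000)
table = np.stack([pattern(r) for r in radii])       # (2000, 200)

# Gradientless fit: pick the table row closest to the noisy measurement
measured = pattern(7.3) + 0.001 * np.random.default_rng(2).normal(size=angles.size)
best = radii[np.argmin(((table - measured) ** 2).sum(axis=1))]
```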

  12. Exploring Graphics Processing Unit (GPU) Resource Sharing Efficiency for High Performance Computing

    Directory of Open Access Journals (Sweden)

    Teng Li

    2013-11-01

    The increasing incorporation of Graphics Processing Units (GPUs) as accelerators has been one of the forefront High Performance Computing (HPC) trends and provides unprecedented performance; however, the prevalent adoption of the Single-Program Multiple-Data (SPMD) programming model brings with it challenges of resource underutilization. In other words, under SPMD, every CPU needs GPU capability available to it. However, since CPUs generally outnumber GPUs, the asymmetric resource distribution gives rise to overall computing resource underutilization. In this paper, we propose to efficiently share the GPU under SPMD and formally define a series of GPU sharing scenarios. We provide performance-modeling analysis for each sharing scenario with accurate experimental validation. With this modeling basis, we further conduct experimental studies to explore potential GPU sharing efficiency improvements from multiple perspectives. Further theoretical and experimental GPU sharing performance analyses and results are presented. Our results not only demonstrate the significant performance gain for SPMD programs with the proposed efficient GPU sharing, but also the further improved sharing efficiency achieved with optimization techniques based on our accurate modeling.
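
    The sharing idea can be caricatured in a few lines: several SPMD workers funnel their kernels through a single server that owns the "GPU", so one device serves many CPU ranks. This is a scheduling toy using threads and a stand-in kernel, not CUDA code or the paper's actual mechanism.

```python
import queue
import threading

tasks, results = queue.Queue(), queue.Queue()

def gpu_server():
    """Single owner of the shared 'GPU': serializes all submitted kernels."""
    while True:
        item = tasks.get()
        if item is None:                 # shutdown sentinel
            break
        worker_id, data = item
        results.put((worker_id, sum(x * x for x in data)))  # stand-in kernel

def spmd_worker(worker_id):
    """Each SPMD rank submits its work instead of needing its own GPU."""
    tasks.put((worker_id, list(range(worker_id + 1))))

server = threading.Thread(target=gpu_server)
server.start()
workers = [threading.Thread(target=spmd_worker, args=(i,)) for i in range(4)]
for w in workers:
    w.start()
for w in workers:
    w.join()
tasks.put(None)
server.join()
out = dict(results.get() for _ in range(4))
```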

  13. Unit physics performance of a mix model in Eulerian fluid computations

    Energy Technology Data Exchange (ETDEWEB)

    Vold, Erik [Los Alamos National Laboratory; Douglass, Rod [Los Alamos National Laboratory

    2011-01-25

    In this report, we evaluate the performance of a K-L drag-buoyancy mix model, described in a reference study by Dimonte-Tipton [1] hereafter denoted as [D-T]. The model was implemented in an Eulerian multi-material AMR code, and the results are discussed here for a series of unit physics tests. The tests were chosen to calibrate the model coefficients against empirical data, principally from RT (Rayleigh-Taylor) and RM (Richtmyer-Meshkov) experiments, and the present results are compared to experiments and to results reported in [D-T]. Results show the Eulerian implementation of the mix model agrees well with expectations for test problems in which there is no convective flow of the mass averaged fluid, i.e., in RT mix or in the decay of homogeneous isotropic turbulence (HIT). In RM shock-driven mix, the mix layer moves through the Eulerian computational grid, and there are differences with the previous results computed in a Lagrange frame [D-T]. The differences are attributed to the mass averaged fluid motion and examined in detail. Shock and re-shock mix are not well matched simultaneously. Results are also presented and discussed regarding model sensitivity to coefficient values and to initial conditions (IC), grid convergence, and the generation of atomically mixed volume fractions.
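
    Drag-buoyancy models of this general family reduce to a two-variable ODE for the mix-layer width, which at late times recovers the familiar quadratic Rayleigh-Taylor growth. The form and coefficients below are a schematic illustration, not the calibrated K-L model or the [D-T] coefficients.

```python
import numpy as np

def buoyancy_drag_rt(A=0.5, g=9.81, beta=1.0, c_d=2.0,
                     h0=1e-4, t_end=10.0, dt=1e-4):
    """Schematic buoyancy-drag ODE for Rayleigh-Taylor mix-layer growth:
        dh/dt = v,   dv/dt = beta*A*g - c_d*v**2/h
    integrated with forward Euler. Coefficients are illustrative only."""
    h, v = h0, 0.0
    ts = np.arange(0.0, t_end, dt)
    hs = np.empty_like(ts)
    for i in range(ts.size):
        hs[i] = h
        a = beta * A * g - c_d * v * v / h   # buoyancy minus drag
        v += a * dt
        h += v * dt
    return ts, hs
```

At late times the solution approaches self-similar growth h ~ alpha*A*g*t**2 with alpha = beta / (2 + 4*c_d), so doubling the time roughly quadruples the layer width.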

  14. OTS: a program for converting Noldus Observer data files to SDIS files.

    Science.gov (United States)

    Bakeman, R; Quera, V

    2000-02-01

    A program for converting Noldus Observer data files (ODF) to sequential data interchange standard (SDIS) files is described. Observer users who convert their data files can then take advantage of various flexible and powerful data modification and computational procedures available in the Generalized Sequential Querier, a program that assumes SDIS-formatted files.
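
    The conversion is essentially a reshaping of event records from one text grammar to another. The toy below turns tab-separated Observer-style (time, behavior) rows into a single SDIS-like timed-event line; both formats are drastically simplified here, and the real ODF and SDIS grammars are richer than shown.

```python
import csv
import io

def observer_to_sdis_like(odf_text):
    """Toy converter: tab-separated rows of (time, behavior) to one
    SDIS-like event-sequence line terminated by ' /'. Illustrative only;
    consult the SDIS definition for the actual grammar."""
    rows = list(csv.reader(io.StringIO(odf_text), delimiter="\t"))
    events = [f"{behavior},{time}" for time, behavior in rows]
    return " ".join(events) + " /"

odf = "0.0\tgroom\n2.5\tfeed\n5.1\trest\n"
line = observer_to_sdis_like(odf)
```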

  15. USGS Small-scale Dataset - Global Map: Railroad Stations of the United States 201403 FileGDB 10.1

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer includes Global Map data showing Amtrak intercity railroad terminals in the United States. The data are a modified version of the National Atlas of...

  16. USGS Small-scale Dataset - Streamflow Gaging Stations of the United States 201403 FileGDB 10.1

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer shows selected streamflow gaging stations of the United States, Puerto Rico, and the U.S. Virgin Islands, in 2013. Gaging stations, or gages, measure...

  17. USGS 1:1,000,000-Scale National Wilderness Preservation System of the United States 201412 FileGDB

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer consists of National Wilderness Preservation System areas of 320 acres or more, in the United States, Puerto Rico, and the U.S. Virgin Islands. Some...

  18. USGS Small-scale Dataset - Global Map: Airports of the United States 201403 FileGDB 10.1

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer includes Global Map data showing airports in the United States, Puerto Rico and the U.S. Virgin Islands. The data are a modified version of the...

  19. Assessment of Undiscovered Deposits of Gold, Silver, Copper, Lead, and Zinc in the United States: A Portable Document (PDF) Recompilation of USGS Open-File Report 96-96 and Circular 1178

    Science.gov (United States)

    U.S. Geological Survey National Mineral Resource Assessment Team Recompiled by Schruben, Paul G.

    2002-01-01

This publication contains the results of a national mineral resource assessment study. The study (1) identifies regional tracts of ground believed to contain most of the nation's undiscovered resources of gold, silver, copper, lead, and zinc in conventional types of deposits; and (2) includes probabilistic estimates of the amounts of these undiscovered resources in most of the tracts. It also contains a table of the significant known deposits in the tracts, and includes descriptions of the mineral deposit models used for the assessment. The assessment was previously released in two major publications. The conterminous United States assessment was published in 1996 as USGS Open-File Report 96-96. Subsequently, the Alaska assessment was combined with the conterminous assessment in 1998 and released as USGS Circular 1178. This new recompilation was undertaken for several reasons. First, the graphical browser software used in Circular 1178 was compatible only with the Microsoft Windows operating system. It was incompatible with the Macintosh operating system, Linux, and other types of Unix computers. Second, the browser on Circular 1178 was much less intuitive to operate, requiring most users to follow a tutorial to understand how to navigate the information on the CD. Third, this release corrects several errors and numbering inconsistencies in Circular 1178.

  20. Selective File Dumper

    Science.gov (United States)

    Bassetti, Nanni; Frati, Denis

During a computer forensics investigation, we faced the problem of how to retrieve all the files of interest quickly. We work mainly with Open Source software and the Linux OS, and we consider Sleuthkit and Foremost two very useful tools, but for our purposes they were too complicated and time-consuming to use. For this reason we developed the Selective File Dumper, a Linux Bash script that extracts all the referenced, deleted, and unallocated files and finally performs a keyword search, in a simple way.
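The final step the abstract describes, a keyword search over the dumped files, can be sketched as follows. This is an illustrative Python stand-in (the actual tool is a Bash script built on Sleuthkit and Foremost, which is not reproduced here); the `keyword_search` helper and its behavior are assumptions for the sketch:

```python
import os

def keyword_search(root, keyword):
    """Return the paths of files under `root` whose contents contain `keyword`.

    A simplified stand-in for the script's final keyword-search pass: the
    real tool first dumps referenced, deleted, and unallocated files with
    Sleuthkit utilities before searching them.
    """
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as fh:
                    if keyword.encode() in fh.read():
                        hits.append(path)
            except OSError:
                continue  # skip unreadable entries; forensic dumps may contain odd files
    return sorted(hits)
```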

  1. New techniques for computing the ideal class group and a system of fundamental units in number fields

    CERN Document Server

    Biasse, Jean-François

    2012-01-01

We describe a new algorithm for computing the ideal class group, the regulator and a system of fundamental units in number fields under the generalized Riemann hypothesis. We use sieving techniques adapted from the number field sieve algorithm to derive relations between elements of the ideal class group, and $p$-adic approximations to manage the loss of precision during the computation of units. This new algorithm is particularly efficient for number fields of small degree, for which a speed-up of an order of magnitude is achieved with respect to the standard methods.

  2. Can a stepwise steady flow computational fluid dynamics model reproduce unsteady particulate matter separation for common unit operations?

    Science.gov (United States)

    Pathapati, Subbu-Srikanth; Sansalone, John J

    2011-07-01

    Computational fluid dynamics (CFD) is emerging as a model for resolving the fate of particulate matter (PM) by unit operations subject to rainfall-runoff loadings. However, compared to steady flow CFD models, there are greater computational requirements for unsteady hydrodynamics and PM loading models. Therefore this study examines if integrating a stepwise steady flow CFD model can reproduce PM separation by common unit operations loaded by unsteady flow and PM loadings, thereby reducing computational effort. Utilizing monitored unit operation data from unsteady events as a metric, this study compares the two CFD modeling approaches for a hydrodynamic separator (HS), a primary clarifier (PC) tank, and a volumetric clarifying filtration system (VCF). Results indicate that while unsteady CFD models reproduce PM separation of each unit operation, stepwise steady CFD models result in significant deviation for HS and PC models as compared to monitored data; overestimating the physical size requirements of each unit required to reproduce monitored PM separation results. In contrast, the stepwise steady flow approach reproduces PM separation by the VCF, a combined gravitational sedimentation and media filtration unit operation that provides attenuation of turbulent energy and flow velocity.

  3. Quality characteristic association analysis of computer numerical control machine tool based on meta-action assembly unit

    Directory of Open Access Journals (Sweden)

    Yan Ran

    2016-01-01

Full Text Available Assembly quality plays a very important role in final product quality. Since the computer numerical control machine tool is a large system with a complicated structure and function, and there are complex association relationships among quality characteristics in the assembly process, it is difficult and inaccurate to analyze quality characteristic associations across the whole machine tool at once. In this article, the meta-action assembly unit is proposed as the basic unit of analysis, and its quality characteristic associations are studied to guarantee the assembly quality of the whole computer numerical control machine tool. First, based on the "Function-Motion-Action" decomposition structure, the definitions of meta-action and meta-action assembly unit are introduced. Second, manufacturing process associations and meta-action assembly unit quality characteristic associations are discussed. Third, after introducing the definitions of information entropy and relative entropy, the concrete steps of relative-entropy-based quality characteristic association analysis for meta-action assembly units are described in detail. Finally, the lifting piston translation assembly unit of an automatic pallet changer is taken as an example: the association degrees between internal leakage and the influencing factors of part quality characteristics and the mate-relationships among them are calculated to identify the most influential factors, demonstrating the correctness and feasibility of the method.
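The relative-entropy measure underlying the association analysis can be sketched as below. The distributions and the `relative_entropy` helper are illustrative assumptions, not the article's implementation; in the method, a lower relative entropy between two quality-characteristic distributions indicates a stronger association:

```python
import math

def relative_entropy(p, q):
    """Kullback-Leibler divergence D(P||Q) between two discrete distributions.

    Both `p` and `q` must be probability distributions over the same
    outcomes; terms with p_i = 0 contribute nothing by convention.
    """
    if abs(sum(p) - 1) > 1e-9 or abs(sum(q) - 1) > 1e-9:
        raise ValueError("inputs must be probability distributions")
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

Identical distributions give a divergence of zero, and the divergence grows as the distributions diverge, which is what makes it usable as an (inverse) association degree.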

  4. USGS 1:1,000,000-Scale Urban Areas of the United States 201504 FileGDB

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This data set includes urban areas in the United States, Puerto Rico, and the U.S. Virgin Islands. The data were derived from the 2010 TIGER/Line Urban Areas data...

  5. USGS 1:1,000,000-Scale Indian Lands of the United States 201412 FileGDB

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer shows Indian lands of the United States. For the most part, only areas of 320 acres or more are included; some smaller areas deemed to be important or...

  6. USGS Small-scale Dataset - Cities and Towns of the United States 201403 FileGDB 10.1

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer includes cities and towns in the United States, Puerto Rico, and the U.S. Virgin Islands. A city or town is a place with a recorded population,...

  7. USGS 1:1,000,000-Scale Federal Lands of the United States 201412 FileGDB

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer consists of federally owned or administered lands of the United States, Puerto Rico, and the U.S. Virgin Islands. For the most part, only areas of 320...

  8. 76 FR 28303 - Requiring Residents Who Live Outside the United States To File Petitions According to Form...

    Science.gov (United States)

    2011-05-17

    ..., innovation, or the ability of United States-based companies to compete with foreign-based companies in... flexibility analysis is not required. This procedural rule will impact only individuals, not small entities as..., of $100 million or more in any one year, and it will not significantly or uniquely affect small...

  9. USGS Small-scale Dataset - Ports of the United States 201406 FileGDB 10.1

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer shows major ports in the United States, Puerto Rico, and the U.S. Virgin Islands. A port is a city, town, or urban area with a harbor where ships load...

  10. Usefulness of computed tomography hounsfield unit measurement for diagnosis of congenital cholesteatoma

    Energy Technology Data Exchange (ETDEWEB)

    Ahn, Sang Hyuk; Kim, Yong Woo; Baik, Seung Kug; Hwang, Jae Yeon; Lee, Il Woo [Medical Research Institute, Pusan National University Yangsan Hospital, College of Medicine, Pusan National University, Yangsan (Korea, Republic of)

    2014-02-15

To evaluate the usefulness of Hounsfield unit (HU) measurements for the diagnosis of congenital cholesteatoma. A total of 43 patients who underwent surgery due to middle ear cavity lesions were enrolled. Twenty-one patients were confirmed to have congenital cholesteatoma by histopathological results and the other 22 patients were confirmed to have otitis media (OM) at operation. Their computed tomography images were retrospectively reviewed. We measured the HU of the soft tissue mass in the middle ear cavity. In addition, we evaluated the largest diameter and location of the mass, the presence of bony erosion in the ear ossicles, and the status of the tympanic membrane in the cholesteatoma group. The mean HU was 37.36 ± 6.11 (range, 27.5-52.5) in the congenital cholesteatoma group and 76.09 ± 8.74 (range, 58.5-96) in the OM group (p < 0.001). The cut-off value was 55.5. The most common location for congenital cholesteatoma was the mesotympanum, and ear ossicle erosion was present in 24%. All patients had an intact tympanic membrane. HU measurement may be useful as an additional indicator to diagnose congenital cholesteatoma.
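The reported decision rule can be written out as a minimal sketch, assuming the 55.5 HU cut-off from the study; the function name and inputs are hypothetical, and this is purely illustrative, not a clinical tool:

```python
def classify_middle_ear_lesion(hu_samples, cutoff=55.5):
    """Classify a middle-ear soft-tissue mass from CT Hounsfield unit samples.

    Applies the study's 55.5 HU cut-off: a mean HU below it suggests
    congenital cholesteatoma, above it otitis media.
    """
    mean_hu = sum(hu_samples) / len(hu_samples)
    return "congenital cholesteatoma" if mean_hu < cutoff else "otitis media"
```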

  11. Fast computation of MadGraph amplitudes on graphics processing unit (GPU)

    CERN Document Server

    Hagiwara, K; Li, Q; Okamura, N; Stelzer, T

    2013-01-01

Continuing our previous studies on QED and QCD processes, we use the graphics processing unit (GPU) for fast calculations of helicity amplitudes for general Standard Model (SM) processes. Additional HEGET codes to handle all SM interactions are introduced, as well as the program MG2CUDA that converts arbitrary MadGraph-generated HELAS amplitudes (FORTRAN) into HEGET codes in CUDA. We test all the codes by comparing amplitudes and cross sections for multi-jet processes at the LHC associated with production of single and double weak bosons, a top-quark pair, a Higgs boson plus a weak boson or a top-quark pair, and multiple Higgs bosons via weak-boson fusion, where all the heavy particles are allowed to decay into light quarks and leptons with full spin correlations. All the helicity amplitudes computed by HEGET are found to agree with those computed by HELAS within the expected numerical accuracy, and the cross sections obtained by gBASES, a GPU version of the Monte Carlo integration program, agree with those obt...

  12. Facilitatory Effects of Multi-Word Units in Lexical Processing and Word Learning: A Computational Investigation.

    Science.gov (United States)

    Grimm, Robert; Cassani, Giovanni; Gillis, Steven; Daelemans, Walter

    2017-01-01

Previous studies have suggested that children and adults form cognitive representations of co-occurring word sequences. We propose (1) that the formation of such multi-word unit (MWU) representations precedes and facilitates the formation of single-word representations in children and thus benefits word learning, and (2) that MWU representations facilitate adult word recognition and thus benefit lexical processing. Using a modified version of an existing computational model (McCauley and Christiansen, 2014), we extract MWUs from a corpus of child-directed speech (CDS) and a corpus of conversations among adults. We then correlate the number of MWUs within which each word appears with (1) age of first production and (2) adult reaction times on a word recognition task. In doing so, we take care to control for the effect of word frequency, as frequent words will naturally tend to occur in many MWUs. We also compare results to a baseline model which randomly groups words into sequences, and find that MWUs have a unique facilitatory effect on both response variables, suggesting that they benefit word learning in children and word recognition in adults. The effect is strongest on age of first production, implying that MWUs are comparatively more important for word learning than for adult lexical processing. We discuss possible underlying mechanisms and formulate testable predictions.
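The predictor the study correlates with age of first production, the number of distinct MWUs each word appears in, can be sketched as follows. The `mwu_counts` helper is an illustrative assumption; the MWU extraction itself (McCauley and Christiansen's chunking model) is not reproduced here:

```python
from collections import Counter

def mwu_counts(mwus):
    """Count, for each word, how many distinct multi-word units contain it.

    `mwus` is a list of MWU strings (repeats allowed); each distinct MWU
    contributes at most once per word it contains.
    """
    counts = Counter()
    for mwu in set(mwus):          # deduplicate MWUs before counting
        for word in set(mwu.split()):
            counts[word] += 1
    return counts
```

In the study this count is then correlated with age of first production and with adult reaction times, with word frequency partialed out.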

  13. Research on Technology of Recovery Algorithm Based on File Features in Computer Forensics

    Institute of Scientific and Technical Information of China (English)

    李贵华; 荣世辉; 王刚

    2013-01-01

    To solve data recovery problems in computer forensics, this paper proposes a data recovery method based on NTFS (New Technology File System). By analyzing the structure of the MFT (Master File Table), a recovery algorithm based on file features is presented. The algorithm identifies the start and end sectors of a lost file by scanning all sectors of the disk at fine granularity and matching them against the header and footer signature codes of each file type, then recovers the file by restoring the data between the start and end sectors. Experimental results show that software designed around this algorithm improves markedly in both search volume and efficiency.
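The header/footer carving step described in the abstract can be sketched as below. This is a byte-level illustration, assuming a contiguous raw image; the paper's sector-granularity scanning and NTFS MFT parsing are omitted:

```python
def carve(image, header, footer):
    """Recover file payloads from a raw image by signature matching.

    Locates each occurrence of a file type's header signature, finds the
    next footer signature after it, and returns the bytes between them
    (signatures included), mirroring the start/end-sector matching in the
    abstract.
    """
    files, pos = [], 0
    while True:
        start = image.find(header, pos)
        if start == -1:
            break
        end = image.find(footer, start + len(header))
        if end == -1:
            break
        end += len(footer)
        files.append(image[start:end])
        pos = end  # resume scanning after the recovered file
    return files
```

For example, JPEG files can be carved with `header=b"\xff\xd8\xff"` and `footer=b"\xff\xd9"`.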

  14. A Comprehensive Survey on the Status of Social and Professional Issues in United States Undergraduate Computer Science Programs and Recommendations

    Science.gov (United States)

    Spradling, Carol; Soh, Leen-Kiat; Ansorge, Charles J.

    2009-01-01

    A national web-based survey was administered to 700 undergraduate computer science (CS) programs in the United States as part of a stratified random sample of 797 undergraduate CS programs. The 251 program responses (36% response rate) regarding social and professional issues are presented. This article describes the demographics of the…

  15. Lasting Effects on Literacy Skills with a Computer-Assisted Learning Using Syllabic Units in Low-Progress Readers

    Science.gov (United States)

    Ecalle, Jean; Magnan, Annie; Calmus, Caroline

    2009-01-01

    This study examines the effects of a computer-assisted learning (CAL) program in which syllabic units were highlighted inside words in comparison with a CAL program in which the words were not segmented, i.e. one requiring whole word recognition. In a randomised control trial design, two separate groups of French speaking poor readers (2 * 14) in…

  16. Solutions, Unit 2: Molarity, Molality, Concentration Conversions. A Computer-Enriched Module for Introductory Chemistry. Student's Guide and Teacher's Guide.

    Science.gov (United States)

    Bader, Morris

    Presented are the teacher's guide and student manual for one of a series of self-instructional, computer-based learning modules for an introductory, undergraduate chemistry course. The student module for this solution concentration unit includes objectives, prerequisites, pretest, discussion, and 20 problem sets. Included in the teacher's guide…

  17. Graphics Processing Unit-Accelerated Code for Computing Second-Order Wiener Kernels and Spike-Triggered Covariance.

    Science.gov (United States)

    Mano, Omer; Clark, Damon A

    2017-01-01

    Sensory neuroscience seeks to understand and predict how sensory neurons respond to stimuli. Nonlinear components of neural responses are frequently characterized by the second-order Wiener kernel and the closely-related spike-triggered covariance (STC). Recent advances in data acquisition have made it increasingly common and computationally intensive to compute second-order Wiener kernels/STC matrices. In order to speed up this sort of analysis, we developed a graphics processing unit (GPU)-accelerated module that computes the second-order Wiener kernel of a system's response to a stimulus. The generated kernel can be easily transformed for use in standard STC analyses. Our code speeds up such analyses by factors of over 100 relative to current methods that utilize central processing units (CPUs). It works on any modern GPU and may be integrated into many data analysis workflows. This module accelerates data analysis so that more time can be spent exploring parameter space and interpreting data.
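The quantity being accelerated can be sketched as a plain CPU reference computation in NumPy; the function below is an illustrative assumption (the paper's GPU kernel and its API are not shown), computing the spike-triggered average and the weighted covariance around it:

```python
import numpy as np

def spike_triggered_covariance(stimuli, spikes):
    """Compute the STA and STC matrix on the CPU with NumPy.

    `stimuli` is an (n_samples, d) array of stimulus vectors; `spikes`
    holds the spike count observed for each sample. Returns the
    spike-triggered average and the spike-weighted covariance of the
    stimuli around that average.
    """
    w = spikes / spikes.sum()          # normalized spike weights
    sta = w @ stimuli                  # spike-triggered average
    centered = stimuli - sta
    stc = (w * centered.T) @ centered  # sum_i w_i (x_i - sta)(x_i - sta)^T
    return sta, stc
```

The GPU module in the paper performs this same accumulation in parallel, which is where the >100x speed-up over CPU implementations comes from.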

  18. Portable File Format (PFF) specifications

    Energy Technology Data Exchange (ETDEWEB)

    Dolan, Daniel H. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-02-01

    Created at Sandia National Laboratories, the Portable File Format (PFF) allows binary data transfer across computer platforms. Although this capability is supported by many other formats, PFF files are still in use at Sandia, particularly in pulsed power research. This report provides detailed PFF specifications for accessing data without relying on legacy code.

  19. 37 CFR 1.251 - Unlocatable file.

    Science.gov (United States)

    2010-07-01

    ... 37 Patents, Trademarks, and Copyrights 1 2010-07-01 2010-07-01 false Unlocatable file. 1.251 Section 1.251 Patents, Trademarks, and Copyrights UNITED STATES PATENT AND TRADEMARK OFFICE, DEPARTMENT OF... § 1.251 Unlocatable file. (a) In the event that the Office cannot locate the file of an...

  20. File sharing

    NARCIS (Netherlands)

    van Eijk, N.

    2011-01-01

    File sharing’ has become generally accepted on the Internet. Users share files for downloading music, films, games, software etc. In this note, we have a closer look at the definition of file sharing, the legal and policy-based context as well as enforcement issues. The economic and cultural impact

  1. Micro-computed Tomography Assessment of Dentinal Micro-cracks after Root Canal Preparation with TRUShape and Self-adjusting File Systems.

    Science.gov (United States)

    Zuolo, Mario Luis; De-Deus, Gustavo; Belladonna, Felipe Gonçalves; Silva, Emmanuel João Nogueira Leal da; Lopes, Ricardo Tadeu; Souza, Erick Miranda; Versiani, Marco Aurélio; Zaia, Alexandre Augusto

    2017-04-01

    The aim of the present study was to evaluate the percentage frequency of dentinal micro-cracks observed after root canal preparation with TRUShape and Self-Adjusting File (SAF) systems by means of micro-computed tomography imaging analysis. A conventional full-sequence rotary system (BioRace) and a single-file reciprocation system (Reciproc) were used as reference techniques for comparison because of their known assertive cutting efficiency. Forty anatomically matched mandibular incisors were selected, scanned at a resolution of 14.25 μm, and assigned to 4 experimental groups (n = 10), according to the preparation protocol: TRUShape, SAF, BioRace, and Reciproc systems. After the experimental procedures, the specimens were scanned again, and the registered preoperative and postoperative cross-section images of the roots (n = 70,030) were screened to identify the presence of dentinal micro-cracks. Overall, dentinal defects were observed in 28,790 cross-section images (41.11%). In the TRUShape, SAF, BioRace, and Reciproc groups, dentinal micro-cracks were visualized in 56.47% (n = 9842), 42.38% (n = 7450), 32.90% (n = 5826), and 32.77% (n = 5672) of the slices, respectively. All dentinal defects observed in the postoperative data sets were already present in the corresponding preoperative images. None of the preparation systems induced the formation of new dentinal micro-cracks. Copyright © 2016 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.

  2. Catalytic nucleic acids (DNAzymes) as functional units for logic gates and computing circuits: from basic principles to practical applications.

    Science.gov (United States)

    Orbach, Ron; Willner, Bilha; Willner, Itamar

    2015-03-11

    This feature article addresses the implementation of catalytic nucleic acids as functional units for the construction of logic gates and computing circuits, and discusses the future applications of these systems. The assembly of computational modules composed of DNAzymes has led to the operation of a universal set of logic gates, to field programmable logic gates and computing circuits, to the development of multiplexers/demultiplexers, and to full-adder systems. Also, DNAzyme cascades operating as logic gates and computing circuits were demonstrated. DNAzyme logic systems find important practical applications. These include the use of DNAzyme-based systems for sensing and multiplexed analyses, for the development of controlled release and drug delivery systems, for regulating intracellular biosynthetic pathways, and for the programmed synthesis and operation of cascades.

  3. Mechanical properties of regular porous biomaterials made from truncated cube repeating unit cells: Analytical solutions and computational models.

    Science.gov (United States)

    Hedayati, R; Sadighi, M; Mohammadi-Aghdam, M; Zadpoor, A A

    2016-03-01

    Additive manufacturing (AM) has enabled fabrication of open-cell porous biomaterials based on repeating unit cells. The micro-architecture of the porous biomaterials and, thus, their physical properties could then be precisely controlled. Due to their many favorable properties, porous biomaterials manufactured using AM are considered as promising candidates for bone substitution as well as for several other applications in orthopedic surgery. The mechanical properties of such porous structures including static and fatigue properties are shown to be strongly dependent on the type of the repeating unit cell based on which the porous biomaterial is built. In this paper, we study the mechanical properties of porous biomaterials made from a relatively new unit cell, namely truncated cube. We present analytical solutions that relate the dimensions of the repeating unit cell to the elastic modulus, Poisson's ratio, yield stress, and buckling load of those porous structures. We also performed finite element modeling to predict the mechanical properties of the porous structures. The analytical solution and computational results were found to be in agreement with each other. The mechanical properties estimated using both the analytical and computational techniques were somewhat higher than the experimental data reported in one of our recent studies on selective laser melted Ti-6Al-4V porous biomaterials. In addition to porosity, the elastic modulus and Poisson's ratio of the porous structures were found to be strongly dependent on the ratio of the length of the inclined struts to that of the uninclined (i.e. vertical or horizontal) struts, α, in the truncated cube unit cell. The geometry of the truncated cube unit cell approaches the octahedral and cube unit cells when α respectively approaches zero and infinity. Consistent with those geometrical observations, the analytical solutions presented in this study approached those of the octahedral and cube unit cells when

  4. Program Facilitates Distributed Computing

    Science.gov (United States)

    Hui, Joseph

    1993-01-01

    KNET computer program facilitates distribution of computing between UNIX-compatible local host computer and remote host computer, which may or may not be UNIX-compatible. Capable of automatic remote log-in. User communicates interactively with remote host computer. Data output from remote host computer directed to local screen, to local file, and/or to local process. Conversely, data input from keyboard, local file, or local process directed to remote host computer. Written in ANSI standard C language.

  5. Performance Evaluation of a Modular Detector Unit for X-Ray Computed Tomography

    Directory of Open Access Journals (Sweden)

    Guangshu Hu

    2013-04-01

    Full Text Available A research prototype CT scanner is currently under development in our lab. One of the key components in this project is the CT detector. This paper describes the design and performance evaluation of the modular CT detector unit for our proposed scanner. It consists of a Photodiode Array Assembly which captures irradiating X-ray photons and converts the energy into electrical current, and a mini Data Acquisition System which performs current integration and converts the analog signal into digital samples. The detector unit can be easily tiled together to form a CT detector. Experiments were conducted to characterize the detector performance both at the single unit level and system level. The noise level, linearity and uniformity of the proposed detector unit were reported and initial imaging studies were also presented which demonstrated the potential application of the proposed detector unit in actual CT scanners.

  6. 77 FR 62601 - United States Department of Energy and United States Department of Defense v. Baltimore & Ohio...

    Science.gov (United States)

    2012-10-15

    ... replies may be submitted either via the Board's e-filing format or in the traditional paper format. Any... traditional paper format should send an original and 10 copies to: Surface Transportation Board, Attn: Docket... current system-average variable unit costs computed under the Board's Uniform Rail Costing System. Movants...

  7. 2016 Cartographic Boundary File, Current American Indian/Alaska Native/Native Hawaiian Areas for United States, 1:500,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2016 cartographic boundary KMLs are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File / Topologically...

  8. 2015 Cartographic Boundary File, American Indian Area/Alaska Native Area/Hawaiian Home Land for United States, 1:500,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2015 cartographic boundary KMLs are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File / Topologically...

  9. 2014 Cartographic Boundary File, American Indian Area/Alaska Native Area/Hawaiian Home Land for United States, 1:500,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2014 cartographic boundary KMLs are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File / Topologically...

  10. 2014 Cartographic Boundary File, American Indian Area/Alaska Native Area/Hawaiian Home Land for United States, 1:500,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2014 cartographic boundary shapefiles are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File /...

  11. 2016 Cartographic Boundary File, 2010 Urban Areas (UA) within 2010 County and Equivalent for United States Virgin Islands, 1:500,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2016 cartographic boundary KMLs are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File / Topologically...

  12. 2016 Cartographic Boundary File, Current American Indian/Alaska Native/Native Hawaiian Areas for United States, 1:500,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2016 cartographic boundary shapefiles are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File /...

  13. 2015 Cartographic Boundary File, American Indian Area/Alaska Native Area/Hawaiian Home Land for United States, 1:500,000

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The 2015 cartographic boundary shapefiles are simplified representations of selected geographic areas from the U.S. Census Bureau's Master Address File /...

  14. 76 FR 70651 - Fee for Filing a Patent Application Other Than by the Electronic Filing System

    Science.gov (United States)

    2011-11-15

    .... Information concerning electronic filing via EFS-Web is available from the USPTO's Patent Electronic Business... the Electronic Filing System AGENCY: United States Patent and Trademark Office, Commerce. ACTION... for a design, plant, or provisional application, that is not filed by electronic means as...

  15. Analysis of Computer System Vulnerability Based on System Log Files

    Institute of Scientific and Technical Information of China (English)

    黄波

    2012-01-01

    With the development of the information society, the security of computer systems has become a key part of information assurance. Based on the form, structure, and content of the log files of Windows systems, database systems, and firewall systems, this paper analyzes the security and protective role of log files in computer operating systems and network systems, and discusses how system log files can serve as the basis for computer system vulnerability analysis.

  16. Quality control and dosimetry in computed tomography units

    Directory of Open Access Journals (Sweden)

    Diana Rodrigues de Pina

    2009-06-01

    Full Text Available OBJECTIVE: To evaluate equipment conditions and dosimetry in computed tomography services, utilizing head, abdomen, and lumbar spine protocols in adult patients (on three different units) and in pediatric patients up to 18 months of age (on one of the units evaluated). MATERIALS AND METHODS: The computed tomography dose index and the multiple-scan average dose were estimated for examinations of adult patients on three different units. Entrance surface doses and absorbed doses in head examinations were also estimated for adult and pediatric patients on one of the units evaluated. RESULTS: Mechanical quality control tests demonstrated that the units meet the usage specifications established by current standards. Dosimetry results showed that multiple-scan average dose values exceeded reference levels by up to 109.0%, with considerable variation among the units evaluated in this study. Absorbed doses obtained with pediatric protocols were lower than those for adult patients, with a reduction of up to 51.0% in the thyroid. CONCLUSION: This study evaluated the operating conditions of three computed tomography units, establishing which parameters should be addressed in order to implement a quality control program at the institutions where this research was developed.

  17. Monte Carlo standardless approach for laser induced breakdown spectroscopy based on massive parallel graphic processing unit computing

    Science.gov (United States)

    Demidov, A.; Eschlböck-Fuchs, S.; Kazakov, A. Ya.; Gornushkin, I. B.; Kolmhofer, P. J.; Pedarnig, J. D.; Huber, N.; Heitz, J.; Schmid, T.; Rössler, R.; Panne, U.

    2016-11-01

    An improved Monte Carlo (MC) method for standardless analysis in laser-induced breakdown spectroscopy (LIBS) is presented. Concentrations in MC LIBS are found by fitting model-generated synthetic spectra to experimental spectra. The current version of MC LIBS is based on graphics processing unit (GPU) computation and reduces the analysis time to several seconds per spectrum/sample. The previous version, based on central processing unit (CPU) computation, required unacceptably long analysis times of tens of minutes per spectrum/sample. The reduction in computational time is achieved through massively parallel computing on the GPU, which embeds thousands of co-processors. It is shown that the number of iterations on the GPU exceeds that on the CPU by a factor > 1000 for the 5-dimensional parameter space and yet requires a > 10-fold shorter computational time. The improved GPU-MC LIBS outperforms the CPU-MC LIBS in terms of accuracy, precision, and analysis time. The performance was tested on LIBS spectra obtained from pelletized powders of metal oxides (CaO, Fe2O3, MgO, and TiO2) that simulated by-products of the steel industry, steel slags. It is demonstrated that GPU-based MC LIBS is capable of rapid multi-element analysis with relative errors between 1 and a few tens of percent, sufficient for industrial applications (e.g., steel slag analysis). The results of the improved GPU-based MC LIBS compare favorably with those of the CPU-based MC LIBS as well as with the results of standard calibration-free (CF) LIBS based on the Boltzmann plot method.
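
    The core fitting loop can be sketched in a few lines; the linear "forward model" below is an invented stand-in for the paper's radiative plasma model, used only to illustrate the Monte Carlo search over concentration space:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy forward model: each element contributes a fixed emission profile,
# and the synthetic spectrum is a concentration-weighted sum.
n_elements, n_pixels = 3, 200
profiles = rng.random((n_elements, n_pixels))

def synthetic_spectrum(concentrations):
    return concentrations @ profiles

# "Measured" spectrum produced by known concentrations (for the demo).
true_c = np.array([0.5, 0.3, 0.2])
measured = synthetic_spectrum(true_c)

# Monte Carlo search: sample random concentration vectors summing to 1
# and keep the one whose synthetic spectrum best fits the measurement.
best_c, best_err = None, np.inf
for _ in range(20000):
    c = rng.random(n_elements)
    c /= c.sum()                      # concentrations sum to 1
    err = np.sum((synthetic_spectrum(c) - measured) ** 2)
    if err < best_err:
        best_c, best_err = c, err

print(best_c)          # close to [0.5, 0.3, 0.2]
```

    In the GPU version each of these candidate evaluations runs on its own thread, which is what turns tens of minutes into seconds.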

  18. Massively Parallel Signal Processing using the Graphics Processing Unit for Real-Time Brain-Computer Interface Feature Extraction.

    Science.gov (United States)

    Wilson, J Adam; Williams, Justin C

    2009-01-01

    The clock speeds of modern computer processors have nearly plateaued in the past 5 years. Consequently, neural prosthetic systems that rely on processing large quantities of data in a short period of time face a bottleneck, in that it may not be possible to process all of the data recorded from an electrode array with high channel counts and bandwidth, such as electrocorticographic grids or other implantable systems. Therefore, in this study a method of using the processing capabilities of a graphics card [graphics processing unit (GPU)] was developed for real-time neural signal processing of a brain-computer interface (BCI). The NVIDIA CUDA system was used to offload processing to the GPU, which is capable of running many operations in parallel, potentially greatly increasing the speed of existing algorithms. The BCI system records many channels of data, which are processed and translated into a control signal, such as the movement of a computer cursor. This signal processing chain involves computing a matrix-matrix multiplication (i.e., a spatial filter), followed by calculating the power spectral density on every channel using an autoregressive method, and finally classifying appropriate features for control. In this study, the first two computationally intensive steps were implemented on the GPU, and the speed was compared to both the current implementation and a central processing unit-based implementation that uses multi-threading. Significant performance gains were obtained with GPU processing: the current implementation processed 1000 channels of 250 ms of data in 933 ms, while the new GPU method took only 27 ms, an improvement of nearly 35 times.
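
    The first two steps of that chain (spatial filtering as a matrix product, then a per-channel autoregressive PSD) can be sketched on the CPU with NumPy; the common-average-reference weights and the AR order below are illustrative assumptions, not the paper's actual filter or CUDA code:

```python
import numpy as np

rng = np.random.default_rng(1)

# 8 recorded channels, 250 ms at 1 kHz -> 250 samples per block.
n_ch, n_samp = 8, 250
data = rng.standard_normal((n_ch, n_samp))

# Step 1: spatial filter as a matrix-matrix product. A common-average-
# reference matrix stands in for whatever weights the BCI actually uses.
W = np.eye(n_ch) - np.ones((n_ch, n_ch)) / n_ch
filtered = W @ data

# Step 2: per-channel AR power spectral density via Yule-Walker.
def ar_psd(x, order=6, n_freq=64):
    x = x - x.mean()
    # Biased autocorrelation estimates r[0..order].
    r = np.array([np.dot(x[:len(x) - k], x[k:]) / len(x)
                  for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)]
                  for i in range(order)])     # Toeplitz matrix
    a = np.linalg.solve(R, r[1:])             # AR coefficients
    sigma2 = r[0] - np.dot(a, r[1:])          # residual variance
    freqs = np.linspace(0, 0.5, n_freq)       # normalized frequency
    z = np.exp(-2j * np.pi * np.outer(freqs, np.arange(1, order + 1)))
    return sigma2 / np.abs(1 - z @ a) ** 2

psd = np.array([ar_psd(ch) for ch in filtered])
print(psd.shape)      # (8, 64)
```

    On the GPU, the matrix product and the per-channel PSDs are exactly the kind of independent, data-parallel work that maps well onto CUDA blocks.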

  19. On the computational complexity of degenerate unit distance representations of graphs

    CERN Document Server

    Kratochvil, Jan; Pisanski, Tomaz

    2010-01-01

    Some graphs admit drawings in Euclidean k-space in such a (natural) way that edges are represented as line segments of unit length. Such drawings are called k-dimensional unit distance representations. When two non-adjacent vertices are drawn at the same point, the representation is said to be degenerate. The dimension (respectively, the Euclidean dimension) of a graph is defined to be the minimum integer k such that the graph has a k-dimensional unit distance representation (respectively, a non-degenerate one with the property that non-adjacent vertices are mapped to points that are not at distance one apart). It is proved that deciding whether an input graph is homomorphic to a graph with dimension k >= 2 (respectively, with Euclidean dimension k >= 2) is an NP-hard problem.
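
    While deciding whether such a representation exists is NP-hard, verifying a proposed drawing is easy; a minimal checker, illustrated on the unit square as a 2-dimensional unit distance representation of the 4-cycle:

```python
import math

def is_unit_distance(coords, edges, tol=1e-9):
    """Check that every edge of the drawing has length exactly 1."""
    for u, v in edges:
        if abs(math.dist(coords[u], coords[v]) - 1.0) > tol:
            return False
    return True

# 2-D unit distance drawing of the 4-cycle C4: the unit square.
square = {0: (0, 0), 1: (1, 0), 2: (1, 1), 3: (0, 1)}
c4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(is_unit_distance(square, c4))   # True

# Adding a diagonal breaks it: that edge has length sqrt(2).
print(is_unit_distance(square, c4 + [(0, 2)]))   # False
```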

  20. Graphics Processing Unit-Accelerated Code for Computing Second-Order Wiener Kernels and Spike-Triggered Covariance

    Science.gov (United States)

    Mano, Omer

    2017-01-01

    Sensory neuroscience seeks to understand and predict how sensory neurons respond to stimuli. Nonlinear components of neural responses are frequently characterized by the second-order Wiener kernel and the closely-related spike-triggered covariance (STC). Recent advances in data acquisition have made it increasingly common and computationally intensive to compute second-order Wiener kernels/STC matrices. In order to speed up this sort of analysis, we developed a graphics processing unit (GPU)-accelerated module that computes the second-order Wiener kernel of a system’s response to a stimulus. The generated kernel can be easily transformed for use in standard STC analyses. Our code speeds up such analyses by factors of over 100 relative to current methods that utilize central processing units (CPUs). It works on any modern GPU and may be integrated into many data analysis workflows. This module accelerates data analysis so that more time can be spent exploring parameter space and interpreting data. PMID:28068420
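
    The kernel/STC computation itself reduces to an average and a covariance over the spike-triggered stimulus ensemble; a CPU sketch with a made-up thresholding "neuron" (the GPU module parallelizes exactly these accumulations):

```python
import numpy as np

rng = np.random.default_rng(2)

# White-noise stimulus and a toy spiking rule (for the demo only):
# spike whenever the most recent stimulus sample is large.
n_t, window = 20000, 10
stim = rng.standard_normal(n_t)
spikes = (stim > 1.5).astype(int)

# Collect the stimulus window preceding (and including) each spike.
idx = np.nonzero(spikes)[0]
idx = idx[idx >= window]
ensemble = np.stack([stim[i - window + 1 : i + 1] for i in idx])

sta = ensemble.mean(axis=0)                    # spike-triggered average
centered = ensemble - sta
stc = centered.T @ centered / (len(idx) - 1)   # spike-triggered covariance
print(stc.shape)     # (10, 10)
```

    Eigenvectors of `stc` whose variance differs from the raw stimulus variance then identify the nonlinear stimulus dimensions, as in standard STC analysis.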

  1. Cranial computed tomography findings in patients admitted to the emergency unit of Hospital Universitário Cajuru

    Directory of Open Access Journals (Sweden)

    Lauro Aparecido Lara Filho

    2013-06-01

    Full Text Available Objective To identify and analyze the prevalence of cranial computed tomography findings in patients admitted to the emergency unit of Hospital Universitário Cajuru. Materials and Methods Cross-sectional study analyzing 200 consecutive non-contrast-enhanced cranial computed tomography reports of patients admitted to the emergency unit of Hospital Universitário Cajuru. Results Alterations were observed in 76.5% of the patients. Among them, the following findings were most frequently observed: extracranial soft tissue swelling (22%), bone fracture (16.5%), subarachnoid hemorrhage (15%), nonspecific hypodensity (14.5%), paranasal sinuses opacification (11.5%), diffuse cerebral edema (10.5%), subdural hematoma (9.5%), cerebral contusion (8.5%), hydrocephalus (8%), retractable hypodensity/gliosis/encephalomalacia (8%). Conclusion The authors recognize that the most common findings in emergency departments reported in the literature are similar to the ones described in the present study. This information is important for professionals to recognize the main changes to be identified at cranial computed tomography, and for future planning and hospital screening aiming at achieving efficiency and improvement in services.

  2. Improvement of MS (multiple sclerosis) CAD (computer aided diagnosis) performance using C/C++ and computing engine in the graphical processing unit (GPU)

    Science.gov (United States)

    Suh, Joohyung; Ma, Kevin; Le, Anh

    2011-03-01

    Multiple Sclerosis (MS) is a disease caused by damage to the myelin around axons of the brain and spinal cord. Currently, MR imaging is used for diagnosis, but the process is highly variable and time-consuming, since lesion detection and estimation of lesion volume are performed manually. For this reason, we developed a CAD (Computer Aided Diagnosis) system to assist segmentation of MS lesions and facilitate the physician's diagnosis. The MS CAD system utilizes the k-NN (k-nearest neighbor) algorithm to detect and segment the lesion volume in an area on a per-voxel basis. The prototype MS CAD system was developed under the MATLAB environment and currently consumes a huge amount of time to process data. In this paper we present the development of a second version of the MS CAD system, converted into C/C++ in order to take advantage of the GPU (Graphical Processing Unit), which provides parallel computation. With the realization in C/C++ and utilization of the GPU, we expect to cut running time drastically. The paper investigates the conversion from MATLAB to C/C++ and the utilization of a high-end GPU for parallel computing of data to improve the algorithm performance of the MS CAD.
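
    The voxel-wise k-NN step can be sketched as follows; the one-dimensional intensity feature and the class means are invented for illustration, since a real MS CAD uses richer per-voxel features:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy training set: one intensity feature per voxel, labeled
# lesion (1) or healthy tissue (0).
train_x = np.concatenate([rng.normal(0.3, 0.05, 100),   # healthy
                          rng.normal(0.8, 0.05, 100)])  # lesion
train_y = np.array([0] * 100 + [1] * 100)

def knn_label(x, k=5):
    # Majority vote among the k nearest training voxels.
    nearest = np.argsort(np.abs(train_x - x))[:k]
    return int(train_y[nearest].sum() > k // 2)

# Classify every voxel of a small synthetic "slice".
slice_ = rng.choice([0.3, 0.8], size=(4, 4)) + rng.normal(0, 0.02, (4, 4))
labels = np.vectorize(knn_label)(slice_)
lesion_volume = labels.sum()      # voxel count, proportional to volume
```

    This per-voxel classification is embarrassingly parallel, which is why moving it from MATLAB to a GPU pays off.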

  3. Computers in hospital management and improvements in patients care--new trends in the United States.

    Science.gov (United States)

    Pierskalla, W P; Woods, D

    1988-12-01

    This article discusses the current state of information systems in hospital management. Decision Support Systems (DSS) for the management, administrative and patient care units of the hospital are described. These DSSs include market planning, nurse scheduling and blood screening systems. Trends for future uses of information systems in the hospital environment are addressed.

  4. Determinants of computer use in lower secondary schools in Japan and the United States

    NARCIS (Netherlands)

    Tuijnman, Albert; Tuijnman, Albert C.; ten Brummelhuis, A.C.A.

    1992-01-01

    The purpose of this study is to investigate the factors explaining differences between schools in the extent to which computers are used by subject teachers as a means of enhancing instruction and optimizing student learning. A conceptual model of key factors in educational reform and innovation is

  5. Reduction of computing time for seismic applications based on the Helmholtz equation by Graphics Processing Units

    NARCIS (Netherlands)

    Knibbe, H.P.

    2015-01-01

    The oil and gas industry makes use of computationally intensive algorithms to provide an image of the subsurface. The image is obtained by sending wave energy into the subsurface and recording the signal required for a seismic wave to reflect back to the surface from the Earth interfaces that may have different physical properties.

  6. Fast traffic noise mapping of cities using the graphics processing unit of a personal computer

    NARCIS (Netherlands)

    Salomons, E.M.; Zhou, H.; Lohman, W.J.A.

    2014-01-01

    Traffic noise mapping of cities requires large computer calculation times. This originates from the large number of point-to-point sound propagation calculations that must be performed. In this article it is demonstrated that noise mapping calculation times can be reduced considerably by the use of

  8. Intensive-care unit lung infections: The role of imaging with special emphasis on multi-detector row computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Romano, Luigia; Pinto, Antonio; Merola, Stefanella; Gagliardi, Nicola; Tortora, Giovanni [Department of Diagnostic Imaging, Cardarelli Hospital, Naples Italy-Via G. Merliani 31, 80127 Naples (Italy); Scaglione, Mariano [Department of Diagnostic Imaging, Cardarelli Hospital, Naples Italy-Via G. Merliani 31, 80127 Naples (Italy)], E-mail: mscaglione@tiscali.it

    2008-03-15

    Nosocomial pneumonia is the most frequent hospital-acquired infection. Among mechanically ventilated patients admitted to an intensive-care unit, as many as 7-41% may develop pneumonia. The role of imaging is to identify the presence, location and extent of pulmonary infection and the presence of complications. However, the poor resolution of bedside plain films frequently limits the value of radiography as an accurate diagnostic tool. To date, multi-detector row computed tomography, with its excellent contrast resolution, is the most sensitive modality for evaluating lung parenchyma infections.

  9. Domestic Development of Single-Photon Emission Computed Tomography (SPECT) Unit with Detector based on Silicon Photomultipliers

    Science.gov (United States)

    Grishakov, S.; Ryzhikova, O.; Sergienko, V.; Ansheles, A.; Novikov, S.

    2017-01-01

    The idea of creating a single-photon emission computed tomography (SPECT) unit with solid-state photomultipliers is not new [1], as the problems of noisy analog-to-digital conversion and of the wide spread of intrinsic spatial resolution values between the central and peripheral fields of view could not be solved with gamma-camera detector architectures based on vacuum photomultipliers. This paper offers a new SPECT imaging solution that is free from these problems.

  10. The Environmental Impacts of a Desktop Computer: Influence of Choice of Functional Unit, System Boundary and User Behaviour

    Science.gov (United States)

    Simanovska, J.; Šteina, Māra; Valters, K.; Bažbauers, G.

    2009-01-01

    Pollution prevention during the design phase of products and processes is gaining importance in environmental policy over the historically better-known principle of end-of-pipe pollution reduction. This approach requires predicting the potential environmental impacts to be avoided or reduced, and prioritising the most efficient areas for action. Currently the most appropriate method for this purpose is life cycle assessment (LCA), a method for accounting and attributing all environmental impacts that arise during the lifetime of a product, starting with the production of raw materials and ending with the disposal or recycling of the wasted product at the end of life. The LCA, however, can be misleading if the performers of the study disregard gaps in information and the limitations of the chosen methodology. In this study we researched the environmental impact of desktop computers, using a simplified LCA method (Eco-indicator 99) and developing various scenarios (changing service life, user behaviour, energy supply, etc.). The study demonstrates that actions for improvement lie in very different areas. It also concludes that the approach to defining the functional unit must be sufficiently flexible in order to avoid discounting areas of potential action. Therefore, with regard to computers, we agree with other authors using the functional unit "one computer", but suggest not binding it to service life or usage time, and instead developing several scenarios varying these parameters. The study also demonstrates the importance of a systemic approach when assessing complex product systems: the more complex the system, the broader the scope for potential actions. We conclude that, for computers, which are energy-using and material-intensive products, the measures to reduce environmental impacts lie not only with the producer and user of the particular product, but also with the whole national energy supply and waste management

  11. Population Files for use with CAP88 at Los Alamos

    Energy Technology Data Exchange (ETDEWEB)

    McNaughton, Michael W [Los Alamos National Laboratory; Brock, Burgandy R [Los Alamos National Laboratory

    2012-07-10

    CAP88 (Clean Air Act Assessment Package 1988) is a computer model developed for the US Environmental Protection Agency to assess the potential dose from radionuclide emissions to air and to demonstrate compliance with the Clean Air Act. It has options to calculate either individual doses, in units of mrem, or a collective dose, also called population dose, in units of person-rem. To calculate the collective dose, CAP88 uses a population file such as LANL.pop, which lists the number of people in each sector (N, NNE, NE, etc.) as a function of distance (1 to 2 km, etc.) out to a maximum radius of 80 km. Early population files are described in the Los Alamos National Laboratory (LANL) Environmental Reports for 1985 (page 14) and subsequent years. LA-13469-MS describes a population file based on the 1990 census. These files have been updated several times, most recently in 2006 for CAP88 version 3. The 2006 version used the US census for 2000. The present paper describes the 2012 updates, which use the 2010 census.
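
    The collective-dose arithmetic that the population file feeds is a weighted sum over (sector, distance) cells; the populations and per-cell doses below are made-up illustration values, not LANL data:

```python
# Collective (population) dose in person-rem: sum over sector/distance
# cells of (people in cell) x (individual dose in that cell, mrem),
# then convert mrem to rem. Values are hypothetical.

population = {            # people per (sector, distance-bin) cell
    ("N",   "1-2 km"): 120,
    ("NNE", "1-2 km"): 40,
    ("N",   "2-3 km"): 300,
}
dose_mrem = {             # individual dose per cell, mrem
    ("N",   "1-2 km"): 0.20,
    ("NNE", "1-2 km"): 0.15,
    ("N",   "2-3 km"): 0.05,
}

person_rem = sum(population[c] * dose_mrem[c] for c in population) / 1000.0
print(round(person_rem, 4))   # 0.045
```

    CAP88 performs the same accumulation over all 16 compass sectors and every distance bin out to 80 km.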

  12. Computational design of metal-organic frameworks with paddlewheel-type secondary building units

    Science.gov (United States)

    Schwingenschlogl, Udo; Peskov, Maxim V.; Masghouni, Nejib

    We employ the TOPOS package to study 697 coordination polymers containing paddlewheel-type secondary building units. The underlying nets are analyzed and 3 novel nets are chosen as potential topologies for paddlewheel-type metal-organic frameworks (MOFs). Dicarboxylate linkers are used to build basic structures for novel isoreticular MOF series, aiming at relatively compact structures with a low number of atoms per unit cell. The structures are optimized using density functional theory. Afterwards the Grand Canonical Monte Carlo approach is employed to generate adsorption isotherms for CO2, CO, and CH4 molecules. We utilize the universal force field to simulate the interaction between the molecules and the host MOF. The diffusion behavior of the molecules inside the MOFs is analyzed by molecular dynamics simulations.

  13. Graphics processing unit (GPU)-based computation of heat conduction in thermally anisotropic solids

    Science.gov (United States)

    Nahas, C. A.; Balasubramaniam, Krishnan; Rajagopal, Prabhu

    2013-01-01

    Numerical modeling of anisotropic media is a computationally intensive task, since the physical properties differ along different directions, adding complexity to the field problem. Composite materials, widely used in the aerospace industry because of their light weight, are a very good example of thermally anisotropic media. With advancements in video gaming technology, parallel processors are much cheaper today, and accessibility to higher-end graphics processing devices has increased dramatically over the past couple of years. Since these massively parallel GPUs are very good at handling floating-point arithmetic, they provide a new platform for engineers and scientists to accelerate their numerical models using commodity hardware. In this paper we implement a parallel finite difference model of thermal diffusion through anisotropic media using NVIDIA CUDA (Compute Unified Device Architecture). We use the NVIDIA GeForce GTX 560 Ti as our primary computing device, which consists of 384 CUDA cores clocked at 1645 MHz, with a standard desktop PC as the host platform. We compare the results with a standard CPU implementation for accuracy and speed and draw implications for simulation using the GPU paradigm.
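
    The underlying scheme can be sketched serially; the orthotropic finite-difference update below (distinct diffusivities along x and y) is a simplified CPU analogue of what each CUDA thread computes per grid point, and the grid size and coefficients are illustrative:

```python
import numpy as np

# Explicit finite-difference step for 2-D heat conduction with
# direction-dependent diffusivities (a simple orthotropic case;
# the full anisotropic tensor adds cross-derivative terms).
def step(T, kx, ky, dx, dy, dt):
    Tn = T.copy()
    Tn[1:-1, 1:-1] = (
        T[1:-1, 1:-1]
        + kx * dt / dx**2 * (T[1:-1, 2:] - 2 * T[1:-1, 1:-1] + T[1:-1, :-2])
        + ky * dt / dy**2 * (T[2:, 1:-1] - 2 * T[1:-1, 1:-1] + T[:-2, 1:-1])
    )
    return Tn

T = np.zeros((21, 21))
T[10, 10] = 100.0                      # hot spot in the centre
for _ in range(100):
    T = step(T, kx=1.0, ky=0.25, dx=1.0, dy=1.0, dt=0.2)

# Heat spreads faster along x than y because kx > ky.
print(T[10, 12] > T[12, 10])   # True
```

    On the GPU, each thread evaluates this stencil for one grid point per time step, with the time-stepping loop on the host.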

  14. A Method to Defend File-Attacking

    Institute of Scientific and Technical Information of China (English)

    HE Hongjun; LUO Li; CAO Sihua; FENG Tao; PAN Li; ZOU Zhiji

    2006-01-01

    The paper points out that the deep reason why modern computer systems fail to defend against malware lies in the fact that the user has no right to control access to information, and proposes an explicit authorization mechanism. Its basic idea is that the user explicitly authorizes, for each program, the set of files it may access, and all file access operations are monitored; if a program requests access to a file outside the authorized file set, the request is refused, which indicates that the program is malicious or has design errors. Computers based on this novel mechanism can reliably protect information from attack and have good software and hardware compatibility. A testing system is presented to validate the theory.
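
    A user-space sketch of the explicit-authorization idea (a real implementation would hook the operating system's file-access path rather than wrap `open`; the monitor class and file names here are hypothetical):

```python
import tempfile
from pathlib import Path

class AuthorizedOpen:
    """Toy monitor: a program may only open files that the user
    explicitly authorized for it; any other access is refused,
    flagging the program as malicious or buggy."""

    def __init__(self, allowed):
        self.allowed = {Path(p).resolve() for p in allowed}

    def open(self, path, mode="r"):
        p = Path(path).resolve()
        if p not in self.allowed:
            raise PermissionError(f"access to {p} not authorized")
        return open(p, mode)

# Grant the program access to a single data file only.
data = tempfile.NamedTemporaryFile(delete=False)
data.write(b"ok")
data.close()

monitor = AuthorizedOpen([data.name])
with monitor.open(data.name, "rb") as f:
    print(f.read())                       # b'ok'

try:
    monitor.open("unauthorized.txt")      # outside the authorized set
except PermissionError:
    print("refused")
```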

  15. Cognitive computation of single fuzzy unit event based on problems

    Institute of Scientific and Technical Information of China (English)

    冯康

    2015-01-01

    To correct the shortcomings of existing cognitive computations, a problem-based cognitive computation of single fuzzy unit events is proposed. It comprises three different stages: perception computation, modeling computation, and decision computation. Perception computation preprocesses the single fuzzy unit events occurring in the outside world according to the problems and the models, and computes the selected single fuzzy unit events into cognitions; modeling computation computes the different cognitions into models; decision computation receives externally submitted instructions and uses the models to compute the answers that fulfil the instructions. Experimental results demonstrate that the problem-based cognitive computation of single fuzzy unit events corrects the shortcomings of existing cognitive computations, so it accurately simulates the process by which the human brain processes cognitive information.

  16. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview In autumn the main focus was to process and handle CRAFT data and to perform the Summer08 MC production. The operational aspects were well covered by regular Computing Shifts, experts on duty and Computing Run Coordination. At the Computing Resource Board (CRB) in October a model to account for service work at Tier 2s was approved. The computing resources for 2009 were reviewed for presentation at the C-RRB. The quarterly resource monitoring is continuing. Facilities/Infrastructure operations Operations during CRAFT data taking ran fine. This proved to be a very valuable experience for T0 workflows and operations. The transfers of custodial data to most T1s went smoothly. A first round of reprocessing started at the Tier-1 centers end of November; it will take about two weeks. The Computing Shifts procedure was tested full scale during this period and proved to be very efficient: 30 Computing Shifts Persons (CSP) and 10 Computing Resources Coordinators (CRC). The shift program for the shut down w...

  17. Reduction of computing time for seismic applications based on the Helmholtz equation by Graphics Processing Units

    OpenAIRE

    Knibbe, H.P.

    2015-01-01

    The oil and gas industry makes use of computationally intensive algorithms to provide an image of the subsurface. The image is obtained by sending wave energy into the subsurface and recording the signal required for a seismic wave to reflect back to the surface from the Earth interfaces that may have different physical properties. A seismic wave is usually generated by shots of known frequencies, placed close to the surface on land or close to the water surface in the sea. Returning waves are ...

  18. File Hide Method Based on Drive Stack Unit

    Institute of Scientific and Technical Information of China (English)

    何耀彬; 李祥和; 孙岩

    2011-01-01

    In order to create new file-hiding points at the driver level of the operating system, the principle of the File System Filter Driver (FSFD) and the structure of the driver stack location are analyzed. By modifying the structure and CompletionRoutine of the driver stack location, together with the I/O Request Packet (IRP) delivery method, two driver-level file-hiding methods are implemented. Files hidden using these methods achieve deep hiding: they cannot be queried by the operating system or accessed through normal channels.

  19. Teacher's Guide for Computational Models of Animal Behavior: A Computer-Based Curriculum Unit to Accompany the Elementary Science Study Guide "Behavior of Mealworms." Artificial Intelligence Memo No. 432.

    Science.gov (United States)

    Abelson, Hal; Goldenberg, Paul

    This experimental curriculum unit suggests how dramatic innovations in classroom content may be achieved through use of computers. The computational perspective is viewed as one which can enrich and transform traditional curricula, act as a focus for integrating insights from diverse disciplines, and enable learning to become more active and…

  20. Paradigm of Legal Protection of Computer Software Contracts in the United States: Brief Overview of “Principles of the Law of Software Contracts”

    Science.gov (United States)

    Furuya, Haruhisa; Hiratsuka, Mitsuyoshi

    This article overviews the historical transition of legal protection of computer software contracts in the United States and presents how it should function under the Uniform Commercial Code and its amended Article 2B, the Uniform Computer Information Transactions Act, and also the recently approved "Principles of the Law of Software Contracts".

  1. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview During the past three months activities were focused on data operations, testing and re-enforcing shift and operational procedures for data production and transfer, MC production and on user support. Planning of the computing resources in view of the new LHC calendar is ongoing. Two new task forces were created for supporting the integration work: Site Commissioning, which develops tools helping distributed sites to monitor job and data workflows, and Analysis Support, collecting the user experience and feedback during analysis activities and developing tools to increase efficiency. The development plan for DMWM for 2009/2011 was developed at the beginning of the year, based on the requirements from the Physics, Computing and Offline groups (see Offline section). The Computing management meeting at FermiLab on February 19th and 20th was an excellent opportunity to discuss the impact of, and to address issues and solutions to, the main challenges facing CMS computing. The lack of manpower is particul...

  2. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction CMS distributed computing system performed well during the 2011 start-up. The events in 2011 have more pile-up and are more complex than last year; this results in longer reconstruction times and harder events to simulate. Significant increases in computing capacity were delivered in April for all computing tiers, and the utilisation and load is close to the planning predictions. All computing centre tiers performed their expected functionalities. Heavy-Ion Programme The CMS Heavy-Ion Programme had a very strong showing at the Quark Matter conference. A large number of analyses were shown. The dedicated heavy-ion reconstruction facility at the Vanderbilt Tier-2 is still involved in some commissioning activities, but is available for processing and analysis. Facilities and Infrastructure Operations Facility and Infrastructure operations have been active with operations and several important deployment tasks. Facilities participated in the testing and deployment of WMAgent and WorkQueue+Request...

  3. COMPUTING

    CERN Multimedia

    P. McBride

    The Computing Project is preparing for a busy year where the primary emphasis of the project moves towards steady operations. Following the very successful completion of the Computing Software and Analysis challenge (CSA06) last fall, we have reorganized and established four groups in the computing area: Commissioning, User Support, Facility/Infrastructure Operations and Data Operations. These groups work closely together with groups from the Offline Project in planning for data processing and operations. Monte Carlo production has continued since CSA06, with about 30M events produced each month to be used for HLT studies and physics validation. Monte Carlo production will continue throughout the year in the preparation of large samples for physics and detector studies, ramping to 50M events/month for CSA07. Commissioning of the full CMS computing system is a major goal for 2007. Site monitoring is an important commissioning component and work is ongoing to devise CMS-specific tests to be included in Service Availa...

  4. ACONC Files

    Data.gov (United States)

    U.S. Environmental Protection Agency — ACONC files containing simulated ozone and PM2.5 fields that were used to create the model difference plots shown in the journal article. This dataset is associated...

  5. 831 Files

    Data.gov (United States)

    Social Security Administration — SSA-831 file is a collection of initial and reconsideration adjudicative level DDS disability determinations. (A few hearing level cases are also present, but the...

  6. Design of a Distributed Control System Using a Personal Computer and Micro Control Units for Humanoid Robots

    Directory of Open Access Journals (Sweden)

    Mitsuhiro Yamano

    2010-01-01

    Full Text Available Problem statement: Humanoid robots have many motors and sensors, and many control methods are used to carry out their complicated tasks, so efficient control systems are required. Approach: This study presented a distributed control system using a Personal Computer (PC) and Micro Control Units (MCUs) for humanoid robots. Distributed control systems have the advantages that parallel processing using multiple computers is possible and that cables in the system can be short. The functions required of the control system for humanoid robots were discussed and, based on this discussion, the hardware of the system, including a PC and MCUs, was proposed. The system was designed to carry out the robot control process efficiently, and it can be expanded easily by increasing the number of MCU boards. The software of the system, for feedback control of the motors and communication between the computers, was proposed, allowing flexible switching of motor control methods. Results: Experiments were performed to show the effectiveness of the system. The sampling frequency of the whole system can be about 0.5 kHz, and that in local MCUs can be about 10 kHz. The control method of a motor could be changed during motion in an experiment controlling four joints of the robot. Conclusion: The results of the experiments showed that the distributed control system proposed in this study is effective for humanoid robots.

  7. File Type Identification of File Fragments using Longest Common Subsequence (LCS)

    Science.gov (United States)

    Rahmat, R. F.; Nicholas, F.; Purnamawati, S.; Sitompul, O. S.

    2017-01-01

    A computer forensic analyst is a person in charge of investigation and evidence tracking. In certain cases, a file needed as digital evidence has been deleted, and it is difficult to reconstruct because it often loses its header and cannot be identified while being restored. Therefore, a method is required for identifying the file type of file fragments. In this research, we propose a Longest Common Subsequence (LCS) method that consists of three steps, namely training, testing and validation, to identify the file type from file fragments. From all testing results we conclude that the proposed method works well, achieving 92.91% accuracy in identifying the file type of file fragments across three data types.
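
    The LCS similarity at the heart of the method is the classic dynamic program; the byte "signatures" below are invented stand-ins for the trained per-type fragments:

```python
def lcs_length(a: bytes, b: bytes) -> int:
    """Classic dynamic-programming longest common subsequence length."""
    prev = [0] * (len(b) + 1)
    for x in a:
        cur = [0]
        for j, y in enumerate(b, 1):
            cur.append(prev[j - 1] + 1 if x == y
                       else max(prev[j], cur[j - 1]))
        prev = cur
    return prev[-1]

# Tiny made-up "signatures" per file type (real training would use many
# labelled fragments, not short magic-number-like strings).
signatures = {
    "png":  b"\x89PNG\r\n\x1a\n",
    "pdf":  b"%PDF-1.4",
    "html": b"<html><head>",
}

def identify(fragment: bytes) -> str:
    # Predict the type whose signature shares the longest subsequence.
    return max(signatures, key=lambda t: lcs_length(fragment, signatures[t]))

print(identify(b"<div><head>"))   # -> html
```

    Because LCS tolerates insertions and deletions, it can still match a fragment that has lost its header, which exact magic-number matching cannot.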

  8. Efficacy of Twisted File Adaptive, Reciproc and ProTaper Universal Retreatment instruments for root-canal-filling removal: A cone-beam computed tomography study.

    Science.gov (United States)

    Akbulut, Makbule Bilge; Akman, Melek; Terlemez, Arslan; Magat, Guldane; Sener, Sevgi; Shetty, Heeresh

    2016-01-01

    The aim of this study was to evaluate the efficacy of Twisted File (TF) Adaptive, Reciproc, and ProTaper Universal Retreatment (UR) System instruments for removing root-canal-filling. Sixty single-rooted teeth were decoronated, instrumented and obturated. Preoperative CBCT scans were taken and the teeth were retreated with TF Adaptive, Reciproc, ProTaper UR, or hand files (n=15). Then, the teeth were rescanned, and the percentage volume of the residual root-canal-filling material was established. The total time for retreatment was recorded, and the data were statistically analyzed. The statistical ranking of the residual filling material volume was as follows: hand file=TF Adaptive>ProTaper UR=Reciproc. The ProTaper UR and Reciproc systems required shorter periods of time for retreatment. Root canal filling was more efficiently removed by using Reciproc and ProTaper UR instruments than TF Adaptive instruments and hand files. The TF Adaptive system was advantageous over hand files with regard to operating time.

  9. Compute-unified device architecture implementation of a block-matching algorithm for multiple graphical processing unit cards

    Science.gov (United States)

    Massanes, Francesc; Cadennes, Marie; Brankov, Jovan G.

    2011-07-01

    We describe and evaluate a fast implementation of a classical block-matching motion estimation algorithm for multiple graphical processing units (GPUs) using the compute unified device architecture computing engine. The implemented block-matching algorithm uses the summed absolute difference error criterion and full grid search (FS) for finding the optimal block displacement. In this evaluation, we compared the execution time of GPU and CPU implementations for images of various sizes, using integer and noninteger search grids. The results show that use of a GPU card can shorten computation time by a factor of 200 for an integer search grid and 1000 for a noninteger search grid. The additional speedup for a noninteger search grid comes from the fact that the GPU has built-in hardware for image interpolation. Further, when using multiple GPU cards, the presented evaluation shows the importance of the data-splitting method across cards, but an almost linear speedup with the number of cards is achievable. In addition, we compared the execution time of the proposed FS GPU implementation with two existing, highly optimized non-full-grid-search CPU-based motion estimation methods, namely the implementation of the pyramidal Lucas-Kanade optical flow algorithm in OpenCV and the simplified unsymmetrical multi-hexagon search in the H.264/AVC standard. In these comparisons, the FS GPU implementation still showed modest improvement even though its computational complexity is substantially higher than that of the non-FS CPU implementations. We also demonstrated that for an image sequence of 720 × 480 pixels, a resolution commonly used in video surveillance, the proposed GPU implementation is sufficiently fast for real-time motion estimation at 30 frames per second using two NVIDIA C1060 Tesla GPU cards.
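The summed-absolute-difference full-search criterion described above can be sketched as follows. This is a plain NumPy CPU version for illustration only; the paper's CUDA kernels are not reproduced here, and the block size, search range, and array contents are arbitrary assumptions:

```python
import numpy as np

def full_search_sad(ref, cur, bx, by, bs=8, sr=4):
    """Find the displacement of the bs x bs block at (by, bx) in `cur`
    by exhaustive SAD search over a +/-sr integer grid in `ref`."""
    block = cur[by:by + bs, bx:bx + bs].astype(np.int32)
    best, best_mv = None, (0, 0)
    for dy in range(-sr, sr + 1):
        for dx in range(-sr, sr + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + bs > ref.shape[0] or x + bs > ref.shape[1]:
                continue  # candidate block falls outside the reference frame
            sad = np.abs(ref[y:y + bs, x:x + bs].astype(np.int32) - block).sum()
            if best is None or sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (32, 32), dtype=np.uint8)
cur = np.zeros_like(ref)
cur[8:16, 8:16] = ref[10:18, 9:17]   # block in cur corresponds to ref shifted by (2, 1)
print(full_search_sad(ref, cur, 8, 8))  # (2, 1)
```

On a GPU, each candidate displacement (and each block) can be evaluated by an independent thread, which is why the exhaustive search parallelizes so well.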

  10. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing activity had ramped down after the completion of the reprocessing of the 2012 data and parked data, but is increasing with new simulation samples for analysis and upgrade studies. Much of the Computing effort is currently involved in activities to improve the computing system in preparation for 2015. Operations Office Since the beginning of 2013, the Computing Operations team successfully re-processed the 2012 data in record time, in part by using opportunistic resources such as the San Diego Supercomputer Center to re-process the primary datasets HTMHT and MultiJet in Run2012D much earlier than planned. The Heavy-Ion data-taking period was successfully concluded in February, collecting almost 500 T. Figure 3: Number of events per month (data) In LS1, our emphasis is to increase the efficiency and flexibility of the infrastructure and operation. Computing Operations is working on separating disk and tape at the Tier-1 sites and the full implementation of the xrootd federation ...

  11. Identifiable Data Files - Denominator File

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Denominator File combines Medicare beneficiary entitlement status information from administrative enrollment records with third-party payer information and GHP...

  12. Computer

    CERN Document Server

    Atkinson, Paul

    2011-01-01

    The pixelated rectangle we spend most of our day staring at in silence is not the television, as many long feared, but the computer: the ubiquitous portal of work and personal lives. At this point, the computer is so common we barely notice it in our view. It is difficult to envision that, not that long ago, it was a gigantic, room-sized structure accessible only to a few, inspiring as much awe and respect as fear and mystery. Now that the machine has decreased in size and increased in popular use, the computer has become a prosaic appliance, little more noted than a toaster. These dramati...

  13. Bone density: comparative evaluation of Hounsfield units in multislice and cone-beam computed tomography

    Directory of Open Access Journals (Sweden)

    Isabela Maria de Carvalho Crusoé Silva

    2012-12-01

    The aim of this study was to evaluate the validity of the bone density value, in Hounsfield units (HU), of potential implant sites obtained by a specific cone-beam computed tomography (CBCT) device. In this study, the HU values obtained using a multislice CT (MSCT) scanner were used as the gold standard. Twenty mandibles (40 potential implant sites) were scanned using an MSCT scanner (Somatom Sensation 40) and a CBCT scanner (i-CAT). The MSCT images were evaluated using the Syngo CT Workplace software and the CBCT images using the XoranCat software. The images were evaluated twice by three oral radiologists, at 60-day intervals. The trabecular bone density of the same area was evaluated on both images. Intraclass correlation coefficients (ICCs) were calculated to examine the agreement between the examiners and between the two periods of evaluation. The bone density and the area of the ROI were compared by the Student t test and Bland-Altman analysis. ICCs were excellent. The mean HU value obtained using CBCT (418.06) was higher than that obtained using MSCT (313.13), a statistically significant difference (p < 0.0001). In addition, Bland-Altman analysis showed that the HU measures were not equivalent. In conclusion, the bone density in HU obtained with CBCT images from the device studied proved unreliable, since it was higher than that obtained using MSCT.
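The Bland-Altman agreement analysis used above can be illustrated with a minimal sketch: compute the bias (mean difference between the two methods) and the 95% limits of agreement. The paired HU readings below are invented for illustration and are not the study's data:

```python
import numpy as np

def bland_altman(a, b):
    # Bias (mean difference) and 95% limits of agreement between two methods.
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias, sd = diff.mean(), diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

cbct = [420, 398, 455, 430, 402]   # hypothetical paired HU readings, CBCT
msct = [310, 300, 340, 325, 315]   # hypothetical paired HU readings, MSCT
bias, (lo, hi) = bland_altman(cbct, msct)
print(round(bias, 1))  # 103.0
```

A large bias with limits of agreement far from zero, as in the study, indicates the two devices do not produce interchangeable HU values.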

  14. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping-up and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, that was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion An activity that is still in progress is computing for the heavy-ion program. The heavy-ion events are collected without zero suppression, so the event size is much larger at roughly 11 MB per event of RAW. The central collisions are more complex and...

  15. COMPUTING

    CERN Multimedia

    M. Kasemann, P. McBride; edited by M-C. Sawley with contributions from P. Kreuzer, D. Bonacorsi, S. Belforte, F. Wuerthwein, L. Bauerdick, K. Lassila-Perini and M-C. Sawley

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the comput...

  16. USGS Small-scale Dataset - 1:1,000,000-Scale Coastline of the United States 201403 FileGDB 10.1

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer portrays the coastline of the United States, Puerto Rico, and the U.S. Virgin Islands. The United States shoreline of the Great Lakes is also...

  17. Performance of heterogeneous computing with graphics processing unit and many integrated core for hartree potential calculations on a numerical grid.

    Science.gov (United States)

    Choi, Sunghwan; Kwon, Oh-Kyoung; Kim, Jaewook; Kim, Woo Youn

    2016-09-15

    We investigated the performance of heterogeneous computing with graphics processing units (GPUs) and many integrated core (MIC) with 20 CPU cores (20×CPU). As a practical example toward large scale electronic structure calculations using grid-based methods, we evaluated the Hartree potentials of silver nanoparticles with various sizes (3.1, 3.7, 4.9, 6.1, and 6.9 nm) via a direct integral method supported by the sinc basis set. The so-called work stealing scheduler was used for efficient heterogeneous computing via the balanced dynamic distribution of workloads between all processors on a given architecture without any prior information on their individual performances. 20×CPU + 1GPU was up to ∼1.5 and ∼3.1 times faster than 1GPU and 20×CPU, respectively. 20×CPU + 2GPU was ∼4.3 times faster than 20×CPU. The performance enhancement by CPU + MIC was considerably lower than expected because of the large initialization overhead of MIC, although its theoretical performance is similar to that of CPU + GPU. © 2016 Wiley Periodicals, Inc.
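The dynamic-distribution idea behind a work-stealing-style scheduler, each processor pulls the next chunk of work as soon as it becomes free, so faster devices naturally take more of the load without any prior knowledge of their speed, can be sketched with threads pulling from a shared queue. This is a toy model; the worker names and speed factors are hypothetical, not the paper's setup:

```python
import queue
import threading
import time

def worker(tasks, results, wid, speed):
    # Pull chunks until the shared queue is empty; no static partitioning.
    done = 0
    while True:
        try:
            chunk = tasks.get_nowait()
        except queue.Empty:
            break
        time.sleep(0.001 * chunk / speed)  # pretend to compute the chunk
        done += 1
    results[wid] = done

tasks = queue.Queue()
for _ in range(40):        # 40 equal-cost work chunks
    tasks.put(1)

results = {}
threads = [threading.Thread(target=worker, args=(tasks, results, wid, speed))
           for wid, speed in [("gpu", 4.0), ("cpu", 1.0)]]  # hypothetical devices
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sum(results.values()))  # 40: every chunk processed exactly once
```

The faster "device" ends up with more chunks simply because it returns to the queue sooner, which is the load-balancing effect the abstract describes.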

  18. 28 CFR 10.5 - Incorporation of papers previously filed.

    Science.gov (United States)

    2010-07-01

    ... 28 Judicial Administration 1 2010-07-01 2010-07-01 false Incorporation of papers previously filed... CARRYING ON ACTIVITIES WITHIN THE UNITED STATES Registration Statement § 10.5 Incorporation of papers previously filed. Papers and documents already filed with the Attorney General pursuant to the said act...

  19. 32 CFR 536.69 - Retention of file.

    Science.gov (United States)

    2010-07-01

    ... 32 National Defense 3 2010-07-01 2010-07-01 true Retention of file. 536.69 Section 536.69 National... UNITED STATES Investigation and Processing of Claims § 536.69 Retention of file. After final action has been taken, the settlement authority will retain the file until at least one month after either...

  20. 46 CFR 550.401 - Who may file.

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 9 2010-10-01 2010-10-01 false Who may file. 550.401 Section 550.401 Shipping FEDERAL... Petitions for Section 19 Relief § 550.401 Who may file. Any person who has been harmed by, or who can... the United States, may file a petition for relief under the provisions of this part....

  1. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction The first data taking period of November produced a first scientific paper, and this is a very satisfactory step for Computing. It also gave the invaluable opportunity to learn and debrief from this first, intense period, and make the necessary adaptations. The alarm procedures between different groups (DAQ, Physics, T0 processing, Alignment/calibration, T1 and T2 communications) have been reinforced. A major effort has also been invested into remodeling and optimizing operator tasks in all activities in Computing, in parallel with the recruitment of new Cat A operators. The teams are being completed and by mid year the new tasks will have been assigned. CRB (Computing Resource Board) The Board met twice since last CMS week. In December it reviewed the experience of the November data-taking period and could measure the positive improvements made for the site readiness. It also reviewed the policy under which Tier-2 are associated with Physics Groups. Such associations are decided twice per ye...

  2. COMPUTING

    CERN Multimedia

    P. McBride

    It has been a very active year for the computing project with strong contributions from members of the global community. The project has focused on site preparation and Monte Carlo production. The operations group has begun processing data from P5 as part of the global data commissioning. Improvements in transfer rates and site availability have been seen as computing sites across the globe prepare for large scale production and analysis as part of CSA07. Preparations for the upcoming Computing Software and Analysis Challenge CSA07 are progressing. Ian Fisk and Neil Geddes have been appointed as coordinators for the challenge. CSA07 will include production tests of the Tier-0 production system, reprocessing at the Tier-1 sites and Monte Carlo production at the Tier-2 sites. At the same time there will be a large analysis exercise at the Tier-2 centres. Pre-production simulation of the Monte Carlo events for the challenge is beginning. Scale tests of the Tier-0 will begin in mid-July and the challenge it...

  3. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction During the past six months, Computing participated in the STEP09 exercise, had a major involvement in the October exercise and has been working with CMS sites on improving open issues relevant for data taking. At the same time operations for MC production, real data reconstruction and re-reconstructions and data transfers at large scales were performed. STEP09 was successfully conducted in June as a joint exercise with ATLAS and the other experiments. It gave good indication about the readiness of the WLCG infrastructure with the two major LHC experiments stressing the reading, writing and processing of physics data. The October Exercise, in contrast, was conducted as an all-CMS exercise, where Physics, Computing and Offline worked on a common plan to exercise all steps to efficiently access and analyze data. As one of the major results, the CMS Tier-2s demonstrated to be fully capable for performing data analysis. In recent weeks, efforts were devoted to CMS Computing readiness. All th...

  4. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping-up and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, that was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion The Tier 0 infrastructure was able to repack and promptly reconstruct heavy-ion collision data. Two copies were made of the data at CERN using a large CASTOR disk pool, and the core physics sample was replicated ...

  5. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

    Introduction Computing continued with a high level of activity over the winter in preparation for conferences and the start of the 2012 run. 2012 brings new challenges with a new energy, more complex events, and the need to make the best use of the available time before the Long Shutdown. We expect to be resource constrained on all tiers of the computing system in 2012 and are working to ensure the high-priority goals of CMS are not impacted. Heavy ions After a successful 2011 heavy-ion run, the programme is moving to analysis. During the run, the CAF resources were well used for prompt analysis. Since then in 2012 on average 200 job slots have been used continuously at Vanderbilt for analysis workflows. Operations Office As of 2012, the Computing Project emphasis has moved from commissioning to operation of the various systems. This is reflected in the new organisation structure where the Facilities and Data Operations tasks have been merged into a common Operations Office, which now covers everything ...

  6. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the co...

  7. Atomsk: A tool for manipulating and converting atomic data files

    Science.gov (United States)

    Hirel, Pierre

    2015-12-01

    We present a libre, open-source command-line program named Atomsk that aims to create and manipulate atomic systems for the purposes of ab initio calculations, classical atomistic calculations, and visualization, in the areas of computational physics and chemistry. The program can run on GNU/Linux, Apple Mac OS X, and Microsoft Windows platforms. Many file formats are supported, allowing for easy conversion of atomic configuration files. The command-line options allow the user to construct supercells, insert point defects (vacancies, interstitials), line defects (dislocations, cracks), and planar defects (stacking faults), as well as apply other transformations. Several options can be applied consecutively, allowing for a comprehensive workflow from a unit cell to the final atomic system. Some modes make it possible to construct complex structures or to perform specific analyses of atomic systems.

  8. Detection Of Alterations In Audio Files Using Spectrograph Analysis

    Directory of Open Access Journals (Sweden)

    Anandha Krishnan G

    2015-08-01

    This study was carried out to detect changes in audio files using spectrographs. An audio file format is a file format for storing digital audio data on a computer system. A sound spectrograph is a laboratory instrument that displays a graphical representation of the strengths of the various component frequencies of a sound as time passes. The objectives of the study were to find the changes in the spectrograph of an audio file after altering it, to compare these changes with the spectrograph of the original file, and to check for similarities and differences between MP3 and WAV. Five different alterations were carried out on each audio file to analyze the differences between the original and the altered file. To alter an MP3 or WAV file by cut-copy, the file was opened in Audacity and a different audio segment was pasted into it; this new file was then analyzed to view the differences. The noise was reduced by adjusting the necessary parameters in the dialog box. The edited audio file was then opened in the software named Spek, which after analysis produces a graph of that particular file; this graph was saved for further comparison. The graph of the original audio was combined with that of the edited file to see the alterations.
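The spectrograph comparison described above can be approximated in code by computing magnitude spectrograms of the original and altered signals and looking at where they diverge. This is a minimal NumPy STFT sketch with synthetic tones, not the study's audio files or the Spek tool:

```python
import numpy as np

def spectrogram(signal, nfft=256, hop=128):
    # Magnitude STFT: Hann-windowed frames, one FFT column per hop.
    win = np.hanning(nfft)
    n_frames = 1 + (len(signal) - nfft) // hop
    frames = np.stack([signal[i * hop:i * hop + nfft] * win for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1)).T  # (freq bins, time frames)

fs = 8000
t = np.arange(fs) / fs
original = np.sin(2 * np.pi * 440 * t)            # one second of a 440 Hz tone
altered = original.copy()
altered[4000:] = np.sin(2 * np.pi * 880 * t[4000:])  # splice in a different tone

diff = np.abs(spectrogram(original) - spectrogram(altered))
frame_energy = diff.sum(axis=0)  # per-frame energy of the spectral difference
print(frame_energy[0] == 0.0, frame_energy[-1] > 0)  # True True
```

The difference energy is zero while the two signals agree and jumps at the splice point, which is exactly the kind of alteration a visual spectrograph comparison reveals.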

  9. Museums Universe Data File (MUDF) FY 2013

    Data.gov (United States)

    Institute of Museum and Library Services — The Museum Universe Data File (MUDF) contains information about known museums in the United States using data collected and aggregated from a variety of sources.

  10. COMPUTING

    CERN Multimedia

    Matthias Kasemann

    Overview The main focus during the summer was to handle data coming from the detector and to perform Monte Carlo production. The lessons learned during the CCRC and CSA08 challenges in May were addressed by dedicated PADA campaigns lead by the Integration team. Big improvements were achieved in the stability and reliability of the CMS Tier1 and Tier2 centres by regular and systematic follow-up of faults and errors with the help of the Savannah bug tracking system. In preparation for data taking the roles of a Computing Run Coordinator and regular computing shifts monitoring the services and infrastructure as well as interfacing to the data operations tasks are being defined. The shift plan until the end of 2008 is being put together. User support worked on documentation and organized several training sessions. The ECoM task force delivered the report on “Use Cases for Start-up of pp Data-Taking” with recommendations and a set of tests to be performed for trigger rates much higher than the ...

  11. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction A large fraction of the effort was focused during the last period into the preparation and monitoring of the February tests of Common VO Computing Readiness Challenge 08. CCRC08 is being run by the WLCG collaboration in two phases, between the centres and all experiments. The February test is dedicated to functionality tests, while the May challenge will consist of running at all centres and with full workflows. For this first period, a number of functionality checks of the computing power, data repositories and archives as well as network links are planned. This will help assess the reliability of the systems under a variety of loads, and identifying possible bottlenecks. Many tests are scheduled together with other VOs, allowing the full scale stress test. The data rates (writing, accessing and transfer¬ring) are being checked under a variety of loads and operating conditions, as well as the reliability and transfer rates of the links between Tier-0 and Tier-1s. In addition, the capa...

  12. COMPUTING

    CERN Multimedia

    P. MacBride

    The Computing Software and Analysis Challenge CSA07 has been the main focus of the Computing Project for the past few months. Activities began over the summer with the preparation of the Monte Carlo data sets for the challenge and tests of the new production system at the Tier-0 at CERN. The pre-challenge Monte Carlo production was done in several steps: physics generation, detector simulation, digitization, conversion to RAW format and the samples were run through the High Level Trigger (HLT). The data was then merged into three "Soups": Chowder (ALPGEN), Stew (Filtered Pythia) and Gumbo (Pythia). The challenge officially started when the first Chowder events were reconstructed on the Tier-0 on October 3rd. The data operations teams were very busy during the the challenge period. The MC production teams continued with signal production and processing while the Tier-0 and Tier-1 teams worked on splitting the Soups into Primary Data Sets (PDS), reconstruction and skimming. The storage sys...

  13. COMPUTING

    CERN Multimedia

    Contributions from I. Fisk

    2012-01-01

    Introduction The start of the 2012 run has been busy for Computing. We have reconstructed, archived, and served a larger sample of new data than in 2011, and we are in the process of producing an even larger new sample of simulations at 8 TeV. The running conditions and system performance are largely what was anticipated in the plan, thanks to the hard work and preparation of many people. Heavy ions Heavy Ions has been actively analysing data and preparing for conferences.  Operations Office Figure 6: Transfers from all sites in the last 90 days For ICHEP and the Upgrade efforts, we needed to produce and process record amounts of MC samples while supporting the very successful data-taking. This was a large burden, especially on the team members. Nevertheless the last three months were very successful and the total output was phenomenal, thanks to our dedicated site admins who keep the sites operational and the computing project members who spend countless hours nursing the...

  14. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

      Introduction Computing activity has been running at a sustained, high rate as we collect data at high luminosity, process simulation, and begin to process the parked data. The system is functional, though a number of improvements are planned during LS1. Many of the changes will impact users, we hope only in positive ways. We are trying to improve the distributed analysis tools as well as the ability to access more data samples more transparently.  Operations Office Figure 2: Number of events per month, for 2012 Since the June CMS Week, Computing Operations teams successfully completed data re-reconstruction passes and finished the CMSSW_53X MC campaign with over three billion events available in AOD format. Recorded data was successfully processed in parallel, exceeding 1.2 billion raw physics events per month for the first time in October 2012 due to the increase in data-parking rate. In parallel, large efforts were dedicated to WMAgent development and integrati...

  15. COMPUTING

    CERN Document Server

    2010-01-01

    Introduction Just two months after the “LHC First Physics” event of 30th March, the analysis of the O(200) million 7 TeV collision events in CMS accumulated during the first 60 days is well under way. The consistency of the CMS computing model has been confirmed during these first weeks of data taking. This model is based on a hierarchy of use-cases deployed between the different tiers and, in particular, the distribution of RECO data to T1s, who then serve data on request to T2s, along a topology known as “fat tree”. Indeed, during this period this model was further extended by almost full “mesh” commissioning, meaning that RECO data were shipped to T2s whenever possible, enabling additional physics analyses compared with the “fat tree” model. Computing activities at the CMS Analysis Facility (CAF) have been marked by a good time response for a load almost evenly shared between ALCA (Alignment and Calibration tasks - highest p...

  16. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing operation has been lower as the Run 1 samples are completing and smaller samples for upgrades and preparations are ramping up. Much of the computing activity is focusing on preparations for Run 2 and improvements in data access and flexibility of using resources. Operations Office Data processing was slow in the second half of 2013 with only the legacy re-reconstruction pass of 2011 data being processed at the sites.   Figure 1: MC production and processing was more in demand with a peak of over 750 Million GEN-SIM events in a single month.   Figure 2: The transfer system worked reliably and efficiently and transferred on average close to 520 TB per week with peaks at close to 1.2 PB.   Figure 3: The volume of data moved between CMS sites in the last six months   The tape utilisation was a focus for the operation teams with frequent deletion campaigns from deprecated 7 TeV MC GEN-SIM samples to INVALID datasets, which could be cleaned up...

  17. Design and Implementation of Log Structured FAT and ExFAT File Systems

    Directory of Open Access Journals (Sweden)

    Keshava Munegowda

    2014-08-01

    The File Allocation Table (FAT) file system is supported by multiple Operating Systems (OS); hence, the FAT file system is a universal exchange format for files and directories on Solid State Drives (SSDs) and Hard Disk Drives (HDDs). Microsoft Corporation introduced a new file system called the Extended FAT file system (ExFAT) to support larger storage devices; the ExFAT file system is optimized for use with SSDs. But neither FAT nor ExFAT is power-fail safe. This means that an uncontrolled power loss, or abrupt removal of the storage device from the computer system during a file system update, corrupts file system metadata and hence leads to loss of data on the storage device. This paper implements logging and committing features for the FAT and ExFAT file systems and ensures that the file system metadata remains consistent across an abrupt power loss or device removal.
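The logging-and-committing idea, writing the intended metadata change to a log and flushing it to stable storage before updating the metadata itself, so that a crash in between can be repaired by replaying the log, can be sketched generically. The file names and record format here are illustrative, not the paper's on-disk layout:

```python
import json
import os

LOG, META = "fs.log", "fs.meta"  # hypothetical log and metadata files

def commit(update: dict) -> None:
    # 1. Append the intended change to the log and force it to disk
    #    *before* touching the metadata (write-ahead logging).
    with open(LOG, "a") as log:
        log.write(json.dumps(update) + "\n")
        log.flush()
        os.fsync(log.fileno())
    # 2. Apply the change to the metadata via an atomic rename, so readers
    #    never observe a half-written metadata file.
    meta = json.load(open(META)) if os.path.exists(META) else {}
    meta.update(update)
    tmp = META + ".tmp"
    with open(tmp, "w") as f:
        json.dump(meta, f)
        f.flush()
        os.fsync(f.fileno())
    os.replace(tmp, META)

def recover() -> dict:
    # After a crash, replay the log over whatever metadata survived.
    meta = json.load(open(META)) if os.path.exists(META) else {}
    if os.path.exists(LOG):
        for line in open(LOG):
            meta.update(json.loads(line))
    return meta

commit({"file1.txt": {"cluster": 7, "size": 1024}})
print(recover()["file1.txt"]["size"])  # 1024
```

If power fails after step 1 but before step 2, the logged record still describes the intended update, so `recover()` reconstructs a consistent view, which is the consistency guarantee the paper adds to FAT/ExFAT.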

  18. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction The Computing Team successfully completed the storage, initial processing, and distribution for analysis of proton-proton data in 2011. There are still a variety of activities ongoing to support winter conference activities and preparations for 2012. Heavy ions The heavy-ion run for 2011 started in early November and has already demonstrated good machine performance and success of some of the more advanced workflows planned for 2011. Data collection will continue until early December. Facilities and Infrastructure Operations Operational and deployment support for WMAgent and WorkQueue+Request Manager components, routinely used in production by Data Operations, are provided. The GlideInWMS and components installation are now deployed at CERN, which is added to the GlideInWMS factory placed in the US. There has been new operational collaboration between the CERN team and the UCSD GlideIn factory operators, covering each others time zones by monitoring/debugging pilot jobs sent from the facto...

  19. Using Hounsfield Units to Assess Osteoporotic Status on Wrist Computed Tomography Scans: Comparison With Dual Energy X-Ray Absorptiometry.

    Science.gov (United States)

    Johnson, Christine C; Gausden, Elizabeth B; Weiland, Andrew J; Lane, Joseph M; Schreiber, Joseph J

    2016-07-01

    Rates of evaluation and treatment for osteoporosis following distal radius fragility fractures remain low. As a subset of patients with these fractures undergo a diagnostic computed tomography (CT) scan of the wrist, bone mineral density (BMD) measurements available from this imaging can be used to detect osteopenia or osteoporosis. This information may consequently prompt intervention to prevent a subsequent fracture. The purpose of this study was to determine if Hounsfield unit (HU) measurements at the wrist correlate with BMD measurements of the hip, femoral neck, and lumbar spine and to assess the ability of these HU measurements to detect osteoporosis of the hip. Forty-five female patients with distal radius fractures who underwent CT and dual-energy X-ray absorptiometry (DXA) scans as part of the management of their wrist fracture were identified. Bone mineral density measurements were made using the regional cancellous bone HU value at the capitate and compared with values obtained by DXA. Hounsfield unit values at the capitate were significantly correlated with BMD and T-scores at the femoral neck, hip, and lumbar spine. An HU threshold of 307 in the capitate optimized sensitivity (86%) and specificity (94%) for detecting osteoporotic patients. By demonstrating that capitate HU measurements from clinical CT scans are correlated with BMD and T-scores at the hip, femoral neck, and lumbar spine, our data suggest that clinical CT scans should have a role in detecting osteopenia and osteoporosis. Diagnostic III. Copyright © 2016 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.
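The reported threshold behaviour, flagging patients at or below a given capitate HU value, can be illustrated by computing sensitivity and specificity directly. The patient values below are invented for illustration; only the 307 HU cutoff comes from the abstract:

```python
def sens_spec(hu_values, osteoporotic, threshold):
    # Patients at or below the HU threshold are flagged as osteoporotic.
    pairs = list(zip(hu_values, osteoporotic))
    tp = sum(h <= threshold and o for h, o in pairs)       # correctly flagged
    fn = sum(h > threshold and o for h, o in pairs)        # missed cases
    tn = sum(h > threshold and not o for h, o in pairs)    # correctly cleared
    fp = sum(h <= threshold and not o for h, o in pairs)   # false alarms
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical capitate HU values and DXA-confirmed osteoporosis status.
hu = [250, 280, 310, 330, 400, 320, 420, 305]
osteo = [True, True, False, False, False, True, False, True]
sens, spec = sens_spec(hu, osteo, 307)
print(sens, spec)  # 0.75 1.0
```

Moving the threshold trades the two quantities off against each other; the study's 307 HU value is the point that balanced them best in its cohort.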

  20. Application of an EPID for fast daily dosimetric quality control of a fully computer-controlled treatment unit

    Energy Technology Data Exchange (ETDEWEB)

    Dirkx, M.L.P.; Kroonwijk, M.; De Boer, J.C.J.; Heijmen, B.J.M. [Nederlands Kanker Inst. 'Antoni van Leeuwenhoekhuis', Amsterdam (Netherlands)

    1995-12-01

    The MM50 Racetrack Microtron, suited for sophisticated three-dimensional computer-controlled conformal radiotherapy techniques, is a complex treatment unit in various respects. Therefore, for a number of gantry angles, daily quality control of the absolute output and the profiles of the scanned photon beams is mandatory. A fast method for these daily checks, based on dosimetric measurements with the Philips SRI-100 Electronic Portal Imaging Device, has been developed and tested. Open beams are checked for four different gantry angles; for gantry angle 0, a wedged field is checked as well. The fields are set up one after another under full computer control. Performing and analyzing the measurements takes about ten minutes. The applied EPID has favourable characteristics for dosimetric quality control measurements: absolute measurements reproduce within 0.5% (1 SD) and the reproducibility of a relative (2-D) fluence profile is 0.2% (1 SD). The day-to-day sensitivity stability over a period of a month is 0.6% (1 SD). EPID signals are linear with the applied dose to within 0.2%. The 2-D fluence profile of the 25 MV photon beam of the MM50 is very stable in time: during a period of one year, a maximum fluctuation of 2.6% was observed. On one occasion, a deviation of 6% in the cGy/MU value was detected; only because of the morning quality control checks performed with the EPID could erroneous dose delivery to patients be avoided, since there is no interlock in the MM50 system that would have prevented patient treatment. Based on our experiences and on clinical requirements regarding the acceptability of deviations in beam characteristics, a protocol has been developed that includes action levels for additional investigations. Studies on the application of the SRI-100 for in vivo dosimetry on the MM50 have been started.
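A daily output check of this kind reduces to comparing each measurement against a baseline and flagging deviations that exceed an action level. The sketch below illustrates the idea; the warning and action thresholds are assumptions for the example, not the protocol from the paper.

```python
def qc_check(measured_output, baseline_output, warn_pct=2.0, action_pct=4.0):
    """Classify a daily output measurement relative to its baseline.
    The percentage thresholds here are illustrative assumptions."""
    deviation = 100.0 * (measured_output - baseline_output) / baseline_output
    if abs(deviation) >= action_pct:
        status = "action: investigate before treating patients"
    elif abs(deviation) >= warn_pct:
        status = "warning: repeat measurement"
    else:
        status = "pass"
    return deviation, status

# A 6% under-dose, like the one the EPID checks caught.
dev, status = qc_check(0.94, 1.00)
print(round(dev, 1), status)  # -6.0 action: investigate before treating patients
```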

  1. Tax Unit Boundaries

    Data.gov (United States)

    Kansas Data Access and Support Center — The Statewide GIS Tax Unit boundary file was created through a collaborative partnership between the State of Kansas Department of Revenue Property Valuation...

  2. An inconvenient truth: file-level metadata and in-file metadata caching in the (file-agnostic) ATLAS event store.

    Energy Technology Data Exchange (ETDEWEB)

    Malon, D.; van Gemmeren, P.; Hawkings, R.; Schaffer, A.; High Energy Physics; CERN; Univ. Paris-Sud

    2008-01-01

    In the ATLAS event store, files are sometimes 'an inconvenient truth.' From the point of view of the ATLAS distributed data management system, files are too small - datasets are the units of interest. From the point of view of the ATLAS event store architecture, files are simply a physical clustering optimization: the units of interest are event collections - sets of events that satisfy common conditions or selection predicates - and such collections may or may not have been accumulated into files that contain those events and no others. It is nonetheless important to maintain file-level metadata, and to cache metadata in event data files. When such metadata may or may not be present in files, or when values may have been updated after files are written and replicated, a clear and transparent model for metadata retrieval from the file itself or from remote databases is required. In this paper we describe how ATLAS reconciles its file and non-file paradigms, the machinery for associating metadata with files and event collections, and the infrastructure for metadata propagation from input to output for provenance record management and related purposes.

  3. An inconvenient truth: file-level metadata and in-file metadata caching in the (file-agnostic) ATLAS event store

    Energy Technology Data Exchange (ETDEWEB)

    Malon, D; Gemmeren, P van [Argonne National Laboratory, 9700 South Cass Avenue, Argonne, IL 60439 (United States); Hawkings, R [European Organization for Nuclear Research, CERN CH-1211 Geneve 23 (Switzerland); Schaffer, A [LAL, Univ Paris-Sud, IN2P3/CNRS, Orsay (France)], E-mail: malon@anl.gov

    2008-07-15

    In the ATLAS event store, files are sometimes 'an inconvenient truth'. From the point of view of the ATLAS distributed data management system, files are too small-datasets are the units of interest. From the point of view of the ATLAS event store architecture, files are simply a physical clustering optimization: the units of interest are event collections-sets of events that satisfy common conditions or selection predicates-and such collections may or may not have been accumulated into files that contain those events and no others. It is nonetheless important to maintain file-level metadata, and to cache metadata in event data files. When such metadata may or may not be present in files, or when values may have been updated after files are written and replicated, a clear and transparent model for metadata retrieval from the file itself or from remote databases is required. In this paper we describe how ATLAS reconciles its file and non-file paradigms, the machinery for associating metadata with files and event collections, and the infrastructure for metadata propagation from input to output for provenance record management and related purposes.
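The retrieval model described above (consult the metadata cached in the file first, and fall back to the remote database when a value is absent or known to be stale) can be sketched as follows; the class, method, and key names are illustrative, not ATLAS interfaces.

```python
class MetadataResolver:
    """Sketch of a transparent metadata-retrieval model: prefer the
    in-file cache, fall back to the authoritative remote database when
    a key is missing or its cached value is known to be stale."""
    def __init__(self, in_file_cache, remote_db, stale_keys=()):
        self.in_file_cache = in_file_cache  # metadata cached in the event file
        self.remote_db = remote_db          # authoritative remote store
        self.stale_keys = set(stale_keys)   # values updated after the file was written

    def get(self, key):
        if key in self.in_file_cache and key not in self.stale_keys:
            return self.in_file_cache[key], "file"
        return self.remote_db[key], "database"

resolver = MetadataResolver(
    in_file_cache={"geometry": "GEO-01", "conditions": "COND-old"},
    remote_db={"geometry": "GEO-01", "conditions": "COND-new", "luminosity": 3.2},
    stale_keys=["conditions"],
)
print(resolver.get("geometry"))    # served from the in-file cache
print(resolver.get("conditions"))  # stale in the file, fetched remotely
print(resolver.get("luminosity"))  # absent from the file, fetched remotely
```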

  4. Computer input and output files associated with ground-water-flow simulations of the Albuquerque Basin, central New Mexico, 1901-94, with projections to 2020; (supplement one to U.S. Geological Survey Water-resources investigations report 94-4251)

    Science.gov (United States)

    Kernodle, J.M.

    1996-01-01

    This report presents the computer input files required to run the three-dimensional ground-water-flow model of the Albuquerque Basin, central New Mexico, documented in Kernodle and others (Kernodle, J.M., McAda, D.P., and Thorn, C.R., 1995, Simulation of ground-water flow in the Albuquerque Basin, central New Mexico, 1901-1994, with projections to 2020: U.S. Geological Survey Water-Resources Investigations Report 94-4251, 114 p.). Output files resulting from the computer simulations are included for reference.

  5. COMPUTING

    CERN Multimedia

    M. Kasemann

    CMS relies on a well-functioning, distributed computing infrastructure. The Site Availability Monitoring (SAM) and the Job Robot submission have been very instrumental for site commissioning, increasing the number of sites that are available to participate in CSA07 and ready to be used for analysis. The commissioning process has been further developed, including "lessons learned" documentation via the CMS twiki. Recently the visualization, presentation, and summarizing of SAM tests for sites has been redesigned; it is now developed by the central ARDA project of WLCG. Work to test the new gLite Workload Management System was performed; a fourfold increase in throughput with respect to the LCG Resource Broker was observed. CMS has designed and launched a new-generation traffic load generator called "LoadTest" to commission and keep exercised all data transfer routes in the CMS PhEDEx topology. Since mid-February, a transfer volume of about 12 P...

  6. The optimal parameter design for a welding unit of manufacturing industry by Taguchi method and computer simulation

    Energy Technology Data Exchange (ETDEWEB)

    Zahraee, S.M.; Chegeni, A.; Toghtamish, A.

    2016-07-01

    Manufacturing systems include a complicated combination of resources such as materials, labor, and machines. Hence, when manufacturing systems are faced with a problem related to the availability of resources, it is difficult to identify the root of the problem accurately and effectively. Managers and engineers in companies are trying to achieve a robust production line with maximum productivity. The main goal of this paper is to design a robust production line, taking productivity into account, in the selected manufacturing industry. This paper presents the application of the Taguchi method along with computer simulation for finding an optimum setting of three controllable factors, namely the numbers of welding machines, hydraulic machines, and cutting machines, by analyzing the effect of noise factors in a selected manufacturing industry. Based on the final results, the optimal design parameters of the welding unit in the selected manufacturing industry are obtained when factor A is at level 2 and factors B and C are at level 1. Therefore, maximum productivity desirability is achieved when the numbers of welding machines, hydraulic machines, and cutting machines are equal to 17, 2, and 1, respectively. This paper has a significant role in designing a robust production line at the lowest cost and in a timely manner based on the Taguchi method. (Author)
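In Taguchi analysis, factor levels such as these are typically chosen by comparing signal-to-noise (S/N) ratios across candidate settings. A minimal sketch for a larger-the-better response such as productivity, with invented measurements:

```python
import math

def sn_larger_is_better(observations):
    """Taguchi S/N ratio for a larger-the-better response:
    SN = -10 * log10(mean(1 / y^2))."""
    n = len(observations)
    return -10.0 * math.log10(sum(1.0 / y ** 2 for y in observations) / n)

# Hypothetical productivity measurements for two factor settings;
# the setting with the higher S/N ratio is preferred.
setting_a = [95.0, 97.0, 96.0]
setting_b = [88.0, 99.0, 85.0]
print(sn_larger_is_better(setting_a) > sn_larger_is_better(setting_b))  # True
```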

  7. Economic Impacts of Potential Foot and Mouth Disease Agro-terrorism in the United States: A Computable General Equilibrium Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Oladosu, Gbadebo A [ORNL; Rose, Adam [University of Southern California, Los Angeles; Bumsoo, Lee [University of Illinois

    2013-01-01

    The foot and mouth disease (FMD) virus has high agro-terrorism potential because it is contagious, can be easily transmitted via inanimate objects and can be spread by wind. An outbreak of FMD in developed countries results in massive slaughtering of animals (for disease control) and disruptions in meat supply chains and trade, with potentially large economic losses. Although the United States has been FMD-free since 1929, the potential of FMD as a deliberate terrorist weapon calls for estimates of the physical and economic damage that could result from an outbreak. This paper estimates the economic impacts of three alternative scenarios of potential FMD attacks using a computable general equilibrium (CGE) model of the US economy. The three scenarios range from a small outbreak successfully contained within a state to a large multi-state attack resulting in slaughtering of 30 percent of the national livestock. Overall, the value of total output losses in our simulations range between $37 billion (0.15% of 2006 baseline economic output) and $228 billion (0.92%). Major impacts stem from the supply constraint on livestock due to massive animal slaughtering. As expected, the economic losses are heavily concentrated in agriculture and food manufacturing sectors, with losses ranging from $23 billion to $61 billion in the two industries.

  8. An evaluation of cone-beam computed tomography use in postgraduate orthodontic programs in the United States and Canada.

    Science.gov (United States)

    Smith, Bradley R; Park, Jae Hyun; Cederberg, Robert A

    2011-01-01

    The purpose of this study was to evaluate the use of cone-beam computed tomography (CBCT) in postgraduate orthodontic residency programs. An anonymous electronic survey was sent to the program director/chair of each of the sixty-nine United States and Canadian postgraduate orthodontic programs, with thirty-six (52.2 percent) of these programs responding. Overall, 83.3 percent of programs reported having access to a CBCT scanner, while 73.3 percent reported regular usage. The vast majority (81.8 percent) used CBCT mainly for specific diagnostic purposes, while 18.2 percent (n=4) used CBCT as a diagnostic tool for every patient. Orthodontic residents received both didactic and practical (hands-on) training or solely didactic training in 59.1 percent and 31.8 percent of programs, respectively. Operation of the CBCT scanner was the responsibility of radiology technicians (54.4 percent), both radiology technicians and orthodontic residents (31.8 percent), and orthodontic residents alone (13.6 percent). Interpretation of CBCT results was the responsibility of a radiologist in 59.1 percent of programs, while residents were responsible for reading and referring abnormal findings in 31.8 percent of programs. Overall, postgraduate orthodontic program CBCT accessibility, usage, training, and interpretation were consistent in Eastern and Western regions, and most CBCT use was for specific diagnostic purposes of impacted/supernumerary teeth, craniofacial anomalies, and temporomandibular joint (TMJ) disorders.

  9. GPUDePiCt: A Parallel Implementation of a Clustering Algorithm for Computing Degenerate Primers on Graphics Processing Units.

    Science.gov (United States)

    Cickovski, Trevor; Flor, Tiffany; Irving-Sachs, Galen; Novikov, Philip; Parda, James; Narasimhan, Giri

    2015-01-01

    In order to make multiple copies of a target sequence in the laboratory, the technique of Polymerase Chain Reaction (PCR) requires the design of "primers", which are short fragments of nucleotides complementary to the flanking regions of the target sequence. If the same primer is to amplify multiple closely related target sequences, then it is necessary to make the primers "degenerate", which allows them to hybridize to target sequences with a limited amount of variability that may have been caused by mutations. However, the PCR technique can only tolerate a limited amount of degeneracy, and therefore the design of degenerate primers requires the identification of reasonably well-conserved regions in the input sequences. We take an existing algorithm for designing degenerate primers that is based on clustering and parallelize it in a web-accessible software package, GPUDePiCt, using a shared memory model and the computing power of Graphics Processing Units (GPUs). We test our implementation on large sets of aligned sequences from the human genome and show a multi-fold speedup for clustering using our hybrid GPU/CPU implementation over a pure CPU approach for these sequences, which consist of more than 7,500 nucleotides. We also demonstrate that this speedup is consistent over larger numbers and longer lengths of aligned sequences.
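The degeneracy budget mentioned above is easy to quantify: a degenerate primer written in IUPAC nucleotide codes represents a number of concrete primers equal to the product of the alternatives at each position. A minimal sketch:

```python
# Number of concrete bases encoded by each IUPAC nucleotide code.
IUPAC_COUNTS = {
    "A": 1, "C": 1, "G": 1, "T": 1,
    "R": 2, "Y": 2, "S": 2, "W": 2, "K": 2, "M": 2,
    "B": 3, "D": 3, "H": 3, "V": 3,
    "N": 4,
}

def degeneracy(primer):
    """Total number of plain primers represented by a degenerate primer:
    the product of the per-position alternative counts."""
    total = 1
    for base in primer.upper():
        total *= IUPAC_COUNTS[base]
    return total

print(degeneracy("ACGT"))   # 1  (no degenerate positions)
print(degeneracy("ACRYT"))  # 4  (R and Y each allow 2 bases)
print(degeneracy("NNA"))    # 16 (two fully degenerate positions)
```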

  10. The optimal parameter design for a welding unit of manufacturing industry by Taguchi method and computer simulation

    Directory of Open Access Journals (Sweden)

    Seyed Mojib Zahraee

    2016-05-01

    Full Text Available Purpose: Manufacturing systems include a complicated combination of resources, such as materials, labors, and machines. Hence, when the manufacturing systems are faced with a problem related to the availability of resources it is difficult to identify the root of the problem accurately and effectively. Managers and engineers in companies are trying to achieve a robust production line based on the maximum productivity. The main goal of this paper is to design a robust production line, taking productivity into account in the selected manufacturing industry. Design/methodology/approach: This paper presents the application of Taguchi method along with computer simulation for finding an optimum factor setting for three controllable factors, which are a number of welding machines, hydraulic machines, and cutting machines by analyzing the effect of noise factors in a selected manufacturing industry. Findings and Originality/value: Based on the final results, the optimal design parameter of welding unit of in the selected manufacturing industry will be obtained when factor A is located at level 2 and B and C are located at level 1. Therefore, maximum productive desirability is achieved when the number of welding machines, hydraulic machines, and cutting machines is equal to 17, 2, and 1, respectively. This paper has a significant role in designing a robust production line by considering the lowest cost and timely manner based on the Taguchi method.

  11. Common File Formats.

    Science.gov (United States)

    Mills, Lauren

    2014-03-21

    An overview of the many file formats commonly used in bioinformatics and genome sequence analysis is presented, including various data file formats, alignment file formats, and annotation file formats. Example workflows illustrate how some of the different file types are typically used.

  12. Effect of Computer Animation Technique on Students' Comprehension of the "Solar System and Beyond" Unit in the Science and Technology Course

    Science.gov (United States)

    Aksoy, Gokhan

    2013-01-01

    The purpose of this study is to determine the effect of computer animation technique on academic achievement of students in the "Solar System and Beyond" unit lecture as part of the Science and Technology course of the seventh grade in primary education. The sample of the study consists of 60 students attending to the 7th grade of primary school…

  13. Chemical Equilibrium, Unit 4: Equilibria in Acid-Base Systems. A Computer-Enriched Module for Introductory Chemistry. Student's Guide and Teacher's Guide.

    Science.gov (United States)

    Settle, Frank A., Jr.

    Presented are the teacher's guide and student materials for one of a series of self-instructional, computer-based learning modules for an introductory, undergraduate chemistry course. The student manual for this acid-base equilibria unit includes objectives, prerequisites, pretest, a discussion of equilibrium constants, and 20 problem sets.…

  14. 77 FR 12830 - Pershing County Water Conservation District; Notice of Intent To File License Application, Filing...

    Science.gov (United States)

    2012-03-02

    ... Pershing County, Nevada. The project occupies 0.01 acre of United States lands administered by the Bureau... Energy Regulatory Commission Pershing County Water Conservation District; Notice of Intent To File....: 14327-000. c. Date Filed: November 22, 2011. d. Submitted by: Pershing County Water...

  15. [Digital library for archiving files of radiology and medical imaging].

    Science.gov (United States)

    Duvauferrier, R; Rambeau, M; Moulène, F

    1993-01-01

    The Conseil des Enseignants de Radiologie de France in collaboration with the Ilab-TSI company and Schering laboratories has developed a computer programme allowing the storage and consultation of radiological teaching files. This programme, developed on Macintosh from standard Hypercard and Quicktime applications, allows, in consultation mode, the multicriteria search and visualisation of selected radiological files. In the author mode, new files can be included after digitalizing the author's own images or after obtaining images from another image library. This programme, which allows juxtaposition of digitalised radiological files, is designed to be extremely open and can be easily combined with other computer-assisted teaching or computer-assisted presentation applications.

  16. USGS Small-scale Dataset - 1:1,000,000-Scale Streams of the United States 201403 FileGDB 10.1

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer shows streams of the United States, Puerto Rico, and the U.S. Virgin Islands. The map layer was produced primarily from the Medium-Resolution and...

  17. USGS Small-scale Dataset - 1:1,000,000-Scale County Boundary Lines of the United States 201406 FileGDB 10.1

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer portrays the county boundaries of the United States, Puerto Rico, and the U.S. Virgin Islands as linework. The map layer was derived from the...

  18. USGS 1:1,000,000-Scale Federal Lands of the United States - Parkways and Scenic Rivers 201506 FileGDB

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer portrays the linear federally owned or administered land features (i.e., national parkways, wild and scenic rivers, etc.) of the United States and...

  19. USGS Small-scale Dataset - Railroad and Bus Passenger Stations of the United States 201207 FileGDB 10.1

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer shows Amtrak intercity railroad and bus passenger terminals in the United States. There are no Amtrak stations in Alaska or Hawaii. The data are a...

  20. USGS Small-scale Dataset - 1:1,000,000-Scale National Boundaries of the United States 201403 FileGDB 10.1

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer portrays the boundaries of the United States, Puerto Rico, and the U.S. Virgin Islands. The map layer was created by extracting county polygon...

  1. USGS Small-scale Dataset - 1:1,000,000-Scale Contours of the Conterminous United States 201404 FileGDB 10.1

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer shows elevation contour lines for the conterminous United States. The map layer was derived from the 100-meter resolution elevation data set which is...

  2. USGS Small-scale Dataset - Global Map: 1:1,000,000-Scale Streams of the United States 201406 FileGDB 10.1

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer includes Global Map data showing streams in the United States, Puerto Rico, and the U.S. Virgin Islands. The data are a modified version of the...

  3. USGS Small-scale Dataset - Global Map: 1:1,000,000-Scale Canals and Aqueducts of the United States 201406 FileGDB 10.1

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer includes Global Map data showing the canals, aqueducts, and the Intracoastal Waterway in the United States, Puerto Rico, and the U.S. Virgin Islands....

  4. USGS Small-scale Dataset - Global Map: 1:1,000,000-Scale Political Boundary Lines of the United States 201403 FileGDB 10.1

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer includes Global Map data showing the boundaries of counties and equivalent entities of the United States, Puerto Rico, and the U.S. Virgin Islands....

  5. Computer Viruses. Technology Update.

    Science.gov (United States)

    Ponder, Tim, Comp.; Ropog, Marty, Comp.; Keating, Joseph, Comp.

    This document provides general information on computer viruses: how to help protect a computer network from them and measures to take if a computer becomes infected. Highlights include the origins of computer viruses; virus contraction; a description of some common virus types (File Virus, Boot Sector/Partition Table Viruses, Trojan Horses, and…

  6. HUD GIS Boundary Files

    Data.gov (United States)

    Department of Housing and Urban Development — The HUD GIS Boundary Files are intended to supplement boundary files available from the U.S. Census Bureau. The files are for community planners interested in...

  7. Parallel file system with metadata distributed across partitioned key-value store c

    Energy Technology Data Exchange (ETDEWEB)

    Bent, John M.; Faibish, Sorin; Grider, Gary; Torres, Aaron

    2017-09-19

    Improved techniques are provided for storing metadata associated with a plurality of sub-files associated with a single shared file in a parallel file system. The shared file is generated by a plurality of applications executing on a plurality of compute nodes. A compute node implements a Parallel Log Structured File System (PLFS) library to store at least one portion of the shared file generated by an application executing on the compute node and metadata for the at least one portion of the shared file on one or more object storage servers. The compute node is also configured to implement a partitioned data store for storing a partition of the metadata for the shared file, wherein the partitioned data store communicates with partitioned data stores on other compute nodes using a message passing interface. The partitioned data store can be implemented, for example, using Multidimensional Data Hashing Indexing Middleware (MDHIM).
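The core idea of hash-partitioning shared-file metadata across compute nodes can be sketched in a few lines; this toy model stands in for the PLFS/MDHIM machinery and uses invented names.

```python
class PartitionedMetadataStore:
    """Toy model of a partitioned key-value metadata store: each compute
    node holds one partition, and a metadata key (e.g., a sub-file offset
    in the shared file) is routed to its partition by hashing."""
    def __init__(self, n_partitions):
        self.partitions = [{} for _ in range(n_partitions)]

    def _owner(self, key):
        # Stable routing: the same key always maps to the same partition
        # within a run, so any node can locate any sub-file's metadata.
        return hash(key) % len(self.partitions)

    def put(self, key, value):
        self.partitions[self._owner(key)][key] = value

    def get(self, key):
        return self.partitions[self._owner(key)][key]

store = PartitionedMetadataStore(n_partitions=4)
# Record which byte range of the shared file each sub-file covers.
store.put(("shared.out", 0), {"length": 4096, "node": "cn01"})
store.put(("shared.out", 4096), {"length": 2048, "node": "cn02"})
print(store.get(("shared.out", 4096))["node"])  # cn02
```

In the real system the partitions live on separate compute nodes and communicate via a message passing interface; here they are ordinary in-process dictionaries.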

  8. Design of a computer software for calculation of required barrier against radiation at the diagnostic x-ray units

    Directory of Open Access Journals (Sweden)

    S.A. Rahimi

    2005-01-01

    Full Text Available Background and purpose: Installation of protective barriers against diagnostic x-rays is generally done based on the recommendations of NCRP Report No. 49. Analytic methods exist for designing protective barriers, but they lack sufficient efficiency; according to the NCRP 49 reports, designing a protective barrier against the primary x-ray beam differs from designing one against scattered and leakage radiation, so the protective barrier for each type of radiation is calculated separately. In this study, a computer program was designed to calculate the required barrier with high accuracy. Materials and methods: Calculation of the required protective barrier is time-consuming and impractical to do manually, particularly when two or more generators are in use in a diagnostic x-ray unit, when the installed equipment does not have adequate room space, or when other parameters change. To determine the thickness of the protective barrier properly, relevant information such as radiation attenuation curves and dose limits must be entered. The program runs under Windows and was designed so that it is easy to operate, acceptably flexible, and highly accurate and sensitive. Results: The results of this program indicate that in most cases the required protective barrier was not used in x-ray units, while in other cases the shielding exceeds what is required, which is neither technically standard nor cost-effective. When the use factor is non-zero, the thickness from the NCRP 49 calculation is about 20% less than the rate calculated by the method of this study. When the use factor is equal to zero (the only situation where the secondary barrier is considered), the thickness of the required lead barrier is about 15% less, and the concrete barrier calculated in this project is 8% less, than that calculated by the McGuire method. Conclusion: In this study proper
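For orientation, the kind of calculation such a program automates follows the NCRP 49 pattern: compute the permitted barrier transmission from the dose limit, distance, workload, use factor, and occupancy factor, then convert it to a thickness via tenth-value layers. The numbers below are purely illustrative assumptions, not values from the study.

```python
import math

def transmission_factor(P, d, W, U, T):
    """NCRP 49-style primary-barrier transmission factor
    B = P * d^2 / (W * U * T), with P the weekly dose limit,
    d the distance to the occupied area, W the weekly workload,
    U the use factor, and T the occupancy factor."""
    return P * d ** 2 / (W * U * T)

def barrier_thickness(B, tvl):
    """Thickness from the number of tenth-value layers (TVLs) needed:
    x = TVL * log10(1 / B), in the same length unit as the TVL."""
    return tvl * math.log10(1.0 / B)

# Hypothetical inputs: 2 m distance, workload 400 (per week), full use
# and occupancy, and an assumed TVL of 0.28 for the barrier material.
B = transmission_factor(P=0.1, d=2.0, W=400.0, U=1.0, T=1.0)
print(round(barrier_thickness(B, tvl=0.28), 3))  # 0.84 (three TVLs)
```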

  9. Computer input and output files associated with ground-water-flow simulations of the Albuquerque Basin, central New Mexico, 1901-95, with projections to 2020; (supplement three to U.S. Geological Survey Water-resources investigations report 94-4251)

    Science.gov (United States)

    Kernodle, J.M.

    1996-01-01

    This report presents the computer input files required to run the three-dimensional ground-water-flow model of the Albuquerque Basin, central New Mexico, documented in Kernodle and others (Kernodle, J.M., McAda, D.P., and Thorn, C.R., 1995, Simulation of ground-water flow in the Albuquerque Basin, central New Mexico, 1901-1994, with projections to 2020: U.S. Geological Survey Water-Resources Investigations Report 94-4251, 114 p.) and revised by Kernodle (Kernodle, J.M., 1998, Simulation of ground-water flow in the Albuquerque Basin, 1901-95, with projections to 2020 (supplement two to U.S. Geological Survey Water-Resources Investigations Report 94-4251): U.S. Geological Survey Open-File Report 96-209, 54 p.). Output files resulting from the computer simulations are included for reference.

  10. Aspects on Transfer of Aided-Design Files

    Science.gov (United States)

    Goanta, A. M.; Anghelache, D. G.

    2016-08-01

    At this stage of development of hardware and software, each company that makes design software packages has a file type of its own, created and customized over time to distinguish that company from its competitors. Thus the DWG files belonging to AutoCAD, IPT/IAM to Inventor, PAR/ASM to Solid Edge, PRT to NX, and so on, are widely known today. Behind every file type there is a mathematical model that is common to several file types. A specific aspect of computer-aided design is that all software packages work with both individual parts and assemblies, but their approaches differ in that some use the same file type for each part and for the assembly (PRT), while others use different file types (IPT/IAM, PAR/ASM, etc.). Another aspect of computer-aided design is the transfer of files between companies that use different software packages, or even the same package in different versions. Each of these situations raises distinct issues. Thus, to allow a file to be read, at least partially, by a program other than the native one, transfer files of the STEP and IGES types are used.

  11. Schools (Students) Exchanging CAD/CAM Files over the Internet.

    Science.gov (United States)

    Mahoney, Gary S.; Smallwood, James E.

    This document discusses how students and schools can benefit from exchanging computer-aided design/computer-aided manufacturing (CAD/CAM) files over the Internet, explains how files are exchanged, and examines the problem of selected hardware/software incompatibility. Key terms associated with information search services are defined, and several…

  12. 43 CFR 4.1352 - Who may file; where to file; when to file.

    Science.gov (United States)

    2010-10-01

    ... 43 Public Lands: Interior 1 2010-10-01 2010-10-01 false Who may file; where to file; when to file... Indian Lands) § 4.1352 Who may file; where to file; when to file. (a) The applicant or operator may file... to file a timely request constitutes a waiver of the opportunity for a hearing before OSM makes...

  13. Fact File

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    Are delegates selected according to predetermined quotas of ethnicity and gender? The delegates who attend the CPC National Congress represent a broad spectrum of Party members. They include leading officials at various levels, and rank-and-file Party members working at the front line of production and those from more regular walks of life. A large proportion of the delegates are model Party members who have made outstanding contributions in various sectors and undertakings of the economy, science and technology, national defense, politics and law, education, public relations, public health, culture, and sports.

  14. USGS Small-scale Dataset - Global Map: Cities and Towns of the United States 201403 FileGDB 10.1

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer includes Global Map data showing cities and towns in the United States, Puerto Rico, and the U.S. Virgin Islands. The data are a modified version of...

  15. Results of an Experimental Program to Provide Low Cost Computer Searches of the NASA Information File to University Graduate Students in the Southeast. Final Report.

    Science.gov (United States)

    Smetana, Frederick O.; Phillips, Dennis M.

    In an effort to increase dissemination of scientific and technological information, a program was undertaken whereby graduate students in science and engineering could request a computer-produced bibliography and/or abstracts of documents identified by the computer. The principal resource was the National Aeronautics and Space Administration…

  16. Text Classification: Classifying Plain Source Files with Neural Network

    Directory of Open Access Journals (Sweden)

    Jaromir Veber

    2010-10-01

    Full Text Available The automated categorization of text files has an important place in computer engineering, particularly in the process called data management automation. A lot has been written about text classification, and the methods allowing classification of these files are well known. Unfortunately, most studies are theoretical, and more research is needed for practical implementation. I decided to contribute with research focused on creating a classifier for different kinds of programs (source files, scripts, etc.). This paper describes a practical implementation of a classifier for text files based on file content.
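The paper trains a neural network on file content; as a minimal stand-in that shows the content-based idea, the sketch below scores a file against hand-picked token signatures. The signatures and function names are invented for the example.

```python
# Illustrative token signatures for a few source-file kinds; a real
# classifier (the paper uses a neural network) would learn such weights.
SIGNATURES = {
    "python": {"def", "import", "self", "print"},
    "shell": {"#!/bin/sh", "echo", "fi", "esac"},
    "c": {"#include", "int", "void", "return"},
}

def classify_source(text):
    """Pick the language whose signature tokens appear most often
    in the file content."""
    tokens = text.split()
    scores = {
        lang: sum(tokens.count(tok) for tok in sig)
        for lang, sig in SIGNATURES.items()
    }
    return max(scores, key=scores.get)

sample = "import os\ndef main():\n    print(os.getcwd())\n"
print(classify_source(sample))  # python
```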

  17. Ongoing evaluation of ease-of-use and usefulness of wireless tablet computers within an ambulatory care unit.

    Science.gov (United States)

    Murphy, Kevin C; Wong, Frances L; Martin, Lee Ann; Edmiston, Dave

    2009-01-01

    This ongoing research assesses user acceptance of wireless convertible tablet computers in supporting patient care within the clinic environment and determines their impact on workload reduction for the information staff. A previous publication described our initial experience with a limited wireless environment, in which we tested the premise that wireless convertible tablet computers were equivalent to desktop computers in supporting user tasks. Feedback from users demonstrated that convertible tablet computers could not replace desktop computers: poor network access was a weakness, as was the "cognitive overhead" caused by technical problems. This paper describes our further experience with a centre-wide wireless implementation using a new wireless device. The new tablets, which provide some unique functions that existing desktop computers do not, have been well received by the clinicians.

  18. Provider of Services File

    Data.gov (United States)

    U.S. Department of Health & Human Services — The POS file consists of two data files, one for CLIA labs and one for 18 other provider types. The file names are CLIA and OTHER. If downloading the file, note it...

  19. Centralization vs. Decentralization in a Multi-Unit Organization: A Computational Model of a Retail Chain as a Multi-Agent Adaptive System

    OpenAIRE

    Myong-Hun Chang; Harrington, Joseph E.

    2000-01-01

    This paper explores the effect of organizational structure - in terms of the allocation of authority - on the rate of innovation in multi-unit organizations such as retail chains and multi-plant manufacturers. A computational model is developed in which store managers continually search for better practices. In a decentralized organization, a store manager adopts a new practice if it raises her store's profit. Headquarters (HQ) is assumed to observe the new practice and then decides whether t...

  20. 43 CFR 44.55 - Can a unit of general local government protest the results of payment computations?

    Science.gov (United States)

    2010-10-01

    ... 43 Public Lands: Interior 1 2010-10-01 2010-10-01 false Can a unit of general local government... Secretary of the Interior FINANCIAL ASSISTANCE, LOCAL GOVERNMENTS State and Local Governments' Responsibilities After the Department Distributes Payments § 44.55 Can a unit of general local government...

  1. The Computer and Society: Our Servant-Our Master? A Unit of the Social Education Materials Project, "People and Change."

    Science.gov (United States)

    Brownlow, David; And Others

    Arranged in three sections, this resource for secondary school students provides an introduction to the computer's impact on society. The first section surveys historical methods of recording and storing information: clay tablets, papyrus, and books. The second section describes how computers work and ways they can be used. Also considered are the…

  2. Reduction of computing time for least-squares migration based on the Helmholtz equation by graphics processing units

    NARCIS (Netherlands)

    Knibbe, H.; Vuik, C.; Oosterlee, C.W.

    2015-01-01

    In geophysical applications, the interest in least-squares migration (LSM) as an imaging algorithm is increasing due to the demand for more accurate solutions and the development of high-performance computing. The computational engine of LSM in this work is the numerical solution of the 3D Helmholtz

  3. According to Researchers in the United States Using A Computer Can Have An Impact on Those Suffering from Arthritis

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    Even though computers have become increasingly common in daily life, little is known about how their use on a daily basis might affect those with arthritis; it is estimated that as many as 56% of the workforce use computers at work and 62% of households own one.

  4. Semiempirical and DFT computations of the influence of Tb(III) dopant on unit cell dimensions of cerium(III) fluoride.

    Science.gov (United States)

    Shyichuk, Andrii; Runowski, Marcin; Lis, Stefan; Kaczkowski, Jakub; Jezierski, Andrzej

    2015-01-30

    Several computational methods, both semiempirical and ab initio, were used to study the influence of the amount of dopant on crystal cell dimensions of CeF3 doped with Tb(3+) ions (CeF3:Tb(3+)). AM1, RM1, PM3, PM6, and PM7 semiempirical parameterization models were used, while the Sparkle model was used to represent the lanthanide cations in all cases. Ab initio calculations were performed by means of GGA+U/PBE projector augmented wave density functional theory. The computational results agree well with the experimental data. According to both computation and experiment, the crystal cell parameters undergo a linear decrease with increasing amount of the dopant. The computations performed using the Sparkle/PM3 and DFT methods gave the best agreement with experiment, with an average deviation of about 1% in both cases. A typical Sparkle/PM3 computation on a 2×2×2 supercell of CeF3:Tb(3+) took about two orders of magnitude less time than the DFT computation on a unit cell of this material. © 2014 Wiley Periodicals, Inc.

  5. Electroholographic display unit for three-dimensional display by use of special-purpose computational chip for holography and reflective LCD panel.

    Science.gov (United States)

    Shimobaba, Tomoyoshi; Shiraki, Atsushi; Masuda, Nobuyuki; Ito, Tomoyoshi

    2005-05-30

    We developed an electroholography unit, which consists of a special-purpose computational chip for holography and a reflective liquid-crystal display (LCD) panel, for a three-dimensional (3D) display. The special-purpose chip can compute a computer-generated hologram of 800×600 grids in size from a 3D object consisting of approximately 400 points in approximately 0.15 seconds. The pixel pitch and resolution of the LCD panel are 12 μm and 800×600 grids, respectively. We implemented the special-purpose chip and LCD panel on a printed circuit board of approximately 28 cm × 13 cm in size. After the calculation, the computer-generated hologram produced by the special-purpose chip is displayed on the LCD panel. When we illuminate the LCD panel with a reference light, we can observe a 3D animation of approximately 3 cm × 3 cm × 3 cm in size. In the present paper, we report the electroholographic display unit together with a simple 3D display system.

  6. A Real-Time High Performance Computation Architecture for Multiple Moving Target Tracking Based on Wide-Area Motion Imagery via Cloud and Graphic Processing Units

    Directory of Open Access Journals (Sweden)

    Kui Liu

    2017-02-01

    Full Text Available This paper presents the first attempt at combining Cloud with Graphic Processing Units (GPUs in a complementary manner within the framework of a real-time high performance computation architecture for the application of detecting and tracking multiple moving targets based on Wide Area Motion Imagery (WAMI. More specifically, the GPU and Cloud Moving Target Tracking (GC-MTT system applied a front-end web based server to perform the interaction with Hadoop and highly parallelized computation functions based on the Compute Unified Device Architecture (CUDA©. The introduced multiple moving target detection and tracking method can be extended to other applications such as pedestrian tracking, group tracking, and Patterns of Life (PoL analysis. The cloud and GPUs based computing provides an efficient real-time target recognition and tracking approach as compared to methods when the work flow is applied using only central processing units (CPUs. The simultaneous tracking and recognition results demonstrate that a GC-MTT based approach provides drastically improved tracking with low frame rates over realistic conditions.

  7. A Real-Time High Performance Computation Architecture for Multiple Moving Target Tracking Based on Wide-Area Motion Imagery via Cloud and Graphic Processing Units.

    Science.gov (United States)

    Liu, Kui; Wei, Sixiao; Chen, Zhijiang; Jia, Bin; Chen, Genshe; Ling, Haibin; Sheaff, Carolyn; Blasch, Erik

    2017-02-12

    This paper presents the first attempt at combining Cloud with Graphic Processing Units (GPUs) in a complementary manner within the framework of a real-time high performance computation architecture for the application of detecting and tracking multiple moving targets based on Wide Area Motion Imagery (WAMI). More specifically, the GPU and Cloud Moving Target Tracking (GC-MTT) system applied a front-end web based server to perform the interaction with Hadoop and highly parallelized computation functions based on the Compute Unified Device Architecture (CUDA©). The introduced multiple moving target detection and tracking method can be extended to other applications such as pedestrian tracking, group tracking, and Patterns of Life (PoL) analysis. The cloud and GPUs based computing provides an efficient real-time target recognition and tracking approach as compared to methods when the work flow is applied using only central processing units (CPUs). The simultaneous tracking and recognition results demonstrate that a GC-MTT based approach provides drastically improved tracking with low frame rates over realistic conditions.

  8. A Real-Time High Performance Computation Architecture for Multiple Moving Target Tracking Based on Wide-Area Motion Imagery via Cloud and Graphic Processing Units

    Science.gov (United States)

    Liu, Kui; Wei, Sixiao; Chen, Zhijiang; Jia, Bin; Chen, Genshe; Ling, Haibin; Sheaff, Carolyn; Blasch, Erik

    2017-01-01

    This paper presents the first attempt at combining Cloud with Graphic Processing Units (GPUs) in a complementary manner within the framework of a real-time high performance computation architecture for the application of detecting and tracking multiple moving targets based on Wide Area Motion Imagery (WAMI). More specifically, the GPU and Cloud Moving Target Tracking (GC-MTT) system applied a front-end web based server to perform the interaction with Hadoop and highly parallelized computation functions based on the Compute Unified Device Architecture (CUDA©). The introduced multiple moving target detection and tracking method can be extended to other applications such as pedestrian tracking, group tracking, and Patterns of Life (PoL) analysis. The cloud and GPUs based computing provides an efficient real-time target recognition and tracking approach as compared to methods when the work flow is applied using only central processing units (CPUs). The simultaneous tracking and recognition results demonstrate that a GC-MTT based approach provides drastically improved tracking with low frame rates over realistic conditions. PMID:28208684

  9. Further computer appreciation

    CERN Document Server

    Fry, T F

    2014-01-01

    Further Computer Appreciation is a comprehensive cover of the principles and aspects in computer appreciation. The book starts by describing the development of computers from the first to the third computer generations, to the development of processors and storage systems, up to the present position of computers and future trends. The text tackles the basic elements, concepts and functions of digital computers, computer arithmetic, input media and devices, and computer output. The basic central processor functions, data storage and the organization of data by classification of computer files,

  10. The Alvey Conference in Edinburgh: A Review of the United Kingdom’s Research Program in Computer Science.

    Science.gov (United States)

    2014-09-26

    A conference to review the UK’s Alvey Program of research in computer science was held in Edinburgh from 24 through 27 June 1985. This report summarizes the speakers’ comments about the progress of the Alvey Program.

  11. 20th Session of the East, Central and South-East Europe Division of the United Nations Group of Experts on Geographical Names; Working Group on Toponymic Data Files and Gazetteers; EuroGeographics – EuroGeoNames Workshop, Zagreb, February 9–11, 2011

    Directory of Open Access Journals (Sweden)

    Željko Hećimović

    2011-06-01

    Full Text Available Conferences on geographical name standardization were organized by the State Geodetic Administration and held in Zagreb from February 9 to 11, 2011: the 20th Session of the East, Central and South-East Europe Division of the United Nations Group of Experts on Geographical Names (ECSEED of UNGEGN), the Working Group on Toponymic Data Files and Gazetteers (WG TDFG) and the EuroGeographics – EuroGeoNames Workshop (EGN).

  12. Using compute unified device architecture-enabled graphic processing unit to accelerate fast Fourier transform-based regression Kriging interpolation on a MODIS land surface temperature image

    Science.gov (United States)

    Hu, Hongda; Shu, Hong; Hu, Zhiyong; Xu, Jianhui

    2016-04-01

    Kriging interpolation provides the best linear unbiased estimation for unobserved locations, but its heavy computation limits the manageable problem size in practice. To address this issue, an efficient interpolation procedure incorporating the fast Fourier transform (FFT) was developed. Extending this efficient approach, we propose an FFT-based parallel algorithm to accelerate regression Kriging interpolation on an NVIDIA® compute unified device architecture (CUDA)-enabled graphic processing unit (GPU). A high-performance cuFFT library in the CUDA toolkit was introduced to execute computation-intensive FFTs on the GPU, and three time-consuming processes were redesigned as kernel functions and executed on the CUDA cores. A MODIS land surface temperature 8-day image tile at a resolution of 1 km was resampled to create experimental datasets at eight different output resolutions. These datasets were used as the interpolation grids with different sizes in a comparative experiment. Experimental results show that speedup of the FFT-based regression Kriging interpolation accelerated by GPU can exceed 1000 when processing datasets with large grid sizes, as compared to the traditional Kriging interpolation running on the CPU. These results demonstrate that the combination of FFT methods and GPU-based parallel computing techniques greatly improves the computational performance without loss of precision.
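
    The reason FFTs help here is that, for a stationary covariance model on a regular grid, the covariance matrix is (block-)Toeplitz, so matrix–vector products can be done in O(n log n) via circulant embedding instead of O(n²). A minimal one-dimensional CPU sketch of that core trick (the covariance model, grid size, and weights below are illustrative, and this is not the paper's cuFFT implementation):

```python
import numpy as np

def cov(h, range_=10.0):
    """Assumed stationary exponential covariance model."""
    return np.exp(-np.abs(h) / range_)

n = 256
lags = np.arange(n)
first_col = cov(lags)                      # first column of the Toeplitz matrix K

# Circulant embedding of size 2n: [c0 .. c_{n-1}, dummy, c_{n-1} .. c1]
c = np.concatenate([first_col, [0.0], first_col[:0:-1]])
w = np.random.default_rng(0).normal(size=n)  # e.g. a vector of Kriging weights

# FFT-based product: circular convolution in the spectral domain,
# then keep the first n entries (the Toeplitz part of the embedding)
w_pad = np.concatenate([w, np.zeros(n)])
prod_fft = np.fft.ifft(np.fft.fft(c) * np.fft.fft(w_pad)).real[:n]

# Direct dense product K @ w for verification (the O(n^2) path)
K = cov(lags[:, None] - lags[None, :])
prod_dense = K @ w
print(np.allclose(prod_fft, prod_dense))   # True
```

    On a GPU, the two `fft` calls and the element-wise multiply are exactly the pieces a library such as cuFFT accelerates.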

  13. Decay data file based on the ENSDF file

    Energy Technology Data Exchange (ETDEWEB)

    Katakura, J. [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1997-03-01

    A decay data file in the JENDL (Japanese Evaluated Nuclear Data Library) format, based on the ENSDF (Evaluated Nuclear Structure Data File) file, was produced as a tentative special-purpose file of JENDL. The problems of using the ENSDF file as the primary data source for the JENDL decay data file are discussed. (author)

  14. 29 CFR 4000.28 - What if I send a computer disk?

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false What if I send a computer disk? 4000.28 Section 4000.28... I send a computer disk? (a) In general. We determine your filing or issuance date for a computer... paragraph (b) of this section. (1) Filings. For computer-disk filings, we may treat your submission...

  15. ActSds and OdfSds: Programs for Converting INTERACT and The Observer Data Files into SDIS Timed-Event Sequential Data Files

    OpenAIRE

    Bakeman, Roger; Quera, Vicenç

    2008-01-01

    Programs for converting Mangold International’s INTERACT and Noldus Information Technology’s The Observer data files to Sequential Data Interchange Standard (SDIS) timed-event sequential data files are described. Users who convert their INTERACT or The Observer data files can then take advantage of various flexible and powerful data modification and computational procedures available in the Generalized Sequential Querier (GSEQ), a program that assumes SDIS-formatted files.

  16. ActSds and OdfSds: programs for converting INTERACT and The Observer data files into SDIS timed-event sequential data files.

    Science.gov (United States)

    Bakeman, Roger; Quera, Vicenç

    2008-08-01

    In this article, we describe programs for converting Mangold International's INTERACT and Noldus Information Technology's The Observer data files to sequential data interchange standard (SDIS) timed-event sequential data files. Users who convert their INTERACT or The Observer data files can then take advantage of various flexible and powerful data modification and computational procedures available in the Generalized Sequential Querier, a program that assumes SDIS-formatted files.

  17. Tax_Units_2011_Final

    Data.gov (United States)

    Kansas Data Access and Support Center — The Statewide GIS Tax Unit boundary file was created through a collaborative partnership between the State of Kansas Department of Revenue Property Valuation...

  18. 50 CFR 90.14 - Waterfowl depredation complaints; where filed.

    Science.gov (United States)

    2010-10-01

    ... 50 Wildlife and Fisheries 6 2010-10-01 2010-10-01 false Waterfowl depredation complaints; where filed. 90.14 Section 90.14 Wildlife and Fisheries UNITED STATES FISH AND WILDLIFE SERVICE, DEPARTMENT OF... Surplus Grain § 90.14 Waterfowl depredation complaints; where filed. Any person having an interest in...

  19. 10 CFR 205.308 - Filing schedule and annual reports.

    Science.gov (United States)

    2010-01-01

    ...) Persons authorized to transmit electric energy from the United States shall promptly file all supplements... 10 Energy 3 2010-01-01 2010-01-01 false Filing schedule and annual reports. 205.308 Section 205.308 Energy DEPARTMENT OF ENERGY OIL ADMINISTRATIVE PROCEDURES AND SANCTIONS Electric Power...

  20. 15 CFR 90.6 - Where to file challenge.

    Science.gov (United States)

    2010-01-01

    ... 15 Commerce and Foreign Trade 1 2010-01-01 2010-01-01 false Where to file challenge. 90.6 Section 90.6 Commerce and Foreign Trade Regulations Relating to Commerce and Foreign Trade BUREAU OF THE... Where to file challenge. A challenge must be prepared in writing by the unit of government and is to...

  1. Demographics of undergraduates studying games in the United States: a comparison of computer science students and the general population

    Science.gov (United States)

    McGill, Monica M.; Settle, Amber; Decker, Adrienne

    2013-06-01

    Our study gathered data to serve as a benchmark of demographics of undergraduate students in game degree programs. Due to the high number of programs that are cross-disciplinary with computer science programs or that are housed in computer science departments, the data is presented in comparison to data from computing students (where available) and the US population. Participants included students studying games at four nationally recognized postsecondary institutions. The results of the study indicate that there is no significant difference between the ratio of men to women studying in computing programs or in game degree programs, with women being severely underrepresented in both. Women, blacks, Hispanics/Latinos, and heterosexuals are underrepresented compared to the US population. Those with moderate and conservative political views and with religious affiliations are underrepresented in the game student population. Participants agree that workforce diversity is important and that their programs are adequately diverse, but only one-half of the participants indicated that diversity has been discussed in any of their courses.

  2. A Tale of Two Countries: Successes and Challenges in K-12 Computer Science Education in Israel and the United States

    Science.gov (United States)

    Gal-Ezer, Judith; Stephenson, Chris

    2014-01-01

    This article tells a story of K-12 computer science in two different countries. These two countries differ profoundly in culture, language, government and state structure, and in their education systems. Despite these differences, however, they share the pursuit of excellence and high standards in K-12 education. In Israel, curriculum is…

  4. System Documentation for the U.S. Army Ambulatory Care Data Base (ACDB) Study: Mainframe, Personal Computer and Optical Scanner File Structure

    Science.gov (United States)

    1988-11-01

    Extracted fragments only: the record consists of clinic file-name tables (SCAN690.FIL through SCAN740.FIL, covering Physical Therapy, Plastic Surgery, Podiatry, Preventive Medicine, Primary Care, and Psychiatry) and visit-type and injury code listings (e.g., N New, R Revisit; BAS/TMC, PT, unit sport, MOS/duty, field duty, motor vehicle accident, airborne exercise); no abstract text is recoverable.

  5. Massively parallel signal processing using the graphics processing unit for real-time brain-computer interface feature extraction

    Directory of Open Access Journals (Sweden)

    J. Adam Wilson

    2009-07-01

    Full Text Available The clock speeds of modern computer processors have nearly plateaued in the past five years. Consequently, neural prosthetic systems that rely on processing large quantities of data in a short period of time face a bottleneck, in that it may not be possible to process all of the data recorded from an electrode array with high channel counts and bandwidth, such as electrocorticographic grids or other implantable systems. Therefore, in this study a method of using the processing capabilities of a graphics card (GPU was developed for real-time neural signal processing of a brain-computer interface (BCI. The NVIDIA CUDA system was used to offload processing to the GPU, which is capable of running many operations in parallel, potentially greatly increasing the speed of existing algorithms. The BCI system records many channels of data, which are processed and translated into a control signal, such as the movement of a computer cursor. This signal processing chain involves computing a matrix-matrix multiplication (i.e., a spatial filter, followed by calculating the power spectral density on every channel using an auto-regressive method, and finally classifying appropriate features for control. In this study, the first two computationally-intensive steps were implemented on the GPU, and the speed was compared to both the current implementation and a CPU-based implementation that uses multi-threading. Significant performance gains were obtained with GPU processing: the current implementation processed 1000 channels in 933 ms, while the new GPU method took only 27 ms, an improvement of nearly 35 times.
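
    The first stage of the processing chain described above, a spatial filter applied to all channels at once as one matrix–matrix multiplication, can be sketched as follows. The common-average-reference (CAR) filter, channel count, and sample count here are illustrative stand-ins, not the study's configuration:

```python
import numpy as np

n_channels, n_samples = 16, 512
rng = np.random.default_rng(1)
eeg = rng.normal(size=(n_channels, n_samples))   # raw multichannel signal block

# CAR spatial filter matrix: identity minus the channel-averaging matrix.
# Applying it is a single matrix-matrix product, the operation the GPU
# implementation parallelizes across channels and samples.
F = np.eye(n_channels) - np.full((n_channels, n_channels), 1.0 / n_channels)
filtered = F @ eeg

# After CAR filtering, the across-channel mean at every sample is zero
print(np.allclose(filtered.mean(axis=0), 0.0))   # True
```

    The subsequent autoregressive spectral-estimation step is likewise a per-channel batch of small linear-algebra problems, which is why it maps well onto CUDA cores.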

  6. Fast point-based method of a computer-generated hologram for a triangle-patch model by using a graphics processing unit.

    Science.gov (United States)

    Sugawara, Takuya; Ogihara, Yuki; Sakamoto, Yuji

    2016-01-20

    The point-based method and the fast-Fourier-transform-based method are commonly used to calculate computer-generated holograms. This paper proposes a novel fast calculation method for a patch model that uses the point-based method. The method's calculation time is proportional to the number of patches rather than the number of point light sources, which makes it suitable for quickly calculating a wide area covered by patches. Experiments using a graphics processing unit indicated that the proposed method is about 8 times or more faster than the ordinary point-based method.
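
    For context, the ordinary point-based method that the paper accelerates sums a spherical-wave phase contribution from every object point at every hologram pixel. A minimal CPU sketch of that baseline (grid size, pixel pitch, wavelength, and object points are all illustrative assumptions, and this is not the paper's patch algorithm):

```python
import numpy as np

ny, nx = 64, 64
pitch = 10e-6                    # assumed hologram pixel pitch
wavelength = 532e-9              # assumed reference wavelength
k = 2 * np.pi / wavelength       # wavenumber

# Hologram-plane pixel coordinates, centered on the optical axis
x = (np.arange(nx) - nx / 2) * pitch
y = (np.arange(ny) - ny / 2) * pitch
X, Y = np.meshgrid(x, y)

# A few 3D object points: (x, y, distance from hologram plane, amplitude)
points = [(0.0, 0.0, 0.05, 1.0), (1e-4, -1e-4, 0.06, 0.8)]

field = np.zeros((ny, nx), dtype=complex)
for px, py, pz, amp in points:
    r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)  # point-to-pixel distance
    field += amp * np.exp(1j * k * r) / r                 # spherical wave term

hologram = field.real            # simple amplitude hologram
print(hologram.shape)            # (64, 64)
```

    Because the cost of this loop grows with the number of points, replacing per-point sums with per-patch calculations, as the paper proposes, changes the dominant cost to the (much smaller) patch count.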

  7. Standard interface file handbook

    Energy Technology Data Exchange (ETDEWEB)

    Shapiro, A.; Huria, H.C. (Cincinnati Univ., OH (United States))

    1992-10-01

    This handbook documents many of the standard interface file formats that have been adopted by the US Department of Energy to facilitate communication between, and portability of, various large reactor physics and radiation transport software packages. The emphasis is on those files needed for use of the VENTURE/PC diffusion-depletion code system. File structures, contents and some practical advice on use of the various files are provided.

  8. 43 CFR 4.1381 - Who may file; when to file; where to file.

    Science.gov (United States)

    2010-10-01

    ... may file; when to file; where to file. (a) Any person who receives a written decision issued by OSM under 30 CFR 773.28 on a challenge to an ownership or control listing or finding may file a request for... 43 Public Lands: Interior 1 2010-10-01 2010-10-01 false Who may file; when to file; where to...

  9. Securing the AliEn File Catalogue - Enforcing authorization with accountable file operations

    Science.gov (United States)

    Schreiner, Steffen; Bagnasco, Stefano; Sankar Banerjee, Subho; Betev, Latchezar; Carminati, Federico; Vladimirovna Datskova, Olga; Furano, Fabrizio; Grigoras, Alina; Grigoras, Costin; Mendez Lorenzo, Patricia; Peters, Andreas Joachim; Saiz, Pablo; Zhu, Jianlin

    2011-12-01

    The AliEn Grid Services, as operated by the ALICE Collaboration in its global physics analysis grid framework, is based on a central File Catalogue together with a distributed set of storage systems and the possibility to register links to external data resources. This paper describes several identified vulnerabilities in the AliEn File Catalogue access protocol regarding fraud and unauthorized file alteration and presents a more secure and revised design: a new mechanism, called LFN Booking Table, is introduced in order to keep track of access authorization in the transient state of files entering or leaving the File Catalogue. Due to a simplification of the original Access Envelope mechanism for xrootd-protocol-based storage systems, fundamental computational improvements of the mechanism were achieved as well as an up to 50% reduction of the credential's size. By extending the access protocol with signed status messages from the underlying storage system, the File Catalogue receives trusted information about a file's size and checksum and the protocol is no longer dependent on client trust. Altogether, the revised design complies with atomic and consistent transactions and allows for accountable, authentic, and traceable file operations. This paper describes these changes as part and beyond the development of AliEn version 2.19.

  10. Protecting Your Computer from Viruses

    Science.gov (United States)

    Descy, Don E.

    2006-01-01

    A computer virus is defined as a software program capable of reproducing itself and usually capable of causing great harm to files or other programs on the same computer. The existence of computer viruses--or the necessity of avoiding viruses--is part of using a computer. With the advent of the Internet, the door was opened wide for these…

  11. Massively Parallel Signal Processing using the Graphics Processing Unit for Real-Time Brain?Computer Interface Feature Extraction

    OpenAIRE

    J. Adam Wilson; Williams, Justin C.

    2009-01-01

    The clock speeds of modern computer processors have nearly plateaued in the past 5 years. Consequently, neural prosthetic systems that rely on processing large quantities of data in a short period of time face a bottleneck, in that it may not be possible to process all of the data recorded from an electrode array with high channel counts and bandwidth, such as electrocorticographic grids or other implantable systems. Therefore, in this study a method of using the processing capabilities of a ...

  12. Proceedings of the International Workshop on Computational Electronics Held at Leeds University (United Kingdom) on August 11-13 1993

    Science.gov (United States)

    1993-08-01

    Extracted fragments only: references to J.N. Tsitsiklis, Parallel and Distributed Computation (Prentice Hall, 1989) and R.D. Williams, "Performance of dynamic load balancing algorithms…", plus a discussion comparing ADI partitioning schemes (SPADI, APADI, NPADI) for separated-potential operators and their round-off behavior; no abstract text is recoverable.

  13. 75 FR 19633 - Combined Notice of Filings No. 1

    Science.gov (United States)

    2010-04-15

    ...: Midwest Independent Transmission System Operator, Inc. Description: Midwest Independent Transmission... Distribution Service between the Transmission Distribution Business Unit of SCE. Filed Date: 04/06/2010..., Inc submits updated summary schedules for the Transmission and Local Facilities Agreement for...

  14. Museums Universe Data File (MUDF) FY 2014 3rd Quarter

    Data.gov (United States)

    Institute of Museum and Library Services — The Museum Universe Data File (MUDF) contains information about known museums in the United States using data collected and aggregated from a variety of sources.

  15. Efficient load rebalancing for distributed file system in Clouds

    Directory of Open Access Journals (Sweden)

    Mr. Mohan S. Deshmukh

    2016-05-01

    Full Text Available Cloud computing is a rapidly developing area of the software industry. Distributed file systems play an important role in cloud computing applications based on MapReduce techniques. In a distributed file system used for cloud computing, nodes serve computing and storage functions at the same time, and each file is divided into small chunks so that MapReduce tasks can process it in parallel. However, because nodes may be added, deleted or modified at any time, and operations on files are performed dynamically, load can become unevenly distributed among the nodes, leading to a load-imbalance problem. Most existing distributed file systems depend on a central node for load distribution, but this approach does not scale to large systems and to settings where failures are likely: a central node creates a single point of dependency and can become a performance bottleneck. The movement cost and network traffic caused by migrating nodes and file chunks must also be addressed. We therefore propose an algorithm that overcomes these problems and achieves uniform load distribution efficiently. To verify its feasibility and efficiency, we use a simulation setup and compare our algorithm with existing techniques in terms of load-imbalance factor, movement cost and network traffic.
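
    The rebalancing objective the abstract describes, moving chunks from overloaded to underloaded nodes while keeping the number of migrations (the movement cost) low, can be sketched with a simple greedy policy. This centralized toy version is only for illustration; the paper's contribution is precisely avoiding such a central coordinator:

```python
def rebalance(loads):
    """Greedy chunk rebalancing toward the average load.

    loads: {node: chunk_count}. Returns (new_loads, chunks_moved),
    where chunks_moved is a proxy for movement cost.
    """
    total, n = sum(loads.values()), len(loads)
    target = total // n                       # average load per node
    new = dict(loads)
    moved = 0
    donors = [v for v in new if new[v] > target]   # overloaded nodes
    takers = [v for v in new if new[v] < target]   # underloaded nodes
    for d in donors:
        for t in takers:
            # Migrate one chunk at a time until either side reaches the target
            while new[d] > target and new[t] < target:
                new[d] -= 1
                new[t] += 1
                moved += 1
    return new, moved

before = {"n1": 10, "n2": 2, "n3": 3}
after, cost = rebalance(before)
print(after, cost)   # {'n1': 5, 'n2': 5, 'n3': 5} 5
```

    A distributed variant would have each node run a local version of this exchange with sampled peers, which is the kind of design the load-imbalance factor and movement-cost metrics are meant to evaluate.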

  16. Semantic File Annotation and Retrieval on Mobile Devices

    Directory of Open Access Journals (Sweden)

    Sadaqat Jan

    2011-01-01

    Full Text Available The rapid development of mobile technologies has made it easy for users to generate and store files on mobile devices such as mobile phones and PDAs. However, efficiently and effectively searching for files of interest in a mobile environment involving a large number of mobile nodes has become a challenging issue. This paper presents the SemFARM framework, which enables users to publish, annotate and retrieve files that are geographically distributed in a mobile network enabled by Bluetooth. The SemFARM framework is built on semantic web technologies in support of file retrieval on low-end mobile devices. A generic ontology is developed which defines a number of keywords, their possible domains and properties. Based on semantic reasoning, similarity degrees are computed to match user queries with published file descriptions. The SemFARM prototype is implemented using the Java mobile platform (J2ME). The performance of SemFARM is evaluated from a number of aspects in comparison with traditional mobile file systems and enhanced alternatives. Experimental results are encouraging, showing the effectiveness of SemFARM in file retrieval. We conclude that the use of semantic web technologies facilitates file retrieval in mobile computing environments, maximizing user satisfaction in searching for files of interest.

  17. Influence of flexion angle of files on the decentralization of oval canals during instrumentation

    OpenAIRE

    Maria Antonieta Veloso Carvalho OLIVEIRA; Letícia Duarte ALVES; Pereira,Analice Giovani; RAPOSO,Luís Henrique Araújo; João Carlos Gabrielli BIFFI

    2015-01-01

    The aim of this study was to evaluate the influence of the flexion angle of files on the decentralization of root canals during instrumentation. Fifteen lower incisors were instrumented with Protaper Universal files and radiographed in two directions (mesiodistal and buccolingual) before and after instrumentation with a #15 K-file in position for evaluating the flexion angle of files. The specimens were also scanned before and after instrumentation using micro-computed tomography to obtain th...

  18. GEODOC: the GRID document file, record structure and data element description

    Energy Technology Data Exchange (ETDEWEB)

    Trippe, T.; White, V.; Henderson, F.; Phillips, S.

    1975-11-06

    The purpose of this report is to describe the information structure of the GEODOC file. GEODOC is a computer based file which contains the descriptive cataloging and indexing information for all documents processed by the National Geothermal Information Resource Group. This file (along with other GRID files) is managed by DBMS, the Berkeley Data Base Management System. Input for the system is prepared using the IRATE Text Editing System with its extended (12 bit) character set, or punched cards.

  19. Computer Virus Protection

    Science.gov (United States)

    Rajala, Judith B.

    2004-01-01

    A computer virus is a program--a piece of executable code--that has the unique ability to replicate. Like biological viruses, computer viruses can spread quickly and are often difficult to eradicate. They can attach themselves to just about any type of file, and are spread by replicating and being sent from one individual to another. Simply having…

  20. Effects of postural and visual stressors on myofascial trigger point development and motor unit rotation during computer work.

    Science.gov (United States)

    Hoyle, Jeffrey A; Marras, William S; Sheedy, James E; Hart, Dennis E

    2011-02-01

    Musculoskeletal complaint rates are high among those performing low-level static exertions (LLSEs), such as computer users. However, our understanding of the causal mechanisms is lacking. It was hypothesized that myofascial trigger point (MTrP) development might be one causal mechanism helping to explain these complaints and that static postural and visual demands may be contributing factors. Therefore, the purpose of this experiment was to examine MTrP development and the behavior of multiple parts of the trapezius muscle under postural and mental stress (represented by visual stress) conditions during computer work. Twelve subjects (six male and six female) were monitored for MTrP development via expert opinion, subject self-report, and cyclic changes in EMG median frequency across fourteen spatial locations. Results showed that MTrPs developed after one hour of continuous typing, regardless of the stress condition. Interestingly, both the high postural and high visual stress conditions resulted in significantly fewer median frequency cycles (3.76 and 5.35 cycles, respectively) compared to the baseline low-stress condition (6.26 cycles). Lastly, the MTrP location, as well as locations more medial to the spine, showed significantly fewer cycles than other locations. Findings suggest that MTrPs may be one causal pathway for pain during LLSEs and that both postural and visual demands may play a role in muscle activation patterns, perhaps contributing to MTrP development and resultant discomfort. Copyright © 2010 Elsevier Ltd. All rights reserved.

  1. Cloud Computing

    CERN Document Server

    Mirashe, Shivaji P

    2010-01-01

    Computing as you know it is about to change, your applications and documents are going to move from the desktop into the cloud. I'm talking about cloud computing, where applications and files are hosted on a "cloud" consisting of thousands of computers and servers, all linked together and accessible via the Internet. With cloud computing, everything you do is now web based instead of being desktop based. You can access all your programs and documents from any computer that's connected to the Internet. How will cloud computing change the way you work? For one thing, you're no longer tied to a single computer. You can take your work anywhere because it's always accessible via the web. In addition, cloud computing facilitates group collaboration, as all group members can access the same programs and documents from wherever they happen to be located. Cloud computing might sound far-fetched, but chances are you're already using some cloud applications. If you're using a web-based email program, such as Gmail or Ho...

  2. Application of graphics processing units in general-purpose computation

    Institute of Scientific and Technical Information of China (English)

    张健; 陈瑞

    2009-01-01

    Based on the compute unified device architecture (CUDA) of the graphics processing unit (GPU), the technical fundamentals and methods for general-purpose computation on GPUs are introduced. A matrix multiplication experiment was carried out on a GeForce 8800 GT. The results show that as the matrix order increases, processing slows on both the GPU and the CPU; however, after the data quantity increased 100-fold, the computation time grew only 3.95-fold on the GPU versus 216.66-fold on the CPU.
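    The scaling behaviour reported above follows from the arithmetic cost of matrix multiplication, roughly 2·n³ floating-point operations for n×n matrices, which is exactly the kind of regular, data-parallel work throughput-oriented GPUs exploit. A pure-Python stand-in (no GPU or CUDA required; purely illustrative of the operation and its cost model):

```python
# Naive n×n matrix multiply and its flop-count model.
def matmul(a, b):
    n, m, p = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def flops(n):
    # n^3 multiply-add pairs => 2*n^3 floating-point operations
    return 2 * n ** 3

a = [[1, 2], [3, 4]]
b = [[5, 6], [7, 8]]
c = matmul(a, b)
```

    Growing the matrix order by 10x multiplies the arithmetic work by 1000x, which is why a device that can keep thousands of multiply-add units busy pulls ahead of a CPU as the problem grows.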

  3. 1:2,000,000-scale Hydrologic Units of the United States

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This data set has been superseded by huc2m. This file contains hydrologic unit boundaries and codes for the conterminous United States along with Alaska, Hawaii,...

  4. (SUPERSEDED) 1:2,000,000-scale Hydrologic Units of the United States (SUPERSEDED)

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This file contains hydrologic unit boundaries and codes for the conterminous United States along with Alaska, Hawaii, Puerto Rico and the U.S. Virgin Islands. It was...

  5. New orthopaedic implant management tool for computer-assisted planning, navigation, and simulation: from implant CAD files to a standardized XML-based implant database.

    Science.gov (United States)

    Sagbo, S; Blochaou, F; Langlotz, F; Vangenot, C; Nolte, L-P; Zheng, G

    2005-01-01

    Computer-Assisted Orthopaedic Surgery (CAOS) has made much progress over the last 10 years. Navigation systems have been recognized as important tools that help surgeons, and various such systems have been developed. A disadvantage of these systems is that they use non-standard formalisms and techniques. As a result, there are no standard concepts for implant and tool management or data formats to store information for use in 3D planning and navigation. We addressed these limitations and developed a practical and generic solution that offers benefits for surgeons, implant manufacturers, and CAS application developers. We developed a virtual implant database containing geometrical as well as calibration information for orthopedic implants and instruments, with a focus on trauma. This database has been successfully tested for various applications in the client/server mode. The implant information is not static, however, because manufacturers periodically revise their implants, resulting in the deletion of some implants and the introduction of new ones. Tracking these continuous changes and keeping CAS systems up to date is a tedious task if done manually. This leads to additional costs for system development, and some errors are inevitably generated due to the huge amount of information that has to be processed. To ease management with respect to implant life cycle, we developed a tool to assist end-users (surgeons, hospitals, CAS system providers, and implant manufacturers) in managing their implants. Our system can be used for pre-operative planning and intra-operative navigation, and also for any surgical simulation involving orthopedic implants. Currently, this tool allows addition of new implants, modification of existing ones, deletion of obsolete implants, export of a given implant, and also creation of backups. Our implant management system has been successfully tested in the laboratory with very promising results. It makes it possible to fill the current gap

  6. Utilizing HDF4 File Content Maps for the Cloud

    Science.gov (United States)

    Lee, Hyokyung Joe

    2016-01-01

    We present a prototype study showing that HDF4 file content maps can be used to organize data efficiently in a cloud object storage system and thereby facilitate cloud computing. The approach can be extended to any binary data format and to any existing big-data analytics solution powered by cloud computing, because the HDF4 file content map project began as a long-term preservation effort for NASA data that does not require the HDF4 APIs to access the data.
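    The core idea of a file content map, reading a dataset by byte range without the format's own APIs, can be sketched as follows. The map layout and names here are hypothetical, not the actual HDF4 map schema; a byte string stands in for an object fetched from cloud storage:

```python
# A sidecar "content map" records where each dataset's bytes live,
# so a reader can fetch them with plain byte-range access and no
# format-specific library.
import struct

def read_dataset(blob, content_map, name):
    offset, length, fmt = content_map[name]
    raw = blob[offset:offset + length]
    return list(struct.unpack(fmt, raw))

# Toy "file": four little-endian int32 values after an 8-byte header.
blob = b"HDR\x00\x00\x00\x00\x00" + struct.pack("<4i", 10, 20, 30, 40)
content_map = {"temperature": (8, 16, "<4i")}  # name -> (offset, len, fmt)

values = read_dataset(blob, content_map, "temperature")
```

    In an object store, the slice would become an HTTP range request against the object, which is what makes the approach attractive for cloud analytics.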

  7. Real-space density functional theory on graphical processing units: computational approach and comparison to Gaussian basis set methods

    CERN Document Server

    Andrade, Xavier

    2013-01-01

    We discuss the application of graphical processing units (GPUs) to accelerate real-space density functional theory (DFT) calculations. To make our implementation efficient, we have developed a scheme to expose the data parallelism available in the DFT approach; this is applied to the different procedures required for a real-space DFT calculation. We present results for current-generation GPUs from AMD and Nvidia, which show that our scheme, implemented in the free code OCTOPUS, can reach a sustained performance of up to 90 GFlops for a single GPU, representing an important speed-up when compared to the CPU version of the code. Moreover, for some systems our implementation can outperform a GPU Gaussian basis set code, showing that the real-space approach is a competitive alternative for DFT simulations on GPUs.

  8. Real-Space Density Functional Theory on Graphical Processing Units: Computational Approach and Comparison to Gaussian Basis Set Methods.

    Science.gov (United States)

    Andrade, Xavier; Aspuru-Guzik, Alán

    2013-10-01

    We discuss the application of graphical processing units (GPUs) to accelerate real-space density functional theory (DFT) calculations. To make our implementation efficient, we have developed a scheme to expose the data parallelism available in the DFT approach; this is applied to the different procedures required for a real-space DFT calculation. We present results for current-generation GPUs from AMD and Nvidia, which show that our scheme, implemented in the free code Octopus, can reach a sustained performance of up to 90 GFlops for a single GPU, representing a significant speed-up when compared to the CPU version of the code. Moreover, for some systems, our implementation can outperform a GPU Gaussian basis set code, showing that the real-space approach is a competitive alternative for DFT simulations on GPUs.

  9. Planning intensive care unit design using computer simulation modeling: optimizing integration of clinical, operational, and architectural requirements.

    Science.gov (United States)

    OʼHara, Susan

    2014-01-01

    Nurses have increasingly been regarded as critical members of the planning team as architects recognize their knowledge and value. But the nurses' role as knowledge experts can be expanded to leading efforts to integrate clinical, operational, and architectural expertise through simulation modeling. Simulation modeling allows for the optimal merging of multifactorial data to understand the current state of the intensive care unit and predict future states. Nurses can champion the simulation modeling process and reap the benefits of a cost-effective way to test new designs, processes, staffing models, and future programming trends prior to implementation. Simulation modeling is an evidence-based planning approach for integrating the sciences with real client data to offer solutions for improving patient care.

  10. Adding Data Management Services to Parallel File Systems

    Energy Technology Data Exchange (ETDEWEB)

    Brandt, Scott [Univ. of California, Santa Cruz, CA (United States)

    2015-03-04

    The objective of this project, called DAMASC for “Data Management in Scientific Computing”, is to coalesce data management with parallel file system management to present a declarative interface to scientists for managing, querying, and analyzing extremely large data sets efficiently and predictably. Managing extremely large data sets is a key challenge of exascale computing. The overhead, energy, and cost of moving massive volumes of data demand designs where computation is close to storage. In current architectures, compute/analysis clusters access data in a physically separate parallel file system and largely leave it to the scientist to reduce data movement. Over the past decades the high-end computing community has adopted middleware with multiple layers of abstraction and specialized file formats such as NetCDF-4 and HDF5. These abstractions provide a limited set of high-level data processing functions, but have inherent functionality and performance limitations: middleware that provides access to the highly structured contents of scientific data files stored in (unstructured) file systems can only optimize to the extent that file system interfaces permit, and the highly structured formats of these files often impede native file system performance optimizations. We are developing Damasc, an enhanced high-performance file system with native rich data management services. Damasc will enable efficient queries and updates over files stored in their native byte-stream format while retaining the inherent performance of file system data storage via declarative queries and updates over views of underlying files. Damasc has four key benefits for the development of data-intensive scientific code: (1) applications can use important data-management services, such as declarative queries, views, and provenance tracking, that are currently available only within database systems; (2) the use of these services becomes easier, as they are provided within a familiar file
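    The notion of declarative queries over views of byte-stream files can be sketched minimally as follows. The record layout and helper names are hypothetical, not Damasc's actual interface; the point is that records stay in native binary form while queries run over a structured view of them:

```python
# A "view" exposes structured records over a raw byte stream; a query
# is a declarative-style selection over that view.
import io
import struct

RECORD = struct.Struct("<if")  # (sensor_id: int32, reading: float32)

def view(stream):
    """Yield (sensor_id, reading) tuples from a raw byte stream."""
    while chunk := stream.read(RECORD.size):
        yield RECORD.unpack(chunk)

def query(stream, predicate):
    """Select records from the view satisfying the predicate."""
    return [rec for rec in view(stream) if predicate(rec)]

raw = b"".join(RECORD.pack(i, float(i * 10)) for i in range(5))
hot = query(io.BytesIO(raw), lambda r: r[1] >= 30.0)
```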

  11. United States Adolescents' Television, Computer, Videogame, Smartphone, and Tablet Use: Associations with Sugary Drinks, Sleep, Physical Activity, and Obesity.

    Science.gov (United States)

    Kenney, Erica L; Gortmaker, Steven L

    2017-03-01

    To quantify the relationships between youth use of television (TV) and other screen devices, including smartphones and tablets, and obesity risk factors. TV and other screen device use, including smartphones, tablets, computers, and/or videogames, was self-reported by a nationally representative, cross-sectional sample of 24 800 US high school students (2013-2015 Youth Risk Behavior Surveys). Students also reported on health behaviors including sugar-sweetened beverage (SSB) intake, physical activity, sleep, and weight and height. Sex-stratified logistic regression models, adjusting for the sampling design, estimated associations between TV and other screen device use and SSB intake, physical activity, sleep, and obesity. Approximately 20% of participants used other screen devices for ≥5 hours daily. Watching TV ≥5 hours daily was associated with daily SSB consumption (aOR = 2.72, 95% CI: 2.23, 3.32) and obesity (aOR = 1.78, 95% CI: 1.40, 2.27). Using other screen devices ≥5 hours daily was associated with daily SSB consumption (aOR = 1.98, 95% CI: 1.69, 2.32), inadequate physical activity (aOR = 1.94, 95% CI: 1.69, 2.25), and inadequate sleep (aOR = 1.79, 95% CI: 1.54, 2.08). Using smartphones, tablets, computers, and videogames is associated with several obesity risk factors. Although further study is needed, families should be encouraged to limit both TV viewing and newer screen devices. Copyright © 2016 Elsevier Inc. All rights reserved.

  12. 32 CFR 150.21 - Appeals by the United States.

    Science.gov (United States)

    2010-07-01

    ... 32 National Defense 1 2010-07-01 2010-07-01 false Appeals by the United States. 150.21 Section 150... the United States. (a) Restricted filing. Only a representative of the government designated by the Judge Advocate General of the respective service may file an appeal by the United States under...

  13. Distributed Data Management and Distributed File Systems

    CERN Document Server

    Girone, Maria

    2015-01-01

    The LHC program has been successful in part due to the globally distributed computing resources used for collecting, serving, processing, and analyzing the large LHC datasets. The introduction of distributed computing early in the LHC program spawned the development of new technologies and techniques to synchronize information and data between physically separated computing centers. Two of the most challenging services are the distributed file systems and the distributed data management systems. In this paper I will discuss how we have evolved from local site services to more globally independent services in the areas of distributed file systems and data management and how these capabilities may continue to evolve into the future. I will address the design choices, the motivations, and the future evolution of the computing systems used for High Energy Physics.

  14. An examination of electronic file transfer between host and microcomputers for the AMPMODNET/AIMNET (Army Material Plan Modernization Network/Acquisition Information Management Network) classified network environment

    Energy Technology Data Exchange (ETDEWEB)

    Hake, K.A.

    1990-11-01

    This report presents the results of investigation and testing conducted by Oak Ridge National Laboratory (ORNL) for the Project Manager -- Acquisition Information Management (PM-AIM), and the United States Army Materiel Command Headquarters (HQ-AMC). It concerns the establishment of file transfer capabilities on the Army Materiel Plan Modernization (AMPMOD) classified computer system. The discussion provides a general context for micro-to-mainframe connectivity and focuses specifically upon two possible solutions for file transfer capabilities. The second section of this report contains a statement of the problem to be examined, a brief description of the institutional setting of the investigation, and a concise declaration of purpose. The third section lays a conceptual foundation for micro-to-mainframe connectivity and provides a more detailed description of the AMPMOD computing environment. It gives emphasis to the generalized International Business Machines, Inc. (IBM) standard of connectivity because of the predominance of this vendor in the AMPMOD computing environment. The fourth section discusses two test cases as possible solutions for file transfer. The first solution used is the IBM 3270 Control Program telecommunications and terminal emulation software. A version of this software was available on all the IBM Tempest Personal Computer 3s. The second solution used is Distributed Office Support System host electronic mail software with Personal Services/Personal Computer microcomputer e-mail software running with IBM 3270 Workstation Program for terminal emulation. Test conditions and results are presented for both test cases. The fifth section provides a summary of findings for the two possible solutions tested for AMPMOD file transfer. The report concludes with observations on current AMPMOD understanding of file transfer and includes recommendations for future consideration by the sponsor.

  15. Virus Alert: Ten Steps to Safe Computing.

    Science.gov (United States)

    Gunter, Glenda A.

    1997-01-01

    Discusses computer viruses and explains how to detect them; discusses virus protection and the need to update antivirus software; and offers 10 safe computing tips, including scanning floppy disks and commercial software, how to safely download files from the Internet, avoiding pirated software copies, and backing up files. (LRW)

  16. File access prediction using neural networks.

    Science.gov (United States)

    Patra, Prashanta Kumar; Sahu, Muktikanta; Mohapatra, Subasish; Samantray, Ronak Kumar

    2010-06-01

    One of the most vexing issues in the design of a high-speed computer is the wide gap in access times between memory and disk. To address this problem, static file access predictors have been used. In this paper, we propose dynamic file access predictors using neural networks that, with proper tuning, significantly improve the accuracy, success-per-reference, and effective-success-rate-per-reference. In particular, we verified that incorrect predictions were reduced from 53.11% to 43.63% for the proposed neural network prediction method with a standard configuration, compared with the recent popularity (RP) method. With manual tuning for each trace, we are able to improve further on the misprediction rate and effective-success-rate-per-reference obtained with the standard configuration. Simulations on distributed file system (DFS) traces reveal that an exact-fit radial basis function (RBF) network gives better prediction in high-end systems, whereas a multilayer perceptron (MLP) trained with Levenberg-Marquardt (LM) backpropagation outperforms it on systems with good computational capability. Probabilistic and competitive predictors are the most suitable for workstations with limited resources, and the former is more efficient than the latter for servers with the maximum number of system calls. Finally, we conclude that the MLP with the LM backpropagation algorithm has a better file-prediction success rate than the simple perceptron, last successor, stable successor, and best-k-out-of-m predictors.
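    Two of the baseline predictors compared above, last successor and recent popularity (RP), are simple enough to sketch directly. The trace and window size below are invented for illustration:

```python
# "Last successor" predicts the file that followed the current file the
# last time it was accessed; "recent popularity" predicts the most
# frequent file within a sliding window of recent accesses.
from collections import Counter

def last_successor(trace, current):
    pred = None
    for prev, nxt in zip(trace, trace[1:]):
        if prev == current:
            pred = nxt  # keep the most recent successor of `current`
    return pred

def recent_popularity(trace, window=4):
    recent = trace[-window:]
    return Counter(recent).most_common(1)[0][0]

trace = ["a", "b", "a", "c", "a", "b"]
```

    A neural predictor such as the MLP or RBF network described above learns a mapping from a window of recent accesses to the next access, generalizing beyond these fixed rules.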

  17. Identifiable Data Files - Name and Address File and Vital...

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Names and Addresses File and the Vital Status File are subsets of the data elements in the Enrollment Database (EDB). The particular information in each file is...

  18. [The design of a family of ultrasonic diagnostic units based on up-to-date computer technologies].

    Science.gov (United States)

    Trukhanov, A I; Nagulin, N E

    1996-01-01

    Small ultrasonic devices are now being used in medical practice. However, the vast majority of them are design-complete systems that cannot incorporate additional software and hardware to accumulate measurements, set up databases, perform additional postprocessing, or transmit measurements over networks and telephone lines. On the other hand, both domestically and abroad, work is under way on systems for archiving and processing ultrasonic images and on workstations for physicians engaged in radiation diagnosis. At the same time, directly interfacing ultrasonic equipment with standard computer facilities is impossible in many practical cases. One way of solving this problem is to design ultrasonic devices using Multimedia IBM PC technology, which can integrate various information media such as display, sound, and textual data. In this approach, the ultrasonic device hardware is implemented as standard IBM PC modules. Sonomed ultrasonic devices may be considered examples of such Multimedia IBM PC-based devices.

  19. Computer says 2.5 litres – how best to incorporate intelligent software into clinical decision making in the intensive care unit?

    Science.gov (United States)

    Lane, Katie; Boyd, Owen

    2009-01-01

    What will be the role of the intensivist when computer-assisted decision support reaches maturity? Celi's group reports that Bayesian theory can predict a patient's fluid requirement on day 2 in 78% of cases, based on data collected on day 1 and the known associations between those data, based on observations in previous patients in their unit. There are both advantages and limitations to the Bayesian approach, and this test study identifies areas for improvement in future models. Although such models have the potential to improve diagnostic and therapeutic accuracy, they must be introduced judiciously and locally to maximize their effect on patient outcome. Efficacy is thus far undetermined, and these novel approaches to patient management raise new challenges, not least medicolegal ones. PMID:19232073

  20. Computer says 2.5 litres--how best to incorporate intelligent software into clinical decision making in the intensive care unit?

    Science.gov (United States)

    Lane, Katie; Boyd, Owen

    2009-01-01

    What will be the role of the intensivist when computer-assisted decision support reaches maturity? Celi's group reports that Bayesian theory can predict a patient's fluid requirement on day 2 in 78% of cases, based on data collected on day 1 and the known associations between those data, based on observations in previous patients in their unit. There are both advantages and limitations to the Bayesian approach, and this test study identifies areas for improvement in future models. Although such models have the potential to improve diagnostic and therapeutic accuracy, they must be introduced judiciously and locally to maximize their effect on patient outcome. Efficacy is thus far undetermined, and these novel approaches to patient management raise new challenges, not least medicolegal ones.
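    The Bayesian prediction described above amounts to updating a prior over the next day's fluid requirement with likelihoods of today's observations, as learned from previous patients in the unit. A minimal sketch with invented numbers (the real model conditions on many more day-1 variables):

```python
# Posterior over tomorrow's fluid requirement given one day-1 finding:
# P(class | obs) is proportional to P(class) * P(obs | class), normalized.
def posterior(prior, likelihoods, observation):
    unnorm = {c: prior[c] * likelihoods[c][observation] for c in prior}
    total = sum(unnorm.values())
    return {c: p / total for c, p in unnorm.items()}

prior = {"low_fluid": 0.5, "high_fluid": 0.5}
likelihoods = {                     # P(day-1 finding | day-2 requirement)
    "low_fluid": {"oliguria": 0.2, "normal": 0.8},
    "high_fluid": {"oliguria": 0.7, "normal": 0.3},
}
p = posterior(prior, likelihoods, "oliguria")
```

    With these invented numbers, observing oliguria on day 1 shifts the posterior strongly toward a high fluid requirement on day 2.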

  1. Unit6 reading 22-year-old computer programmer murdered: an experiential, interactive reading case study

    Institute of Scientific and Technical Information of China (English)

    赵冠华

    2015-01-01

    [Case background] Reading instruction is an important component of English language teaching, and reading lessons have long received much attention in junior middle school English classes. Yet despite considerable effort by both teachers and students, the results of reading instruction remain unsatisfactory, and this pattern of high effort for low returns greatly troubles teachers. In the lesson Unit6 reading, 22-year-old computer programmer murdered, I tried an experiential, interactive approach to reading and obtained good teaching results. [Case analysis]

  2. Tandem processes promoted by a hydrogen shift in 6-arylfulvenes bearing acetalic units at ortho position: a combined experimental and computational study

    Directory of Open Access Journals (Sweden)

    Mateo Alajarin

    2016-02-01

    Full Text Available 6-Phenylfulvenes bearing 1,3-dioxolan-2-yl or 1,3-dioxan-2-yl substituents at the ortho position convert into mixtures of 4- and 9-(hydroxyalkoxy)-substituted benz[f]indenes as a result of cascade processes initiated by a thermally activated hydrogen shift. Structurally related fulvenes with non-cyclic acetalic units afforded mixtures of 4- and 9-alkoxybenz[f]indenes under similar thermal conditions. Mechanistic paths promoted by an initial [1,4]-, [1,5]-, [1,7]- or [1,9]-H shift are conceivable for explaining these conversions. Deuterium labelling experiments exclude the [1,4]-hydride shift as the first step. A computational study scrutinized the reaction channels of these tandem conversions starting with [1,5]-, [1,7]- and [1,9]-H shifts, revealing that this first step is the rate-determining one and that the [1,9]-H shift has the lowest energy barrier.

  3. Design of a linear detector array unit for high energy x-ray helical computed tomography and linear scanner

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jeong Tae; Park, Jong Hwan; Kim, Gi Yoon [Dept. of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon (Korea, Republic of); Kim, Dong Geun [Medical Imaging Department, ASTEL Inc., Seongnam (Korea, Republic of); Park, Shin Woong; Yi, Yun [Dept. of Electronics and Information Eng, Korea University, Seoul (Korea, Republic of); Kim, Hyun Duk [Research Center, Luvantix ADM Co., Ltd., Daejeon (Korea, Republic of)

    2016-11-15

    A linear detector array unit (LdAu) was proposed and designed for high-energy X-ray 2-D and 3-D imaging systems for industrial non-destructive testing. Specifically for 3-D imaging, a helical CT with a 15 MeV linear accelerator and a curved detector is proposed. The arc-shaped detector can be formed from many LdAus, all of which are arranged to face the focal spot when the source-to-detector distance is fixed depending on the application. An LdAu is composed of 10 modules, and each module has 48 channels of CdWO4 (CWO) blocks and Si PIN photodiodes with 0.4 mm pitch. This modular design was made for easy manufacturing and maintenance. Through Monte Carlo simulation, the CWO detector thickness of 17 mm was optimally determined. The silicon PIN photodiodes were designed as 48-channel arrays, fabricated on NTD (neutron transmutation doping) wafers of high resistivity, and showed excellent leakage-current properties below 1 nA at 10 V reverse bias. To minimize low-voltage breakdown, the edges of the active layer and the guard ring were designed with a curved shape. The data acquisition system was also designed and fabricated as three independent functional boards: a sensor board, a capture board and a communication board to a PC. This paper describes the design of the detectors (CWO blocks and Si PIN photodiodes) and the 3-board data acquisition system with their simulation results.

  4. Patient Assessment File (PAF)

    Data.gov (United States)

    Department of Veterans Affairs — The Patient Assessment File (PAF) database compiles the results of the Patient Assessment Instrument (PAI) questionnaire filled out for intermediate care Veterans...

  5. RRB Earnings File (RRBERN)

    Data.gov (United States)

    Social Security Administration — RRBERN contains records for all beneficiaries on the RRB's PSSVES file who's SSNs are validated through the SVES processing. Validated output is processed through...

  6. SCR Algorithm: Saving/Restoring States of File Systems

    Institute of Scientific and Technical Information of China (English)

    魏晓辉; 鞠九滨

    2000-01-01

    Fault tolerance is very important in cluster computing and has been implemented in many well-known cluster computing systems using checkpoint/restart mechanisms. But existing checkpointing algorithms cannot restore the state of a file system when rolling back the execution of a program, so existing fault-tolerance systems place many restrictions on file access. This paper presents the SCR algorithm, an algorithm based on atomic operations and consistent scheduling that can restore the state of file systems. In the SCR algorithm, system calls on file systems are classified into idempotent and non-idempotent operations: a non-idempotent operation modifies the file system's state, while an idempotent operation does not. The SCR algorithm tracks changes of the file system state. It logs to disk each non-idempotent operation used by user programs, together with the information needed to restore the operation. When rolling back a program to a checkpoint, the SCR algorithm reverts the file system state to that of the last checkpoint. By using the SCR algorithm, users are allowed to use any file operation in their programs.
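    The logging scheme described above, recording undo information for each non-idempotent operation so the file system state can be reverted to the last checkpoint, can be sketched as follows. An in-memory dict stands in for the real file system, and the class names are invented:

```python
# SCR-style rollback sketch: non-idempotent operations (writes) log the
# prior value; idempotent operations (reads) need no logging. Rollback
# replays the undo log in reverse to restore the checkpointed state.
class LoggedFS:
    def __init__(self):
        self.files = {}   # file system state: name -> contents
        self.log = []     # undo records since the last checkpoint

    def write(self, name, data):            # non-idempotent: log old value
        self.log.append((name, self.files.get(name)))
        self.files[name] = data

    def read(self, name):                   # idempotent: no logging needed
        return self.files.get(name)

    def checkpoint(self):
        self.log.clear()

    def rollback(self):
        for name, old in reversed(self.log):
            if old is None:
                del self.files[name]        # file did not exist before
            else:
                self.files[name] = old
        self.log.clear()

fs = LoggedFS()
fs.write("a.txt", "v1")
fs.checkpoint()
fs.write("a.txt", "v2")
fs.write("b.txt", "new")
fs.rollback()                               # revert to checkpoint state
```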

  7. A computer-assisted recording, diagnosis and management of the medically ill system for use in the intensive care unit: A preliminary report

    Directory of Open Access Journals (Sweden)

    John George

    2009-01-01

    Full Text Available Background: Computerized medical information systems have been popularized over the last two decades to improve quality and safety and to decrease medical errors. Aim: To develop a clinician-friendly computer-based support system for the intensive care unit (ICU) that incorporates recording, reminders, alerts, checklists and diagnostic differentials for common conditions encountered in critical care. Materials and Methods: This project was carried out at the Medical ICU, CMC Hospital, Vellore, in collaboration with the Computer Science Department, VIT University. The first phase was to design and develop monitoring and medication sheets. Terms such as checklists (intervention lists that pop up at defined times for all patients), reminders (interventions unique to each patient) and alerts (time-based, value-based and trend-based) were defined. The diagnostic and intervention bundles were characterized in the second phase. The accuracy and reliability of the software in generating alerts, reminders and diagnoses was tested in the third phase. The fourth phase will be to integrate the system with the hospital information system and the bedside monitors. Results: Alpha testing was performed using six scenarios written by intensivists. The software generated real-time alerts and reminders and provided diagnostic differentials relevant to critical care. Predefined interventions for each diagnostic possibility appeared as pop-ups. Problems identified during alpha testing were rectified prior to beta testing. Conclusions: The use of a computer-assisted monitoring, recording and diagnostic system appears promising. It is envisaged that further software refinements following beta testing will facilitate improvement of quality and safety in the critical care environment.
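    The alert types defined above, value-based and trend-based, can be sketched as simple rules. The vital-sign names and thresholds below are invented for illustration, not taken from the system described:

```python
# Value-based alerts fire when a reading leaves its reference range;
# trend-based alerts fire on a sustained rise across recent readings.
def value_alert(name, value, low, high):
    return f"{name} out of range" if not (low <= value <= high) else None

def trend_alert(name, series, rising_steps=3):
    recent = series[-rising_steps - 1:]
    rising = all(b > a for a, b in zip(recent, recent[1:]))
    return f"{name} rising" if rising else None

alerts = [a for a in (
    value_alert("heart_rate", 128, 60, 100),
    trend_alert("lactate", [1.1, 1.4, 1.9, 2.6]),
    value_alert("temperature", 37.0, 36.0, 38.0),
) if a]
```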

  8. 77 FR 66497 - Self-Regulatory Organizations; The Depository Trust Company; Notice of Filing and Immediate...

    Science.gov (United States)

    2012-11-05

    ... and distributing information to its Participants using its proprietary computer to computer facility (``CCF'') files. In order to reduce risk, improve transparency and increase efficiency in the announcing... the end-of-day batch CCF files. Participants that have volunteered to participate in a pilot...

  9. Green Computing

    Directory of Open Access Journals (Sweden)

    K. Shalini

    2013-01-01

    Full Text Available Green computing is all about using computers in a smarter and eco-friendly way. It is the environmentally responsible use of computers and related resources, which includes the implementation of energy-efficient central processing units, servers and peripherals, as well as reduced resource consumption and proper disposal of electronic waste. Computers certainly make up a large part of many people's lives and traditionally are extremely damaging to the environment. Manufacturers of computers and their parts have been espousing the green cause to help protect the environment from computers and electronic waste in any way. Research continues into key areas such as making the use of computers as energy-efficient as possible, and designing algorithms and systems for efficiency-related computer technologies.

  10. Register file soft error recovery

    Science.gov (United States)

    Fleischer, Bruce M.; Fox, Thomas W.; Wait, Charles D.; Muff, Adam J.; Watson, III, Alfred T.

    2013-10-15

    Register file soft error recovery including a system that includes a first register file and a second register file that mirrors the first register file. The system also includes an arithmetic pipeline for receiving data read from the first register file, and error detection circuitry to detect whether the data read from the first register file includes corrupted data. The system further includes error recovery circuitry to insert an error recovery instruction into the arithmetic pipeline in response to detecting the corrupted data. The inserted error recovery instruction replaces the corrupted data in the first register file with a copy of the data from the second register file.
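The recovery flow in this abstract (a mirrored copy, error detection on read, and repair from the mirror) can be illustrated with a toy software model. This is a hypothetical sketch only: the patent describes hardware circuitry, and the class, parity scheme and register width below are invented for illustration.

```python
class MirroredRegisterFile:
    """Toy model of mirrored register-file soft error recovery.

    Writes go to both the primary and the shadow copy; a read that fails
    a parity check is repaired from the shadow copy, mimicking the
    inserted error-recovery instruction described in the abstract.
    """

    def __init__(self, size=32):
        self.primary = [0] * size
        self.shadow = [0] * size   # mirror of the primary register file
        self.parity = [0] * size   # even-parity bit stored per register

    @staticmethod
    def _parity(value):
        return bin(value).count("1") % 2

    def write(self, idx, value):
        self.primary[idx] = value
        self.shadow[idx] = value
        self.parity[idx] = self._parity(value)

    def read(self, idx):
        value = self.primary[idx]
        if self._parity(value) != self.parity[idx]:  # corruption detected
            value = self.shadow[idx]                 # recover from the mirror
            self.primary[idx] = value                # repair the primary copy
        return value


rf = MirroredRegisterFile()
rf.write(3, 0b1011)
rf.primary[3] = 0b1010   # inject a single-bit soft error into the primary
recovered = rf.read(3)   # detection and recovery happen on the read path
```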

  11. ESUSA: U. S. endangered species distribution file

    Energy Technology Data Exchange (ETDEWEB)

    Nagy, J; Calef, C E

    1978-05-01

    A file containing distribution data on federally listed or proposed endangered species of the United States is described. Included are (a) the common name, (b) the scientific name, (c) the taxonomic family, (d) the OES/FWS/USDI group (mammal, bird, etc.), (e) the status, (f) the geographic distribution by counties, and (g) Federal Register references. Status types are endangered, threatened, proposed, under review, deleted, and rejected. Distribution is by Federal Information Processing Standard (FIPS) county code and is of four types: designated critical habitat, present range, potential range, and historic range. The file is currently being used in conjunction with similar data on projected future energy facilities to anticipate possible conflicts. However, the file would be useful to any project correlating endangered species with location information expressed by county. An example is as an aid in evaluating Forest Service or Bureau of Land Management proposed wilderness areas.

  12. FHEO Filed Cases

    Data.gov (United States)

    Department of Housing and Urban Development — The dataset is a list of all the Title VIII fair housing cases filed by FHEO from 1/1/2007 - 12/31/2012 including the case number, case name, filing date, state and...

  13. Next generation WLCG File Transfer Service (FTS)

    CERN Document Server

    CERN. Geneva

    2012-01-01

    LHC experiments at CERN and worldwide utilize WLCG resources and middleware components to perform distributed computing tasks. One of the most important tasks is reliable file replication. It is a complex problem, suffering from transfer failures, disconnections, transfer duplication, server and network overload, differences in storage systems, etc. To address these problems, EMI and gLite have provided the independent File Transfer Service (FTS) and Grid File Access Library (GFAL) tools. Their development started almost a decade ago; in the meantime, requirements in data management have changed, and the old architecture of FTS and GFAL cannot easily support these changes. Technology has also been progressing: FTS and GFAL do not fit into the new paradigms (cloud and messaging, for example). To be able to serve the next stage of LHC data taking (from 2013), we need a new generation of these tools: FTS 3 and GFAL 2. We envision a service requiring minimal configuration, which can dynamically adapt to the...

  14. Reactivity effects in VVER-1000 of the third unit of the kalinin nuclear power plant at physical start-up. Computations in ShIPR intellectual code system with library of two-group cross sections generated by UNK code

    Science.gov (United States)

    Zizin, M. N.; Zimin, V. G.; Zizina, S. N.; Kryakvin, L. V.; Pitilimov, V. A.; Tereshonok, V. A.

    2010-12-01

    The ShIPR intellectual code system for mathematical simulation of nuclear reactors includes a set of computing modules implementing the preparation of macro cross sections on the basis of the two-group library of neutron-physics cross sections obtained for the SKETCH-N nodal code. This library is created by using the UNK code for 3D diffusion computation of first VVER-1000 fuel loadings. Computation of neutron fields in the ShIPR system is performed using the DP3 code in the two-group diffusion approximation in 3D triangular geometry. The efficiency of all groups of control rods for the first fuel loading of the third unit of the Kalinin Nuclear Power Plant is computed. The temperature, barometric, and density effects of reactivity as well as the reactivity coefficient due to the concentration of boric acid in the reactor were computed additionally. Results of computations are compared with the experiment.

  15. 78 FR 45513 - Privacy Act of 1974; Computer Matching Program

    Science.gov (United States)

    2013-07-29

    .... DESCRIPTION OF COMPUTER MATCHING PROGRAM: Each participating SPAA will send ACF an electronic file of eligible public assistance client information. These files are non- Federal computer records maintained by the... on no more than 10,000,000 public assistance beneficiaries. 2. The DMDC computer database...

  16. Study on efficiency of time computation in x-ray imaging simulation base on Monte Carlo algorithm using graphics processing unit

    Science.gov (United States)

    Setiani, Tia Dwi; Suprijadi, Haryanto, Freddy

    2016-03-01

    Monte Carlo (MC) is one of the most powerful techniques for simulation in x-ray imaging. The MC method can simulate radiation transport within matter with high accuracy and provides a natural way to simulate radiation transport in complex systems. One of the MC-based codes widely used for radiographic image simulation is MC-GPU, a code developed by Andreu Badal. This study aimed to investigate the computation time of x-ray imaging simulation on a GPU (Graphics Processing Unit) compared to a standard CPU (Central Processing Unit). Furthermore, the effect of physical parameters on the quality of radiographic images and a comparison of the image quality resulting from simulation on the GPU and CPU are evaluated in this paper. The simulations were run on a CPU in serial, and on two GPUs with 384 cores and 2304 cores. In the GPU simulations, each core calculates one photon, so a large number of photons are calculated simultaneously. Results show that simulation times on the GPU were significantly shorter than on the CPU. Simulations on the 2304-core GPU ran about 64-114 times faster than on the CPU, while simulations on the 384-core GPU ran about 20-31 times faster than on a single CPU core. Another result shows that optimum image quality was obtained with 10^8 or more histories and energies from 60 keV to 90 keV. Analyzed statistically, the quality of the GPU and CPU images is essentially the same.

  17. Analog and hybrid computing

    CERN Document Server

    Hyndman, D E

    2013-01-01

    Analog and Hybrid Computing focuses on the operations of analog and hybrid computers. The book first outlines the history of computing devices that influenced the creation of analog and digital computers. The types of problems to be solved on computers, computing systems, and digital computers are discussed. The text looks at the theory and operation of electronic analog computers, including linear and non-linear computing units and the use of analog computers as operational amplifiers. The monograph examines the preparation of problems to be solved on computers. Flow diagrams, methods of ampl

  18. Dynamic Non-Hierarchical File Systems for Exascale Storage

    Energy Technology Data Exchange (ETDEWEB)

    Long, Darrell E. [PI; Miller, Ethan L [Co PI

    2015-02-24

    This constitutes the final report for “Dynamic Non-Hierarchical File Systems for Exascale Storage”. The ultimate goal of this project was to improve data management in scientific computing and high-end computing (HEC) applications, and to achieve this goal we proposed: to develop the first, HEC-targeted, file system featuring rich metadata and provenance collection, extreme scalability, and future storage hardware integration as core design goals, and to evaluate and develop a flexible non-hierarchical file system interface suitable for providing more powerful and intuitive data management interfaces to HEC and scientific computing users. Data management is swiftly becoming a serious problem in the scientific community – while copious amounts of data are good for obtaining results, finding the right data is often daunting and sometimes impossible. Scientists participating in a Department of Energy workshop noted that most of their time was spent “...finding, processing, organizing, and moving data and it’s going to get much worse”. Scientists should not be forced to become data mining experts in order to retrieve the data they want, nor should they be expected to remember the naming convention they used several years ago for a set of experiments they now wish to revisit. Ideally, locating the data you need would be as easy as browsing the web. Unfortunately, existing data management approaches are usually based on hierarchical naming, a 40 year-old technology designed to manage thousands of files, not exabytes of data. Today’s systems do not take advantage of the rich array of metadata that current high-end computing (HEC) file systems can gather, including content-based metadata and provenance information. As a result, current metadata search approaches are typically ad hoc and often work by providing a parallel management system to the “main” file system, as is done in Linux (the locate utility), personal computers, and enterprise search

  19. Tax_Unit_Certification_Final_2012

    Data.gov (United States)

    Kansas Data Access and Support Center — The Statewide GIS Tax Unit boundary file was created through a collaborative partnership between the State of Kansas Department of Revenue Property Valuation...

  20. Tax_Units_Certification_2013_0301

    Data.gov (United States)

    Kansas Data Access and Support Center — The Statewide GIS Tax Unit boundary file was created through a collaborative partnership between the State of Kansas Department of Revenue Property Valuation...

  1. Health, United States, 2012: Men's Health

    Science.gov (United States)

  2. Designing and Implementing an OVERFLOW Reader for ParaView and Comparing Performance Between Central Processing Units and Graphical Processing Units

    Science.gov (United States)

    Chawner, David M.; Gomez, Ray J.

    2010-01-01

    In the Applied Aerosciences and CFD Branch at Johnson Space Center, computational simulations are run that face many challenges, two of which are the ability to customize software for specialized needs and the need to run simulations as fast as possible. Many different tools are used for running these simulations, and each one has its own pros and cons. Once these simulations are run, there needs to be software capable of visualizing the results in an appealing manner. Some of this software is open source, meaning that anyone can edit the source code, make modifications, and distribute it to all other users in a future release. This is very useful, especially in this branch, where many different tools are being used. File readers can be written to load any file format into a program, to ease the bridging from one tool to another. Programming such a reader requires knowledge of the file format being read as well as the equations necessary to obtain derived values after loading. When running these CFD simulations, extremely large files are loaded and values calculated. These simulations usually take a few hours to complete, even on the fastest machines. Graphics processing units (GPUs) are typically used to render graphics on computers; however, in recent years, GPUs have been used for more general applications because of their speed. Applications run on GPUs have been known to run up to forty times faster than they would on conventional central processing units (CPUs). If these CFD programs are extended to run on GPUs, they would require much less time to complete. This would allow more simulations to be run in the same amount of time and possibly permit more complex computations.
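The reader-then-derived-values pattern described above can be sketched generically. The binary layout here is a hypothetical toy format invented for illustration, not the actual OVERFLOW grid/solution format:

```python
import struct

def read_vector_field(blob):
    """Read a toy binary field: an int32 count, then count (u, v, w) float64 triples."""
    (n,) = struct.unpack_from("<i", blob)
    flat = struct.unpack_from(f"<{3 * n}d", blob, offset=4)
    return [flat[i:i + 3] for i in range(0, 3 * n, 3)]

def speed(field):
    """A derived value computed after loading, e.g. velocity magnitude per point."""
    return [sum(c * c for c in v) ** 0.5 for v in field]


# Two sample velocity vectors packed into the toy format.
blob = struct.pack("<i6d", 2, 3.0, 4.0, 0.0, 1.0, 2.0, 2.0)
mags = speed(read_vector_field(blob))
```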

  3. [Development of a computer-assisted thermoelectric Peltier cold test procedure with integrated photoplethysmography unit for noninvasive evaluation of acral skin circulation].

    Science.gov (United States)

    Klyscz, T; Hahn, M; Beck, W; Blazek, V; Rassner, G; Jünger, M

    1997-09-01

    Local cold provocation tests are an important, non-invasive diagnostic tool for collecting information about skin perfusion during exposure to cold. In patients suffering from vasospastic circulatory disorders such as Raynaud's phenomenon, it is of particular importance to be able to collect data about acral circulation during the cooling test in the asymptomatic intervals between naturally occurring attacks. By carrying out a series of cold provocation tests, for example, patient response to a newly initiated therapy can be assessed. Here we present a recently developed, computer-aided thermoelectric Peltier device with an integrated finger holder for carrying out local cold provocation tests. The electronic control unit of the Peltier element makes it possible to cool or heat to predefined temperatures. At the same time, the temperature of both the finger holder and the skin can be measured. A photoplethysmographic sensor is also integrated within the device, enabling the response of the pulse waves to the controlled temperature changes to be monitored accurately. It is also possible to simultaneously measure laser Doppler flux and capillary pressure in the nailfold, and to perform nailfold capillaroscopy to determine red blood cell velocity. The new device provides us with the technical means to study the interrelationship between acral skin perfusion and the thermal regulation of the skin.

  4. CIF (Crystallographic Information File): A Standard for Crystallographic Data Interchange

    Science.gov (United States)

    Brown, I. D.

    1996-01-01

    The Crystallographic Information File (CIF) uses the self-defining STAR file structure. This requires the creation of a dictionary of data names and definitions. A basic dictionary of terms needed to describe the crystal structures of small molecules was approved in 1991 and is currently used for the submission of papers to Acta Crystallographica C. A number of extensions to this dictionary are in preparation. By storing the dictionary itself as a STAR file, the definitions and relationships in the CIF dictionary become computer interpretable. This offers many possibilities for the automatic handling of crystallographic information. PMID:27805170
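As a small illustration of the tag/value style of CIF items, the fragment below parses simple single-line `_tag value` pairs. This is a toy parser only, assuming no CIF loops, multi-line values, or comments; the tags and values are typical examples, not drawn from the article:

```python
def parse_cif_items(text):
    """Parse simple (non-loop) CIF tag/value pairs from a CIF fragment.

    CIF data names start with an underscore; this toy parser handles only
    single-line items and strips surrounding quotes from values.
    """
    items = {}
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("_"):
            parts = line.split(None, 1)       # split tag from the rest
            if len(parts) == 2:
                tag, value = parts
                items[tag] = value.strip("'\"")
    return items


sample = """\
data_silicon
_cell_length_a  5.4307
_cell_length_b  5.4307
_symmetry_space_group_name_H-M  'F d -3 m'
"""
cell = parse_cif_items(sample)
```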

  5. The Design of a Secure File Storage System

    Science.gov (United States)

    1979-12-01

    ...into the FM process memory to check for proper discretionary access. The complete pathname, in terms of the FSS file system, is passed to... research shows that a viable approach to the question of internal computer security exists. This approach, sometimes termed the "security kernel" approach... a significant advantage if the data file is long. After the file is stored by the IO process, the FM process gets a ticket to the

  6. Value-Based File Retention: File Attributes as File Value and Information Waste Indicators

    NARCIS (Netherlands)

    Wijnhoven, Fons; Amrit, Chintan; Dietz, Pim

    2014-01-01

    Several file retention policy methods propose that a file retention policy should be based on file value. Though such a retention policy might increase the value of accessible files, the method to arrive at such a policy is underresearched. This article discusses how one can arrive at a method for d
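A file-value-based retention policy of the kind proposed could, for instance, score files from their attributes. The attributes, weights, and formula below are invented purely for illustration and are not from the article:

```python
def file_value(age_days, accesses_last_year, size_mb):
    """Toy file-value score from attributes (invented weights, not from the paper).

    Frequently accessed, recently modified, small files score high; stale
    bulky files score low, marking them as candidates for archival or deletion.
    """
    recency = 1.0 / (1.0 + age_days / 365.0)          # newer -> closer to 1
    usage = min(accesses_last_year, 100) / 100.0      # capped access frequency
    bulk_penalty = 1.0 / (1.0 + size_mb / 1024.0)     # smaller -> closer to 1
    return round(0.5 * usage + 0.3 * recency + 0.2 * bulk_penalty, 3)


fresh = file_value(age_days=30, accesses_last_year=50, size_mb=2)
stale = file_value(age_days=3650, accesses_last_year=0, size_mb=500)
```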

  7. DCFPAK: Dose coefficient data file package for Sandia National Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Eckerman, K.F.; Leggett, R.W.

    1996-07-31

    The FORTRAN-based computer package DCFPAK (Dose Coefficient File Package) has been developed to provide electronic access to the dose coefficient data files summarized in Federal Guidance Reports 11 and 12. DCFPAK also provides access to standard information regarding decay chains and assembles dose coefficients for all dosimetrically significant radioactive progeny of a specified radionuclide. DCFPAK was designed for application on a PC but, with minor modifications, may be implemented on a UNIX workstation.

  8. Low-Carbon Computing

    Science.gov (United States)

    Hignite, Karla

    2009-01-01

    Green information technology (IT) is grabbing more mainstream headlines--and for good reason. Computing, data processing, and electronic file storage collectively account for a significant and growing share of energy consumption in the business world and on higher education campuses. With greater scrutiny of all activities that contribute to an…

  9. Isothiourea-catalysed enantioselective pyrrolizine synthesis: synthetic and computational studies (ESI and crystallographic data: CCDC 1483759; DOI: 10.1039/c6ob01557c)

    Science.gov (United States)

    Stark, Daniel G.; Williamson, Patrick; Gayner, Emma R.; Musolino, Stefania F.; Kerr, Ryan W. F.; Taylor, James E.; Slawin, Alexandra M. Z.; O'Riordan, Timothy J. C.

    2016-01-01

    The catalytic enantioselective synthesis of a range of cis-pyrrolizine carboxylate derivatives with outstanding stereocontrol (14 examples, >95 : 5 dr, >98 : 2 er) through an isothiourea-catalyzed intramolecular Michael addition-lactonisation and ring-opening approach from the corresponding enone acid is reported. An optimised and straightforward three-step synthetic route to the enone acid starting materials from readily available pyrrole-2-carboxaldehydes is delineated, with benzotetramisole (5 mol%) proving the optimal catalyst for the enantioselective process. Ring-opening of the pyrrolizine dihydropyranone products with either MeOH or a range of amines leads to the desired products in excellent yield and enantioselectivity. Computation has been used to probe the factors leading to high stereocontrol, with the formation of the observed cis-stereoisomer predicted to be kinetically and thermodynamically favoured. PMID:27489030

  10. Closed Claim Query File

    Data.gov (United States)

    Social Security Administration — This file is used to hold information about disability claims that have been closed and have been selected for sampling.Sampling is the process whereby OQR reviews...

  11. Provider of Services File

    Data.gov (United States)

    U.S. Department of Health & Human Services — The POS file contains data on characteristics of hospitals and other types of healthcare facilities, including the name and address of the facility and the type of...

  12. MMLEADS Public Use File

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Medicare-Medicaid Linked Enrollee Analytic Data Source (MMLEADS) Public Use File (PUF) contains demographic, enrollment, condition prevalence, utilization, and...

  13. Patient Treatment File (PTF)

    Data.gov (United States)

    Department of Veterans Affairs — This database is part of the National Medical Information System (NMIS). The Patient Treatment File (PTF) contains a record for each inpatient care episode provided...

  14. USEEIO Satellite Files

    Data.gov (United States)

    U.S. Environmental Protection Agency — These files contain the environmental data as particular emissions or resources associated with a BEA sectors that are used in the USEEIO model. They are organized...

  15. File System Virtual Appliances

    Science.gov (United States)

    2010-05-01

    ...some of my closest friendships. Ashraf, Ippo and Neil made life much more fun. Mike Merideth introduced me to fine scotch, hi-fi speakers, and piano ... file system virtual appliances. (4) We analyze the sources of latency in traditional inter-VM communication techniques and present a novel energy- and ... multiple file system implementations within the Sun UNIX kernel” [52]. This was achieved through two techniques. First, outside the file system layer

  16. INTERNET: FILE TRANSFER

    Directory of Open Access Journals (Sweden)

    Zainul Bakri

    2012-10-01

    Full Text Available One reason computer users connect to the Internet is the opportunity to copy ('download') information stored on the servers of other computer networks (for example, computer application programs, raw data, and so on). File Transfer Protocol (FTP) is the Internet mechanism for sending files from one place to a user's computer. For this purpose, a dedicated FTP program or a Web browser can be used.
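In Python, the download step described above can be sketched with the standard-library ftplib module; the host and paths below are placeholders, and the function name is invented for illustration:

```python
from ftplib import FTP

def download_file(host, remote_path, local_path):
    """Retrieve one file over FTP with an anonymous login.

    ``host`` and both paths are placeholders; RETR streams the remote
    file's bytes into the local file via the callback.
    """
    with FTP(host) as ftp:
        ftp.login()  # anonymous login unless credentials are passed
        with open(local_path, "wb") as fh:
            ftp.retrbinary(f"RETR {remote_path}", fh.write)
```

A call such as `download_file("ftp.example.org", "pub/data.csv", "data.csv")` would fetch the file, network access permitting.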

  17. Renewable Energy Atlas of the United States

    Energy Technology Data Exchange (ETDEWEB)

    Kuiper, J. [Environmental Science Division; Hlava, K. [Environmental Science Division; Greenwood, H. [Environmental Science Division; Carr, A. [Environmental Science Division

    2013-12-13

    The Renewable Energy Atlas (Atlas) of the United States is a compilation of geospatial data focused on renewable energy resources, federal land ownership, and base map reference information. This report explains how to add the Atlas to your computer and install the associated software. The report also includes: A description of each of the components of the Atlas; Lists of the Geographic Information System (GIS) database content and sources; and A brief introduction to the major renewable energy technologies. The Atlas includes the following: A GIS database organized as a set of Environmental Systems Research Institute (ESRI) ArcGIS Personal GeoDatabases, and ESRI ArcReader and ArcGIS project files providing an interactive map visualization and analysis interface.

  18. An Approach to Analyze Physical Memory Image File of Mac OS X

    Institute of Scientific and Technical Information of China (English)

    LiJuan Xu; LianHai Wang

    2014-01-01

    Memory analysis is one of the key techniques in computer live forensics. In particular, the analysis of a Mac OS X operating system's memory image file plays an important role in identifying the running status of an Apple computer. However, how to analyze the image file without using an extra "mach-kernel" file has been an unsolved difficulty. In this paper, we first compare several approaches for physical memory acquisition and analyze the effects of each approach on physical memory. Then, we discuss the traditional methods for physical memory file analysis of Mac OS X. Based on this discussion, a novel physical memory image file analysis approach that requires no extra "mach-kernel" file is proposed. We verify the performance of the new approach on Mac OS X 10.8.2. The experimental results show that the proposed approach is simpler and more practical than previous ones.

  19. 76 FR 12398 - Privacy Act of 1974, as Amended; Computer Matching Program (SSA/Bureau of the Public Debt (BPD...

    Science.gov (United States)

    2011-03-07

    ... comparison file compiled of records from our expanded Medicare Database (MDB) File system of records in order to support our administration of the prescription drug subsidy program. The MDB File system of... computer systems and provide the response file to us as soon as possible. This agreement covers...

  20. Cloud object store for checkpoints of high performance computing applications using decoupling middleware

    Energy Technology Data Exchange (ETDEWEB)

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2016-04-19

    Cloud object storage is enabled for checkpoints of high performance computing applications using a middleware process. A plurality of files, such as checkpoint files, generated by a plurality of processes in a parallel computing system are stored by obtaining said plurality of files from said parallel computing system; converting said plurality of files to objects using a log structured file system middleware process; and providing said objects for storage in a cloud object storage system. The plurality of processes may run, for example, on a plurality of compute nodes. The log structured file system middleware process may be embodied, for example, as a Parallel Log-Structured File System (PLFS). The log structured file system middleware process optionally executes on a burst buffer node.
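The claimed flow (obtain files from the parallel system, convert them to objects via a middleware step, hand the objects to an object store) can be mimicked in miniature. This is an illustrative sketch only; PLFS itself is a C/C++ interposition layer, and the key scheme and object layout below are invented:

```python
import hashlib
import tempfile
from pathlib import Path

def files_to_objects(paths):
    """Convert checkpoint files into objects keyed for a cloud object store.

    Mimics the conversion step above: each file becomes one object whose
    key is derived from its path, ready to be PUT into an object store.
    """
    objects = {}
    for path in map(Path, paths):
        data = path.read_bytes()
        key = hashlib.sha256(str(path).encode()).hexdigest()[:16]
        objects[key] = {"name": path.name, "size": len(data), "payload": data}
    return objects


# Demo: write a small "checkpoint" produced by one process, then convert it.
tmpdir = Path(tempfile.mkdtemp())
ckpt = tmpdir / "rank0.ckpt"
ckpt.write_bytes(b"checkpoint-data")
objects = files_to_objects([ckpt])
```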

  1. Cloud object store for archive storage of high performance computing data using decoupling middleware

    Energy Technology Data Exchange (ETDEWEB)

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2015-06-30

    Cloud object storage is enabled for archived data, such as checkpoints and results, of high performance computing applications using a middleware process. A plurality of archived files, such as checkpoint files and results, generated by a plurality of processes in a parallel computing system are stored by obtaining the plurality of archived files from the parallel computing system; converting the plurality of archived files to objects using a log structured file system middleware process; and providing the objects for storage in a cloud object storage system. The plurality of processes may run, for example, on a plurality of compute nodes. The log structured file system middleware process may be embodied, for example, as a Parallel Log-Structured File System (PLFS). The log structured file system middleware process optionally executes on a burst buffer node.

  2. Cloud object store for archive storage of high performance computing data using decoupling middleware

    Science.gov (United States)

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2015-06-30

    Cloud object storage is enabled for archived data, such as checkpoints and results, of high performance computing applications using a middleware process. A plurality of archived files, such as checkpoint files and results, generated by a plurality of processes in a parallel computing system are stored by obtaining the plurality of archived files from the parallel computing system; converting the plurality of archived files to objects using a log structured file system middleware process; and providing the objects for storage in a cloud object storage system. The plurality of processes may run, for example, on a plurality of compute nodes. The log structured file system middleware process may be embodied, for example, as a Parallel Log-Structured File System (PLFS). The log structured file system middleware process optionally executes on a burst buffer node.

  3. LASIP-III, a generalized processor for standard interface files. [For creating binary files from BCD input data and printing binary file data in BCD format (devised for fast reactor physics codes)

    Energy Technology Data Exchange (ETDEWEB)

    Bosler, G.E.; O' Dell, R.D.; Resnik, W.M.

    1976-03-01

    The LASIP-III code was developed for processing Version III standard interface data files which have been specified by the Committee on Computer Code Coordination. This processor performs two distinct tasks, namely, transforming free-field format, BCD data into well-defined binary files and providing for printing and punching data in the binary files. While LASIP-III is exported as a complete free-standing code package, techniques are described for easily separating the processor into two modules, viz., one for creating the binary files and one for printing the files. The two modules can be separated into free-standing codes or they can be incorporated into other codes. Also, the LASIP-III code can be easily expanded for processing additional files, and procedures are described for such an expansion. 2 figures, 8 tables.
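LASIP-III itself is Fortran, but its two tasks (turning free-field BCD text data into a well-defined binary file, and reading the binary data back for printing) have a compact analogue with Python's struct module. The record layout below, a count word followed by IEEE doubles, is invented for illustration:

```python
import struct

def bcd_to_binary(text):
    """Pack free-field numeric text into a binary record.

    Loosely analogous to LASIP-III's first task: each whitespace-separated
    number becomes an IEEE double, preceded by an int32 count word.
    """
    values = [float(tok) for tok in text.split()]
    return struct.pack(f"<i{len(values)}d", len(values), *values)

def binary_to_values(blob):
    """Inverse step (the printing side): read the count, unpack the doubles."""
    (count,) = struct.unpack_from("<i", blob)
    return list(struct.unpack_from(f"<{count}d", blob, offset=4))


record = bcd_to_binary("1.0  2.5e-3  -7")
```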

  4. Students "Hacking" School Computer Systems

    Science.gov (United States)

    Stover, Del

    2005-01-01

    This article deals with students hacking school computer systems. School districts are getting tough with students "hacking" into school computers to change grades, poke through files, or just pit their high-tech skills against district security. Dozens of students have been prosecuted recently under state laws on identity theft and unauthorized…

  6. DSSTOX MASTER STRUCTURE-INDEX FILE: SDF FILE AND DOCUMENTATION

    Science.gov (United States)

    The DSSTox Master Structure-Index File serves to consolidate, manage, and ensure quality and uniformity of the chemical and substance information spanning all DSSTox Structure Data Files, including those in development but not yet published separately on this website.

  7. Exploring the Future of Out-of-Core Computing with Compute-Local Non-Volatile Memory

    Directory of Open Access Journals (Sweden)

    Myoungsoo Jung

    2014-01-01

    Full Text Available Drawing parallels to the rise of general purpose graphical processing units (GPGPUs as accelerators for specific high-performance computing (HPC workloads, there is a rise in the use of non-volatile memory (NVM as accelerators for I/O-intensive scientific applications. However, existing works have explored use of NVM within dedicated I/O nodes, which are distant from the compute nodes that actually need such acceleration. As NVM bandwidth begins to out-pace point-to-point network capacity, we argue for the need to break from the archetype of completely separated storage. Therefore, in this work we investigate co-location of NVM and compute by varying I/O interfaces, file systems, types of NVM, and both current and future SSD architectures, uncovering numerous bottlenecks implicit in these various levels in the I/O stack. We present novel hardware and software solutions, including the new Unified File System (UFS, to enable fuller utilization of the new compute-local NVM storage. Our experimental evaluation, which employs a real-world Out-of-Core (OoC HPC application, demonstrates throughput increases in excess of an order of magnitude over current approaches.

  8. United States Historical Climatology Network (US HCN) monthly temperature and precipitation data

    Energy Technology Data Exchange (ETDEWEB)

    Daniels, R.C. [ed.] [Univ. of Tennessee, Knoxville, TN (United States). Energy, Environment and Resources Center; Boden, T.A. [ed.] [Oak Ridge National Lab., TN (United States); Easterling, D.R.; Karl, T.R.; Mason, E.H.; Hughes, P.Y.; Bowman, D.P. [National Climatic Data Center, Asheville, NC (United States)

    1996-01-11

    This document describes a database containing monthly temperature and precipitation data for 1221 stations in the contiguous United States. This network of stations, known as the United States Historical Climatology Network (US HCN), and the resulting database were compiled by the National Climatic Data Center, Asheville, North Carolina. These data represent the best available data from the United States for analyzing long-term climate trends on a regional scale. The data for most stations extend through December 31, 1994, and a majority of the station records are serially complete for at least 80 years. Unlike many data sets that have been used in past climate studies, these data have been adjusted to remove biases introduced by station moves, instrument changes, time-of-observation differences, and urbanization effects. These monthly data are available free of charge as a numeric data package (NDP) from the Carbon Dioxide Information Analysis Center. The NDP includes this document and 27 machine-readable data files consisting of supporting data files, a descriptive file, and computer access codes. This document describes how the stations in the US HCN were selected and how the data were processed, defines limitations and restrictions of the data, describes the format and contents of the magnetic media, and provides reprints of literature that discuss the editing and adjustment techniques used in the US HCN.

  9. Architecture of a high-performance PACS based on a shared file system

    Science.gov (United States)

    Glicksman, Robert A.; Wilson, Dennis L.; Perry, John H.; Prior, Fred W.

    1992-07-01

    The Picture Archive and Communication System developed by Loral Western Development Laboratories and Siemens Gammasonics Incorporated utilizes an advanced, high-speed, fault-tolerant image file server, or Working Storage Unit (WSU), combined with 100 Mbit per second fiber optic data links. This central shared file server is capable of supporting the needs of more than one hundred workstations and acquisition devices at interactive rates. If additional performance is required, additional working storage units may be configured in a hyper-star topology. Specialized processing and display hardware is used to enhance Apple Macintosh personal computers to provide a family of low-cost, easy-to-use, yet extremely powerful medical image workstations. The Siemens Litebox™ application software provides a consistent look and feel to the user interface of all workstations in the family. Modern database and wide-area communications technologies combine to support not only large hospital PACS but also outlying clinics and smaller facilities. Basic RIS functionality is integrated into the PACS database for convenience and data integrity.

  10. A Centralized Control and Dynamic Dispatch Architecture for File Integrity Analysis

    Directory of Open Access Journals (Sweden)

    Ronald DeMara

    2006-02-01

    Full Text Available The ability to monitor computer file systems for unauthorized changes is a powerful administrative tool. Ideally this task could be performed remotely under the direction of the administrator to allow on-demand checking, and use of tailorable reporting and exception policies targeted to adjustable groups of network elements. This paper introduces M-FICA, a Mobile File Integrity and Consistency Analyzer as a prototype to achieve this capability using mobile agents. The M-FICA file tampering detection approach uses MD5 message digests to identify file changes. Two agent types, Initiator and Examiner, are used to perform file integrity tasks. An Initiator travels to client systems, computes a file digest, then stores those digests in a database file located on write-once media. An Examiner agent computes a new digest to compare with the original digests in the database file. Changes in digest values indicate that the file contents have been modified. The design and evaluation results for a prototype developed in the Concordia agent framework are described.
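    The Initiator/Examiner digest workflow described above can be sketched in a few lines. This is a minimal Python illustration of the MD5-comparison idea only, not the Concordia mobile agents; the function names are illustrative.

```python
import hashlib
from pathlib import Path

def file_digest(path, chunk_size=65536):
    """Compute the MD5 digest of a file, reading it in chunks."""
    md5 = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            md5.update(chunk)
    return md5.hexdigest()

def build_baseline(paths):
    """Initiator role: record a digest for each monitored file."""
    return {str(p): file_digest(p) for p in paths}

def find_tampered(baseline):
    """Examiner role: recompute digests and report files whose contents changed."""
    return [p for p, d in baseline.items() if file_digest(p) != d]

# Example: baseline a file, modify it, and detect the change.
p = Path("monitored.txt")
p.write_text("original contents")
baseline = build_baseline([p])
p.write_text("tampered contents")
print(find_tampered(baseline))  # ['monitored.txt']
```

    In M-FICA the baseline digests are additionally stored on write-once media, so an intruder who modifies a file cannot also rewrite the recorded digest to match.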

  11. DSSTOX MASTER STRUCTURE-INDEX FILE: SDF FILE AND ...

    Science.gov (United States)

    The DSSTox Master Structure-Index File serves to consolidate, manage, and ensure quality and uniformity of the chemical and substance information spanning all DSSTox Structure Data Files, including those in development but not yet published separately on this website.

  12. Challenges of Hidden Data in the Unused Area Two within Executable Files

    Directory of Open Access Journals (Sweden)

    A. W. Naji

    2009-01-01

    Full Text Available Problem statement: Executable files are among the most important files in operating systems and in most systems designed by developers (programmers/software engineers); hiding information in these files is the basic goal of this study, because most users of any system cannot alter or modify their content. There are several challenges in hiding data in the unused area two within executable files: the size of the hidden information depends on the size of the cover file, the file size may differ before and after the hiding process, the cover file must remain available and function normally after the hiding process, and the changes made to the file may be detected by antivirus software. Approach: The system was designed around a release mechanism consisting of two functions. The first is hiding the information in unused area 2 of a PE file (.exe file) through four processes (specify the cover file, specify the information file, encrypt the information, and hide the information); the second is extracting the hidden information through three processes (specify the stego file, extract the information, and decrypt the information). Results: The programs were coded in the Java computer language and implemented on a Pentium PC. The designed algorithms help the proposed system hide and retrieve an information (data) file within unused area 2 of any executable file (.exe file). Conclusion: The size of the hidden data depends on the size of unused area 2 within the cover file, which equals 20% of the size of the .exe file before the hiding process. Most antivirus systems do not allow direct writes to executable files, so the approach of the proposed system is to keep the hidden information from being observed by these systems, and the .exe file still functions as usual after the hiding process.

  13. M-FILE FOR MIX DESIGN OF STRUCTURAL LIGHTWEIGHT CONCRETE USING DEVELOPED MODELS

    Directory of Open Access Journals (Sweden)

    M. ABDULLAHI

    2011-08-01

    Full Text Available An m-file for the mix design of structural lightweight concrete is presented. Mix design of structural lightweight concrete is normally conducted using the guidance in the standards. This can be a tasking process, involving reading and understanding the relevant standards, which renders it inefficient and liable to computational errors. A computer approach to mix design alleviates this problem. An m-file was developed in the MATLAB environment for the concrete mix design. The m-file has been tested and has proved efficient in computing the mix composition for the first trial batch of lightweight concrete mixes. It can also perform concrete mixture proportioning adjustment.

  14. Efficient compression of molecular dynamics trajectory files.

    Science.gov (United States)

    Marais, Patrick; Kenwood, Julian; Smith, Keegan Carruthers; Kuttel, Michelle M; Gain, James

    2012-10-15

    We investigate whether specific properties of molecular dynamics trajectory files can be exploited to achieve effective file compression. We explore two classes of lossy, quantized compression scheme: "interframe" predictors, which exploit temporal coherence between successive frames in a simulation, and more complex "intraframe" schemes, which compress each frame independently. Our interframe predictors are fast, memory-efficient and well suited to on-the-fly compression of massive simulation data sets, and significantly outperform the benchmark BZip2 application. Our schemes are configurable: atomic positional accuracy can be sacrificed to achieve greater compression. For high-fidelity compression, our linear interframe predictor gives the best results at very little computational cost: at moderate levels of approximation (12-bit quantization, maximum error ≈ 10^-2 Å), we can compress a 1-2 fs trajectory file to 5-8% of its original size. For 200 fs time steps, typically used in fine-grained water diffusion experiments, we can compress files to ~25% of their input size, still substantially better than BZip2. While compression performance degrades with high levels of quantization, the simulation error is typically much greater than the associated approximation error in such cases.
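    The interframe idea (quantize the coordinates, then store only the residual against a linear extrapolation of the two previous frames) can be sketched as follows. This is a plain-Python illustration of the general technique on a 1-D coordinate stream, not the authors' implementation.

```python
def quantize(coords, scale=100):
    """Quantize coordinates (e.g. in Å) to integers; precision is 1/scale."""
    return [round(x * scale) for x in coords]

def encode(q):
    """Replace each sample with its residual against the linear
    extrapolation 2*x[t-1] - x[t-2]; residuals stay small when
    motion is smooth, so an entropy coder compresses them well."""
    res = list(q[:2])
    if len(q) >= 2:
        res[1] = q[1] - q[0]
    for t in range(2, len(q)):
        res.append(q[t] - (2 * q[t - 1] - q[t - 2]))
    return res

def decode(res):
    """Invert encode() exactly; the only loss comes from quantization."""
    q = list(res[:1])
    if len(res) >= 2:
        q.append(res[1] + q[0])
    for t in range(2, len(res)):
        q.append(res[t] + 2 * q[t - 1] - q[t - 2])
    return q

# A smooth 1-D "trajectory": the residuals are tiny compared with the
# raw quantized values, which is where the compression gain comes from.
traj = [0.01 * t * t for t in range(100)]
q = quantize(traj)
assert decode(encode(q)) == q
```

    Note the trade-off the abstract describes: a larger `scale` (finer quantization) means larger residuals and weaker compression, while a smaller `scale` sacrifices positional accuracy for smaller output.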

  15. Measurements of file transfer rates over dedicated long-haul connections

    Energy Technology Data Exchange (ETDEWEB)

    Rao, Nageswara S [ORNL; Settlemyer, Bradley W [ORNL; Imam, Neena [ORNL; Hinkel, Gregory Carl [ORNL

    2016-01-01

    Wide-area file transfers are an integral part of several High-Performance Computing (HPC) scenarios. Dedicated network connections with high capacity, low loss rate, and low competing traffic are increasingly being provisioned over current HPC infrastructures to support such transfers. To gain insights into these file transfers, we collected transfer rate measurements for Lustre and xfs file systems between dedicated multi-core servers over emulated 10 Gbps connections with round trip times (rtt) in the 0-366 ms range. Memory transfer throughput over these connections is measured using iperf, and file IO throughput on the host systems is measured using xddprof. We consider two file system configurations: Lustre over an IB network, and xfs over an SSD connected to the PCI bus. Files are transferred using xdd across these connections and the transfer rates are measured, which indicates the need to jointly optimize the connection and host file IO parameters to achieve peak transfer rates. In particular, these measurements indicate that (i) the peak file transfer rate is lower than the peak connection and host IO throughputs, in some cases reaching only 50% of them or less, (ii) xdd request sizes that achieve peak throughput for host file IO do not necessarily lead to peak file transfer rates, and (iii) parallelism in host IO and TCP transport does not always improve the file transfer rates.
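    The memory-to-memory baseline that iperf provides can be mimicked with a toy loopback measurement. The sketch below (plain Python sockets, standing in for iperf; it says nothing about file IO) times a TCP transfer between two endpoints on the same host:

```python
import socket
import threading
import time

def _serve(srv, nbytes, chunk=1 << 16):
    """Accept one connection and push nbytes of zeros through it."""
    conn, _ = srv.accept()
    buf = b"\0" * chunk
    sent = 0
    while sent < nbytes:
        conn.sendall(buf)
        sent += len(buf)
    conn.close()

def measure_throughput(nbytes=8 << 20):
    """Time a memory-to-memory TCP transfer over loopback; returns MB/s."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    threading.Thread(target=_serve, args=(srv, nbytes), daemon=True).start()
    cli = socket.create_connection(srv.getsockname())
    start, received = time.perf_counter(), 0
    while received < nbytes:
        data = cli.recv(1 << 16)
        if not data:
            break
        received += len(data)
    elapsed = time.perf_counter() - start
    cli.close()
    srv.close()
    return received / elapsed / 1e6

print(f"loopback memory-to-memory rate: {measure_throughput():.0f} MB/s")
```

    The paper's point is precisely the gap between such a memory-to-memory rate and the end-to-end file transfer rate, which is additionally bounded by host file IO on both sides.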

  16. Cloud Based Log file analysis with Interactive Spark as a Service

    Directory of Open Access Journals (Sweden)

    Nurudeen Sherif

    2016-07-01

    Full Text Available Software applications are usually programmed to generate auxiliary text files referred to as log files. Such files are used throughout various stages of software development, primarily for debugging and identification of errors. Use of log files makes debugging easier during testing: it permits following the logic of the program, at a high level, without having to run it in debug mode. Nowadays, log files are commonly used at commercial software installations for the purpose of permanent software observation and fine-tuning. Log files have become a standard part of software applications and are essential in operating systems, networks, and distributed systems. Log files are often the only way to identify and locate errors in a software application, because the probe effect does not affect log file analysis. Log files are usually massive and may have an intricate structure. Though the method of generating log files is fairly simple, log file analysis can be a formidable task that requires immense computing resources and complex procedures.
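    As a toy illustration of the kind of aggregation a log-analysis job performs, the sketch below counts lines per severity level. It is plain Python rather than Spark, and the timestamp-then-level line format is an assumption for the example:

```python
import re
from collections import Counter

# Assumed format: "<date> <time> <LEVEL> <message>"
LINE = re.compile(r"^(?P<ts>\S+ \S+) (?P<level>[A-Z]+) (?P<msg>.*)$")

def count_levels(lines):
    """Count log lines per severity level; lines that don't parse are skipped."""
    counts = Counter()
    for line in lines:
        m = LINE.match(line)
        if m:
            counts[m.group("level")] += 1
    return counts

log = [
    "2016-07-01 10:00:01 INFO service started",
    "2016-07-01 10:00:02 ERROR connection refused",
    "2016-07-01 10:00:03 INFO retrying",
    "not a log line",
]
print(count_levels(log))  # Counter({'INFO': 2, 'ERROR': 1})
```

    The appeal of running this kind of job on a cluster framework such as Spark is that the same per-line parse-and-aggregate logic scales to log files far too large for one machine.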

  17. CEL_INTERROGATOR: A FREE AND OPEN SOURCE PACKAGE FOR AFFYMETRIX CEL FILE PARSING

    Science.gov (United States)

    CEL_Interrogator Package is a suite of programs designed to extract the average probe intensity and other information for each probe sequence from an Affymetrix GeneChip CEL file and unite them with their human-readable Affymetrix consensus sequence names. The resulting text file is suitable for di...

  18. 19 CFR 10.763 - Filing of claim for preferential tariff treatment upon importation.

    Science.gov (United States)

    2010-04-01

    ... RATE, ETC. United States-Morocco Free Trade Agreement Import Requirements § 10.763 Filing of claim for preferential tariff treatment upon importation. An importer may make a claim for MFTA preferential tariff... 19 Customs Duties 1 2010-04-01 2010-04-01 false Filing of claim for preferential tariff...

  19. 19 CFR 10.410 - Filing of claim for preferential tariff treatment upon importation.

    Science.gov (United States)

    2010-04-01

    ... RATE, ETC. United States-Chile Free Trade Agreement Import Requirements § 10.410 Filing of claim for preferential tariff treatment upon importation. (a) Declaration. In connection with a claim for preferential... 19 Customs Duties 1 2010-04-01 2010-04-01 false Filing of claim for preferential tariff...

  20. 19 CFR 10.803 - Filing of claim for preferential tariff treatment upon importation.

    Science.gov (United States)

    2010-04-01

    ... RATE, ETC. United States-Bahrain Free Trade Agreement Import Requirements § 10.803 Filing of claim for preferential tariff treatment upon importation. An importer may make a claim for BFTA preferential tariff... 19 Customs Duties 1 2010-04-01 2010-04-01 false Filing of claim for preferential tariff...