WorldWideScience

Sample records for multi-user computer facility

  1. Specialized, multi-user computer facility for the high-speed, interactive processing of experimental data

    International Nuclear Information System (INIS)

    Maples, C.C.

    1979-01-01

A proposal has been made to develop a specialized computer facility specifically designed to deal with the problems associated with the reduction and analysis of experimental data. Such a facility would provide a highly interactive, graphics-oriented, multi-user environment capable of handling relatively large databases for each user. By conceptually separating the general problem of data analysis into two parts, cyclic batch calculations and real-time interaction, a multi-level, parallel processing framework may be used to achieve high-speed data processing. In principle such a system should be able to process a mag tape equivalent of data, through typical transformations and correlations, in under 30 sec. The throughput for such a facility, assuming five users simultaneously reducing data, is estimated to be 2 to 3 times greater than is possible, for example, on a CDC 7600.

  2. Specialized, multi-user computer facility for the high-speed, interactive processing of experimental data

    International Nuclear Information System (INIS)

    Maples, C.C.

    1979-05-01

A proposal has been made at LBL to develop a specialized computer facility specifically designed to deal with the problems associated with the reduction and analysis of experimental data. Such a facility would provide a highly interactive, graphics-oriented, multi-user environment capable of handling relatively large databases for each user. By conceptually separating the general problem of data analysis into two parts, cyclic batch calculations and real-time interaction, a multilevel, parallel processing framework may be used to achieve high-speed data processing. In principle such a system should be able to process a mag tape equivalent of data through typical transformations and correlations in under 30 s. The throughput for such a facility, for five users simultaneously reducing data, is estimated to be 2 to 3 times greater than is possible, for example, on a CDC 7600. 3 figures
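The batch/interactive split described in these two abstracts can be illustrated with a small producer/consumer pattern. The sketch below is purely hypothetical (class and field names are illustrative, not from the proposed LBL system): a worker thread performs a cyclic "batch" pass over the data while the foreground thread remains free for "interactive" queries against the most recent results.

```python
import threading
import queue

# Hypothetical sketch: one worker performs cyclic "batch" passes over the
# data while the foreground thread stays free for "interactive" queries.
class BatchInteractiveAnalyzer:
    def __init__(self, records):
        self.records = records
        self.results = {}                 # latest batch results, shared
        self.lock = threading.Lock()
        self.requests = queue.Queue()     # interactive requests could queue here

    def batch_pass(self):
        # Cyclic batch calculation: e.g. a simple transformation/summary.
        summary = {"count": len(self.records), "total": sum(self.records)}
        with self.lock:
            self.results = summary

    def query(self, key):
        # Real-time interaction: read the most recent batch results.
        with self.lock:
            return self.results.get(key)

analyzer = BatchInteractiveAnalyzer([3, 1, 4, 1, 5])
worker = threading.Thread(target=analyzer.batch_pass)
worker.start()
worker.join()
print(analyzer.query("total"))  # 14
```

In the proposed facility this separation would span multiple processors rather than threads, but the design point is the same: interactive response time is decoupled from the cost of the batch cycle.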

  3. University multi-user facility survey-2010.

    Science.gov (United States)

    Riley, Melissa B

    2011-12-01

    Multi-user facilities serve as a resource for many universities. In 2010, a survey was conducted investigating possible changes and successful characteristics of multi-user facilities, as well as identifying problems in facilities. Over 300 surveys were e-mailed to persons identified from university websites as being involved with multi-user facilities. Complete responses were received from 36 facilities with an average of 20 years of operation. Facilities were associated with specific departments (22%), colleges (22%), and university research centers (8.3%) or were not affiliated with any department or college within the university (47%). The five most important factors to succeed as a multi-user facility were: 1) maintaining an experienced, professional staff in an open atmosphere; 2) university-level support providing partial funding; 3) broad client base; 4) instrument training programs; and 5) an effective leader and engaged strategic advisory group. The most significant problems were: 1) inadequate university financial support and commitment; 2) problems recovering full service costs from university subsidies and user fees; 3) availability of funds to repair and upgrade equipment; 4) inability to retain highly qualified staff; and 5) unqualified users dirtying/damaging equipment. Further information related to these issues and to fee structure was solicited. Overall, there appeared to be a decline in university support for facilities and more emphasis on securing income by serving clients outside of the institution and by obtaining grants from entities outside of the university.

  4. NENIMF: Northeast National Ion Microprobe Facility - A Multi-User Facility for SIMS Microanalysis

    Science.gov (United States)

    Layne, G. D.; Shimizu, N.

    2002-12-01

    The MIT-Brown-Harvard Regional Ion Microprobe Facility was one of the earliest multi-user facilities enabled by Dan Weill's Instrumentation and Facilities Program - and began with the delivery of a Cameca IMS 3f ion microprobe to MIT in 1978. The Northeast National Ion Microprobe Facility (NENIMF) is the direct descendant of this original facility. Now housed at WHOI, the facility incorporates both the original IMS 3f, and a new generation, high transmission-high resolution instrument - the Cameca IMS 1270. Purchased with support from NSF, and from a consortium of academic institutions in the Northeast (The American Museum of Natural History, Brown University, The Lamont-Doherty Earth Observatory, MIT, Rensselaer Polytechnic Institute, WHOI) - this latest instrument was delivered and installed during 1996. NENIMF continues to be supported by NSF EAR I&F as a multi-user facility for geochemical research. Work at NENIMF has extended the original design strength of the IMS 1270 for microanalytical U-Pb zircon geochronology to a wide variety of novel and improved techniques for geochemical research. Isotope microanalysis for studies in volcanology and petrology is currently the largest single component of facility activity. This includes the direct measurement of Pb isotopes in melt inclusions, an application developed at NENIMF, which is making an increasingly significant contribution to our understanding of basalt petrogenesis. This same technique has also been extended to the determination of Pb isotopes in detrital feldspar grains, for the study of sedimentary provenance and tectonics of the Himalayas and other terrains. The determination of δ11B in volcanic melt inclusions has also proven to be a powerful tool in the modeling of subduction-related magmatism. The recent development of δ34S and δ37Cl determination in glasses is being applied to studies of the behavior of these volatile elements in both natural and experimental systems. Other recent undertakings

  5. Muddy Learning: Evaluating Learning in Multi-User Computer-Based Environments

    National Research Council Canada - National Science Library

    McArthur, David

    1998-01-01

... (Multiple User Synthetic Environments), and MOOs (Multi-User Object Oriented), enables users to create new "rooms" in virtual worlds, define their own personae, and engage visitors in rich dialogues...

  6. Multi-user software of radio therapeutical calculation using a computational network

    International Nuclear Information System (INIS)

    Allaucca P, J.J.; Picon C, C.; Zaharia B, M.

    1998-01-01

A hardware and software system has been designed for a radiotherapy department. It runs on a Novell Network operating system platform, sharing the existing resources of the server; it is centralized, multi-user, and highly secure. It addresses a variety of problems and calculation needs, as well as patient tracking and administration; it is very fast and versatile, and contains a set of menus and options which may be selected with the mouse, arrow keys, or keyboard shortcuts. (Author)

  7. Computed Ontology-based Situation Awareness of Multi-User Observations

    NARCIS (Netherlands)

    Fitrianie, S.; Rothkrantz, L.J.M.

    2009-01-01

    In recent years, we have developed a framework of human-computer interaction that offers recognition of various communication modalities including speech, lip movement, facial expression, handwriting/drawing, gesture, text and visual symbols. The framework allows the rapid construction of a

  8. Multi-user data acquisition environment

    International Nuclear Information System (INIS)

    Storch, N.A.

    1983-01-01

    The typical data acquisition environment involves data collection and monitoring by a single user. However, in order to support experiments on the Mars facility at Lawrence Livermore National Laboratory, we have had to create a multi-user data acquisition environment where any user can control the data acquisition and several users can monitor and analyze data being collected in real time. This paper describes how we accomplished this on an HP A600 computer. It focuses on the overall system description and user communication with the tasks within the system. Our current implementation is one phase of a long-term software development project

  9. Bidirectional and Multi-User Telerehabilitation System: Clinical Effect on Balance, Functional Activity, and Satisfaction in Patients with Chronic Stroke Living in Long-Term Care Facilities

    Directory of Open Access Journals (Sweden)

    Kwan-Hwa Lin

    2014-07-01

Background: The application of internet technology for telerehabilitation in patients with stroke has developed rapidly. Objective: The current study aimed to evaluate the effect of a bidirectional and multi-user telerehabilitation system on balance and satisfaction in patients with chronic stroke living in long-term care facilities (LTCFs). Method: This pilot study used a multi-site, blocked randomization design. Twenty-four participants from three LTCFs were recruited and randomly assigned to the telerehabilitation (Tele) and conventional therapy (Conv) groups within each LTCF. The Tele group received telerehabilitation and the Conv group received conventional therapy, with two persons per group, for three sessions per week over four weeks. The outcome measures included the Berg Balance Scale (BBS), the Barthel Index (BI), and participant satisfaction with telerehabilitation. Setting: The telerehabilitation system included a "therapist end" in a laboratory and a "client end" in the LTCFs; conventional therapy was conducted in the LTCFs. Results: Training programs in both the Tele and Conv groups showed significant within-group effects on BBS as well as on the total and self-care scores of the BI. No significant difference between groups could be demonstrated, nor did participant satisfaction differ significantly between the Tele and Conv groups. Conclusions: This pilot study indicated that the multi-user telerehabilitation program is feasible for improving balance and functional activity similarly to conventional therapy in patients with chronic stroke living in LTCFs.

  10. A Web-based Multi-user Interactive Visualization System For Large-Scale Computing Using Google Web Toolkit Technology

    Science.gov (United States)

    Weiss, R. M.; McLane, J. C.; Yuen, D. A.; Wang, S.

    2009-12-01

We have created a web-based, interactive system for multi-user collaborative visualization of large data sets (on the order of terabytes) that allows users in geographically disparate locations to simultaneously and collectively visualize large data sets over the Internet. By leveraging asynchronous JavaScript and XML (AJAX) web development paradigms via the Google Web Toolkit (http://code.google.com/webtoolkit/), we are able to provide remote, web-based users a web portal to LCSE's (http://www.lcse.umn.edu) large-scale interactive visualization system already in place at the University of Minnesota, which provides high resolution visualizations on the order of 15 million pixels. In the current version of our software, we have implemented a new, highly extensible back-end framework built around HTTP "server push" technology to provide a rich collaborative environment and a smooth end-user experience. Furthermore, the web application is accessible via a variety of devices including netbooks, iPhones, and other web- and JavaScript-enabled cell phones. New features in the current version include the ability for: (1) users to launch multiple visualizations, (2) a user to invite one or more other users to view their visualization in real-time (multiple observers), (3) users to delegate control aspects of the visualization to others (multiple controllers), and (4) users to engage in collaborative chat and instant messaging with other users within the user interface of the web application. We will explain choices made regarding implementation, overall system architecture and method of operation, and the benefits of an extensible, modular design. We will also discuss future goals, features, and our plans for increasing scalability of the system, which includes a discussion of the benefits potentially afforded us by a migration of server-side components to the Google Application Engine (http://code.google.com/appengine/).
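The observer/controller model this abstract describes amounts to fanning state updates out to every subscribed client, as an HTTP "server push" back end would. A minimal, hypothetical in-process sketch (the class and method names are illustrative, not from the authors' system):

```python
import queue

# Hypothetical sketch of the observer model: a controller pushes
# visualization-state updates and every subscribed observer receives them,
# analogous to a "server push" fan-out to long-polling web clients.
class VisualizationHub:
    def __init__(self):
        self.observers = {}               # user -> that user's update queue

    def subscribe(self, user):
        self.observers[user] = queue.Queue()

    def push_update(self, state):
        # Fan the new state out to all observers (multiple observers).
        for q in self.observers.values():
            q.put(state)

    def poll(self, user):
        # Each observer drains its own queue, as a long-poll client would.
        return self.observers[user].get_nowait()

hub = VisualizationHub()
hub.subscribe("alice")
hub.subscribe("bob")
hub.push_update({"zoom": 2.0, "frame": 17})
print(hub.poll("alice")["frame"])  # 17
print(hub.poll("bob")["zoom"])     # 2.0
```

In a real web deployment each queue would back an open HTTP connection per browser, but the per-observer queue is what lets multiple controllers and observers stay consistent without polling a shared database.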

  11. The Multi-User Droplet Combustion Apparatus: the Development and Integration Concept for Droplet Combustion Payloads in the Fluids and Combustion Facility Combustion Integrated Rack

    Science.gov (United States)

    Myhre, C. A.

    2002-01-01

The Multi-user Droplet Combustion Apparatus (MDCA) is a multi-user facility designed to accommodate four different droplet combustion science experiments. The MDCA will conduct experiments using the Combustion Integrated Rack (CIR) of the NASA Glenn Research Center's Fluids and Combustion Facility (FCF). The payload is planned for the International Space Station. The MDCA, in conjunction with the CIR, will allow cost-effective, extended access to the microgravity environment, not possible on previous space flights. It is currently in the Engineering Model build phase, with a planned flight launch with the CIR in 2004. This paper provides an overview of the capabilities and development status of the MDCA. The MDCA contains the hardware and software required to conduct unique droplet combustion experiments in space. It consists of a Chamber Insert Assembly, an Avionics Package, and a multiple array of diagnostics. Its modular approach permits on-orbit changes for accommodating different fuels, fuel flow rates, soot sampling mechanisms, and varying droplet support and translation mechanisms to accommodate multiple investigations. Unique diagnostic measurement capabilities for each investigation are also provided. Additional hardware provided by the CIR facility includes the structural support, a combustion chamber, utilities for the avionics and diagnostic packages, and the fuel mixing capability for PI-specific combustion chamber environments. Common diagnostics provided by the CIR will also be utilized by the MDCA. Single combustible fuel droplets of varying sizes, freely deployed or supported by a tether, are planned for study using the MDCA. Such research supports the understanding of how liquid fuel droplets ignite, spread, and extinguish under quiescent microgravity conditions. This understanding will help us develop more efficient energy production and propulsion systems on Earth and in space, deal better with combustion-generated pollution, and address fire hazards associated with

  12. Joint Computing Facility

    Data.gov (United States)

Federal Laboratory Consortium — Raised Floor Computer Space for High Performance Computing. The ERDC Information Technology Laboratory (ITL) provides a robust system of IT facilities to develop and...

  13. Multi-user software of radio therapeutical calculation using a computational network; Software multiusuario de calculo radioterapeutico usando una red de computo

    Energy Technology Data Exchange (ETDEWEB)

    Allaucca P, J.J.; Picon C, C.; Zaharia B, M. [Departamento de Radioterapia, Instituto de Enfermedades Neoplasicas, Av. Angamos Este 2520, Lima 34 (Peru)

    1998-12-31

A hardware and software system has been designed for a radiotherapy department. It runs on a Novell Network operating system platform, sharing the existing resources of the server; it is centralized, multi-user, and highly secure. It addresses a variety of problems and calculation needs, as well as patient tracking and administration; it is very fast and versatile, and contains a set of menus and options which may be selected with the mouse, arrow keys, or keyboard shortcuts. (Author)

  14. Multi-User Hardware Solutions to Combustion Science ISS Research

    Science.gov (United States)

    Otero, Angel M.

    2001-01-01

In response to the budget environment, and to expand on the International Space Station (ISS) Fluids and Combustion Facility (FCF) Combustion Integrated Rack (CIR) common hardware approach, the NASA Combustion Science Program shifted focus in 1999 from single-investigator, PI (Principal Investigator)-specific hardware to multi-user 'mini-facilities'. These mini-facilities would take the CIR common hardware philosophy to the next level. The approach that was developed rearranged all the investigations in the program into sub-fields of research; common requirements within these sub-fields were then used to develop a common system complemented by a few PI-specific components. The sub-fields of research selected were droplet combustion, solids and fire safety, and gaseous fuels. From these research areas three mini-facilities have sprung: the Multi-user Droplet Combustion Apparatus (MDCA) for droplet research, the Flow Enclosure for Novel Investigations in Combustion of Solids (FEANICS) for solids and fire safety, and the Multi-user Gaseous Fuels Apparatus (MGFA) for gaseous fuels. These mini-facilities will develop common Chamber Insert Assemblies (CIAs) and diagnostics for the respective investigators, complementing the capability provided by the CIR. Presently there are four investigators for MDCA, six for FEANICS, and four for MGFA. The goal of these multi-user facilities is to drive the cost per PI down after the initial development investment is made. Each of these mini-facilities will become a fixture of future Combustion Science NASA Research Announcements (NRAs), enabling investigators to propose against an existing capability. Additionally, an investigation is provided the opportunity to enhance the existing capability to bridge the gap between the capability and its specific science requirements. This multi-user development approach will enable the Combustion Science Program to drive cost per investigation down while drastically reducing the time

  15. Computational Science Facility (CSF)

    Data.gov (United States)

    Federal Laboratory Consortium — PNNL Institutional Computing (PIC) is focused on meeting DOE's mission needs and is part of PNNL's overarching research computing strategy. PIC supports large-scale...

  16. TUNL computer facilities

    International Nuclear Information System (INIS)

    Boyd, M.; Edwards, S.E.; Gould, C.R.; Roberson, N.R.; Westerfeldt, C.R.

    1985-01-01

The XSYS system has been relatively stable during the last year, and most of our efforts have involved routine software maintenance and enhancement of existing XSYS capabilities. Modifications were made in the MBD program GDAP to increase the execution speed of key GDAP routines. A package of routines has been developed to allow communication between XSYS and the new Wien filter microprocessor. Recently the authors have upgraded their operating system from VMS V3.7 to V4.1, which required numerous modifications to XSYS, mostly in the command procedures. A new, reorganized edition of the XSYS manual will be issued shortly. The TUNL High Resolution Laboratory's VAX 11/750 computer has been in operation for its first full year as a replacement for the PRIME 300 computer, which was purchased in 1974 and retired nine months ago. The data acquisition system on the VAX has been in use for the past twelve months, performing a number of experiments.

  17. ARLearn and StreetLearn software for virtual reality and augmented reality multi user learning games

    NARCIS (Netherlands)

    Ternier, Stefaan; Klemke, Roland

    2012-01-01

    Ternier, S., & Klemke, R. (2011). ARLearn and StreetLearn software for virtual reality and augmented reality multi user learning games (Version 1.0) [Computer software]. Heerlen, The Netherlands: Open Universiteit in the Netherlands.

  18. AMRITA -- A computational facility

    Energy Technology Data Exchange (ETDEWEB)

    Shepherd, J.E. [California Inst. of Tech., CA (US); Quirk, J.J.

    1998-02-23

Amrita is a software system for automating numerical investigations. The system is driven using its own powerful scripting language, Amrita, which facilitates both the composition and archiving of complete numerical investigations, as distinct from isolated computations. Once archived, an Amrita investigation can later be reproduced by any interested party, and not just the original investigator, for no cost other than the raw CPU time needed to parse the archived script. In fact, this entire lecture can be reconstructed in such a fashion. To do this, the script: constructs a number of shock-capturing schemes; runs a series of test problems; generates the plots shown; outputs the LaTeX to typeset the notes; and performs a myriad of behind-the-scenes tasks to glue everything together. Thus Amrita has all the characteristics of an operating system and should not be mistaken for a common-or-garden code.

  19. Computer Security at Nuclear Facilities

    International Nuclear Information System (INIS)

    Cavina, A.

    2013-01-01

This series of slides presents the IAEA policy concerning the development of recommendations and guidelines for computer security at nuclear facilities. A document of the Nuclear Security Series dedicated to this issue is in the final stage prior to publication; it is the first IAEA document specifically addressing computer security. This document was necessary for three main reasons: first, not all national infrastructures have recognized and standardized computer security; second, existing international guidance is not industry-specific and fails to capture some of the key issues; and third, the presence of more or less connected digital systems is increasing in the design of nuclear power plants. The security of computer systems must be based on a graded approach: the assignment of computer systems to different levels and zones should be based on their relevance to safety and security, and the risk assessment process should be allowed to feed back into and influence the graded approach
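The graded approach mentioned in this abstract can be pictured as a simple classification: each system's relevance to safety and security maps to a security level. The sketch below is entirely illustrative (the level names, thresholds, and relevance scores are hypothetical and do not come from the IAEA guidance):

```python
# Hypothetical sketch of a graded approach: each system is assigned a
# security level from its relevance to safety and security. The numeric
# thresholds and example labels are illustrative only.
def assign_security_level(safety_relevance, security_relevance):
    """Return a graded security level, 1 (most stringent) to 4."""
    relevance = max(safety_relevance, security_relevance)  # 0.0 .. 1.0
    if relevance >= 0.9:
        return 1   # e.g. protection systems: strictest controls
    if relevance >= 0.6:
        return 2   # e.g. operational control systems
    if relevance >= 0.3:
        return 3   # e.g. supervisory/monitoring systems
    return 4       # e.g. office systems on the plant network

print(assign_security_level(0.95, 0.4))  # 1
print(assign_security_level(0.2, 0.1))   # 4
```

The point of the grading is proportionality: controls (and the zones separating levels) grow stricter as the assigned level approaches 1, and the risk assessment feeds back into where each threshold is drawn.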

  20. Managing a Safe and Successful Multi-User Spaceport

    Science.gov (United States)

    Dacko, Taylor; Ketterer, Kirk; Meade, Phillip

    2016-01-01

    Encouraged by the creation of the Office of Commercial Space Transportation within the U.S. Federal Aviation Administration (FAA) in 1984 and the Commercial Space Act of 1998, the National Aeronautics and Space Administration (NASA) now relies on an extensive network of support from commercial companies and organizations. At NASA's Kennedy Space Center (KSC), this collaboration opens competitive opportunities for launch providers, including repurposing underutilized Shuttle Program resources, constructing new facilities, and utilizing center services and laboratories. The resulting multi-user spaceport fosters diverse activity, though it engenders risk from hazards associated with various spaceflight processing activities. The KSC Safety & Mission Assurance (S&MA) Directorate, in coordination with the center's Spaceport Integration and Center Planning & Development organizations, has developed a novel approach to protect NASA's workforce, critical assets, and the public from hazardous, space-related activity associated with KSC's multi-user spaceport. For NASA KSC S&MA, the transformation to a multi-user spaceport required implementing methods to foster safe and successful commercial activity while resolving challenges involving: Retirement of the Space Shuttle program; Co-location of multiple NASA programs; Relationships between the NASA programs; Complex relationships between NASA programs and commercial partner operations in exclusive-use facilities; Complex relationships between NASA programs and commercial partner operations in shared-use facilities. NASA KSC S&MA challenges were met with long-term planning and solutions involving cooperation with the Spaceport Integration and Services Directorate. This directorate is responsible for managing active commercial partnerships with customer advocacy and services management, providing a dedicated and consistent level of support to a wide array of commercial operations. This paper explores these solutions, their

  1. Multi-user Activity Recognition in a Smart Home

    DEFF Research Database (Denmark)

    Wang, Liang; Gu, Tao; Tao, Xianping

    2010-01-01

The advances of wearable sensors and wireless networks offer many opportunities to recognize human activities from sensor readings in pervasive computing. Existing work so far focuses mainly on recognizing the activities of a single user in a home environment. However, there are typically multiple inhabitants in a real home and they often perform activities together. In this paper, we investigate the problem of recognizing multi-user activities using wearable sensors in a home setting. We develop a multi-modal, wearable sensor platform to collect sensor data for multiple users, and study two temporal...

  2. Computer-Aided Facilities Management Systems (CAFM).

    Science.gov (United States)

    Cyros, Kreon L.

    Computer-aided facilities management (CAFM) refers to a collection of software used with increasing frequency by facilities managers. The six major CAFM components are discussed with respect to their usefulness and popularity in facilities management applications: (1) computer-aided design; (2) computer-aided engineering; (3) decision support…

  3. The ATLAS multi-user upgrade and potential applications

    Energy Technology Data Exchange (ETDEWEB)

    Mustapha, B.; Nolen, J. A.; Savard, G.; Ostroumov, P. N.

    2017-12-01

With the recent integration of the CARIBU-EBIS charge breeder into the ATLAS accelerator system, providing purer and more efficient charge breeding of radioactive beams, a multi-user upgrade of the ATLAS facility is being proposed to serve multiple users simultaneously. ATLAS was the first superconducting ion linac in the world and is the US DOE low-energy Nuclear Physics National User Facility. The proposed upgrade will take advantage of the continuous-wave nature of ATLAS and the pulsed nature of the EBIS charge breeder in order to simultaneously accelerate two beams with very close mass-to-charge ratios: one stable, from the existing ECR ion source, and one radioactive, from the newly commissioned EBIS charge breeder. In addition to enhancing the nuclear physics program, beam extraction at different points along the linac will open up the opportunity for other potential applications; for instance, material irradiation studies at ~1 MeV/u and isotope production at ~6 MeV/u or at the full ATLAS energy of ~15 MeV/u. The concept and proposed implementation of the ATLAS multi-user upgrade will be presented, along with future plans to enhance the flexibility of this upgrade.
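The "very close mass-to-charge ratios" constraint in this abstract is easy to make concrete: two beams can share the same RF accelerating structure only if their A/q values differ by a very small relative amount. The function and tolerance below are a hypothetical illustration, not the actual ATLAS acceptance criterion:

```python
# Hypothetical sketch: two beams can be co-accelerated only if their
# mass-to-charge ratios are very close. The tolerance is illustrative;
# the real acceptance criterion depends on the linac's longitudinal dynamics.
def can_coaccelerate(mass1, charge1, mass2, charge2, rel_tol=1e-3):
    mq1 = mass1 / charge1
    mq2 = mass2 / charge2
    return abs(mq1 - mq2) / mq1 <= rel_tol

# Illustrative mass numbers and charge states (A=208, q=40 vs. A=104, q=20
# give identical A/q = 5.2, so they would be accepted):
print(can_coaccelerate(208, 40, 104, 20))  # True
```

Because charge states are integers, the EBIS charge breeder's ability to select a charge state that closely matches the stable beam's A/q is what makes the simultaneous two-beam operation practical.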

  4. 2015 Annual Report - Argonne Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Collins, James R. [Argonne National Lab. (ANL), Argonne, IL (United States); Papka, Michael E. [Argonne National Lab. (ANL), Argonne, IL (United States); Cerny, Beth A. [Argonne National Lab. (ANL), Argonne, IL (United States); Coffey, Richard M. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2015-01-01

    The Argonne Leadership Computing Facility provides supercomputing capabilities to the scientific and engineering community to advance fundamental discovery and understanding in a broad range of disciplines.

  5. 2014 Annual Report - Argonne Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Collins, James R. [Argonne National Lab. (ANL), Argonne, IL (United States); Papka, Michael E. [Argonne National Lab. (ANL), Argonne, IL (United States); Cerny, Beth A. [Argonne National Lab. (ANL), Argonne, IL (United States); Coffey, Richard M. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2014-01-01

    The Argonne Leadership Computing Facility provides supercomputing capabilities to the scientific and engineering community to advance fundamental discovery and understanding in a broad range of disciplines.

  6. Conducting Computer Security Assessments at Nuclear Facilities

    International Nuclear Information System (INIS)

    2016-06-01

    Computer security is increasingly recognized as a key component in nuclear security. As technology advances, it is anticipated that computer and computing systems will be used to an even greater degree in all aspects of plant operations including safety and security systems. A rigorous and comprehensive assessment process can assist in strengthening the effectiveness of the computer security programme. This publication outlines a methodology for conducting computer security assessments at nuclear facilities. The methodology can likewise be easily adapted to provide assessments at facilities with other radioactive materials

  7. Oak Ridge Leadership Computing Facility (OLCF)

    Data.gov (United States)

    Federal Laboratory Consortium — The Oak Ridge Leadership Computing Facility (OLCF) was established at Oak Ridge National Laboratory in 2004 with the mission of standing up a supercomputer 100 times...

  8. Computing facility at SSC for detectors

    International Nuclear Information System (INIS)

    Leibold, P.; Scipiono, B.

    1990-01-01

A description of the RISC-based distributed computing facility for detector simulation being developed at the SSC Laboratory is discussed. The first phase of this facility is scheduled for completion in early 1991. Included are the status of the project, an overview of the concepts used to model and define the system architecture, networking capabilities for user access, and plans for support of physics codes and related topics concerning the implementation of this facility

  9. 2016 Annual Report - Argonne Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Collins, Jim [Argonne National Lab. (ANL), Argonne, IL (United States); Papka, Michael E. [Argonne National Lab. (ANL), Argonne, IL (United States); Cerny, Beth A. [Argonne National Lab. (ANL), Argonne, IL (United States); Coffey, Richard M. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-01-01

    The Argonne Leadership Computing Facility (ALCF) helps researchers solve some of the world’s largest and most complex problems, while also advancing the nation’s efforts to develop future exascale computing systems. This report presents some of the ALCF’s notable achievements in key strategic areas over the past year.

  10. The Fermilab central computing facility architectural model

    International Nuclear Information System (INIS)

    Nicholls, J.

    1989-01-01

    The goal of the current Central Computing Upgrade at Fermilab is to create a computing environment that maximizes total productivity, particularly for high energy physics analysis. The Computing Department and the Next Computer Acquisition Committee decided upon a model which includes five components: an interactive front-end, a Large-Scale Scientific Computer (LSSC, a mainframe computing engine), a microprocessor farm system, a file server, and workstations. With the exception of the file server, all segments of this model are currently in production: a VAX/VMS cluster interactive front-end, an Amdahl VM Computing engine, ACP farms, and (primarily) VMS workstations. This paper will discuss the implementation of the Fermilab Central Computing Facility Architectural Model. Implications for Code Management in such a heterogeneous environment, including issues such as modularity and centrality, will be considered. Special emphasis will be placed on connectivity and communications between the front-end, LSSC, and workstations, as practiced at Fermilab. (orig.)

  11. The Fermilab Central Computing Facility architectural model

    International Nuclear Information System (INIS)

    Nicholls, J.

    1989-05-01

    The goal of the current Central Computing Upgrade at Fermilab is to create a computing environment that maximizes total productivity, particularly for high energy physics analysis. The Computing Department and the Next Computer Acquisition Committee decided upon a model which includes five components: an interactive front end, a Large-Scale Scientific Computer (LSSC, a mainframe computing engine), a microprocessor farm system, a file server, and workstations. With the exception of the file server, all segments of this model are currently in production: a VAX/VMS Cluster interactive front end, an Amdahl VM computing engine, ACP farms, and (primarily) VMS workstations. This presentation will discuss the implementation of the Fermilab Central Computing Facility Architectural Model. Implications for Code Management in such a heterogeneous environment, including issues such as modularity and centrality, will be considered. Special emphasis will be placed on connectivity and communications between the front-end, LSSC, and workstations, as practiced at Fermilab. 2 figs

  12. Center Planning and Development: Multi-User Spaceport Initiatives

    Science.gov (United States)

    Kennedy, Christopher John

    2015-01-01

    The Vehicle Assembly Building (VAB) at NASA's Kennedy Space Center has been used since 1966 to vertically assemble every launch vehicle, since the Apollo Program, launched from Launch Complex 39 (LC-39). After the cancellation of the Constellation Program in 2010 and the retirement of the Space Shuttle Program in 2011, the VAB faced an uncertain future. As the Space Launch System (SLS) gained a foothold as the future of American spaceflight to deep space, NASA was only using a portion of the VAB's initial potential. With three high bays connected to the Crawler Way transportation system, the potential exists for up to three rockets to be simultaneously processed for launch. The Kennedy Space Center (KSC) Master Plan, supported by the Center Planning and Development (CPD) Directorate, is guiding Kennedy toward a 21st-century multi-user spaceport. This concept will maintain Kennedy as the United States' premier gateway to space and provide multi-user operations through partnerships with the commercial aerospace industry. Commercial aerospace companies, now tasked with transporting cargo and, in the future, astronauts to the International Space Station (ISS) via the Commercial Resupply Service (CRS) and Commercial Crew Program (CCP), are a rapidly growing industry with increasing capabilities to make launch operations more economical for both private companies and the government. Commercial operations to Low Earth Orbit allow the government to focus on travel to farther destinations through the SLS Program. With LC-39B designated as a multi-use launch pad, companies seeking to use it will require an integration facility to assemble, integrate, and test their launch vehicles. An Announcement for Proposals (AFP) was released in June, beginning the process of finding a non-NASA user for High Bay 2 (HB2) and the Mobile Launcher Platforms (MLPs). An Industry Day, a business meeting and tour for interested companies and organizations, was also arranged to identify and answer any

  13. A large-scale computer facility for computational aerodynamics

    International Nuclear Information System (INIS)

    Bailey, F.R.; Balhaus, W.F.

    1985-01-01

    The combination of computer system technology and numerical modeling has advanced to the point that computational aerodynamics has emerged as an essential element in aerospace vehicle design methodology. To provide for further advances in modeling of aerodynamic flow fields, NASA has initiated at the Ames Research Center the Numerical Aerodynamic Simulation (NAS) Program. The objective of the Program is to develop a leading-edge, large-scale computer facility, and make it available to NASA, DoD, other Government agencies, industry and universities as a necessary element in ensuring continuing leadership in computational aerodynamics and related disciplines. The Program will establish an initial operational capability in 1986 and systematically enhance that capability by incorporating evolving improvements in state-of-the-art computer system technologies as required to maintain a leadership role. This paper briefly reviews the present and future requirements for computational aerodynamics and discusses the Numerical Aerodynamic Simulation Program objectives, computational goals, and implementation plans

  14. Computational Science at the Argonne Leadership Computing Facility

    Science.gov (United States)

    Romero, Nichols

    2014-03-01

    The goal of the Argonne Leadership Computing Facility (ALCF) is to extend the frontiers of science by solving problems that require innovative approaches and the largest-scale computing systems. ALCF's most powerful computer - Mira, an IBM Blue Gene/Q system - has nearly one million cores. How does one program such systems? What software tools are available? Which scientific and engineering applications are able to utilize such levels of parallelism? This talk will address these questions and describe a sampling of projects that are using ALCF systems in their research, including ones in nanoscience, materials science, and chemistry. Finally, the ways to gain access to ALCF resources will be presented. This research used resources of the Argonne Leadership Computing Facility at Argonne National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under contract DE-AC02-06CH11357.

  15. Computer codes for ventilation in nuclear facilities

    International Nuclear Information System (INIS)

    Mulcey, P.

    1987-01-01

    In this paper the authors present several computer codes, developed in recent years, for ventilation and radiation protection. These codes are used for safety analysis in the design, operation, and decommissioning of nuclear facilities. The authors present in particular: the DACC1 code, used for aerosol deposition in the sampling circuits of radiation monitors; the PIAF code, used for modeling complex ventilation systems; and the CLIMAT 6 code, used for optimization of air conditioning systems [fr

  16. Multi-User Virtual Reality Therapy for Post-Stroke Hand Rehabilitation at Home

    Directory of Open Access Journals (Sweden)

    Daria Tsoupikova

    2016-04-01

    Full Text Available Our paper describes the development of a novel multi-user virtual reality (VR) system for post-stroke rehabilitation that can be used independently in the home to improve upper extremity motor function. This is the pre-clinical phase of an ongoing collaborative, interdisciplinary research project at the Rehabilitation Institute of Chicago involving a team of engineers, researchers, occupational therapists and artists. The system was designed for creative collaboration within a virtual environment to increase patients' motivation, deepen engagement, and alleviate the impact of social isolation following stroke. It is a low-cost system adapted to everyday environments and designed to run on a personal computer, combining three VR environments with audio integration, wireless Kinect tracking, and hand motion tracking sensors. Three different game exercises for this system were developed to encourage repetitive task practice, collaboration and competitive interaction. The system is currently being tested with 15 subjects in three settings: a multi-user VR, a single-user VR and at a tabletop with standard exercises to examine the level of engagement and to compare resulting functional performance across methods. We hypothesize that stroke survivors will become more engaged in therapy when training with a multi-user VR system and that this will translate into greater gains.

  17. DEEP SPACE: High Resolution VR Platform for Multi-user Interactive Narratives

    Science.gov (United States)

    Kuka, Daniela; Elias, Oliver; Martins, Ronald; Lindinger, Christopher; Pramböck, Andreas; Jalsovec, Andreas; Maresch, Pascal; Hörtner, Horst; Brandl, Peter

    DEEP SPACE is a large-scale platform for interactive, stereoscopic, high-resolution content. The spatial and system design of DEEP SPACE address the constraints of CAVE™-like systems with respect to multi-user interactive storytelling. To serve as both a research platform and a public exhibition space for many people, DEEP SPACE is capable of processing interactive, stereoscopic applications on two projection walls, each 16 by 9 meters in size with a resolution of four times 1080p (4K). The processed applications range from Virtual Reality (VR) environments to 3D movies to computationally intensive 2D productions. In this paper, we describe DEEP SPACE as an experimental VR platform for multi-user interactive storytelling. We focus on the system design relevant to the platform, including the integration of the Apple iPod Touch technology as a VR control, and a special case study that demonstrates the research efforts in the field of multi-user interactive storytelling. The case study, entitled "Papyrate's Island", provides a prototypical scenario of how physical drawings may impact digital narratives. In this special case, DEEP SPACE helps us explore the hypothesis that drawing, a primordial human creative skill, gives us access to entirely new creative possibilities in the domain of interactive storytelling.

  18. Introducing ORACLE: Library Processing in a Multi-User Environment.

    Science.gov (United States)

    Queensland Library Board, Brisbane (Australia).

    Currently being developed by the State Library of Queensland, Australia, ORACLE (On-Line Retrieval of Acquisitions, Cataloguing, and Circulation Details for Library Enquiries) is a computerized library system designed to provide rapid processing of library materials in a multi-user environment. It is based on the Australian MARC format and fully…

  19. Oak Ridge Leadership Computing Facility Position Paper

    Energy Technology Data Exchange (ETDEWEB)

    Oral, H Sarp [ORNL; Hill, Jason J [ORNL; Thach, Kevin G [ORNL; Podhorszki, Norbert [ORNL; Klasky, Scott A [ORNL; Rogers, James H [ORNL; Shipman, Galen M [ORNL

    2011-01-01

    This paper discusses the business, administration, reliability, and usability aspects of storage systems at the Oak Ridge Leadership Computing Facility (OLCF). The OLCF has developed key competencies in architecting and administering large-scale Lustre deployments as well as HPSS archival systems. Additionally, as these systems are architected, deployed, and expanded over time, reliability and availability factors are a primary driver. This paper focuses on the implementation of the Spider parallel Lustre file system as well as the implementation of the HPSS archive at the OLCF.

  20. Computer modeling of commercial refrigerated warehouse facilities

    International Nuclear Information System (INIS)

    Nicoulin, C.V.; Jacobs, P.C.; Tory, S.

    1997-01-01

    The use of computer models to simulate the energy performance of large commercial refrigeration systems typically found in food processing facilities is an area of engineering practice that has seen little development to date. Current techniques employed in predicting energy consumption by such systems have focused on temperature bin methods of analysis. Existing simulation tools such as DOE2 are designed to model commercial buildings and grocery store refrigeration systems. The HVAC and refrigeration system performance models in these simulation tools represent equipment common to commercial buildings and groceries, and respond to energy-efficiency measures likely to be applied to these building types. The applicability of traditional building energy simulation tools to model refrigerated warehouse performance and analyze energy-saving options is limited. The paper will present the results of modeling work undertaken to evaluate energy savings resulting from incentives offered by a California utility to its Refrigerated Warehouse Program participants. The TRNSYS general-purpose transient simulation model was used to predict facility performance and estimate program savings. Custom TRNSYS components were developed to address modeling issues specific to refrigerated warehouse systems, including warehouse loading door infiltration calculations, an evaporator model, single-stage and multi-stage compressor models, evaporative condenser models, and defrost energy requirements. The main focus of the paper will be on the modeling approach. The results from the computer simulations, along with overall program impact evaluation results, will also be presented

  1. Recognizing Multi-user Activities using Body Sensor Networks

    DEFF Research Database (Denmark)

    Gu, Tao; Wang, Liang; Chen, Hanhua

    2011-01-01

    The advances of wireless networking and sensor technology open up an interesting opportunity to infer human activities in a smart home environment. Existing work in this paradigm focuses mainly on recognizing activities of a single user. In this work, we address the fundamental problem...... activity classes of data—for building activity models and design a scalable, noise-resistant, Emerging Pattern based Multi-user Activity Recognizer (epMAR) to recognize both single- and multi-user activities. We develop a multi-modal, wireless body sensor network for collecting real-world traces in a smart...... home environment, and conduct comprehensive empirical studies to evaluate our system. Results show that epMAR outperforms existing schemes in terms of accuracy, scalability and robustness....
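
    The emerging-pattern idea behind a recognizer like epMAR can be illustrated with a toy sketch: mine sensor-event itemsets that are frequent in one activity class but rare in the others, then label a new event sequence by the class whose patterns it matches most. This is a simplified, hypothetical stand-in for the abstract's approach, not the paper's actual algorithm; the class names and thresholds below are invented for illustration.

    ```python
    from itertools import combinations
    from collections import defaultdict

    def mine_emerging_patterns(sequences_by_class, min_support=0.5, min_growth=3.0):
        """Mine itemsets much more frequent in one class than in the rest.

        A toy "emerging pattern" miner: for each activity class, keep the
        sensor-event sets whose support in that class exceeds min_support and
        whose growth rate over the other classes exceeds min_growth.
        """
        def support(pattern, seqs):
            return sum(pattern <= set(s) for s in seqs) / len(seqs)

        patterns = defaultdict(list)
        for cls, seqs in sequences_by_class.items():
            others = [s for c, ss in sequences_by_class.items() if c != cls for s in ss]
            items = {e for s in seqs for e in s}
            for r in (1, 2):  # singletons and pairs are enough for a sketch
                for combo in combinations(sorted(items), r):
                    p = set(combo)
                    s_in = support(p, seqs)
                    s_out = support(p, others) if others else 0.0
                    growth = s_in / s_out if s_out else float("inf")
                    if s_in >= min_support and growth >= min_growth:
                        patterns[cls].append(p)
        return patterns

    def classify(sequence, patterns):
        """Label a new event sequence by the class whose patterns it matches most."""
        obs = set(sequence)
        scores = {c: sum(p <= obs for p in ps) for c, ps in patterns.items()}
        return max(scores, key=scores.get)

    if __name__ == "__main__":
        # Hypothetical smart-home sensor traces, two single-user activities.
        train = {
            "cooking": [["stove", "fridge", "cupboard"], ["stove", "cupboard"]],
            "watching_tv": [["tv", "sofa"], ["tv", "sofa", "lamp"]],
        }
        eps = mine_emerging_patterns(train)
        print(classify(["stove", "fridge"], eps))  # -> cooking
    ```

    The real system additionally handles multi-user and noisy traces; this sketch only shows why class-discriminative patterns make a usable scoring function.
    
    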

  2. Multi-User Space Link Extension (SLE) System

    Science.gov (United States)

    Perkins, Toby

    2013-01-01

    The Multi-User Space (MUS) Link Extension system, a software and data system, provides Space Link Extension (SLE) users with three space data transfer services in timely, complete, and offline modes as applicable according to standards defined by the Consultative Committee for Space Data Systems (CCSDS). MUS radically reduces the schedule, cost, and risk of implementing a new SLE user system, minimizes operating costs with a lights-out approach to SLE, and is designed to require no sustaining engineering expense during its lifetime unless changes in the CCSDS SLE standards, combined with new provider implementations, force changes. No software modification to MUS needs to be made to support a new mission. Any systems engineer with Linux experience can begin testing SLE user service instances with MUS starting from a personal computer (PC) within five days. For flight operators, MUS provides a familiar-looking Web page for entering SLE configuration data received from SLE. Operators can also use the Web page to back up a space mission's entire set of up to approximately 500 SLE service instances in less than five seconds, or to restore or transfer from another system the same amount of data from a MUS backup file in about the same amount of time. Missions operate each MUS SLE service instance independently by sending it MUS directives, which are legible, plain ASCII strings. MUS directives are usually (but not necessarily) sent through a TCP/IP (Transmission Control Protocol/Internet Protocol) socket from a MOC (Mission Operations Center) or POCC (Payload Operations Control Center) system, under scripted control, during "lights-out" spacecraft operation. MUS permits the flight operations team to configure independently each of its data interfaces; not only commands and telemetry, but also MUS status messages to the MOC. Interfaces can use single- or multiple-client TCP/IP server sockets, TCP/IP client sockets, temporary disk files, the system log, or standard in
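
    The directive mechanism described above, plain ASCII strings sent over a TCP/IP socket under scripted control, can be sketched as follows. The directive text and the acknowledging server are hypothetical; the abstract does not specify MUS directive syntax or reply format, only that directives are legible ASCII strings sent over TCP/IP.

    ```python
    import socket
    import threading

    def send_directive(host, port, directive):
        """Send one plain-ASCII directive over a TCP socket and return the reply.

        The directive syntax is purely illustrative: the source only states
        that MUS directives are legible ASCII strings sent via TCP/IP.
        """
        with socket.create_connection((host, port), timeout=5) as sock:
            sock.sendall(directive.encode("ascii") + b"\n")
            return sock.recv(4096).decode("ascii").strip()

    def _toy_mus_server(server_sock):
        """Stand-in for a MUS service instance: acknowledges one directive."""
        conn, _ = server_sock.accept()
        with conn:
            line = conn.makefile().readline().strip()
            conn.sendall(("ACK " + line).encode("ascii"))

    if __name__ == "__main__":
        srv = socket.socket()
        srv.bind(("127.0.0.1", 0))  # ephemeral port for the demo
        srv.listen(1)
        port = srv.getsockname()[1]
        threading.Thread(target=_toy_mus_server, args=(srv,), daemon=True).start()
        print(send_directive("127.0.0.1", port, "START-SERVICE-INSTANCE RAF-1"))
    ```

    A MOC script driving many service instances would simply call `send_directive` once per instance, which is what makes the lights-out, scripted operation described above practical.
    
    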

  3. Multi-User GeoGebra for Virtual Math Teams

    Directory of Open Access Journals (Sweden)

    Gerry Stahl

    2010-05-01

    Full Text Available The Math Forum is an online resource center for pre-algebra, algebra, geometry and pre-calculus. Its Virtual Math Teams (VMT service provides an integrated web-based environment for small teams to discuss mathematics. The VMT collaboration environment now includes the dynamic mathematics application, GeoGebra. It offers a multi-user version of GeoGebra, which can be used in concert with VMT’s chat, web browsers, curricula and wiki repository.

  4. MUTAGEN: Multi-user tool for annotating GENomes

    DEFF Research Database (Denmark)

    Brugger, K.; Redder, P.; Skovgaard, Marie

    2003-01-01

    MUTAGEN is a free prokaryotic annotation system. It offers the advantages of genome comparison, graphical sequence browsers, search facilities and open-source for user-specific adjustments. The web-interface allows several users to access the system from standard desktop computers. The Sulfolobus...

  5. Computer Security at Nuclear Facilities (French Edition)

    International Nuclear Information System (INIS)

    2013-01-01

    category of the IAEA Nuclear Security Series, and deals with computer security at nuclear facilities. It is based on national experience and practices as well as publications in the fields of computer security and nuclear security. The guidance is provided for consideration by States, competent authorities and operators. The preparation of this publication in the IAEA Nuclear Security Series has been made possible by the contributions of a large number of experts from Member States. An extensive consultation process with all Member States included consultants meetings and open-ended technical meetings. The draft was then circulated to all Member States for 120 days to solicit further comments and suggestions. The comments received from Member States were reviewed and considered in the final version of the publication.

  6. Alteration and Implementation of the CP/M-86 Operating System for a Multi-User Environment.

    Science.gov (United States)

    1982-12-01

    Alteration and Implementation of the CP/M-86 Operating System for a Multi-User Environment, by Thomas V. Almquist and David S. Stevens, December 1982. Thesis Advisor: U. R. Kodres. Approved for public release; distribution unlimited. Submitted in partial fulfillment of the requirements for the degree of Master of Science in Computer Science from the Naval Postgraduate School, December 1982.

  7. Computer Profile of School Facilities Energy Consumption.

    Science.gov (United States)

    Oswalt, Felix E.

    This document outlines a computerized management tool designed to enable building managers to identify energy consumption as related to types and uses of school facilities for the purpose of evaluating and managing the operation, maintenance, modification, and planning of new facilities. Specifically, it is expected that the statistics generated…

  8. High Performance Computing Facility Operational Assessment, CY 2011 Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Baker, Ann E [ORNL; Barker, Ashley D [ORNL; Bland, Arthur S Buddy [ORNL; Boudwin, Kathlyn J. [ORNL; Hack, James J [ORNL; Kendall, Ricky A [ORNL; Messer, Bronson [ORNL; Rogers, James H [ORNL; Shipman, Galen M [ORNL; Wells, Jack C [ORNL; White, Julia C [ORNL; Hudson, Douglas L [ORNL

    2012-02-01

    Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) continues to deliver the most powerful resources in the U.S. for open science. At 2.33 petaflops peak performance, the Cray XT Jaguar delivered more than 1.4 billion core hours in calendar year (CY) 2011 to researchers around the world for computational simulations relevant to national and energy security; advancing the frontiers of knowledge in physical sciences and areas of biological, medical, environmental, and computer sciences; and providing world-class research facilities for the nation's science enterprise. Users reported more than 670 publications this year arising from their use of OLCF resources. Of these, we report in this review the 300 that are consistent with the guidance provided. Scientific achievements by OLCF users cut across all scales, from atomic to molecular to large-scale structures. At the atomic scale, researchers discovered that the anomalously long half-life of Carbon-14 can be explained by calculating, for the first time, the very complex three-body interactions between all the neutrons and protons in the nucleus. At the molecular scale, researchers combined experimental results from LBL's light source with simulations on Jaguar to discover how DNA replication continues past a damaged site so a mutation can be repaired later. Other researchers combined experimental results from ORNL's Spallation Neutron Source with simulations on Jaguar to reveal the molecular structure of ligno-cellulosic material used in bioethanol production. This year, Jaguar has been used to perform billion-cell CFD calculations to develop shock wave compression turbomachinery as a means to meet DOE goals for reducing carbon sequestration costs. General Electric used Jaguar to calculate the unsteady flow through turbomachinery to learn what efficiencies the traditional steady-flow assumption is hiding from designers. Even a 1% improvement in turbine design can save the nation

  9. Polyimide and Metals MEMS Multi-User Processes

    KAUST Repository

    Arevalo, Arpys

    2016-11-01

    The development of a polyimide and metals multi-user surface micro-machining process for Micro-Electro-Mechanical Systems (MEMS) is presented. The process was designed to be as general as possible, capable of fabricating different designs on a single silicon wafer. It was not optimized for any one specific device, but can be adjusted to satisfy individual needs depending on the application. The fabrication process uses polyimide as the structural material and three separate metallization layers that can be interconnected depending on the desired application. The technology allows the development of out-of-plane compliant mechanisms, which can be combined with six variations of different physical principles for actuation and sensing on a single processed silicon wafer. These variations are: electrostatic motion, thermal bimorph actuation, capacitive sensing, magnetic sensing, thermocouple-based sensing, and radio frequency transmission and reception.

  10. Authenticated multi-user quantum key distribution with single particles

    Science.gov (United States)

    Lin, Song; Wang, Hui; Guo, Gong-De; Ye, Guo-Hua; Du, Hong-Zhen; Liu, Xiao-Fen

    2016-03-01

    Quantum key distribution (QKD) has been growing rapidly in recent years and becomes one of the hottest issues in quantum information science. During the implementation of QKD on a network, identity authentication has been one main problem. In this paper, an efficient authenticated multi-user quantum key distribution (MQKD) protocol with single particles is proposed. In this protocol, any two users on a quantum network can perform mutual authentication and share a secure session key with the assistance of a semi-honest center. Meanwhile, the particles, which are used as quantum information carriers, are not required to be stored, therefore the proposed protocol is feasible with current technology. Finally, security analysis shows that this protocol is secure in theory.
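
    The generic single-particle QKD idea underlying protocols of this kind can be illustrated with a toy BB84-style sifting simulation. Note the assumptions: this sketch models only the basic prepare-measure-sift step with no eavesdropper, and does not model the paper's semi-honest center or its mutual identity authentication.

    ```python
    import random

    def bb84_sift(n=256, seed=1):
        """Toy single-particle QKD run (BB84-style), returning both sifted keys.

        Alice encodes random bits in random bases; Bob measures in random
        bases and gets the correct bit only when his basis matches Alice's.
        Sifting keeps the positions where the bases agree (about half).
        """
        rng = random.Random(seed)
        alice_bits = [rng.randint(0, 1) for _ in range(n)]
        alice_bases = [rng.randint(0, 1) for _ in range(n)]  # 0 = Z, 1 = X
        bob_bases = [rng.randint(0, 1) for _ in range(n)]
        # Mismatched basis -> measurement outcome is random.
        bob_bits = [b if ab == bb else rng.randint(0, 1)
                    for b, ab, bb in zip(alice_bits, alice_bases, bob_bases)]
        key_a = [b for b, ab, bb in zip(alice_bits, alice_bases, bob_bases) if ab == bb]
        key_b = [b for b, ab, bb in zip(bob_bits, alice_bases, bob_bases) if ab == bb]
        return key_a, key_b

    if __name__ == "__main__":
        ka, kb = bb84_sift()
        print(len(ka), ka == kb)  # roughly n/2 bits survive; keys agree
    ```

    In the multi-user setting of the paper, the single particles are routed through a center on the network, so that any two users can establish such a session key; that routing and the authentication step are exactly what the sketch omits.
    
    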

  11. High Performance Computing Facility Operational Assessment 2015: Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Barker, Ashley D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Bernholdt, David E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Bland, Arthur S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Gary, Jeff D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Hack, James J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; McNally, Stephen T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Rogers, James H. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Smith, Brian E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Straatsma, T. P. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Sukumar, Sreenivas Rangan [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Thach, Kevin G. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Tichenor, Suzy [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Vazhkudai, Sudharshan S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Wells, Jack C. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility

    2016-03-01

    Oak Ridge National Laboratory’s (ORNL’s) Leadership Computing Facility (OLCF) continues to surpass its operational target goals: supporting users; delivering fast, reliable systems; creating innovative solutions for high-performance computing (HPC) needs; and managing risks, safety, and security aspects associated with operating one of the most powerful computers in the world. The results can be seen in the cutting-edge science delivered by users and the praise from the research community. Calendar year (CY) 2015 was filled with outstanding operational results and accomplishments: a very high rating from users on overall satisfaction that ties the highest-ever mark set in CY 2014; the greatest number of core-hours delivered to research projects; the largest percentage of capability usage since the OLCF began tracking the metric in 2009; and success in delivering on the allocation of 60, 30, and 10% of core hours offered for the INCITE (Innovative and Novel Computational Impact on Theory and Experiment), ALCC (Advanced Scientific Computing Research Leadership Computing Challenge), and Director’s Discretionary programs, respectively. These accomplishments, coupled with the extremely high utilization rate, represent the fulfillment of the promise of Titan: maximum use by maximum-size simulations. The impact of all of these successes and more is reflected in the accomplishments of OLCF users, with publications this year in notable journals Nature, Nature Materials, Nature Chemistry, Nature Physics, Nature Climate Change, ACS Nano, Journal of the American Chemical Society, and Physical Review Letters, as well as many others. The achievements included in the 2015 OLCF Operational Assessment Report reflect first-ever or largest simulations in their communities; for example Titan enabled engineers in Los Angeles and the surrounding region to design and begin building improved critical infrastructure by enabling the highest-resolution Cybershake map for Southern

  12. Multilevel and multi-user sustainability assessment of farming systems

    Energy Technology Data Exchange (ETDEWEB)

    Van Passel, Steven, E-mail: Steven.vanpassel@uhasselt.be [Hasselt University, Faculty of Business Economics, Centre for Environmental Sciences, Agoralaan, Building D, 3590, Diepenbeek (Belgium); University of Antwerp, Department Bioscience Engineering, Groenenborgerlaan 171, 2020 Antwerp (Belgium); Meul, Marijke [University College Ghent, Department of Biosciences and Landscape Architecture, Campus Schoonmeersen, Building C, Schoonmeersstraat 52, 9000, Gent (Belgium)

    2012-01-15

    Sustainability assessment is needed to build sustainable farming systems. A broad range of sustainability concepts, methodologies and applications already exists. They differ in level, focus, orientation, measurement, scale, presentation and intended end-users. In this paper we illustrate that a smart combination of existing methods with different levels of application can make sustainability assessment more profound, and that it can broaden the insights of different end-user groups. An overview of sustainability assessment tools on different levels and for different end-users shows the complementarities and the opportunities of using different methods. In a case-study, a combination of the sustainable value approach (SVA) and MOTIFS is used to perform a sustainability evaluation of farming systems in Flanders. SVA is used to evaluate sustainability at sector level, and is especially useful to support policy makers, while MOTIFS is used to support and guide farmers towards sustainability at farm level. The combined use of the two methods with complementary goals can widen the insights of both farmers and policy makers, without losing the particularities of the different approaches. To stimulate and support further research and applications, we propose guidelines for multilevel and multi-user sustainability assessments. - Highlights: ► We give an overview of sustainability assessment tools for agricultural systems. ► SVA and MOTIFS are used to evaluate the sustainability of dairy farming in Flanders. ► Combination of methods with different levels broadens the insights of different end-user groups. ► We propose guidelines for multilevel and multi-user sustainability assessments.

  13. Multilevel and multi-user sustainability assessment of farming systems

    International Nuclear Information System (INIS)

    Van Passel, Steven; Meul, Marijke

    2012-01-01

    Sustainability assessment is needed to build sustainable farming systems. A broad range of sustainability concepts, methodologies and applications already exists. They differ in level, focus, orientation, measurement, scale, presentation and intended end-users. In this paper we illustrate that a smart combination of existing methods with different levels of application can make sustainability assessment more profound, and that it can broaden the insights of different end-user groups. An overview of sustainability assessment tools on different levels and for different end-users shows the complementarities and the opportunities of using different methods. In a case-study, a combination of the sustainable value approach (SVA) and MOTIFS is used to perform a sustainability evaluation of farming systems in Flanders. SVA is used to evaluate sustainability at sector level, and is especially useful to support policy makers, while MOTIFS is used to support and guide farmers towards sustainability at farm level. The combined use of the two methods with complementary goals can widen the insights of both farmers and policy makers, without losing the particularities of the different approaches. To stimulate and support further research and applications, we propose guidelines for multilevel and multi-user sustainability assessments. - Highlights: ► We give an overview of sustainability assessment tools for agricultural systems. ► SVA and MOTIFS are used to evaluate the sustainability of dairy farming in Flanders. ► Combination of methods with different levels broadens the insights of different end-user groups. ► We propose guidelines for multilevel and multi-user sustainability assessments.

  14. High Performance Computing Facility Operational Assessment, FY 2010 Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Bland, Arthur S Buddy [ORNL; Hack, James J [ORNL; Baker, Ann E [ORNL; Barker, Ashley D [ORNL; Boudwin, Kathlyn J. [ORNL; Kendall, Ricky A [ORNL; Messer, Bronson [ORNL; Rogers, James H [ORNL; Shipman, Galen M [ORNL; White, Julia C [ORNL

    2010-08-01

    Oak Ridge National Laboratory's (ORNL's) Cray XT5 supercomputer, Jaguar, kicked off the era of petascale scientific computing in 2008 with applications that sustained more than a thousand trillion floating point calculations per second - or 1 petaflop. Jaguar continues to grow even more powerful as it helps researchers broaden the boundaries of knowledge in virtually every domain of computational science, including weather and climate, nuclear energy, geosciences, combustion, bioenergy, fusion, and materials science. Their insights promise to broaden our knowledge in areas that are vitally important to the Department of Energy (DOE) and the nation as a whole, particularly energy assurance and climate change. The science of the 21st century, however, will demand further revolutions in computing, supercomputers capable of a million trillion calculations a second - 1 exaflop - and beyond. These systems will allow investigators to continue attacking global challenges through modeling and simulation and to unravel longstanding scientific questions. Creating such systems will also require new approaches to daunting challenges. High-performance systems of the future will need to be codesigned for scientific and engineering applications with best-in-class communications networks and data-management infrastructures and teams of skilled researchers able to take full advantage of these new resources. The Oak Ridge Leadership Computing Facility (OLCF) provides the nation's most powerful open resource for capability computing, with a sustainable path that will maintain and extend national leadership for DOE's Office of Science (SC). The OLCF has engaged a world-class team to support petascale science and to take a dramatic step forward, fielding new capabilities for high-end science. This report highlights the successful delivery and operation of a petascale system and shows how the OLCF fosters application development teams, developing cutting-edge tools

  15. Computer facilities for ISABELLE data handling

    International Nuclear Information System (INIS)

    Kramer, M.A.; Love, W.A.; Miller, R.J.; Zeller, M.

    1977-01-01

    The analysis of data produced by ISABELLE experiments will need a large system of computers. An official group of prospective users and operators of that system should begin planning now. Included in the array will be a substantial computer system at each ISABELLE intersection in use. These systems must include enough computer power to keep experimenters aware of the health of the experiment. This will require at least one very fast sophisticated processor in the system, the size depending on the experiment. Other features of the intersection systems must be a good, high speed graphic display, ability to record data on magnetic tape at 500 to 1000 KB, and a high speed link to a central computer. The operating system software must support multiple interactive users. A substantially larger capacity computer system, shared by the six intersection region experiments, must be available with good turnaround for experimenters while ISABELLE is running. A computer support group will be required to maintain the computer system and to provide and maintain software common to all experiments. Special superfast computing hardware or special function processors constructed with microprocessor circuitry may be necessary both in the data gathering and data processing work. Thus both the local and central processors should be chosen with the possibility of interfacing such devices in mind

  16. Integration of small computers in the low budget facility

    International Nuclear Information System (INIS)

    Miller, G.E.; Crofoot, T.A.

    1988-01-01

    Inexpensive computers (PC's) are well within the reach of low budget reactor facilities. It is possible to envisage many uses that will both improve capabilities of existing instrumentation and also assist operators and staff with certain routine tasks. Both of these opportunities are important for survival at facilities with severe budget and staffing limitations. (author)

  17. Proposal for Implementing Multi-User Database (MUD) Technology in an Academic Library.

    Science.gov (United States)

    Filby, A. M. Iliana

    1996-01-01

    Explores the use of MOO (multi-user object oriented) virtual environments in academic libraries to enhance reference services. Highlights include the development of multi-user database (MUD) technology from gaming to non-recreational settings; programming issues; collaborative MOOs; MOOs as distinguished from other types of virtual reality; audio…

  18. Multi-user distribution of polarization entangled photon pairs

    Energy Technology Data Exchange (ETDEWEB)

    Trapateau, J.; Orieux, A.; Diamanti, E.; Zaquine, I., E-mail: isabelle.zaquine@telecom-paristech.fr [LTCI, CNRS, Télécom ParisTech, Université Paris-Saclay, 75013 Paris (France); Ghalbouni, J. [Applied Physics Laboratory, Faculty of Sciences 2, Lebanese University, Campus Fanar, BP 90656 Jdeidet (Lebanon)

    2015-10-14

    We experimentally demonstrate multi-user distribution of polarization entanglement using commercial telecom wavelength division demultiplexers. The entangled photon pairs are generated from a broadband source based on spontaneous parametric down conversion in a periodically poled lithium niobate crystal using a double path setup employing a Michelson interferometer and active phase stabilisation. We test and compare demultiplexers based on various technologies and analyze the effect of their characteristics, such as losses and polarization dependence, on the quality of the distributed entanglement for three channel pairs of each demultiplexer. In all cases, we obtain a Bell inequality violation, whose value depends on the demultiplexer features. This demonstrates that entanglement can be distributed to at least three user pairs of a network from a single source. Additionally, we verify for the best demultiplexer that the violation is maintained when the pairs are distributed over a total channel attenuation corresponding to 20 km of optical fiber. These techniques are therefore suitable for resource-efficient practical implementations of entanglement-based quantum key distribution and other quantum communication network applications.
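The Bell test mentioned above can be illustrated with a generic textbook model: with interference visibility V, the polarization correlation is approximately E(a, b) = V·cos 2(a − b), and the CHSH parameter S exceeds the classical bound of 2 only when V > 1/√2 ≈ 0.707, which is why demultiplexer losses and polarization dependence matter. This sketch uses the standard CHSH angles and assumed visibilities, not the paper's measured values.

```python
import math

# Generic CHSH model (an illustration, not the paper's data): correlation
# E(a, b) = V * cos(2*(a - b)) for polarization-entangled pairs with
# interference visibility V, evaluated at the standard CHSH angle set.

def chsh(V, a=0.0, a2=45.0, b=22.5, b2=67.5):
    E = lambda x, y: V * math.cos(2.0 * math.radians(x - y))
    return E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

print(round(chsh(1.0), 3))   # 2.828: maximal quantum value, 2*sqrt(2)
print(round(chsh(0.8), 3))   # 2.263: still violates the classical bound of 2
print(round(chsh(0.6), 3))   # 1.697: visibility too low, no violation
```

This makes explicit why channel characteristics that degrade visibility directly reduce the measurable Bell violation.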

  19. High Performance Computing Facility Operational Assessment, FY 2011 Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Baker, Ann E [ORNL; Bland, Arthur S Buddy [ORNL; Hack, James J [ORNL; Barker, Ashley D [ORNL; Boudwin, Kathlyn J. [ORNL; Kendall, Ricky A [ORNL; Messer, Bronson [ORNL; Rogers, James H [ORNL; Shipman, Galen M [ORNL; Wells, Jack C [ORNL; White, Julia C [ORNL

    2011-08-01

    Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) continues to deliver the most powerful resources in the U.S. for open science. At 2.33 petaflops peak performance, the Cray XT Jaguar delivered more than 1.5 billion core hours in calendar year (CY) 2010 to researchers around the world for computational simulations relevant to national and energy security; advancing the frontiers of knowledge in physical sciences and areas of biological, medical, environmental, and computer sciences; and providing world-class research facilities for the nation's science enterprise. Scientific achievements by OLCF users range from collaboration with university experimentalists to produce a working supercapacitor that uses atom-thick sheets of carbon materials to finely determining the resolution requirements for simulations of coal gasifiers and their components, thus laying the foundation for development of commercial-scale gasifiers. OLCF users are pushing the boundaries with software applications sustaining more than one petaflop of performance in the quest to illuminate the fundamental nature of electronic devices. Other teams of researchers are working to resolve predictive capabilities of climate models, to refine and validate genome sequencing, and to explore the most fundamental materials in nature - quarks and gluons - and their unique properties. Details of these scientific endeavors - not possible without access to leadership-class computing resources - are detailed in Section 4 of this report and in the INCITE in Review. Effective operations of the OLCF play a key role in the scientific missions and accomplishments of its users. This Operational Assessment Report (OAR) will delineate the policies, procedures, and innovations implemented by the OLCF to continue delivering a petaflop-scale resource for cutting-edge research. The 2010 operational assessment of the OLCF yielded recommendations that have been addressed (Reference Section 1) and

  20. Towards higher reliability of CMS computing facilities

    International Nuclear Information System (INIS)

    Bagliesi, G; Bloom, K; Brew, C; Flix, J; Kreuzer, P; Sciabà, A

    2012-01-01

The CMS experiment has adopted a computing system where resources are distributed worldwide in more than 50 sites. The operation of the system requires a stable and reliable behaviour of the underlying infrastructure. CMS has established procedures to extensively test all relevant aspects of a site and their capability to sustain the various CMS computing workflows at the required scale. The Site Readiness monitoring infrastructure has been instrumental in understanding how the system as a whole was improving towards LHC operations, measuring the reliability of sites when running CMS activities, and providing sites with the information they need to troubleshoot any problem. This contribution reviews the complete automation of the Site Readiness program, with the description of monitoring tools and their inclusion into the Site Status Board (SSB), the performance checks, the use of tools like HammerCloud, and the impact in improving the overall reliability of the Grid from the point of view of the CMS computing system. These results are used by CMS to select good sites to conduct workflows, in order to maximize workflow efficiencies. The performance against these tests seen at the sites during the first years of LHC running is also reviewed.

  1. National Ignition Facility integrated computer control system

    International Nuclear Information System (INIS)

    Van Arsdall, P.J. LLNL

    1998-01-01

    The NIF design team is developing the Integrated Computer Control System (ICCS), which is based on an object-oriented software framework applicable to event-driven control systems. The framework provides an open, extensible architecture that is sufficiently abstract to construct future mission-critical control systems. The ICCS will become operational when the first 8 out of 192 beams are activated in mid 2000. The ICCS consists of 300 front-end processors attached to 60,000 control points coordinated by a supervisory system. Computers running either Solaris or VxWorks are networked over a hybrid configuration of switched fast Ethernet and asynchronous transfer mode (ATM). ATM carries digital motion video from sensors to operator consoles. Supervisory software is constructed by extending the reusable framework components for each specific application. The framework incorporates services for database persistence, system configuration, graphical user interface, status monitoring, event logging, scripting language, alert management, and access control. More than twenty collaborating software applications are derived from the common framework. The framework is interoperable among different kinds of computers and functions as a plug-in software bus by leveraging a common object request brokering architecture (CORBA). CORBA transparently distributes the software objects across the network. Because of the pivotal role played, CORBA was tested to ensure adequate performance

  2. Computers in experimental nuclear power facilities

    International Nuclear Information System (INIS)

    Jukl, M.

    1982-01-01

The CIS 3000 information system, used for monitoring the operating modes of large technological equipment, is described. The CIS system consists of two ADT computers, an external drum store, an analog input side, a bivalent input side, 4 control consoles with monitors and acoustic signalling, a print-out area with typewriters and punching machines, and linear recorders. Various applications of the installed CIS configuration are described, as is the general-purpose program for processing measured values into a protocol. The program operates in conversational mode. Different processing variants are shown on the display monitor. (M.D.)

  3. Brookhaven Reactor Experiment Control Facility, a distributed function computer network

    International Nuclear Information System (INIS)

    Dimmler, D.G.; Greenlaw, N.; Kelley, M.A.; Potter, D.W.; Rankowitz, S.; Stubblefield, F.W.

    1975-11-01

    A computer network for real-time data acquisition, monitoring and control of a series of experiments at the Brookhaven High Flux Beam Reactor has been developed and has been set into routine operation. This reactor experiment control facility presently services nine neutron spectrometers and one x-ray diffractometer. Several additional experiment connections are in progress. The architecture of the facility is based on a distributed function network concept. A statement of implementation and results is presented

  4. Survey of computer codes applicable to waste facility performance evaluations

    International Nuclear Information System (INIS)

    Alsharif, M.; Pung, D.L.; Rivera, A.L.; Dole, L.R.

    1988-01-01

This study reviews existing information useful for developing an integrated model for predicting the performance of a radioactive waste facility. A summary description of 162 computer codes is given. The identified computer programs address the performance of waste packages, waste transport and equilibrium geochemistry, hydrological processes in unsaturated and saturated zones, and general waste facility performance assessment. Some programs also deal with thermal analysis, structural analysis, and special purposes. A number of these computer programs are being used by the US Department of Energy, the US Nuclear Regulatory Commission, and their contractors to analyze various aspects of waste package performance. Fifty-five of these codes were identified as being potentially useful in the analysis of low-level radioactive waste facilities located above the water table. The code summaries include authors, identification data, model types, and pertinent references. 14 refs., 5 tabs

  5. COMPUTER ORIENTED FACILITIES OF TEACHING AND INFORMATIVE COMPETENCE

    Directory of Open Access Journals (Sweden)

    Olga M. Naumenko

    2010-09-01

The article considers the history of views on the tasks of education and estimations of its effectiveness from the point of view of forming basic, vitally important competences. Opinions on the problem in different countries and international organizations, and the corresponding experience of the Ukrainian system of education, are described. The necessity of forming the informative competence of future teachers is substantiated under the conditions of applying computer-oriented facilities of teaching in the study of natural-science subjects in pedagogical colleges. Prognostic estimations concerning the development of methods of applying computer-oriented facilities of teaching are presented.

  6. Neutronic computational modeling of the ASTRA critical facility using MCNPX

    International Nuclear Information System (INIS)

    Rodriguez, L. P.; Garcia, C. R.; Milian, D.; Milian, E. E.; Brayner, C.

    2015-01-01

The Pebble Bed Very High Temperature Reactor is considered a prominent candidate among Generation IV nuclear energy systems. Nevertheless, it faces an important challenge due to the insufficient validation of the computer codes currently available for use in its design and safety analysis. In this paper, a detailed IAEA computational benchmark announced in IAEA-TECDOC-1694, in the framework of the Coordinated Research Project 'Evaluation of High Temperature Gas Cooled Reactor (HTGR) Performance', was solved in support of the Generation IV computer code validation effort using the MCNPX ver. 2.6e computational code. IAEA-TECDOC-1694 summarizes a set of four calculational benchmark problems performed at the ASTRA critical facility. The benchmark problems include criticality experiments, control rod worth measurements and reactivity measurements. The ASTRA critical facility at the Kurchatov Institute in Moscow was used to simulate the neutronic behavior of nuclear pebble bed reactors. (Author)

  7. ARLearn and StreetLearn software for virtual reality and augmented reality multi user learning games

    NARCIS (Netherlands)

    Ternier, Stefaan; Klemke, Roland

    2012-01-01

    Ternier, S., & Klemke, R. (2011). ARLearn and StreetLearn software for virtual reality and augmented reality multi user learning games (Version 1.0) [Software Documentation]. Heerlen, The Netherlands: Open Universiteit in the Netherlands.

  8. Centralized computer-based controls of the Nova Laser Facility

    International Nuclear Information System (INIS)

    Krammen, J.

    1985-01-01

    This article introduces the overall architecture of the computer-based Nova Laser Control System and describes its basic components. Use of standard hardware and software components ensures that the system, while specialized and distributed throughout the facility, is adaptable. 9 references, 6 figures

  9. Computer-Assisted School Facility Planning with ONPASS.

    Science.gov (United States)

    Urban Decision Systems, Inc., Los Angeles, CA.

    The analytical capabilities of ONPASS, an on-line computer-aided school facility planning system, are described by its developers. This report describes how, using the Canoga Park-Winnetka-Woodland Hills Planning Area as a test case, the Department of City Planning of the city of Los Angeles employed ONPASS to demonstrate how an on-line system can…

  10. SIMULASI TEKNIK POWER CONTROL DAN MULTI USER DETECTION PADA SISTEM KOMUNIKASI DS-CDMA

    Directory of Open Access Journals (Sweden)

    Yuli Christyono

    2012-02-01

CDMA is an interference-limited multiple-access system. Because all users transmit on the same frequency, internal interference generated by the system is the most significant factor in determining system capacity and call quality. The transmit power of each user must be reduced to limit interference; however, the power should be enough to maintain the required Eb/No (signal-to-noise ratio) for satisfactory call quality. Maximum capacity is achieved when the Eb/No of every user is at the minimum level needed for acceptable channel performance. As the MS moves around, the RF environment continuously changes due to fast and slow fading, external interference, shadowing, and other factors. The aim of dynamic power control is to limit the transmitted power on both links while maintaining link quality under all conditions. Additional advantages are longer mobile battery life and a longer life span of BTS power amplifiers. In this research, a simulation of power control and multi-user detection is made to avoid interference between MSs. Observations show that an increasing number of users decreases the signal-to-interference ratio (SIR) below the target. To cope with the growing number of users, the transmit power can be iteratively updated so that the computation converges and the target SIR value is achieved. In addition, interference can also be reduced by extending the number of chips.
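The iterative power update the abstract describes can be sketched with the classic distributed power-control rule: each user multiplies its transmit power by (target SIR / measured SIR) until the powers converge. The gain matrix, noise level, and SIR target below are invented example values, not those of the paper.

```python
import numpy as np

# Distributed power-control sketch (illustrative parameters, not the paper's):
# each user iterates p <- p * target / SIR using only its own measured SIR.

def sir(p, G, noise):
    """Per-user SIR given transmit powers p and link-gain matrix G."""
    signal = np.diag(G) * p                 # desired-link received power
    interference = G @ p - signal + noise   # everyone else's power plus noise
    return signal / interference

def power_control(G, noise=1e-3, target=5.0, iters=100):
    p = np.ones(G.shape[0])                 # initial transmit powers
    for _ in range(iters):
        p = p * target / sir(p, G, noise)   # per-user update toward the target
    return p

# G[i, j]: gain from transmitter j to receiver i (made-up values)
G = np.array([[1.00, 0.02, 0.01],
              [0.03, 0.80, 0.02],
              [0.01, 0.02, 0.90]])
p = power_control(G)
print(np.round(sir(p, G, 1e-3), 3))         # all three SIRs converge to the target 5.0
```

When the target SIR is feasible for the given gains, the iteration converges to the minimal power vector meeting the target, which is exactly the battery-life benefit the abstract mentions.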

  11. Molecular Science Computing Facility Scientific Challenges: Linking Across Scales

    Energy Technology Data Exchange (ETDEWEB)

    De Jong, Wibe A.; Windus, Theresa L.

    2005-07-01

The purpose of this document is to define the evolving science drivers for performing environmental molecular research at the William R. Wiley Environmental Molecular Sciences Laboratory (EMSL) and to provide guidance associated with the next-generation high-performance computing center that must be developed at EMSL's Molecular Science Computing Facility (MSCF) in order to address this critical research. The MSCF is the pre-eminent computing facility, supported by the U.S. Department of Energy's (DOE's) Office of Biological and Environmental Research (BER), tailored to provide the fastest time-to-solution for current computational challenges in chemistry and biology, as well as providing the means for broad research in the molecular and environmental sciences. The MSCF provides integral resources and expertise to emerging EMSL Scientific Grand Challenges and Collaborative Access Teams that are designed to leverage the multiple integrated research capabilities of EMSL, thereby creating a synergy between computation and experiment to address environmental molecular science challenges critical to DOE and the nation.

  12. Modern computer hardware and the role of central computing facilities in particle physics

    International Nuclear Information System (INIS)

    Zacharov, V.

    1981-01-01

    Important recent changes in the hardware technology of computer system components are reviewed, and the impact of these changes assessed on the present and future pattern of computing in particle physics. The place of central computing facilities is particularly examined, to answer the important question as to what, if anything, should be their future role. Parallelism in computing system components is considered to be an important property that can be exploited with advantage. The paper includes a short discussion of the position of communications and network technology in modern computer systems. (orig.)

  13. COMPUTER ORIENTED FACILITIES OF TEACHING AND INFORMATIVE COMPETENCE

    OpenAIRE

    Olga M. Naumenko

    2010-01-01

    In the article it is considered the history of views to the tasks of education, estimations of its effectiveness from the point of view of forming of basic vitally important competences. Opinions to the problem in different countries, international organizations, corresponding experience of the Ukrainian system of education are described. The necessity of forming of informative competence of future teacher is reasonable in the conditions of application of the computer oriented facilities of t...

  14. Shieldings for X-ray radiotherapy facilities calculated by computer

    International Nuclear Information System (INIS)

    Pedrosa, Paulo S.; Farias, Marcos S.; Gavazza, Sergio

    2005-01-01

This work presents a computer-aided methodology for calculating X-ray shielding in radiotherapy facilities. Even today, in Brazil, shielding calculations for X-ray radiotherapy are based on the NCRP-49 recommendation, which establishes the methodology required for elaborating a shielding project. With regard to high energies, where the construction of a labyrinth is necessary, NCRP-49 is not very clear, and studies in this field resulted in an article that proposes a solution to the problem. A user-friendly program was developed in the Delphi programming language that, through the manual data entry of a basic architectural design and some parameters, interprets the geometry and calculates the shielding of the walls, ceiling and floor of an X-ray radiotherapy facility. As the final product, the program provides a graphical screen on the computer with all the input data, the calculated shielding, and the calculation memory. The program can be applied in the practical implementation of shielding projects for radiotherapy facilities and can also be used didactically in comparison with NCRP-49.

  15. Shielding Calculations for Positron Emission Tomography - Computed Tomography Facilities

    Energy Technology Data Exchange (ETDEWEB)

    Baasandorj, Khashbayar [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Yang, Jeongseon [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2015-10-15

Integrated PET-CT has been shown to be more accurate for lesion localization and characterization than PET or CT alone, or than results obtained from PET and CT separately and interpreted side by side or following software-based fusion of the PET and CT datasets. At the same time, PET-CT scans can result in high patient and staff doses; therefore, careful site planning and shielding of this imaging modality have become challenging issues in the field. In Mongolia, the introduction of PET-CT facilities is currently being considered in many hospitals. Thus, additional regulatory legislation for nuclear and radiation applications is necessary, for example, in regulating licensee processes and ensuring radiation safety during operations. This paper aims to determine appropriate PET-CT shielding designs using numerical formulas and computer code. Since there are presently no PET-CT facilities in Mongolia, contact was made with radiological staff at the Nuclear Medicine Center of the National Cancer Center of Mongolia (NCCM) to get information about facilities where the introduction of PET-CT is being considered. Well-designed facilities do not require additional shielding, which should help cut down the overall costs of PET-CT installation. According to the results of this study, the barrier thicknesses of the NCCM building are not sufficient to keep radiation doses within the limits.
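The core of the numerical formulas such shielding studies apply reduces to choosing a barrier thickness from the required transmission factor and the tenth-value layer (TVL) of the barrier material at 511 keV. A minimal sketch follows; the TVL and dose figures are assumptions for illustration, not values from the paper.

```python
import math

# Barrier-thickness sketch: the number of tenth-value layers needed is
# log10(1/B), where B is the required transmission factor. The TVL (~17 cm
# of concrete at 511 keV) and dose figures here are illustrative assumptions.

def barrier_thickness(dose_unshielded_uSv, dose_limit_uSv, tvl_cm):
    """Barrier thickness (cm) that brings the transmitted dose under the limit."""
    B = dose_limit_uSv / dose_unshielded_uSv    # required transmission factor
    if B >= 1.0:
        return 0.0                              # already under the limit
    return tvl_cm * math.log10(1.0 / B)

# Example: 4000 uSv/yr unshielded at the point of interest, 100 uSv/yr design
# limit for an uncontrolled area, assumed concrete TVL of 17 cm.
t = barrier_thickness(4000, 100, 17.0)
print(round(t, 2))   # ~27.24 cm of concrete under these assumptions
```

In practice the unshielded dose itself is built up from workload (patients per week), administered activity, dose-rate constants, occupancy, and distance, but the thickness selection step is the one above.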

  16. Multi-user cognitive radio network resource allocation based on the adaptive niche immune genetic algorithm

    International Nuclear Information System (INIS)

    Zu Yun-Xiao; Zhou Jie

    2012-01-01

    Multi-user cognitive radio network resource allocation based on the adaptive niche immune genetic algorithm is proposed, and a fitness function is provided. Simulations are conducted using the adaptive niche immune genetic algorithm, the simulated annealing algorithm, the quantum genetic algorithm and the simple genetic algorithm, respectively. The results show that the adaptive niche immune genetic algorithm performs better than the other three algorithms in terms of the multi-user cognitive radio network resource allocation, and has quick convergence speed and strong global searching capability, which effectively reduces the system power consumption and bit error rate. (geophysics, astronomy, and astrophysics)
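For readers unfamiliar with the approach, the allocation problem can be caricatured as a plain genetic algorithm that assigns channels to users while penalizing collisions. The reward matrix and penalty below are invented, and the paper's niche and immune operators are not reproduced; this only shows the generic selection/crossover/mutation loop that all four compared algorithms refine.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy GA for channel allocation (illustrative only): assign one of n_channels
# to each of n_users, maximizing a made-up reward while penalizing collisions.

n_users, n_channels, pop, gens = 4, 6, 40, 60
reward = rng.uniform(0.2, 1.0, size=(n_users, n_channels))

def fitness(assign):
    r = reward[np.arange(n_users), assign].sum()
    _, counts = np.unique(assign, return_counts=True)
    return r - 2.0 * (counts - 1).sum()          # heavy penalty per collision

population = rng.integers(0, n_channels, size=(pop, n_users))
for _ in range(gens):
    scores = np.array([fitness(ind) for ind in population])
    parents = population[np.argsort(scores)[-pop // 2:]]     # keep the best half
    children = parents.copy()
    partners = rng.permutation(len(children))
    cuts = rng.integers(1, n_users, size=len(children))
    for i, c in enumerate(cuts):                             # one-point crossover
        children[i, c:] = parents[partners[i], c:]
    mutate = rng.random(children.shape) < 0.1                # random mutation
    children[mutate] = rng.integers(0, n_channels, size=int(mutate.sum()))
    population = np.vstack([parents, children])

best = max(population, key=fitness)
print(best, round(float(fitness(best)), 3))
```

The niche and immune extensions the paper evaluates modify the selection step to preserve population diversity, which is what improves convergence speed and global search over this plain loop.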

  17. A versatile multi-user polyimide surface micromachining process for MEMS applications

    KAUST Repository

    Carreno, Armando Arpys Arevalo

    2015-04-01

This paper reports a versatile multi-user micro-fabrication process for MEMS devices, the 'Polyimide MEMS Multi-User Process' (PiMMPs). The reported process uses polyimide as the structural material and three separate metallization layers that can be interconnected depending on the desired application. This process enables for the first time the development of out-of-plane compliant mechanisms that can be designed using six different physical principles for actuation and sensing on a wafer from a single fabrication run. These principles are electrostatic motion, thermal bimorph actuation, capacitive sensing, magnetic sensing, thermocouple-based sensing and radio frequency transmission and reception. © 2015 IEEE.

  18. Sensor-Based Human Activity Recognition in a Multi-user Scenario

    Science.gov (United States)

    Wang, Liang; Gu, Tao; Tao, Xianping; Lu, Jian

Existing work on sensor-based activity recognition focuses mainly on single-user activities. However, in real life, activities are often performed by multiple users involving interactions between them. In this paper, we propose Coupled Hidden Markov Models (CHMMs) to recognize multi-user activities from sensor readings in a smart home environment. We develop a multimodal sensing platform and present a theoretical framework to recognize both single-user and multi-user activities. We conduct our trace collection in a smart home, and evaluate our framework through experimental studies. Our experimental result shows that we achieve an average accuracy of 85.46% with CHMMs.

  18. Outage and SER performance of an opportunistic multi-user underlay cognitive network

    KAUST Repository

    Khan, Fahd Ahmed

    2012-10-01

    Consider a multi-user underlay cognitive network where multiple cognitive users concurrently share the spectrum with a primary network and a single secondary user is selected for transmission. The channel is assumed to have independent but not identical Nakagami-m fading. Closed form expressions for the outage performance and the symbol-error-rate performance of the opportunistic multi-user secondary network are derived when a peak interference power constraint is imposed on the secondary network in addition to the limited peak transmit power of each secondary user. © 2012 IEEE.
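The outage probability of such an opportunistic scheme, where the scheduler serves the strongest of several non-identical Nakagami-m users, is straightforward to approximate by Monte Carlo, since the channel power of a Nakagami-m link is Gamma distributed with shape m and mean Ω. All parameter values below are illustrative, not taken from the paper, and the interference-power constraint is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Monte Carlo outage sketch for opportunistic selection under independent,
# non-identical Nakagami-m fading (illustrative parameters only): Nakagami-m
# channel power ~ Gamma(shape=m, mean=omega); the best user is served.

def outage_prob(ms, omegas, snr_threshold, trials=200_000):
    gains = np.column_stack([
        rng.gamma(shape=m, scale=om / m, size=trials)
        for m, om in zip(ms, omegas)
    ])
    best = gains.max(axis=1)            # opportunistic user selection
    return float(np.mean(best < snr_threshold))

ms = [1.0, 2.0, 3.0]                    # per-user Nakagami shape parameters
omegas = [1.0, 0.8, 1.2]                # per-user mean channel powers
for thr in (0.1, 0.5, 1.0):
    print(thr, outage_prob(ms, omegas, thr))
```

The closed-form expressions derived in the paper replace this simulation, additionally accounting for the peak transmit-power and peak interference-power constraints of the underlay setting.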

  20. Heavy-tailed distribution of the SSH Brute-force attack duration in a multi-user environment

    Science.gov (United States)

    Lee, Jae-Kook; Kim, Sung-Jun; Park, Chan Yeol; Hong, Taeyoung; Chae, Huiseung

    2016-07-01

Quite a number of cyber-attacks take place against supercomputers that provide high-performance computing (HPC) services to public researchers. In particular, although the secure shell protocol (SSH) brute-force attack is one of the traditional attack methods, it is still being used. Because stealth attacks that feign regular access may occur, they are even harder to detect. In this paper, we introduce methods to detect SSH brute-force attacks by analyzing the server's unsuccessful access logs and the firewall's drop events in a multi-user environment. Then, we analyze the durations of the SSH brute-force attacks that are detected by applying these methods. The results of an analysis of about 10 thousand attack source IP addresses show that the behaviors of abnormal users launching SSH brute-force attacks exhibit the human-dynamics characteristics of a typical heavy-tailed distribution.
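The log-analysis step described above can be sketched as a scan of sshd authentication-failure lines with a per-source-IP failure count. The log format and threshold below are assumptions for illustration, not the paper's actual detection rules.

```python
import re
from collections import Counter

# Brute-force detection sketch (assumed log format and threshold): scan sshd
# failure lines, tally failures per source IP, flag IPs at/above a threshold.

FAILED = re.compile(
    r"Failed password for (?:invalid user )?\S+ from (\d+\.\d+\.\d+\.\d+)"
)

def brute_force_ips(log_lines, threshold=5):
    hits = Counter()
    for line in log_lines:
        m = FAILED.search(line)
        if m:
            hits[m.group(1)] += 1
    return {ip: n for ip, n in hits.items() if n >= threshold}

sample = (
    ["sshd[1]: Failed password for root from 10.0.0.9 port 4242 ssh2"] * 7
    + ["sshd[2]: Failed password for invalid user admin from 10.0.0.7 port 22 ssh2"] * 2
)
print(brute_force_ips(sample))   # {'10.0.0.9': 7}
```

A duration analysis like the paper's would additionally record first- and last-seen timestamps per IP, whose differences form the heavy-tailed distribution reported in the abstract.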

  1. A Tabletop Board Game Interface for Multi-User Interaction with a Storytelling System

    NARCIS (Netherlands)

    Alofs, T.; Theune, Mariet; Swartjes, I.M.T.; Camurri, A.; Costa, C.

    2011-01-01

    The Interactive Storyteller is an interactive storytelling system with a multi-user tabletop interface. Our goal was to design a generic framework combining emergent narrative, where stories emerge from the actions of autonomous intelligent agents, with the social aspects of traditional board games.

  2. A Multi-User Virtual Environment for Building and Assessing Higher Order Inquiry Skills in Science

    Science.gov (United States)

    Ketelhut, Diane Jass; Nelson, Brian C.; Clarke, Jody; Dede, Chris

    2010-01-01

    This study investigated novel pedagogies for helping teachers infuse inquiry into a standards-based science curriculum. Using a multi-user virtual environment (MUVE) as a pedagogical vehicle, teams of middle-school students collaboratively solved problems around disease in a virtual town called River City. The students interacted with "avatars" of…

  3. CGLXTouch: A multi-user multi-touch approach for ultra-high-resolution collaborative workspaces

    KAUST Repository

    Ponto, Kevin; Doerr, Kai; Wypych, Tom; Kooker, John; Kuester, Falko

    2011-01-01

    multi-touch tablet and phone devices, which can be added to and removed from the system on the fly. Events from these devices are tagged with a device identifier and are synchronized with the distributed display environment, enabling multi-user support

  4. A perspective on multi-user interaction design based on an understanding of domestic lighting conflict

    NARCIS (Netherlands)

    Niemantsverdriet, K.; van Essen, H.A.; Eggen, J.H.

    2017-01-01

    More and more connected systems are entering the social and shared home environment. Interaction with these systems is often rather individual and based on personal preferences, leading to conflicts in multi-user situations. In this paper, we aim to develop a perspective on how to design for

  5. Optimal One Bit Time Reversal For UWB Impulse Radio In Multi-User Wireless Communications

    DEFF Research Database (Denmark)

    Nguyen, Hung Tuan

    2008-01-01

    In this paper, with the purpose of further reducing the complexity of the system, while keeping its temporal and spatial focusing performance, we investigate the possibility of using optimal one bit time reversal (TR) system for impulse radio ultra wideband multi-user wireless communications...

  6. Multi-User Domain Object Oriented (MOO) as a High School Procedure for Foreign Language Acquisition.

    Science.gov (United States)

    Backer, James A.

    Foreign language students experience added difficulty when they are isolated from native speakers and from the culture of the target language. It has been posited that MOO (Multi-User Domain Object Oriented) may help overcome the geographical isolation of these students. MOOs are Internet-based virtual worlds in which people from all over the real…

  7. Sensor-based Human Activity Recognition in a Multi-user Scenario

    DEFF Research Database (Denmark)

    Wang, Liang; Gu, Tao; Tao, Xianping

    2009-01-01

    Existing work on sensor-based activity recognition focuses mainly on single-user activities. However, in real life, activities are often performed by multiple users involving interactions between them. In this paper, we propose Coupled Hidden Markov Models (CHMMs) to recognize multi-user activiti...

  8. The Argonne Leadership Computing Facility 2010 annual report.

    Energy Technology Data Exchange (ETDEWEB)

    Drugan, C. (LCF)

    2011-05-09

    Researchers found more ways than ever to conduct transformative science at the Argonne Leadership Computing Facility (ALCF) in 2010. Both familiar initiatives and innovative new programs at the ALCF are now serving a growing, global user community with a wide range of computing needs. The Department of Energy's (DOE) INCITE Program remained vital in providing scientists with major allocations of leadership-class computing resources at the ALCF. For calendar year 2011, 35 projects were awarded 732 million supercomputer processor-hours for computationally intensive, large-scale research projects with the potential to significantly advance key areas in science and engineering. Argonne also continued to provide Director's Discretionary allocations - 'start up' awards - for potential future INCITE projects. And DOE's new ASCR Leadership Computing (ALCC) Program allocated resources to 10 ALCF projects, with an emphasis on high-risk, high-payoff simulations directly related to the Department's energy mission, national emergencies, or for broadening the research community capable of using leadership computing resources. While delivering more science today, we've also been laying a solid foundation for high performance computing in the future. After a successful DOE Lehman review, a contract was signed to deliver Mira, the next-generation Blue Gene/Q system, to the ALCF in 2012. The ALCF is working with the 16 projects that were selected for the Early Science Program (ESP) to enable them to be productive as soon as Mira is operational. Preproduction access to Mira will enable ESP projects to adapt their codes to its architecture and collaborate with ALCF staff in shaking down the new system. We expect the 10-petaflops system to stoke economic growth and improve U.S. competitiveness in key areas such as advancing clean energy and addressing global climate change. Ultimately, we envision Mira as a stepping-stone to exascale-class computers

  9. High resolution muon computed tomography at neutrino beam facilities

    International Nuclear Information System (INIS)

    Suerfu, B.; Tully, C.G.

    2016-01-01

    X-ray computed tomography (CT) has an indispensable role in constructing 3D images of objects made from light materials. However, limited by absorption coefficients, X-rays cannot deeply penetrate materials such as copper and lead. Here we show via simulation that muon beams can provide high resolution tomographic images of dense objects and of structures within the interior of dense objects. The effects of resolution broadening from multiple scattering diminish with increasing muon momentum. As the momentum of the muon increases, the contrast of the image goes down and therefore requires higher resolution in the muon spectrometer to resolve the image. The variance of the measured muon momentum reaches a minimum and then increases with increasing muon momentum. The impact of the increase in variance is to require a higher integrated muon flux to reduce fluctuations. The flux requirements and level of contrast needed for high resolution muon computed tomography are well matched to the muons produced in the pion decay pipe at a neutrino beam facility and what can be achieved for momentum resolution in a muon spectrometer. Such an imaging system can be applied in archaeology, art history, engineering, material identification and whenever there is a need to image inside a transportable object constructed of dense materials
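
    The momentum dependence of the scattering-induced resolution broadening noted above is commonly quantified with the Highland formula for the RMS multiple-scattering angle; the helper below illustrates that dependence and is not taken from the paper.

```python
import math

def highland_theta0(p_mev, x_over_X0, beta=1.0, charge=1):
    """RMS plane multiple-scattering angle (radians) from the Highland
    formula: theta0 = (13.6 MeV / (beta*p)) * z * sqrt(x/X0)
    * (1 + 0.038 * ln(x/X0)), with momentum p in MeV/c and thickness
    x/X0 in radiation lengths (beta ~ 1 for multi-GeV muons)."""
    return (13.6 / (beta * p_mev)) * charge * math.sqrt(x_over_X0) \
        * (1.0 + 0.038 * math.log(x_over_X0))
```

    Doubling the momentum halves theta0, which is why the broadening diminishes with increasing muon momentum.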

  10. A Robotic Coach Architecture for Elder Care (ROCARE) Based on Multi-User Engagement Models.

    Science.gov (United States)

    Fan, Jing; Bian, Dayi; Zheng, Zhi; Beuscher, Linda; Newhouse, Paul A; Mion, Lorraine C; Sarkar, Nilanjan

    2017-08-01

    The aging population, with its concomitant medical conditions and physical and cognitive impairments, at a time of strained resources, establishes the urgent need to explore advanced technologies that may enhance function and quality of life. Recently, robotic technology, especially socially assistive robotics, has been investigated to address the physical, cognitive, and social needs of older adults. Most systems to date have predominantly focused on one-on-one human-robot interaction (HRI). In this paper, we present a multi-user engagement-based robotic coach system architecture (ROCARE). ROCARE is capable of administering both one-on-one and multi-user HRI, providing implicit and explicit channels of communication, and individualized activity management for long-term engagement. Two preliminary feasibility studies, a one-on-one interaction and a triadic interaction with two humans and a robot, were conducted, and the results indicated potential usefulness and acceptance by older adults, with and without cognitive impairment.

  11. QoE-based transmission strategies for multi-user wireless information and power transfer

    Directory of Open Access Journals (Sweden)

    Taehun Jung

    2015-12-01

    Full Text Available One solution to the problem of supplying energy to wireless networks is wireless power transfer. One such technology, wireless power transfer enabled by electromagnetic radiation, will change traditional wireless networks. In this paper, we investigate a transmission strategy for multi-user wireless information and power transfer. We consider a multi-user multiple-input multiple-output (MIMO) channel that includes one base station (BS) and two user terminals (UTs) consisting of one energy harvesting (EH) receiver and one information decoding (ID) receiver. Our system provides transmission strategies that can be executed and implemented in practical scenarios. The paper then analyzes the rate–energy (R–E) pair of our strategies and compares them to those of the theoretical optimal strategy. We furthermore propose a QoE-based mode selection algorithm by mapping the R–E pair to the utility functions.

  12. Leakage based precoding for multi-user MIMO-OFDM systems

    KAUST Repository

    Sadek, Mirette

    2011-08-01

    In downlink multi-user multiple-input multiple-output (MIMO) transmissions, several precoding schemes have been proposed to decrease interference among users. Notable among these precoding schemes is one that uses the signal-to-leakage-plus-noise ratio (SLNR) as an optimization criterion. In this paper, leveraging the efficiency of the SLNR optimization, we generalize this precoding scheme to MIMO orthogonal frequency division multiplexing (OFDM) multi-user systems where the OFDM is used to overcome the inter-symbol interference (ISI) introduced by multipath channels. We also introduce a channel compensation technique that reconstructs the channel at the transmitter for every time instant given a significantly lower channel feedback rate by the receiver. © 2006 IEEE.
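
    The SLNR criterion underlying this family of precoders has a well-known closed-form solution: the dominant generalized eigenvector of the signal covariance and the leakage-plus-noise covariance. A flat-channel sketch (per subcarrier in the OFDM setting, with made-up dimensions):

```python
import numpy as np

def slnr_precoders(H_list, noise_var):
    """Unit-norm SLNR-maximizing beamformer for each user: the
    dominant generalized eigenvector of the pair
    (H_k^H H_k, sum_{j != k} H_j^H H_j + noise_var * I)."""
    Nt = H_list[0].shape[1]
    precoders = []
    for k, Hk in enumerate(H_list):
        A = Hk.conj().T @ Hk
        B = noise_var * np.eye(Nt, dtype=complex)
        for j, Hj in enumerate(H_list):
            if j != k:
                B += Hj.conj().T @ Hj
        # dominant eigenvector of B^{-1} A solves the generalized problem
        vals, vecs = np.linalg.eig(np.linalg.solve(B, A))
        w = vecs[:, int(np.argmax(vals.real))]
        precoders.append(w / np.linalg.norm(w))
    return precoders
```

    Because leakage depends only on user k's own beamformer, each user's problem decouples, which is the efficiency the abstract refers to.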

  13. Mechanisms for collaboration: a design and evaluation framework for multi-user interfaces

    OpenAIRE

    Yuill, Nicola; Rogers, Yvonne

    2012-01-01

    Multi-user interfaces are said to provide “natural” interaction in supporting collaboration, compared to individual and noncolocated technologies. We identify three mechanisms accounting for the success of such interfaces: high awareness of others' actions and intentions, high control over the interface, and high availability of background information. We challenge the idea that interaction over such interfaces is necessarily “natural” and argue that everyday interaction involves constraints ...

  14. Argonne Leadership Computing Facility 2011 annual report : Shaping future supercomputing.

    Energy Technology Data Exchange (ETDEWEB)

    Papka, M.; Messina, P.; Coffey, R.; Drugan, C. (LCF)

    2012-08-16

    The ALCF's Early Science Program aims to prepare key applications for the architecture and scale of Mira and to solidify libraries and infrastructure that will pave the way for other future production applications. Two billion core-hours have been allocated to 16 Early Science projects on Mira. The projects, in addition to promising delivery of exciting new science, are all based on state-of-the-art, petascale, parallel applications. The project teams, in collaboration with ALCF staff and IBM, have undertaken intensive efforts to adapt their software to take advantage of Mira's Blue Gene/Q architecture, which, in a number of ways, is a precursor to future high-performance-computing architecture. The Argonne Leadership Computing Facility (ALCF) enables transformative science that solves some of the most difficult challenges in biology, chemistry, energy, climate, materials, physics, and other scientific realms. Users partnering with ALCF staff have reached research milestones previously unattainable, due to the ALCF's world-class supercomputing resources and expertise in computation science. In 2011, the ALCF's commitment to providing outstanding science and leadership-class resources was honored with several prestigious awards. Research on multiscale brain blood flow simulations was named a Gordon Bell Prize finalist. Intrepid, the ALCF's BG/P system, ranked No. 1 on the Graph 500 list for the second consecutive year. The next-generation BG/Q prototype again topped the Green500 list. Skilled experts at the ALCF enable researchers to conduct breakthrough science on the Blue Gene system in key ways. The Catalyst Team matches project PIs with experienced computational scientists to maximize and accelerate research in their specific scientific domains. The Performance Engineering Team facilitates the effective use of applications on the Blue Gene system by assessing and improving the algorithms used by applications and the techniques used to

  15. Computer Security at Nuclear Facilities. Reference Manual (Arabic Edition)

    International Nuclear Information System (INIS)

    2011-01-01

    category of the IAEA Nuclear Security Series, and deals with computer security at nuclear facilities. It is based on national experience and practices as well as publications in the fields of computer security and nuclear security. The guidance is provided for consideration by States, competent authorities and operators. The preparation of this publication in the IAEA Nuclear Security Series has been made possible by the contributions of a large number of experts from Member States. An extensive consultation process with all Member States included consultants meetings and open-ended technical meetings. The draft was then circulated to all Member States for 120 days to solicit further comments and suggestions. The comments received from Member States were reviewed and considered in the final version of the publication.

  16. Computer Security at Nuclear Facilities. Reference Manual (Russian Edition)

    International Nuclear Information System (INIS)

    2012-01-01

    category of the IAEA Nuclear Security Series, and deals with computer security at nuclear facilities. It is based on national experience and practices as well as publications in the fields of computer security and nuclear security. The guidance is provided for consideration by States, competent authorities and operators. The preparation of this publication in the IAEA Nuclear Security Series has been made possible by the contributions of a large number of experts from Member States. An extensive consultation process with all Member States included consultants meetings and open-ended technical meetings. The draft was then circulated to all Member States for 120 days to solicit further comments and suggestions. The comments received from Member States were reviewed and considered in the final version of the publication.

  17. Computer Security at Nuclear Facilities. Reference Manual (Chinese Edition)

    International Nuclear Information System (INIS)

    2012-01-01

    category of the IAEA Nuclear Security Series, and deals with computer security at nuclear facilities. It is based on national experience and practices as well as publications in the fields of computer security and nuclear security. The guidance is provided for consideration by States, competent authorities and operators. The preparation of this publication in the IAEA Nuclear Security Series has been made possible by the contributions of a large number of experts from Member States. An extensive consultation process with all Member States included consultants meetings and open-ended technical meetings. The draft was then circulated to all Member States for 120 days to solicit further comments and suggestions. The comments received from Member States were reviewed and considered in the final version of the publication.

  18. Academic Computing Facilities and Services in Higher Education--A Survey.

    Science.gov (United States)

    Warlick, Charles H.

    1986-01-01

    Presents statistics about academic computing facilities based on data collected over the past six years from 1,753 institutions in the United States, Canada, Mexico, and Puerto Rico for the "Directory of Computing Facilities in Higher Education." Organizational, functional, and financial characteristics are examined as well as types of…

  19. Public Computer Assisted Learning Facilities for Children with Visual Impairment: Universal Design for Inclusive Learning

    Science.gov (United States)

    Siu, Kin Wai Michael; Lam, Mei Seung

    2012-01-01

    Although computer assisted learning (CAL) is becoming increasingly popular, people with visual impairment face greater difficulty in accessing computer-assisted learning facilities. This is primarily because most of the current CAL facilities are not visually impaired friendly. People with visual impairment also do not normally have access to…

  20. An analytical model for computation of reliability of waste management facilities with intermediate storages

    International Nuclear Information System (INIS)

    Kallweit, A.; Schumacher, F.

    1977-01-01

    High reliability is required of waste management facilities within the fuel cycle of nuclear power stations; this requirement can be met by providing intermediate storage facilities and reserve capacities. This report describes a model based on the theory of Markov processes that allows computation of the reliability characteristics of waste management facilities containing intermediate storages. The application of the model is demonstrated by an example. (orig.) [de
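
    The Markov approach can be illustrated with a toy three-state availability model; the states and rates below are invented for illustration and are not taken from the report. The stationary distribution of the generator matrix gives the long-run fraction of time the facility keeps delivering output.

```python
import numpy as np

def stationary_distribution(Q):
    """Stationary distribution pi of a continuous-time Markov chain
    with generator Q (rows sum to zero): solve pi Q = 0, sum(pi) = 1."""
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Illustrative 3-state model: 0 = processing up, 1 = processing down
# but the intermediate store still feeds downstream, 2 = all halted.
lam, mu, buf = 0.1, 1.0, 0.5   # failure, repair, buffer-exhaustion rates
Q = np.array([
    [-lam,         lam,  0.0],
    [  mu, -(mu + buf),  buf],
    [  mu,         0.0,  -mu],
])
pi = stationary_distribution(Q)
availability = pi[0] + pi[1]   # fraction of time output continues
```

    In this toy model the intermediate store raises availability above the two-state value mu/(lam+mu), which is the qualitative effect the report quantifies.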

  1. A Suboptimal Scheme for Multi-User Scheduling in Gaussian Broadcast Channels

    KAUST Repository

    Zafar, Ammar; Alouini, Mohamed-Slim; Shaqfeh, Mohammad

    2014-01-01

    This work proposes a suboptimal multi-user scheduling scheme for Gaussian broadcast channels which improves upon the classical single user selection, while considerably reducing complexity as compared to the optimal superposition coding with successive interference cancellation. The proposed scheme combines the two users with the maximum weighted instantaneous rate using superposition coding. The instantaneous rate and power allocation are derived in closed-form, while the long term rate of each user is derived in integral form for all channel distributions. Numerical results are then provided to characterize the prospective gains of the proposed scheme.
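
    The selection step of the scheme, combining the two users with the largest weighted instantaneous rates under superposition coding, can be sketched as follows. The fixed power split `alpha` and the rate expressions are simplifying assumptions; the paper derives the power allocation in closed form.

```python
import numpy as np

def schedule_two_users(weights, gains, power=1.0, noise=1.0, alpha=0.8):
    """Pick the two users with the largest weighted single-user rates
    and serve them simultaneously with superposition coding; the
    stronger user cancels the weaker user's signal (SIC).  The fixed
    power split `alpha` is an illustrative assumption."""
    weights = np.asarray(weights, dtype=float)
    gains = np.asarray(gains, dtype=float)
    single = weights * np.log2(1.0 + power * gains / noise)
    second, first = np.argsort(single)[-2:]          # two best metrics
    # order the selected pair by channel strength for SIC
    strong, weak = (first, second) if gains[first] >= gains[second] \
        else (second, first)
    r_strong = np.log2(1.0 + alpha * power * gains[strong] / noise)
    r_weak = np.log2(1.0 + (1 - alpha) * power * gains[weak]
                     / (alpha * power * gains[weak] + noise))
    return strong, weak, r_strong, r_weak
```

    Compared with optimal superposition coding over all users, restricting the superposition to the best two keeps the search and the decoding complexity low.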

  2. A Suboptimal Scheme for Multi-User Scheduling in Gaussian Broadcast Channels

    KAUST Repository

    Zafar, Ammar

    2014-05-28

    This work proposes a suboptimal multi-user scheduling scheme for Gaussian broadcast channels which improves upon the classical single user selection, while considerably reducing complexity as compared to the optimal superposition coding with successive interference cancellation. The proposed scheme combines the two users with the maximum weighted instantaneous rate using superposition coding. The instantaneous rate and power allocation are derived in closed-form, while the long term rate of each user is derived in integral form for all channel distributions. Numerical results are then provided to characterize the prospective gains of the proposed scheme.

  3. Robots, multi-user virtual environments and healthcare: synergies for future directions.

    Science.gov (United States)

    Moon, Ajung; Grajales, Francisco J; Van der Loos, H F Machiel

    2011-01-01

    The adoption of technology in healthcare over the last twenty years has steadily increased, particularly as it relates to medical robotics and Multi-User Virtual Environments (MUVEs) such as Second Life. Both disciplines have been shown to improve the quality of care and have evolved, for the most part, in isolation from each other. In this paper, we present four synergies between medical robotics and MUVEs that have the potential to decrease resource utilization and improve the quality of healthcare delivery. We conclude with some foreseeable barriers and future research directions for researchers in these fields.

  4. Multi-user quantum key distribution based on Bell states with mutual authentication

    International Nuclear Information System (INIS)

    Lin Song; Huang Chuan; Liu Xiaofen

    2013-01-01

    A new multi-user quantum key distribution protocol with mutual authentication is proposed on a star network. Here, two arbitrary users are able to perform key distribution with the assistance of a semi-trusted center. Bell states are used as information carriers and transmitted in a quantum channel between the center and one user. A keyed hash function is utilized to ensure the identities of three parties. Finally, the security of this protocol with respect to various kinds of attacks is discussed. (paper)

  5. Multi-user MIMO and carrier aggregation in 4G systems

    DEFF Research Database (Denmark)

    Cattoni, Andrea Fabio; Nguyen, Hung Tuan; Duplicy, Jonathan

    2012-01-01

    The market success of broadband multimedia-enabled devices such as smart phones, tablets, and laptops is increasing the demand for wireless data capacity in mobile cellular systems. In order to meet such requirements, the introduction of advanced techniques for increasing the efficiency of spectrum usage was required. Multi-User Multiple-Input Multiple-Output (MU-MIMO) and Carrier Aggregation (CA) are two important techniques addressed by 3GPP for LTE and LTE-Advanced. The aim of the EU FP7 project on ”Spectrum Aggregation and Multiuser-MIMO: real-World Impact” (SAMURAI) is to investigate...

  6. Computer Security Incident Response Planning at Nuclear Facilities

    International Nuclear Information System (INIS)

    2016-06-01

    The purpose of this publication is to assist Member States in developing comprehensive contingency plans for computer security incidents with the potential to impact nuclear security and/or nuclear safety. It provides an outline and recommendations for establishing a computer security incident response capability as part of a computer security programme, and considers the roles and responsibilities of the system owner, operator, competent authority, and national technical authority in responding to a computer security incident with possible nuclear security repercussions

  7. Public Auditing with Privacy Protection in a Multi-User Model of Cloud-Assisted Body Sensor Networks

    Science.gov (United States)

    Li, Song; Cui, Jie; Zhong, Hong; Liu, Lu

    2017-01-01

    Wireless Body Sensor Networks (WBSNs) are gaining importance in the era of the Internet of Things (IoT). The modern medical system is a particular area where the WBSN techniques are being increasingly adopted for various fundamental operations. Despite such increasing deployments of WBSNs, issues such as the infancy in the size, capabilities and limited data processing capacities of the sensor devices restrain their adoption in resource-demanding applications. Though providing computing and storage supplements from cloud servers can potentially enrich the capabilities of the WBSNs devices, data security is one of the prevailing issues that affects the reliability of cloud-assisted services. Sensitive applications such as modern medical systems demand assurance of the privacy of the users’ medical records stored in distant cloud servers. Since it is economically impossible to set up private cloud servers for every client, auditing data security managed in the remote servers has necessarily become an integral requirement of WBSNs’ applications relying on public cloud servers. To this end, this paper proposes a novel certificateless public auditing scheme with integrated privacy protection. The multi-user model in our scheme supports groups of users to store and share data, thus exhibiting the potential for WBSNs’ deployments within community environments. Furthermore, our scheme enriches user experiences by offering public verifiability, forward security mechanisms and revocation of illegal group members. Experimental evaluations demonstrate the security effectiveness of our proposed scheme under the Random Oracle Model (ROM) by outperforming existing cloud-assisted WBSN models. PMID:28475110

  8. Public Auditing with Privacy Protection in a Multi-User Model of Cloud-Assisted Body Sensor Networks.

    Science.gov (United States)

    Li, Song; Cui, Jie; Zhong, Hong; Liu, Lu

    2017-05-05

    Wireless Body Sensor Networks (WBSNs) are gaining importance in the era of the Internet of Things (IoT). The modern medical system is a particular area where the WBSN techniques are being increasingly adopted for various fundamental operations. Despite such increasing deployments of WBSNs, issues such as the infancy in the size, capabilities and limited data processing capacities of the sensor devices restrain their adoption in resource-demanding applications. Though providing computing and storage supplements from cloud servers can potentially enrich the capabilities of the WBSNs devices, data security is one of the prevailing issues that affects the reliability of cloud-assisted services. Sensitive applications such as modern medical systems demand assurance of the privacy of the users' medical records stored in distant cloud servers. Since it is economically impossible to set up private cloud servers for every client, auditing data security managed in the remote servers has necessarily become an integral requirement of WBSNs' applications relying on public cloud servers. To this end, this paper proposes a novel certificateless public auditing scheme with integrated privacy protection. The multi-user model in our scheme supports groups of users to store and share data, thus exhibiting the potential for WBSNs' deployments within community environments. Furthermore, our scheme enriches user experiences by offering public verifiability, forward security mechanisms and revocation of illegal group members. Experimental evaluations demonstrate the security effectiveness of our proposed scheme under the Random Oracle Model (ROM) by outperforming existing cloud-assisted WBSN models.

  9. National facility for advanced computational science: A sustainable path to scientific discovery

    Energy Technology Data Exchange (ETDEWEB)

    Simon, Horst; Kramer, William; Saphir, William; Shalf, John; Bailey, David; Oliker, Leonid; Banda, Michael; McCurdy, C. William; Hules, John; Canning, Andrew; Day, Marc; Colella, Philip; Serafini, David; Wehner, Michael; Nugent, Peter

    2004-04-02

    Lawrence Berkeley National Laboratory (Berkeley Lab) proposes to create a National Facility for Advanced Computational Science (NFACS) and to establish a new partnership between the American computer industry and a national consortium of laboratories, universities, and computing facilities. NFACS will provide leadership-class scientific computing capability to scientists and engineers nationwide, independent of their institutional affiliation or source of funding. This partnership will bring into existence a new class of computational capability in the United States that is optimal for science and will create a sustainable path towards petaflops performance.

  10. An improved reconstruction algorithm based on multi-user detection for uplink grant-free NOMA

    Directory of Open Access Journals (Sweden)

    Hou Chengyan

    2017-01-01

    Full Text Available The traditional orthogonal matching pursuit (OMP) algorithm gives poor BER performance in multi-user detection (MUD) for uplink grant-free NOMA, so in this paper we propose a temporal-correlation orthogonal matching pursuit (TOMP) algorithm to realize multi-user detection. The core idea of TOMP is to exploit the time correlation of the active user sets to perform user-activity and data detection over a number of consecutive time slots. We use the active user set estimated in the current time slot as a priori information for estimating the active user set of the next slot. The algorithm maintains an active user set Tˆl of size K (where K is the number of users), which is modified in each iteration: an index judged reliable in one iteration but found erroneous in another can be added to, or removed from, Tˆl. Theoretical analysis of the improved algorithm guarantees that the multiple users can be successfully detected with high probability. Simulation results show that the proposed scheme achieves better bit error rate (BER) performance in the uplink grant-free NOMA system.
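
    For reference, the baseline OMP recovery step that TOMP builds on can be sketched as follows; the sensing matrix and sparsity level are generic, and the paper's NOMA-specific spreading matrices and temporal prior are not modeled here.

```python
import numpy as np

def omp(A, y, sparsity):
    """Orthogonal matching pursuit: greedily add the column of A most
    correlated with the residual, then re-fit all selected columns by
    least squares and update the residual."""
    support, residual = [], y.astype(float)
    x = np.zeros(A.shape[1])
    for _ in range(sparsity):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x, sorted(support)
```

    TOMP's modification is to seed and prune the support using the previous slot's active user set rather than starting from scratch each slot.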

  11. The right view from the wrong location: depth perception in stereoscopic multi-user virtual environments.

    Science.gov (United States)

    Pollock, Brice; Burton, Melissa; Kelly, Jonathan W; Gilbert, Stephen; Winer, Eliot

    2012-04-01

    Stereoscopic depth cues improve depth perception and increase immersion within virtual environments (VEs). However, improper display of these cues can distort perceived distances and directions. Consider a multi-user VE, where all users view identical stereoscopic images regardless of physical location. In this scenario, cues are typically customized for one "leader" equipped with a head-tracking device. This user stands at the center of projection (CoP) and all other users ("followers") view the scene from other locations and receive improper depth cues. This paper examines perceived depth distortion when viewing stereoscopic VEs from follower perspectives and the impact of these distortions on collaborative spatial judgments. Pairs of participants made collaborative depth judgments of virtual shapes viewed from the CoP or after displacement forward or backward. Forward and backward displacement caused perceived depth compression and expansion, respectively, with greater compression than expansion. Furthermore, distortion was less than predicted by a ray-intersection model of stereo geometry. Collaboration times were significantly longer when participants stood at different locations compared to the same location, and increased with greater perceived depth discrepancy between the two viewing locations. These findings advance our understanding of spatial distortions in multi-user VEs, and suggest a strategy for reducing distortion.
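
    The ray-intersection model of stereo geometry referred to in this abstract can be reproduced in a plan-view sketch (all coordinates and the 0.5 m displacement below are invented for illustration): project the virtual point onto the screen for each CoP eye, then intersect the displaced viewer's eye rays through those screen points.

```python
import numpy as np

def screen_projection(eye, point, screen_dist):
    """Where the renderer draws `point` for an eye at the CoP:
    intersection of the eye->point ray with the screen plane."""
    t = screen_dist / (point[1] - eye[1])
    return eye + t * (point - eye)

def perceived_point(eye_l, eye_r, img_l, img_r):
    """Intersect the two eye->screen-point rays (plan view): where a
    viewer with these eye positions perceives the virtual point."""
    d1, d2 = img_l - eye_l, img_r - eye_r
    t1, _ = np.linalg.solve(np.column_stack([d1, -d2]), eye_r - eye_l)
    return eye_l + t1 * d1
```

    In this model a backward-displaced viewer intersects the rays behind the intended point, i.e. perceived depth expands, consistent with the direction of the effect the study reports (though the study found smaller distortions than the model predicts).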

  12. Advancing MEMS Technology Usage through the MUMPS (Multi-User MEMS Processes) Program

    Science.gov (United States)

    Koester, D. A.; Markus, K. W.; Dhuler, V.; Mahadevan, R.; Cowen, A.

    1995-01-01

    In order to help provide access to advanced micro-electro-mechanical systems (MEMS) technologies and lower the barriers for both industry and academia, the Microelectronic Center of North Carolina (MCNC) and ARPA have developed a program which provides users with access to both MEMS processes and advanced electronic integration techniques. The four distinct aspects of this program, the multi-user MEMS processes (MUMP's), the consolidated micro-mechanical element library, smart MEMS, and the MEMS technology network are described in this paper. MUMP's is an ARPA-supported program created to provide inexpensive access to MEMS technology in a multi-user environment. It is both a proof-of-concept and educational tool that aids in the development of MEMS in the domestic community. MUMP's technologies currently include a 3-layer poly-silicon surface micromachining process and LIGA (lithography, electroforming, and injection molding) processes that provide reasonable design flexibility within set guidelines. The consolidated micromechanical element library (CaMEL) is a library of active and passive MEMS structures that can be downloaded by the MEMS community via the internet. Smart MEMS is the development of advanced electronics integration techniques for MEMS through the application of flip chip technology. The MEMS technology network (TechNet) is a menu of standard substrates and MEMS fabrication processes that can be purchased and combined to create unique process flows. TechNet provides the MEMS community greater flexibility and enhanced technology accessibility.

  13. Robust Transceivers Design for Multi-stream Multi-user MIMO Visible Light Communication

    KAUST Repository

    Sifaou, Houssem

    2017-11-27

    Visible light communication (VLC) is an emerging technique that uses light-emitting diodes to combine communication and illumination. It is considered as a promising scheme for indoor wireless communication that can be deployed at reduced costs, while offering high data rate performance. This paper focuses on the design of precoding and receiving schemes for downlink multi-user multiple-input multiple-output VLC systems using angle diversity receivers. Two major concerns need to be considered while solving such a problem. The first one is related to the inter-user interference, basically inherent to our consideration of a multi-user system, while the second results from the users’ mobility, causing imperfect channel estimates. To address both concerns, we propose robust precoding and receiver that solve the max-min SINR problem. The performance of the proposed VLC design is studied under different working conditions, where a significant gain of the proposed robust transceivers over their non-robust counterparts has been observed.

  14. Robust Transceivers Design for Multi-stream Multi-user MIMO Visible Light Communication

    KAUST Repository

    Sifaou, Houssem; Kammoun, Abla; Park, Kihong; Alouini, Mohamed-Slim

    2017-01-01

    Visible light communication (VLC) is an emerging technique that uses light-emitting diodes to combine communication and illumination. It is considered a promising scheme for indoor wireless communication that can be deployed at reduced cost while offering high data rates. This paper focuses on the design of precoding and receiving schemes for downlink multi-user multiple-input multiple-output VLC systems using angle diversity receivers. Two major concerns need to be considered in solving such a problem. The first is inter-user interference, inherent to any multi-user system; the second results from the users’ mobility, which causes imperfect channel estimates. To address both concerns, we propose a robust precoder and receiver that solve the max-min SINR problem. The performance of the proposed VLC design is studied under different working conditions, where a significant gain of the proposed robust transceivers over their non-robust counterparts is observed.

  15. Multi-User Identification-Based Eye-Tracking Algorithm Using Position Estimation

    Directory of Open Access Journals (Sweden)

    Suk-Ju Kang

    2016-12-01

    Full Text Available This paper proposes a new multi-user eye-tracking algorithm using position estimation. Conventional eye-tracking algorithms are typically suitable only for a single user and thereby cannot be used in a multi-user system. Even when they can track the eyes of multiple users, their detection accuracy is low and they cannot identify multiple users individually. The proposed algorithm solves these problems and enhances the detection accuracy. Specifically, the proposed algorithm adopts a classifier to detect faces in the red, green, and blue (RGB) and depth images. Then, it calculates features based on the histogram of oriented gradients for the detected facial region to identify multiple users, and selects the template that best matches each user from a pre-determined face database. Finally, the proposed algorithm extracts the final eye positions based on anatomical proportions. Simulation results show that the proposed algorithm improved the average F1 score by up to 0.490 compared with benchmark algorithms.
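
    The identification step described above (histogram-of-oriented-gradients features matched against a template database) can be sketched as follows. This is a deliberately coarse toy: one global orientation histogram instead of real HOG cells and block normalization, and synthetic striped "face" patches in place of detected facial regions. All names and data are hypothetical.

```python
import numpy as np

def hog_descriptor(patch, bins=9):
    """Coarse HOG-style descriptor: a single orientation histogram over
    the whole patch, weighted by gradient magnitude (real HOG uses cells
    and block normalization)."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % np.pi            # unsigned orientation
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    n = np.linalg.norm(hist)
    return hist / n if n > 0 else hist

def identify(patch, templates):
    """Return the key of the stored template whose descriptor is closest."""
    d = hog_descriptor(patch)
    return min(templates, key=lambda k: np.linalg.norm(d - templates[k]))

# Synthetic "faces": alice varies left-right, bob top-bottom, so their
# dominant gradient orientations differ sharply.
base = np.sin(np.arange(32) / 3.0) * 100 + 128
faces = {"alice": np.tile(base, (32, 1)), "bob": np.tile(base, (32, 1)).T}
templates = {name: hog_descriptor(img) for name, img in faces.items()}

# A noisy re-observation of "alice" should still match her template.
rng = np.random.default_rng(1)
noisy = faces["alice"] + rng.normal(0, 5, (32, 32))
print(identify(noisy, templates))
```

A real system would compute descriptors only inside the classifier-detected face region and compare against the pre-determined face database mentioned in the abstract.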

  16. On-line satellite/central computer facility of the Multiparticle Argo Spectrometer System

    International Nuclear Information System (INIS)

    Anderson, E.W.; Fisher, G.P.; Hien, N.C.; Larson, G.P.; Thorndike, A.M.; Turkot, F.; von Lindern, L.; Clifford, T.S.; Ficenec, J.R.; Trower, W.P.

    1974-09-01

    An on-line satellite/central computer facility has been developed at Brookhaven National Laboratory as part of the Multiparticle Argo Spectrometer System (MASS). This facility, consisting of a PDP-9 and a CDC-6600, has been successfully used in the study of proton-proton interactions at 28.5 GeV/c. (U.S.)

  17. Computer applications for the Fast Flux Test Facility

    International Nuclear Information System (INIS)

    Worth, G.A.; Patterson, J.R.

    1976-01-01

    Computer applications for the FFTF reactor include plant surveillance functions and fuel handling and examination control functions. Plant surveillance systems provide the reactor operator with a selection of over forty continuously updated, formatted displays of correlated data. All data are checked for limits and validity and the operator is advised of any anomaly. Data are also recorded on magnetic tape for historical purposes. The system also provides calculated variables, such as reactor thermal power and anomalous reactivity. Supplementing the basic plant surveillance computer system is a minicomputer system that monitors the reactor cover gas to detect and characterize absorber or fuel pin failures. In addition to plant surveillance functions, computers are used in the FFTF for controlling selected refueling equipment and for post-irradiation fuel pin examination. Four fuel handling or examination systems operate under computer control with manual monitoring and override capability.

  18. Implementation of computer security at nuclear facilities in Germany

    Energy Technology Data Exchange (ETDEWEB)

    Lochthofen, Andre; Sommer, Dagmar [Gesellschaft fuer Anlagen- und Reaktorsicherheit mbH (GRS), Koeln (Germany)

    2013-07-01

    In recent years, electrical and I&C components in nuclear power plants (NPPs) have been replaced by software-based components. With the increased number of software-based systems, the threat of malevolent interference and cyber-attacks on NPPs has also increased. In order to maintain nuclear security, conventional physical protection measures and protection measures in the field of computer security have to be implemented. Therefore, the existing security management process of the NPPs has to be expanded to cover computer security aspects. In this paper, we give an overview of computer security requirements for German NPPs. Furthermore, some examples of the implementation of computer security projects based on a GRS best-practice approach are shown. (orig.)

  19. Implementation of computer security at nuclear facilities in Germany

    International Nuclear Information System (INIS)

    Lochthofen, Andre; Sommer, Dagmar

    2013-01-01

    In recent years, electrical and I&C components in nuclear power plants (NPPs) have been replaced by software-based components. With the increased number of software-based systems, the threat of malevolent interference and cyber-attacks on NPPs has also increased. In order to maintain nuclear security, conventional physical protection measures and protection measures in the field of computer security have to be implemented. Therefore, the existing security management process of the NPPs has to be expanded to cover computer security aspects. In this paper, we give an overview of computer security requirements for German NPPs. Furthermore, some examples of the implementation of computer security projects based on a GRS best-practice approach are shown. (orig.)

  20. Computer-aided system for cryogenic research facilities

    International Nuclear Information System (INIS)

    Gerasimov, V.P.; Zhelamsky, M.V.; Mozin, I.V.; Repin, S.S.

    1994-01-01

    A computer-aided system has been developed for the more effective choice and optimization of the design and manufacturing technologies of the superconductor for the magnet system of the International Thermonuclear Experimental Reactor (ITER), with the aim of ensuring superconductor certification. The computer-aided system provides acquisition, processing, storage and display of data describing the tests in progress, as well as the detection of any parameter deviations and their analysis. In addition, it generates commands to switch off the equipment in emergency situations. ((orig.))

  1. Action tagging in a multi-user indoor environment for behavioural analysis purposes.

    Science.gov (United States)

    Guerra, Claudio; Bianchi, Valentina; De Munari, Ilaria; Ciampolini, Paolo

    2015-01-01

    The EU population is ageing, and ICT-based solutions are expected to provide support in the challenges implied by this demographic change. At the University of Parma an AAL (Ambient Assisted Living) system, named CARDEA, has been developed. In this paper a new feature of the system is introduced, in which environmental and personal (i.e., wearable) sensors coexist, providing an accurate picture of the user's activity and needs. Environmental devices may greatly help in performing activity recognition and behavioral analysis tasks. However, in a multi-user environment, this implies the need to attribute an environmental sensor's outcome to a specific user, i.e., identifying the user who performs the task detected by an environmental device. We implemented such an "action tagging" feature, based on information fusion, within the CARDEA environment, as an inexpensive alternative solution to the problematic issue of indoor localization.
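
    The "action tagging" idea above, attributing an environmental sensor event to one of several users by fusing it with wearable data, can be illustrated with a toy temporal-matching rule: the event is tagged to the user whose wearable reported motion closest in time. All timestamps and the matching window are invented; CARDEA's actual fusion logic is richer than this.

```python
# Toy "action tagging": attribute an environmental sensor event to the
# user whose wearable reported activity closest in time to the event.

wearable_log = {
    "user_A": [10.2, 33.0, 61.5],     # seconds at which motion was sensed
    "user_B": [12.9, 47.1, 60.2],
}

def tag_event(event_time, logs, window=2.0):
    """Return the user with a wearable sample nearest the event,
    or None if nobody moved within the time window."""
    best_user, best_gap = None, window
    for user, times in logs.items():
        gap = min(abs(t - event_time) for t in times)
        if gap <= best_gap:
            best_user, best_gap = user, gap
    return best_user

print(tag_event(12.5, wearable_log))   # fridge opened at t = 12.5 s
print(tag_event(25.0, wearable_log))   # no wearable activity nearby
```

The first event falls 0.4 s from a user_B sample, so it is tagged to user_B; the second has no wearable activity within the window and stays unattributed.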

  2. On the Impact of Multi-User Traffic Dynamics on Low Latency Communications

    DEFF Research Database (Denmark)

    Gerardino, Guillermo Andrés Pocovi; Pedersen, Klaus I.; Alvarez, Beatriz Soret

    2016-01-01

    In this paper we study the downlink latency performance in a multi-user cellular network. We use a flexible 5G radio frame structure, where the TTI size is configurable on a per-user basis according to their specific service requirements. Results show that at low system loads, using a short TTI (e.g. 0.25 ms) is an attractive solution to achieve low latency communications (LLC). The main benefit comes from the low transmission delay required to transmit the payloads. However, as the load increases, longer TTI configurations with lower relative control overhead (and therefore higher spectral efficiency) provide better performance, as these better cope with the non-negligible queuing delay. The presented results allow us to conclude that support for scheduling with different TTI sizes is important for LLC and should be included in the future 5G...
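
    The trade-off described above can be caricatured numerically: a short TTI lowers the transmission delay, but its larger relative control overhead cuts the usable rate and inflates queuing delay at high load. The sketch below uses an M/D/1 mean-waiting-time approximation with invented overhead figures; it illustrates the qualitative crossover, not the paper's system-level simulations.

```python
# Toy illustration of the TTI trade-off: transmission delay scales with
# the TTI size, while control overhead (larger for short TTIs) reduces
# the usable capacity and drives up queueing delay at high load.
# Overhead and load figures are invented for illustration.

def mean_latency(tti_ms, overhead, offered_load):
    service = tti_ms                        # one payload fits in one TTI
    rho = offered_load / (1 - overhead)     # effective utilization
    if rho >= 1:
        return float("inf")                 # unstable queue
    wait = rho * service / (2 * (1 - rho))  # M/D/1 mean waiting time
    return wait + service

short = lambda load: mean_latency(0.25, 0.35, load)   # 0.25 ms TTI
long_ = lambda load: mean_latency(1.0, 0.10, load)    # 1 ms TTI

print("low load : short=%.2f ms  long=%.2f ms" % (short(0.3), long_(0.3)))
print("high load: short=%.2f ms  long=%.2f ms" % (short(0.8), long_(0.8)))
```

At low load the short TTI wins on transmission delay; at high load its overhead pushes utilization past the stability limit while the long TTI still delivers finite latency, matching the paper's conclusion that both configurations are needed.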

  3. Multi-User Preemptive Scheduling For Critical Low Latency Communications in 5G Networks

    DEFF Research Database (Denmark)

    Abdul-Mawgood Ali Ali Esswie, Ali; Pedersen, Klaus

    2018-01-01

    5G new radio is envisioned to support three major service classes: enhanced mobile broadband (eMBB), ultra-reliable low-latency communications (URLLC), and massive machine-type communications. Emerging URLLC services require up to one millisecond of communication latency with 99.999% success probability. However, there is a fundamental trade-off between system spectral efficiency (SE) and achievable latency. This calls for novel scheduling protocols which cross-optimize system performance on a user-centric, instead of a network-centric, basis. In this paper, we develop a joint multi-user preemptive scheduling strategy to simultaneously cross-optimize system SE and URLLC latency. At each scheduling opportunity, available URLLC traffic is always given higher priority. When sporadic URLLC traffic appears during a transmission time interval (TTI), the proposed scheduler seeks to fit the URLLC-eMBB traffic...

  4. CGLXTouch: A multi-user multi-touch approach for ultra-high-resolution collaborative workspaces

    KAUST Repository

    Ponto, Kevin

    2011-06-01

    This paper presents an approach for empowering collaborative workspaces through ultra-high resolution tiled display environments concurrently interfaced with multiple multi-touch devices. Multi-touch table devices are supported along with portable multi-touch tablet and phone devices, which can be added to and removed from the system on the fly. Events from these devices are tagged with a device identifier and are synchronized with the distributed display environment, enabling multi-user support. As many portable devices are not equipped to render content directly, a remotely rendered scene is streamed in. The presented approach scales to large numbers of devices, providing access to a multitude of hands-on techniques for collaborative data analysis. © 2011 Elsevier B.V. All rights reserved.

  5. Proof-of-Concept System for Opportunistic Spectrum Access in Multi-user Decentralized Networks

    Directory of Open Access Journals (Sweden)

    Sumit J. Darak

    2016-09-01

    Full Text Available Poor utilization of the electromagnetic spectrum and the ever-increasing demand for spectrum have led to a surge of interest in opportunistic spectrum access (OSA) based paradigms like cognitive radio and unlicensed LTE. In OSA for decentralized networks, frequency band selection from a wideband spectrum is a challenging task since secondary users (SUs) do not share any information with each other. In this paper, a new decision making policy (DMP) is proposed for OSA in multi-user decentralized networks. The first contribution is an accurate characterization of frequency bands using the Bayes-UCB algorithm. Then, a novel SU orthogonalization scheme using the Bayes-UCB algorithm is proposed, replacing the randomization based scheme. Finally, a USRP testbed has been developed for analyzing the performance of DMPs using real radio signals. Experimental results show that the proposed DMP offers a significant improvement in spectrum utilization, with fewer subband switches and collisions compared to other DMPs.
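
    The band-characterization step above is a multi-armed bandit problem: each frequency band is an arm whose reward is whether it was found idle. As a minimal stand-in for the paper's Bayes-UCB, the sketch below runs the classical UCB1 index policy over three hypothetical channels with invented idle probabilities.

```python
import math
import random

def ucb1_select(counts, rewards, t):
    """Pick the channel maximizing empirical mean + exploration bonus."""
    for k in range(len(counts)):
        if counts[k] == 0:
            return k                       # sense each channel once first
    return max(range(len(counts)),
               key=lambda k: rewards[k] / counts[k]
                             + math.sqrt(2 * math.log(t) / counts[k]))

random.seed(42)
p_idle = [0.2, 0.5, 0.8]                   # hypothetical idle probabilities
counts = [0] * 3
rewards = [0.0] * 3
for t in range(1, 2001):
    k = ucb1_select(counts, rewards, t)
    reward = 1.0 if random.random() < p_idle[k] else 0.0  # band idle?
    counts[k] += 1
    rewards[k] += reward

best = max(range(3), key=lambda k: counts[k])
print("most-sensed channel:", best, "counts:", counts)
```

Over time the policy concentrates its sensing on the channel with the highest idle probability while still exploring the others; Bayes-UCB replaces the square-root bonus with a posterior quantile, and the paper additionally uses the indices to orthogonalize multiple SUs.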

  6. Performance Analysis of Diversity-Controlled Multi-User Superposition Transmission for 5G Wireless Networks.

    Science.gov (United States)

    Yeom, Jeong Seon; Chu, Eunmi; Jung, Bang Chul; Jin, Hu

    2018-02-10

    In this paper, we propose a novel low-complexity multi-user superposition transmission (MUST) technique for 5G downlink networks, which allows multiple cell-edge users to be multiplexed with a single cell-center user. We call the proposed technique the diversity-controlled MUST technique, since the cell-center user enjoys a frequency diversity effect via signal repetition over multiple orthogonal frequency division multiplexing (OFDM) sub-carriers. We assume that the base station is equipped with a single antenna while the users are equipped with multiple antennas, and that quadrature phase shift keying (QPSK) modulation is used for the users. We mathematically analyze the bit error rate (BER) of both cell-edge and cell-center users, which is the first theoretical result in the literature to the best of our knowledge. The mathematical analysis is validated through extensive link-level simulations.
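
    The superposition idea can be sketched with two QPSK streams sharing one transmit signal under a power split, with the cell-center receiver performing successive interference cancellation: decode the strong cell-edge signal, subtract it, then decode its own symbols. The power ratio, noise level, and single-carrier setup are illustrative assumptions, not the paper's analyzed configuration.

```python
import numpy as np

rng = np.random.default_rng(7)
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

n = 1000
sym_c = rng.integers(0, 4, n)           # cell-center symbol indices
sym_e = rng.integers(0, 4, n)           # cell-edge symbol indices

a_e, a_c = 0.8, 0.2                     # power split: edge user gets more
x = np.sqrt(a_e) * qpsk[sym_e] + np.sqrt(a_c) * qpsk[sym_c]
y = x + 0.02 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

def detect(sym):
    """Minimum-distance QPSK detection."""
    return np.argmin(np.abs(sym[:, None] - qpsk[None, :]), axis=1)

# Cell-center receiver: decode the strong edge signal first, subtract it
# (successive interference cancellation), then decode its own symbols.
est_e = detect(y / np.sqrt(a_e))
residual = y - np.sqrt(a_e) * qpsk[est_e]
est_c = detect(residual / np.sqrt(a_c))

print("edge SER:", np.mean(est_e != sym_e),
      "center SER:", np.mean(est_c != sym_c))
```

With this power split the superposed constellation keeps the component signs separable, so both streams decode cleanly at low noise; the paper's contribution is the closed-form BER analysis of this kind of receiver when the cell-center signal is additionally repeated across sub-carriers.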

  7. Following the actors and avatars of massively multi-user online role-playing games

    DEFF Research Database (Denmark)

    Jensen, Sisse Siggaard

    2007-01-01

    In the massively multi-user online role-playing games of e.g. EverQuest I & II and the World of Warcraft, millions of actors inhabit and create new places and spaces for communication and social interaction (Castranova 2001, Gee 2003, Goffman 1974/86, Jensen 2006a, Qvortrup 2001, 2002). Some ... held in high esteem by a group, or guild, of avatars and actors: these are activities which may be conceived of as being complex, reflective practices. To become a skilled, professional, high-level avatar is hard work; it may take months, and only then can the avatar perform without the many ... 1) the actors’ conceptions of the virtual worlds, 2) their choices and constructions of mediating avatars, 3) the diversity of social interactions, 4) the constructions of self experienced and expressed while reflecting on action and communication, and 5) the interplay between the virtual worlds and the actors’ life worlds ...

  8. AdaM: Adapting Multi-User Interfaces for Collaborative Environments in Real-Time

    DEFF Research Database (Denmark)

    Park, Seonwook; Gebhardt, Christoph; Rädle, Roman

    2018-01-01

    Developing cross-device multi-user interfaces (UIs) is a challenging problem. There are numerous ways in which content and interactivity can be distributed. However, good solutions must consider multiple users, their roles, their preferences and access rights, as well as device capabilities. Manual and rule-based solutions are tedious to create and do not scale to larger problems, nor do they adapt to dynamic changes, such as users leaving or joining an activity. In this paper, we cast the problem of UI distribution as an assignment problem and propose to solve it using combinatorial optimization. We present a mixed integer programming formulation which allows real-time applications in dynamically changing collaborative settings. It optimizes the allocation of UI elements based on device capabilities, user roles, preferences, and access rights. We present a proof-of-concept designer-in-the-loop tool...
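
    Casting UI distribution as an assignment problem can be illustrated in miniature: score each (element, device) pairing, forbid placements that violate access rights, and pick the assignment maximizing the total score. The brute-force search below stands in for the paper's mixed integer programming solver; elements, devices, scores, and constraints are all hypothetical.

```python
from itertools import product

# Hypothetical UI elements and devices.
elements = ["map", "chat", "controls"]
devices = ["wall_display", "tablet", "phone"]

# score[element][device]: how well the device suits the element
# (stands in for device capabilities, roles, and preferences).
score = {
    "map":      {"wall_display": 9, "tablet": 5, "phone": 2},
    "chat":     {"wall_display": 1, "tablet": 6, "phone": 8},
    "controls": {"wall_display": 3, "tablet": 9, "phone": 4},
}
# Access rights: "controls" may not go to the shared wall display.
forbidden = {("controls", "wall_display")}

def best_assignment():
    """Exhaustively search all element-to-device mappings."""
    best, best_val = None, float("-inf")
    for devs in product(devices, repeat=len(elements)):
        pairs = list(zip(elements, devs))
        if any(p in forbidden for p in pairs):
            continue
        val = sum(score[e][d] for e, d in pairs)
        if val > best_val:
            best, best_val = dict(pairs), val
    return best, best_val

assign, val = best_assignment()
print(assign, val)
```

Brute force is exponential in the number of elements, which is exactly why the paper formulates the problem as a MIP that can be re-solved in real time as users join or leave.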

  9. Performance of an opportunistic multi-user cognitive network with multiple primary users

    KAUST Repository

    Khan, Fahd Ahmed

    2014-04-01

    Consider a multi-user underlay cognitive network where multiple cognitive users, having limited peak transmit power, concurrently share the spectrum with a primary network with multiple users. The channels within the secondary network are assumed to have independent but not identical Nakagami-m fading. The interference channels between the secondary users and the primary users are assumed to have Rayleigh fading. The uplink scenario is considered, where a single secondary user is selected for transmission. This opportunistic selection depends on the transmission channel power gain and the interference channel power gain, as well as the power allocation policy adopted at the users. Exact closed-form expressions for the moment-generating function, the outage performance and the symbol-error-rate performance are derived. The outage performance is also studied in the asymptotic regimes, and the generalized diversity gain of this scheduling scheme is derived. Numerical results corroborate the derived analytical results.
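
    The selection rule described above can be sketched with a toy simulation: each secondary user's transmit power is capped by both its peak power and the primary interference limit, and the user with the best resulting SNR transmits. Exponential gains (i.e., Rayleigh power fading) stand in for the record's Nakagami-m channels, and all parameter values are invented.

```python
import random

random.seed(3)

P_peak = 1.0        # peak transmit power per secondary user
Q = 0.5             # interference power limit at the primary receiver
noise = 0.05

def select_su(n_users):
    """Pick the SU with the highest received SNR under the power policy
    P = min(P_peak, Q / g_interference)."""
    best_snr, best = -1.0, None
    for k in range(n_users):
        g_tx = random.expovariate(1.0)       # transmission channel gain
        g_if = random.expovariate(1.0)       # interference channel gain
        p = min(P_peak, Q / g_if)            # respect both constraints
        snr = p * g_tx / noise
        if snr > best_snr:
            best_snr, best = snr, k
    return best, best_snr

# Multi-user diversity: the average selected SNR grows with the pool size.
avg = lambda n, trials=2000: sum(select_su(n)[1] for _ in range(trials)) / trials
print("avg selected SNR, 2 users: %.1f  8 users: %.1f" % (avg(2), avg(8)))
```

The growth of the selected SNR with the number of users is the multi-user diversity gain that the paper quantifies exactly via closed-form outage and SER expressions.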

  10. Operational facility-integrated computer system for safeguards

    International Nuclear Information System (INIS)

    Armento, W.J.; Brooksbank, R.E.; Krichinsky, A.M.

    1980-01-01

    A computer system for safeguards in an active, remotely operated, nuclear fuel processing pilot plant has been developed. This system maintains (1) comprehensive records of special nuclear materials, (2) automatically updated book inventory files, (3) material transfer catalogs, (4) timely inventory estimations, (5) sample transactions, (6) automatic, on-line volume balances and alarms, and (7) terminal access and applications software monitoring and logging. Future development will include near-real-time SNM mass balancing as both a static, in-tank summation and a dynamic, in-line determination. It is planned to incorporate aspects of site security and physical protection into the computer monitoring.

  11. Computer program for source distribution process in radiation facility

    International Nuclear Information System (INIS)

    Al-Kassiri, H.; Abdul Ghani, B.

    2007-08-01

    A computer simulation of dose distribution has been written in Visual Basic according to the arrangement and activities of the Co-60 sources. The program provides the dose distribution in treated products depending on the product density and desired dose, and is useful for optimizing source distribution during the loading process. There is good agreement between the data calculated by the program and experimental data. (Author)

  12. NNS computing facility manual P-17 Neutron and Nuclear Science

    International Nuclear Information System (INIS)

    Hoeberling, M.; Nelson, R.O.

    1993-11-01

    This document describes basic policies and provides information and examples on using the computing resources provided by P-17, the Neutron and Nuclear Science (NNS) group. Information on user accounts, getting help, network access, electronic mail, disk drives, tape drives, printers, batch processing software, XSYS hints, PC networking hints, and Mac networking hints is given

  13. An Improved Digital Signature Protocol to Multi-User Broadcast Authentication Based on Elliptic Curve Cryptography in Wireless Sensor Networks (WSNs

    Directory of Open Access Journals (Sweden)

    Hamed Bashirpour

    2018-03-01

    Full Text Available In wireless sensor networks (WSNs), users can use broadcast authentication mechanisms to connect to the target network and disseminate their messages within the network. Since data transfer in sensor networks is wireless, attackers can easily eavesdrop on deployed sensor nodes and the data sent between them, or modify the content of eavesdropped data and inject false data into the sensor network. Hence, the implementation of message authentication mechanisms (in order to prevent modification and injection of messages) is essential for wireless sensor networks. In this paper, we present an improved protocol based on elliptic curve cryptography (ECC) to accelerate authentication of multi-user message broadcasting. In comparison with previous ECC-based schemes, the complexity and computational overhead of the proposed scheme are significantly decreased. The proposed scheme also supports user anonymity, an important property in broadcast authentication schemes for WSNs to preserve user privacy and prevent user tracking.
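
    The ECC-based signing and verification underlying such broadcast authentication can be illustrated with textbook ECDSA over a deliberately tiny curve (y² = x³ + 2x + 2 over F₁₇, generator (5, 1) of order 19, a standard teaching example). This is not the paper's improved protocol, and the curve is far too small for any real deployment; it only makes the mechanics visible.

```python
import hashlib
import random

# Toy curve y^2 = x^3 + 2x + 2 over F_17; G = (5, 1) has prime order 19.
p, a, b = 17, 2, 2
G, n = (5, 1), 19

def add(P, Q):
    """Elliptic-curve point addition; None is the point at infinity."""
    if P is None: return Q
    if Q is None: return P
    if P[0] == Q[0] and (P[1] + Q[1]) % p == 0:
        return None                      # P + (-P) = infinity
    if P == Q:
        m = (3 * P[0] ** 2 + a) * pow(2 * P[1], -1, p) % p
    else:
        m = (Q[1] - P[1]) * pow(Q[0] - P[0], -1, p) % p
    x = (m * m - P[0] - Q[0]) % p
    return (x, (m * (P[0] - x) - P[1]) % p)

def mul(k, P):
    """Double-and-add scalar multiplication."""
    R = None
    while k:
        if k & 1:
            R = add(R, P)
        P, k = add(P, P), k >> 1
    return R

def h(msg):
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

def sign(d, msg):
    while True:
        k = random.randrange(1, n)
        r = mul(k, G)[0] % n
        if r == 0:
            continue
        s = pow(k, -1, n) * (h(msg) + d * r) % n
        if s:
            return (r, s)

def verify(Q, msg, sig):
    r, s = sig
    w = pow(s, -1, n)
    X = add(mul(h(msg) * w % n, G), mul(r * w % n, Q))
    return X is not None and X[0] % n == r

random.seed(0)
d = random.randrange(1, n)               # broadcaster's private key
Q = mul(d, G)                            # public key held by sensor nodes
sig = sign(d, b"broadcast message")
print(verify(Q, b"broadcast message", sig))
```

On a realistically sized curve a modified message would fail verification; with n = 19 the hash is reduced so aggressively that accidental collisions are likely, which is one reason toy curves are never used in practice.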

  14. Improving CMS data transfers among its distributed computing facilities

    CERN Document Server

    Flix, J; Sartirana, A

    2001-01-01

    CMS computing needs reliable, stable and fast connections among multi-tiered computing infrastructures. For data distribution, the CMS experiment relies on a data placement and transfer system, PhEDEx, managing replication operations at each site in the distribution network. PhEDEx uses the File Transfer Service (FTS), a low level data movement service responsible for moving sets of files from one site to another, while allowing participating sites to control the network resource usage. FTS servers are provided by Tier-0 and Tier-1 centres and are used by all computing sites in CMS, according to the established policy. FTS needs to be set up according to the Grid site's policies, and properly configured to satisfy the requirements of all Virtual Organizations making use of the Grid resources at the site. Managing the service efficiently requires good knowledge of the CMS needs for all kinds of transfer workflows. This contribution deals with a revision of FTS servers used by CMS, collecting statistics on their usage, customizing the topologies and improving their setup in order to keep CMS transferring data at the desired levels in a reliable and robust way.

  15. Improving CMS data transfers among its distributed computing facilities

    CERN Document Server

    Flix, Jose

    2010-01-01

    CMS computing needs reliable, stable and fast connections among multi-tiered computing infrastructures. For data distribution, the CMS experiment relies on a data placement and transfer system, PhEDEx, managing replication operations at each site in the distribution network. PhEDEx uses the File Transfer Service (FTS), a low level data movement service responsible for moving sets of files from one site to another, while allowing participating sites to control the network resource usage. FTS servers are provided by Tier-0 and Tier-1 centres and are used by all computing sites in CMS, according to the established policy. FTS needs to be set up according to the Grid site's policies, and properly configured to satisfy the requirements of all Virtual Organizations making use of the Grid resources at the site. Managing the service efficiently requires good knowledge of the CMS needs for all kinds of transfer workflows. This contribution deals with a revision of FTS servers used by CMS, collecting statistics on their usage, customizing the topologies and improving their setup in order to keep CMS transferring data at the desired levels in a reliable and robust way.

  16. Improving CMS data transfers among its distributed computing facilities

    International Nuclear Information System (INIS)

    Flix, J; Magini, N; Sartirana, A

    2011-01-01

    CMS computing needs reliable, stable and fast connections among multi-tiered computing infrastructures. For data distribution, the CMS experiment relies on a data placement and transfer system, PhEDEx, managing replication operations at each site in the distribution network. PhEDEx uses the File Transfer Service (FTS), a low level data movement service responsible for moving sets of files from one site to another, while allowing participating sites to control the network resource usage. FTS servers are provided by Tier-0 and Tier-1 centres and are used by all computing sites in CMS, according to the established policy. FTS needs to be set up according to the Grid site's policies, and properly configured to satisfy the requirements of all Virtual Organizations making use of the Grid resources at the site. Managing the service efficiently requires good knowledge of the CMS needs for all kinds of transfer workflows. This contribution deals with a revision of FTS servers used by CMS, collecting statistics on their usage, customizing the topologies and improving their setup in order to keep CMS transferring data at the desired levels in a reliable and robust way.

  17. Implementation of the Facility Integrated Inventory Computer System (FICS)

    International Nuclear Information System (INIS)

    McEvers, J.A.; Krichinsky, A.M.; Layman, L.R.; Dunnigan, T.H.; Tuft, R.M.; Murray, W.P.

    1980-01-01

    This paper describes a computer system which has been developed for nuclear material accountability and implemented in an active radiochemical processing plant involving remote operations. The system possesses the following features: comprehensive, timely records of the location and quantities of special nuclear materials; automatically updated book inventory files on the plant and sub-plant levels of detail; material transfer coordination and cataloging; automatic inventory estimation; sample transaction coordination and cataloging; automatic on-line volume determination, limit checking, and alarming; extensive information retrieval capabilities; and terminal access and application software monitoring and logging.

  18. Hybrid Augmented Reality for Participatory Learning: The Hidden Efficacy of Multi-User Game-Based Simulation

    Science.gov (United States)

    Oh, Seungjae; So, Hyo-Jeong; Gaydos, Matthew

    2018-01-01

    The goal of this research is to articulate and test a new hybrid Augmented Reality (AR) environment for conceptual understanding. Through the theoretical lens of embodied interaction, we have designed a multi-user participatory simulation called ARfract, in which visitors to a science museum can learn about complex scientific concepts on the refraction…

  19. Transferring an educational board game to a multi-user mobile learning game to increase shared situational awareness

    NARCIS (Netherlands)

    Klemke, Roland; Kurapati, Shalini; Kolfschoten, Gwendolyn

    2013-01-01

    Klemke, R., Kurapati, S., & Kolfschoten, G. (2013, 6 June). Transferring an educational board game to a multi-user mobile learning game to increase shared situational awareness. In P. Rooney (Ed.), Proceedings of the 3rd Irish Symposium on Game Based Learning (pp. 8-9). Dublin, Ireland. Please see

  20. Transferring an educational board game to a multi-user mobile learning game to increase shared situational awareness

    NARCIS (Netherlands)

    Klemke, Roland; Kurapati, Shalini; Kolfschoten, Gwendolyn

    2013-01-01

    Klemke, R., Kurapati, S., & Kolfschoten, G. (2013, 6 June). Transferring an educational board game to a multi-user mobile learning game to increase shared situational awareness. Presentation at the 3rd Irish Symposium on Game Based Learning, Dublin, Ireland. Please see also

  1. Computer usage among nurses in rural health-care facilities in South Africa: obstacles and challenges.

    Science.gov (United States)

    Asah, Flora

    2013-04-01

    This study discusses factors inhibiting computer usage for work-related tasks among computer-literate professional nurses within rural health-care facilities in South Africa. In the past two decades computer literacy courses have not been part of the nursing curricula; computer courses are instead offered by the State Information Technology Agency. Despite this, there seems to be limited use of computers by professional nurses in the rural context. Focus group interviews were held with 40 professional nurses from three government hospitals in northern KwaZulu-Natal. Contributing factors were found to be a lack of information technology infrastructure, restricted access to computers, and deficits in technical and nursing management support. The physical location of computers within the health-care facilities and the lack of relevant software emerged as specific obstacles to usage. Provision of continuous and active support from nursing management could positively influence computer usage among professional nurses. A closer integration of information technology and computer literacy skills into existing nursing curricula would foster a positive attitude towards computer usage through early exposure. Responses indicated that a change of mindset may be needed on the part of nursing management so that they begin to actively promote ready access to computers as a means of creating greater professionalism and collegiality. © 2011 Blackwell Publishing Ltd.

  2. Development of computer model for radionuclide released from shallow-land disposal facility

    International Nuclear Information System (INIS)

    Suganda, D.; Sucipta; Sastrowardoyo, P.B.; Eriendi

    1998-01-01

    A one-dimensional computer model for radionuclide release from a shallow land disposal facility (SLDF) has been developed. The model is applied to the SLDF at PPTA Serpong, which sits 1.8 metres above the groundwater and 150 metres from the Cisalak river. An implicit finite-difference numerical method is chosen to predict the migration of a radionuclide of any concentration. The migration proceeds vertically from the bottom of the SLDF down to the groundwater layer, then horizontally in the groundwater towards the critical population group. The radionuclide Cs-137 is chosen as a representative case for studying the migration. The result of the assessment shows that the SLDF at PPTA Serpong meets high safety criteria. (author)
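
    An implicit finite-difference treatment of 1-D radionuclide migration, as described above, can be sketched with a backward-Euler advection-dispersion-decay solver. Grid size, dispersion coefficient, velocity, and boundary conditions are all invented illustration values, not the Serpong site parameters; only the Cs-137 half-life (about 30.17 years) is a physical constant.

```python
import numpy as np

# 1-D advection-dispersion-decay, solved implicitly (backward Euler):
#   dC/dt = D d2C/dx2 - v dC/dx - lam*C
nx, L = 101, 10.0                 # grid points, domain length (m)
dx = L / (nx - 1)
D, v = 0.05, 0.1                  # dispersion (m^2/d), pore velocity (m/d)
lam = np.log(2) / (30.17 * 365)   # Cs-137 decay constant (1/d)
dt, steps = 1.0, 365              # one year in daily steps

# Assemble the implicit operator (I - dt*A) with upwind advection;
# the implicit scheme is unconditionally stable, so dt is not limited
# by a CFL condition.
A = np.zeros((nx, nx))
for i in range(1, nx - 1):
    A[i, i - 1] = D / dx**2 + v / dx
    A[i, i]     = -2 * D / dx**2 - v / dx - lam
    A[i, i + 1] = D / dx**2
M = np.eye(nx) - dt * A

C = np.zeros(nx)
C[0] = 1.0                        # fixed unit concentration at the source
for _ in range(steps):
    C = np.linalg.solve(M, C)
    C[0], C[-1] = 1.0, 0.0        # Dirichlet boundaries re-imposed

print("concentration at mid-domain after 1 year: %.4f" % C[nx // 2])
```

In practice the tridiagonal system would be solved with the Thomas algorithm rather than a dense solver, and the vertical (unsaturated) and horizontal (groundwater) legs of the migration would each get their own parameter set.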

  3. Inspiring Equal Contribution and Opportunity in a 3D Multi-User Virtual Environment: Bringing Together Men Gamers and Women Non-Gamers in Second Life[R

    Science.gov (United States)

    deNoyelles, Aimee; Seo, Kay Kyeong-Ju

    2012-01-01

    A 3D multi-user virtual environment holds promise to support and enhance student online learning communities due to its ability to promote global synchronous interaction and collaboration, rich multisensory experience and expression, and elaborate design capabilities. Second Life[R], a multi-user virtual environment intended for adult users 18 and…

  4. Computer software configuration management plan for 200 East/West Liquid Effluent Facilities

    Energy Technology Data Exchange (ETDEWEB)

    Graf, F.A. Jr.

    1995-02-27

    This computer software configuration management plan covers the control of the software for the monitor and control system that operates the Effluent Treatment Facility and its associated truck load-in station, as well as some key aspects of the Liquid Effluent Retention Facility, which stores condensate to be processed. Also controlled are the Treated Effluent Disposal System's pumping stations; the system also monitors waste generator flows in this system as well as in the Phase Two Effluent Collection System.

  5. Computer software configuration management plan for 200 East/West Liquid Effluent Facilities

    International Nuclear Information System (INIS)

    Graf, F.A. Jr.

    1995-01-01

    This computer software configuration management plan covers the control of the software for the monitor and control system that operates the Effluent Treatment Facility and its associated truck load-in station, as well as some key aspects of the Liquid Effluent Retention Facility, which stores condensate to be processed. Also controlled are the Treated Effluent Disposal System's pumping stations; the system also monitors waste generator flows in this system as well as in the Phase Two Effluent Collection System.

  6. Computer control and data acquisition system for the R.F. Test Facility

    International Nuclear Information System (INIS)

    Stewart, K.A.; Burris, R.D.; Mankin, J.B.; Thompson, D.H.

    1986-01-01

    The Radio Frequency Test Facility (RFTF) at Oak Ridge National Laboratory, used to test and evaluate high-power ion cyclotron resonance heating (ICRH) systems and components, is monitored and controlled by a multicomponent computer system. This data acquisition and control system consists of three major hardware elements: (1) an Allen-Bradley PLC-3 programmable controller; (2) a VAX 11/780 computer; and (3) a CAMAC serial highway interface. Operating in LOCAL as well as REMOTE mode, the programmable logic controller (PLC) performs all the control functions of the test facility. The VAX computer acts as the operator's interface to the test facility by providing color mimic panel displays and allowing input via a trackball device. The VAX also provides archiving of trend data acquired by the PLC. Communications between the PLC and the VAX are via the CAMAC serial highway. Details of the hardware, software, and the operation of the system are presented in this paper

  7. ATLAS experience with HEP software at the Argonne leadership computing facility

    International Nuclear Information System (INIS)

    Uram, Thomas D; LeCompte, Thomas J; Benjamin, D

    2014-01-01

    A number of HEP software packages used by the ATLAS experiment, including GEANT4, ROOT and ALPGEN, have been adapted to run on the IBM Blue Gene supercomputers at the Argonne Leadership Computing Facility. These computers use a non-x86 architecture and have a considerably less rich operating environment than in common use in HEP, but also represent a computing capacity an order of magnitude beyond what ATLAS is presently using via the LCG. The status and potential for making use of leadership-class computing, including the status of integration with the ATLAS production system, is discussed.

  8. ATLAS Experience with HEP Software at the Argonne Leadership Computing Facility

    CERN Document Server

    LeCompte, T; The ATLAS collaboration; Benjamin, D

    2014-01-01

    A number of HEP software packages used by the ATLAS experiment, including GEANT4, ROOT and ALPGEN, have been adapted to run on the IBM Blue Gene supercomputers at the Argonne Leadership Computing Facility. These computers use a non-x86 architecture and have a considerably less rich operating environment than in common use in HEP, but also represent a computing capacity an order of magnitude beyond what ATLAS is presently using via the LCG. The status and potential for making use of leadership-class computing, including the status of integration with the ATLAS production system, is discussed.

  9. Operational Circular nr 5 - October 2000 USE OF CERN COMPUTING FACILITIES

    CERN Multimedia

    Division HR

    2000-01-01

    New rules covering the use of CERN computing facilities have been drawn up. All users of CERN’s computing facilities are subject to these rules, as well as to the subsidiary rules of use. The Computing Rules explicitly address your responsibility for taking reasonable precautions to protect computing equipment and accounts. In particular, passwords must not be easily guessed or obtained by others. Given the difficulty of completely separating work and personal use of computing facilities, the rules define the conditions under which limited personal use is tolerated. For example, limited personal use of e-mail, news groups or web browsing is tolerated in your private time, provided CERN resources and your official duties are not adversely affected. The full conditions governing use of CERN’s computing facilities are contained in Operational Circular N° 5, which you are requested to read. Full details are available at: http://www.cern.ch/ComputingRules Copies of the circular are also available in the Divis...

  10. Computer security at ukrainian nuclear facilities: interface between nuclear safety and security

    International Nuclear Information System (INIS)

    Chumak, D.; Klevtsov, O.

    2015-01-01

    Active introduction of information technology and computer instrumentation and control systems (I and C systems) in the nuclear field leads to greater efficiency and better management of technological processes at nuclear facilities. However, this trend brings a number of challenges related to cyber-attacks on these elements, which violate computer security as well as the nuclear safety and security of a nuclear facility. This paper considers regulatory support for computer security at nuclear facilities in Ukraine. The issue of computer and information security is considered in the context of physical protection, of which it is an integral component. The paper focuses on the computer security of I and C systems important to nuclear safety. These systems are potentially vulnerable to cyber threats, and in the case of a cyber-attack, the potential negative impact on normal operational processes can lead to a breach of nuclear facility security. Because ensuring the computer security of I and C systems interacts with nuclear safety, the paper also considers an example of an integrated approach to nuclear safety and security requirements.

  11. Exploiting Multi-user Diversity and Multi-hop Diversity in Dual-hop Broadcast Channels

    KAUST Repository

    Zafar, Ammar

    2013-05-21

    We propose joint user-and-hop scheduling over dual-hop block-fading broadcast channels in order to exploit multi-user diversity gains and multi-hop diversity gains all together. To achieve this objective, the first and second hops are scheduled opportunistically based on the channel state information. The joint scheduling problem is formulated as maximizing the weighted sum of the long-term achievable rates of the users under a stability constraint, which means that in the long term the rate received by the relay should equal the rate transmitted by it, in addition to power constraints. We show that this problem is equivalent to a single-hop broadcast channel problem in which the source is treated as a virtual user with an optimal weight that maintains the stability constraint. We show how to obtain the source weight either offline, based on channel statistics, or in real time, based on channel measurements. Furthermore, we consider special cases including the maximum sum-rate scheduler and the proportional fair scheduler. We also show how to extend the scheme into one that allows multiple-user scheduling via superposition coding with successive decoding. Numerical results demonstrate that our proposed joint scheduling scheme enlarges the rate region as compared to scheduling schemes that exploit the diversity gains only partially.
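The per-block decision described above can be sketched as follows: the source is treated as a virtual user competing with the second-hop users, and whichever weighted achievable rate is largest wins the block. This is an illustrative simplification with a fixed source weight (the paper tunes that weight, offline or in real time, to maintain the stability constraint); all names and the unit-noise default are assumptions.

```python
import math

def schedule_slot(user_weights, user_gains, source_weight, source_gain, noise=1.0):
    """One scheduling decision per fading block: serve the best weighted
    second-hop user, or the first hop if the virtual 'source user' wins."""
    best_user, best_metric = None, -1.0
    for uid, weight in user_weights.items():
        # Achievable rate on the relay -> user link in this block.
        rate = math.log2(1.0 + user_gains[uid] / noise)
        if weight * rate > best_metric:
            best_user, best_metric = uid, weight * rate
    # The source acts as a virtual user with weight source_weight.
    source_metric = source_weight * math.log2(1.0 + source_gain / noise)
    if source_metric > best_metric:
        return ("first_hop", None)    # source transmits to the relay
    return ("second_hop", best_user)  # relay transmits to the chosen user
```

With equal user weights the rule reduces to serving the user with the strongest channel, i.e. the maximum sum-rate special case mentioned in the abstract.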

  12. Efficiently Multi-User Searchable Encryption Scheme with Attribute Revocation and Grant for Cloud Storage.

    Science.gov (United States)

    Wang, Shangping; Zhang, Xiaoxue; Zhang, Yaling

    2016-01-01

    Ciphertext-policy attribute-based encryption (CP-ABE) focuses on the problem of access control, while keyword-based searchable encryption schemes focus on quickly finding the files a user is interested in within cloud storage. Designing an encryption scheme that is both searchable and attribute-based is a new challenge. In this paper, we propose an efficient multi-user searchable attribute-based encryption scheme with attribute revocation and grant for cloud storage. In the new scheme, the attribute revocation and grant processes are delegated to a proxy server, and multiple attributes can be revoked and granted simultaneously. Moreover, the keyword search function is achieved in our proposed scheme. The security of our proposed scheme reduces to the bilinear Diffie-Hellman (BDH) assumption. Furthermore, the scheme is proven secure under the security model of indistinguishability against selective ciphertext-policy and chosen plaintext attack (IND-sCP-CPA), and it is also semantically secure under indistinguishability against chosen keyword attack (IND-CKA) in the random oracle model.

  13. Single-user MIMO versus multi-user MIMO in distributed antenna systems with limited feedback

    Science.gov (United States)

    Schwarz, Stefan; Heath, Robert W.; Rupp, Markus

    2013-12-01

    This article investigates the performance of cellular networks employing distributed antennas in addition to the central antennas of the base station. Distributed antennas are likely to be implemented using remote radio units, enabled by a low-latency, high-bandwidth dedicated link to the base station. This facilitates coherent transmission from potentially all available antennas at the same time. Such a distributed antenna system (DAS) is an effective way to deal with path loss and large-scale fading in cellular systems. A DAS can apply precoding across multiple transmission points to implement single-user MIMO (SU-MIMO) and multi-user MIMO (MU-MIMO) transmission. The throughput performance of various SU-MIMO and MU-MIMO transmission strategies is investigated in this article, employing a Long-Term Evolution (LTE) standard-compliant simulation framework. The previously theoretically established cell-capacity improvement of MU-MIMO in comparison to SU-MIMO in DASs is confirmed under the practical constraints imposed by the LTE standard, even under the assumption of imperfect channel state information (CSI) at the base station. Because practical systems will use quantized feedback, the performance of different CSI feedback algorithms for DASs is investigated. It is shown that significant gains in CSI quantization accuracy, and in the throughput of especially MU-MIMO systems, can be achieved with relatively simple quantization codebook constructions that exploit the available temporal correlation and channel gain differences.
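The basic limited-feedback step evaluated above can be illustrated in its simplest form: the receiver quantizes its channel direction to the best-aligned codeword in a codebook shared with the base station and feeds back only that index. This sketch omits the temporal-correlation and gain-difference refinements the article studies; the function name and identity codebook in the usage note are hypothetical.

```python
import numpy as np

def quantize_csi(h, codebook):
    """Return the feedback index for channel vector h: the row of
    `codebook` whose codeword maximizes |c_i^H h|. Rows are assumed
    to be unit-norm codewords known to both ends of the link."""
    h = h / np.linalg.norm(h)  # only the channel direction is fed back
    return int(np.argmax(np.abs(codebook.conj() @ h)))
```

For example, with the 2x2 identity matrix as a toy codebook, a channel pointing along the second axis is quantized to index 1.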

  14. A multi-user selective undo/redo approach for collaborative CAD systems

    Directory of Open Access Journals (Sweden)

    Yuan Cheng

    2014-04-01

    The engineering design process is a creative process, and designers must repeatedly apply Undo/Redo operations to modify CAD models and explore new solutions. Undo/Redo has become one of the most important functions in interactive graphics and CAD systems. Undo/Redo in a collaborative CAD system is also very helpful for collaborative awareness among a group of cooperating designers, eliminating misunderstanding and enabling recovery from design errors. However, Undo/Redo in a collaborative CAD system is much more complicated, because a single erroneous operation is propagated to other remote sites and operations are interleaved at different sites. This paper presents a multi-user selective Undo/Redo approach for fully distributed collaborative CAD systems. We use site IDs and State Vectors to locate the Undo/Redo target at each site. By analyzing the composition of the complex CAD model, a tree-like structure called the Feature Combination Hierarchy is presented to describe the decomposition of a CAD model. Based on this structure, the dependency relationship among features is clarified, and B-Rep re-evaluation is simplified with the assistance of the Feature Combination Hierarchy. It can be proven that the proposed Undo/Redo approach satisfies the intention preservation and consistency maintenance correctness criteria for collaborative systems.
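A minimal sketch of how a site ID plus a State Vector can locate an Undo target: each executed operation is tagged with its originating site and per-site sequence number, and the State Vector records the highest sequence number a site has seen from each peer. This is an illustration only, not the paper's mechanism, which additionally resolves feature dependencies through the Feature Combination Hierarchy.

```python
def locate_undo_target(history, site_id, state_vector):
    """Find the most recent operation from `site_id` that this site has
    already integrated, i.e. whose sequence number is covered by the
    State Vector (a dict mapping site -> highest sequence number seen).
    `history` is a list of (site, seq, op) in execution order."""
    for site, seq, op in reversed(history):
        if site == site_id and seq <= state_vector.get(site, 0):
            return op
    return None  # nothing from that site is eligible for Undo here
```

Because every site shares the same (site, seq) labels, two replicas with different interleavings still agree on which operation a selective Undo refers to.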

  15. Full-diversity partial interference cancellation for multi-user wireless relaying networks

    KAUST Repository

    El Astal, M. T O

    2013-12-01

    We focus on the uplink channel of multi-user wireless relaying networks in a coverage extension scenario. The network consists of two users, a single half-duplex (HD) relay and a destination, all equipped with multiple antennas. Perfect channel state information (CSI) is assumed to be available exclusively at the receiving nodes (i.e., the relay and the destination), while the users are assumed to be completely blind. The communication through the considered network takes place over two phases. During the first phase, both users send their information concurrently to the relay. The second phase consists of decoding the received data and forwarding it simultaneously to the destination. A transmission scheme that achieves full diversity under partial interference cancellation (PIC) group decoding is proposed. Unlike many existing schemes, it allows concurrent transmission in both phases while achieving the full diversity gain of full time division multiple access (TDMA) transmission regardless of the number of antennas at each node. A numerical comparison with existing schemes in the literature is provided to corroborate our theoretical claims. It is found that our interference cancellation (IC) scheme clearly outperforms existing schemes at the expense of an affordable increase in decoding complexity at both the relay and the destination. © 2013 IEEE.

  16. Interaction Control Protocols for Distributed Multi-user Multi-camera Environments

    Directory of Open Access Journals (Sweden)

    Gareth W Daniel

    2003-10-01

    Video-centred communication (e.g., video conferencing, multimedia online learning, traffic monitoring, and surveillance) is becoming a customary activity in our lives. The management of interactions in such an environment is a complicated HCI issue. In this paper, we present our study on a collection of interaction control protocols for distributed multi-user multi-camera environments. These protocols facilitate different approaches to managing a user's entitlement to control a particular camera. We describe a web-based system that allows multiple users to manipulate multiple cameras in varying remote locations. The system was developed using the Java framework, and all protocols discussed have been incorporated into the system. Experiments were designed and conducted to evaluate the effectiveness of these protocols and to enable the identification of various human factors in a distributed multi-user multi-camera environment. This work provides insight into the complexity associated with interaction management in video-centred communication. It can also serve as a conceptual and experimental framework for further research in this area.

  17. Full-diversity partial interference cancellation for multi-user wireless relaying networks

    KAUST Repository

    El Astal, M. T O; Ismail, Amr; Alouini, Mohamed-Slim; Olivier, Jan Corné

    2013-01-01

    We focus on the uplink channel of multi-user wireless relaying networks in a coverage extension scenario. The network consists of two users, a single half-duplex (HD) relay and a destination, all equipped with multiple antennas. Perfect channel state information (CSI) is assumed to be available exclusively at the receiving nodes (i.e., the relay and the destination), while the users are assumed to be completely blind. The communication through the considered network takes place over two phases. During the first phase, both users send their information concurrently to the relay. The second phase consists of decoding the received data and forwarding it simultaneously to the destination. A transmission scheme that achieves full diversity under partial interference cancellation (PIC) group decoding is proposed. Unlike many existing schemes, it allows concurrent transmission in both phases while achieving the full diversity gain of full time division multiple access (TDMA) transmission regardless of the number of antennas at each node. A numerical comparison with existing schemes in the literature is provided to corroborate our theoretical claims. It is found that our interference cancellation (IC) scheme clearly outperforms existing schemes at the expense of an affordable increase in decoding complexity at both the relay and the destination. © 2013 IEEE.

  18. Substring Position Search over Encrypted Cloud Data Supporting Efficient Multi-User Setup

    Directory of Open Access Journals (Sweden)

    Mikhail Strizhov

    2016-07-01

    Existing Searchable Encryption (SE) solutions are able to handle simple Boolean search queries, such as single- or multi-keyword queries, but cannot handle substring search queries over encrypted data that also involve identifying the position of the substring within the document. These types of queries are relevant in areas such as searching DNA data. In this paper, we propose a tree-based Substring Position Searchable Symmetric Encryption (SSP-SSE) scheme to overcome the existing gap. Our solution efficiently finds occurrences of a given substring over encrypted cloud data. Specifically, our construction uses the position heap tree data structure and achieves asymptotic efficiency comparable to that of an unencrypted position heap tree. Our encryption takes O(kn) time, and the resulting ciphertext is of size O(kn), where k is a security parameter and n is the size of the stored data. The search takes O(m^2 + occ) time and three rounds of communication, where m is the length of the queried substring and occ is the number of occurrences of the substring in the document collection. We prove that the proposed scheme is secure against chosen-query attacks that involve an adaptive adversary. Finally, we extend SSP-SSE to the multi-user setting, where an arbitrary group of cloud users can submit substring queries to search the encrypted data.
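Stripped of encryption, the query functionality SSP-SSE provides is "return every position where the pattern occurs." A plaintext analogue (a naive scan, not the paper's O(m^2 + occ) position-heap search) makes the interface concrete:

```python
def substring_positions(text, pattern):
    """Plaintext analogue of the substring-position query: return every
    index at which `pattern` occurs in `text`, overlapping matches included."""
    positions, start = [], 0
    while True:
        idx = text.find(pattern, start)
        if idx == -1:
            return positions
        positions.append(idx)
        start = idx + 1  # advance one character so overlaps are found
```

For instance, querying "ana" against "banana" yields positions [1, 3]; SSP-SSE answers the same query while both the text and the pattern remain encrypted.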

  19. Performance analysis of an opportunistic multi-user cognitive network with multiple primary users

    KAUST Repository

    Khan, Fahd Ahmed

    2014-03-01

    Consider a multi-user underlay cognitive network where multiple cognitive users concurrently share the spectrum with a primary network with multiple users. The channels within the secondary network are assumed to have independent but not identically distributed Nakagami-m fading. The interference channels between the secondary users (SUs) and the primary users are assumed to have Rayleigh fading. A power allocation based on the instantaneous channel state information is derived when a peak interference power constraint is imposed on the secondary network in addition to the limited peak transmit power of each SU. The uplink scenario is considered, where a single SU is selected for transmission. This opportunistic selection depends on the transmission channel power gain and the interference channel power gain, as well as the power allocation policy adopted at the users. Exact closed-form expressions for the moment-generating function, outage performance, symbol error rate performance, and ergodic capacity are derived. Numerical results corroborate the derived analytical results. The performance is also studied in the asymptotic regimes, and the generalized diversity gain of this scheduling scheme is derived. It is shown that when the interference channel is deeply faded and the peak transmit power constraint is relaxed, the scheduling scheme achieves full diversity and that increasing the number of primary users does not impact the diversity order. © 2014 John Wiley & Sons, Ltd.
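The selection rule described above can be sketched as follows: each secondary user transmits at the smaller of its peak power and the power that keeps the interference at the worst-affected primary user below the peak interference limit, and the user with the best resulting SNR is scheduled. Variable names and the unit-noise default are illustrative assumptions, not the paper's notation.

```python
def allocate_and_select(users, p_peak, q_peak, noise=1.0):
    """users: dict uid -> (h, g_max), where h is the transmission channel
    power gain and g_max the largest interference channel power gain toward
    any primary user. Each SU's power is capped by both its peak transmit
    power p_peak and the peak interference constraint q_peak / g_max; the
    SU with the highest resulting SNR is selected for uplink transmission."""
    best_uid, best_snr = None, -1.0
    for uid, (h, g_max) in users.items():
        p = min(p_peak, q_peak / g_max)   # instantaneous power allocation
        snr = p * h / noise
        if snr > best_snr:
            best_uid, best_snr = uid, snr
    return best_uid, best_snr
```

Note how a user with a strong transmission channel can still lose the selection if its interference channel toward the primary users is strong, since its allowed power shrinks.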

  20. OFDM and MC-CDMA for broadband multi-user communications WLANs and broadcasting

    CERN Document Server

    2003-01-01

    "OFDM systems have experienced increased attention in recent years and have found applications in a number of diverse areas including telephone-line based ADSL links, digital audio and video broadcasting systems, and wireless local area networks. OFDM is being considered for the next-generation of wireless systems both with and without direct sequence spreading and the resultant spreading-based multi-carrier CDMA systems have numerous attractive properties. This volume provides the reader with a broad overview of the research on OFDM systems during their 40-year history. Part I commences with an easy to read conceptual, rather than mathematical, treatment of the basic design issues of OFDM systems. The discussions gradually deepen to include adaptive single and multi-user OFDM systems invoking adaptive turbo coding. Part II introduces the taxonomy of multi-carrier CDMA systems and deals with the design of their spreading codes and the objective of minimising their crest factors. This part also compares the be...

  1. X-Switch: An Efficient, Multi-User, Multi-Language Web Application Server

    Directory of Open Access Journals (Sweden)

    Mayumbo Nyirenda

    2010-07-01

    Web applications are usually installed on and accessed through a Web server. For security reasons, these Web servers generally provide very few privileges to Web applications, defaulting to executing them in the realm of a guest account. In addition, performance is often a problem, as Web applications may need to be reinitialised with each access. Various solutions have been designed to address these security and performance issues, mostly independently of one another, but most have been language- or system-specific. The X-Switch system is proposed as an alternative Web application execution environment, with more secure user-based resource management, persistent application interpreters and support for arbitrary languages/interpreters. Thus it provides a general-purpose environment for developing and deploying Web applications. The X-Switch system's experimental results demonstrated that it can achieve a high level of performance. Furthermore, it was shown that X-Switch can provide functionality matching that of existing Web application servers, but with the added benefit of multi-user support. Finally, the X-Switch system showed that it is feasible to completely separate the deployment platform from the application code, thus ensuring that the developer does not need to modify his/her code to make it compatible with the deployment platform.

  2. Distributed computer controls for accelerator systems

    International Nuclear Information System (INIS)

    Moore, T.L.

    1988-09-01

    A distributed control system has been designed and installed at the Lawrence Livermore National Laboratory Multi-user Tandem Facility using an extremely modular approach in hardware and software. The two-tiered, geographically organized design allowed total system implementation within four months, with a computer and instrumentation cost of approximately $100K. Since the system structure is modular, application to a variety of facilities is possible. Such a system allows rethinking the operational style of these facilities, making highly reproducible and unattended operation possible. The adoption of industry standards, i.e., UNIX, CAMAC, and IEEE-802.3, and the use of a graphics-oriented controls software suite allowed the efficient implementation of the system. The definition, design, implementation, operation, and total system performance are discussed. 3 refs

  3. Evolution of facility layout requirements and CAD [computer-aided design] system development

    International Nuclear Information System (INIS)

    Jones, M.

    1990-06-01

    The overall configuration of the Superconducting Super Collider (SSC), including the infrastructure and land boundary requirements, was developed using a computer-aided design (CAD) system. The evolution of the facility layout requirements and the use of the CAD system are discussed. The emphasis has been on minimizing the amount of input required and maximizing the speed with which the output may be obtained. The computer system used to store the data is also described.

  4. The OSG Open Facility: an on-ramp for opportunistic scientific computing

    Science.gov (United States)

    Jayatilaka, B.; Levshina, T.; Sehgal, C.; Gardner, R.; Rynge, M.; Würthwein, F.

    2017-10-01

    The Open Science Grid (OSG) is a large, robust computing grid that started primarily as a collection of sites associated with large HEP experiments such as ATLAS, CDF, CMS, and DZero, but has evolved in recent years to a much larger user and resource platform. In addition to meeting the US LHC community’s computational needs, the OSG continues to be one of the largest providers of distributed high-throughput computing (DHTC) to researchers from a wide variety of disciplines via the OSG Open Facility. The Open Facility consists of OSG resources that are available opportunistically to users other than resource owners and their collaborators. In the past two years, the Open Facility has doubled its annual throughput to over 200 million wall hours. More than half of these resources are used by over 100 individual researchers from over 60 institutions in fields such as biology, medicine, math, economics, and many others. Over 10% of these individual users utilized in excess of 1 million computational hours each in the past year. The largest source of these cycles is temporary unused capacity at institutions affiliated with US LHC computational sites. An increasing fraction, however, comes from university HPC clusters and large national infrastructure supercomputers offering unused capacity. Such expansions have allowed the OSG to provide ample computational resources to both individual researchers and small groups as well as sizable international science collaborations such as LIGO, AMS, IceCube, and sPHENIX. Opening up access to the Fermilab FabrIc for Frontier Experiments (FIFE) project has also allowed experiments such as mu2e and NOvA to make substantial use of Open Facility resources, the former with over 40 million wall hours in a year. We present how this expansion was accomplished as well as future plans for keeping the OSG Open Facility at the forefront of enabling scientific research by way of DHTC.

  5. The OSG Open Facility: An On-Ramp for Opportunistic Scientific Computing

    Energy Technology Data Exchange (ETDEWEB)

    Jayatilaka, B. [Fermilab; Levshina, T. [Fermilab; Sehgal, C. [Fermilab; Gardner, R. [Chicago U.; Rynge, M. [USC - ISI, Marina del Rey; Würthwein, F. [UC, San Diego

    2017-11-22

    The Open Science Grid (OSG) is a large, robust computing grid that started primarily as a collection of sites associated with large HEP experiments such as ATLAS, CDF, CMS, and DZero, but has evolved in recent years to a much larger user and resource platform. In addition to meeting the US LHC community’s computational needs, the OSG continues to be one of the largest providers of distributed high-throughput computing (DHTC) to researchers from a wide variety of disciplines via the OSG Open Facility. The Open Facility consists of OSG resources that are available opportunistically to users other than resource owners and their collaborators. In the past two years, the Open Facility has doubled its annual throughput to over 200 million wall hours. More than half of these resources are used by over 100 individual researchers from over 60 institutions in fields such as biology, medicine, math, economics, and many others. Over 10% of these individual users utilized in excess of 1 million computational hours each in the past year. The largest source of these cycles is temporary unused capacity at institutions affiliated with US LHC computational sites. An increasing fraction, however, comes from university HPC clusters and large national infrastructure supercomputers offering unused capacity. Such expansions have allowed the OSG to provide ample computational resources to both individual researchers and small groups as well as sizable international science collaborations such as LIGO, AMS, IceCube, and sPHENIX. Opening up access to the Fermilab FabrIc for Frontier Experiments (FIFE) project has also allowed experiments such as mu2e and NOvA to make substantial use of Open Facility resources, the former with over 40 million wall hours in a year. We present how this expansion was accomplished as well as future plans for keeping the OSG Open Facility at the forefront of enabling scientific research by way of DHTC.

  6. The CT Scanner Facility at Stellenbosch University: An open access X-ray computed tomography laboratory

    Science.gov (United States)

    du Plessis, Anton; le Roux, Stephan Gerhard; Guelpa, Anina

    2016-10-01

    The Stellenbosch University CT Scanner Facility is an open access laboratory providing non-destructive X-ray computed tomography (CT) and high-performance image analysis services as part of the Central Analytical Facilities (CAF) of the university. Based in Stellenbosch, South Africa, the facility offers open access to the general user community, including local researchers, companies and remote users (both local and international, via sample shipment and data transfer). The laboratory hosts two CT instruments: a micro-CT system and a nano-CT system. A workstation-based Image Analysis Centre is equipped with numerous computers with data analysis software packages, which are at the disposal of facility users, along with expert supervision if required. All research disciplines are accommodated at the X-ray CT laboratory, provided that non-destructive analysis will be beneficial. During its first four years, the facility accommodated more than 400 unique users (33 in 2012; 86 in 2013; 154 in 2014; 140 in 2015; 75 in the first half of 2016), with diverse industrial and research applications using X-ray CT as a means of analysis. This paper summarises the laboratory's first four years by way of selected examples, both from published and unpublished projects. In the process, a detailed description of the capabilities and facilities available to users is presented.

  7. The CT Scanner Facility at Stellenbosch University: An open access X-ray computed tomography laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Plessis, Anton du, E-mail: anton2@sun.ac.za [CT Scanner Facility, Central Analytical Facilities, Stellenbosch University, Stellenbosch (South Africa); Physics Department, Stellenbosch University, Stellenbosch (South Africa); Roux, Stephan Gerhard le, E-mail: lerouxsg@sun.ac.za [CT Scanner Facility, Central Analytical Facilities, Stellenbosch University, Stellenbosch (South Africa); Guelpa, Anina, E-mail: aninag@sun.ac.za [CT Scanner Facility, Central Analytical Facilities, Stellenbosch University, Stellenbosch (South Africa)

    2016-10-01

    The Stellenbosch University CT Scanner Facility is an open access laboratory providing non-destructive X-ray computed tomography (CT) and high-performance image analysis services as part of the Central Analytical Facilities (CAF) of the university. Based in Stellenbosch, South Africa, the facility offers open access to the general user community, including local researchers, companies and remote users (both local and international, via sample shipment and data transfer). The laboratory hosts two CT instruments: a micro-CT system and a nano-CT system. A workstation-based Image Analysis Centre is equipped with numerous computers with data analysis software packages, which are at the disposal of facility users, along with expert supervision if required. All research disciplines are accommodated at the X-ray CT laboratory, provided that non-destructive analysis will be beneficial. During its first four years, the facility accommodated more than 400 unique users (33 in 2012; 86 in 2013; 154 in 2014; 140 in 2015; 75 in the first half of 2016), with diverse industrial and research applications using X-ray CT as a means of analysis. This paper summarises the laboratory’s first four years by way of selected examples, both from published and unpublished projects. In the process, a detailed description of the capabilities and facilities available to users is presented.

  8. Atmospheric dispersion calculations for postulated accidents at nuclear facilities and the computer code PANDA

    International Nuclear Information System (INIS)

    Kitahara, Yoshihisa; Kishimoto, Yoichiro; Narita, Osamu; Shinohara, Kunihiko

    1979-01-01

    Several calculation methods for the relative concentration (X/Q) and the relative cloud-gamma dose (D/Q) of radioactive materials released from nuclear facilities in a postulated accident are presented. The procedure has been formulated as the computer program PANDA, and its usage is explained. (author)
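For context, the relative concentration X/Q at ground level on the plume centerline takes, in the standard Gaussian plume model, the textbook form X/Q = exp(-H^2 / (2 sigma_z^2)) / (pi sigma_y sigma_z u). The sketch below implements this common formula for illustration only; the models actually used in PANDA are those described in the paper.

```python
import math

def chi_over_q(sigma_y, sigma_z, wind_speed, release_height):
    """Ground-level, centerline relative concentration X/Q in s/m^3 from
    the standard Gaussian plume model. sigma_y, sigma_z are the lateral
    and vertical dispersion coefficients (m) at the downwind distance of
    interest, wind_speed is in m/s, release_height (H) in m."""
    return (math.exp(-release_height**2 / (2.0 * sigma_z**2))
            / (math.pi * sigma_y * sigma_z * wind_speed))
```

For a ground-level release (H = 0) the exponential factor is 1, and X/Q reduces to 1 / (pi sigma_y sigma_z u); an elevated release always gives a smaller centerline ground-level value.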

  9. Taking the classical large audience university lecture online using tablet computer and webconferencing facilities

    DEFF Research Database (Denmark)

    Brockhoff, Per B.

    2011-01-01

    During four offerings (September 2008 – May 2011) of the course 02402 Introduction to Statistics for Engineering students at DTU, with an average of 256 students, the lecturing was carried out 100% through a tablet computer combined with the web conferencing facility Adobe Connect (version 7...

  10. Money for Research, Not for Energy Bills: Finding Energy and Cost Savings in High Performance Computer Facility Designs

    Energy Technology Data Exchange (ETDEWEB)

    Drewmark Communications; Sartor, Dale; Wilson, Mark

    2010-07-01

    High-performance computing facilities in the United States consume an enormous amount of electricity, cutting into research budgets and challenging public- and private-sector efforts to reduce energy consumption and meet environmental goals. However, these facilities can greatly reduce their energy demand through energy-efficient design of the facility itself. Using a case study of a facility under design, this article discusses strategies and technologies that can be used to help achieve energy reductions.

  11. Trends in the salience of data collected in a multi user virtual environment: An exploratory study

    Science.gov (United States)

    Tutwiler, M. Shane

    In this study, by exploring patterns in the degree of physical salience of the data the students collected, I investigated the relationship between the level of students' tendency to frame explanations in terms of complex patterns and evidence of how they attend to and select data in support of their developing understandings of causal relationships. I accomplished this by analyzing longitudinal data collected as part of a larger study of 143 7th grade students (clustered within 36 teams, 5 teachers, and 2 schools in the same Northeastern school district) as they navigated and collected data in an ecosystems-based multi-user virtual environment curriculum known as the EcoMUVE Pond module (Metcalf, Kamarainen, Tutwiler, Grotzer, & Dede, 2011). Using individual growth modeling (Singer & Willett, 2003), I found no direct link between student pre-intervention tendency to offer explanations containing complex causal components and patterns of physical salience-driven data collection (average physical salience level, number of low physical salience data points collected, and proportion of low physical salience data points collected), though prior science content knowledge did affect the initial status and rate of change of outcomes in the average physical salience level and proportion of low physical salience data collected over time. The findings of this study suggest two issues for consideration about the use of MUVEs to study student data collection behaviors in complex spaces. Firstly, the structure of the curriculum in which the MUVE is embedded might have a direct effect on what types of data students choose to collect. This undercuts our ability to make inferences about student-driven decisions to collect specific types of data, and suggests that a more open-ended curricular model might be better suited to this type of inquiry.
Secondly, differences between teachers' choices in how to facilitate the units likely contribute to the variance in student data collection

  12. Health workers' knowledge of and attitudes towards computer applications in rural African health facilities.

    Science.gov (United States)

    Sukums, Felix; Mensah, Nathan; Mpembeni, Rose; Kaltschmidt, Jens; Haefeli, Walter E; Blank, Antje

    2014-01-01

    The QUALMAT (Quality of Maternal and Prenatal Care: Bridging the Know-do Gap) project has introduced an electronic clinical decision support system (CDSS) for pre-natal and maternal care services in rural primary health facilities in Burkina Faso, Ghana, and Tanzania. This study reports an assessment of health providers' computer knowledge, experience, and attitudes prior to the implementation of the QUALMAT electronic CDSS. A cross-sectional study was conducted with providers in 24 QUALMAT project sites. Information was collected using structured questionnaires. Chi-squared tests and one-way ANOVA were used to describe the association between computer knowledge, attitudes, and other factors. Semi-structured interviews and focus groups were conducted to gain further insights. A total of 108 providers responded; 63% were from Tanzania and 37% from Ghana. The mean age was 37.6 years, and 79% were female. Only 40% had ever used computers, and 29% had prior computer training. About 80% were computer illiterate or beginners. Educational level, age, and years of work experience were significantly associated with computer knowledge (pworkplace. Given the low levels of computer knowledge among rural health workers in Africa, it is important to provide adequate training and support to ensure the successful uptake of electronic CDSSs in these settings. The positive attitudes to computers found in this study underscore that rural care providers, too, are ready to use such technology.

  13. Development of the computer code to monitor gamma radiation in the nuclear facility environment

    International Nuclear Information System (INIS)

    Akhmad, Y. R.; Pudjiyanto, M.S.

    1998-01-01

    Computer codes for gamma radiation monitoring in the vicinity of a nuclear facility have been developed and can be used with a commercial portable gamma analyzer. The crucial stage of the first-year activity was completed successfully: the codes were tested to transfer a data file (pulse-height distribution) from a Micro NOMAD gamma spectrometer (ORTEC product) and then convert it into dosimetry and physics quantities. These computer codes are called GABATAN (Gamma Analyzer of Batan) and NAGABAT (Natural Gamma Analyzer of Batan). The GABATAN code can be used at various nuclear facilities for analyzing gamma fields up to 9 MeV, while NAGABAT can be used for analyzing the contribution of natural gamma rays to the exposure rate at a certain location

  14. Computer program for storage of historical and routine safety data related to radiologically controlled facilities

    International Nuclear Information System (INIS)

    Marsh, D.A.; Hall, C.J.

    1984-01-01

    A method for tracking and quick retrieval of the radiological status of radiation and industrial safety systems in an active or inactive facility has been developed. The system uses a minicomputer, a graphics plotter, and mass storage devices. Software has been developed which allows input and storage of architectural details, radiological conditions such as exposure rates, current locations of safety systems, and routine and historical information on exposure and contamination levels. A blueprint-size digitizer is used for input. The computer program retains facility floor plans in three-dimensional arrays. The software accesses an eight-pen color plotter for output. The plotter generates color plots of the floor plans and safety systems on 8 1/2 x 11 or 20 x 30 paper or on overhead transparencies for reports and presentations

  15. Maintenance of reactor safety and control computers at a large government facility

    International Nuclear Information System (INIS)

    Brady, H.G.

    1985-01-01

    In 1950 the US Government contracted the Du Pont Company to design, build, and operate the Savannah River Plant (SRP). At the time, it was the largest construction project ever undertaken. It is still the largest of the Department of Energy facilities. In the nearly 35 years that have elapsed, Du Pont has met its commitments to the US Government and set world safety records in the construction and operation of nuclear facilities. Contributing factors in achieving production goals and setting the safety records are a staff of highly qualified personnel, a well-maintained plant, and sound maintenance programs. There have been many ''first ever'' achievements at SRP. These ''firsts'' include: (1) computer control of a nuclear reactor, and (2) use of computer systems as safety circuits. This presentation discusses the maintenance program provided for these computer systems and all digital systems at SRP. An in-house computer maintenance program that started in 1966 with five persons has grown to a staff of 40, with investments in computer hardware increasing from $4 million in 1970 to more than $60 million in this decade. 4 figs

  16. Opportunities for artificial intelligence application in computer- aided management of mixed waste incinerator facilities

    International Nuclear Information System (INIS)

    Rivera, A.L.; Ferrada, J.J.; Singh, S.P.N.

    1992-01-01

    The Department of Energy/Oak Ridge Field Office (DOE/OR) operates a mixed waste incinerator facility at the Oak Ridge K-25 Site. It is designed for the thermal treatment of incinerable liquid, sludge, and solid waste regulated under the Toxic Substances Control Act (TSCA) and the Resource Conservation and Recovery Act (RCRA). This facility, known as the TSCA Incinerator, services seven DOE/OR installations. The incinerator was recently authorized for production operation in the United States for the processing of mixed (radioactively contaminated, chemically hazardous) wastes as regulated under TSCA and RCRA. Operation of the TSCA Incinerator is highly constrained as a result of regulatory, institutional, technical, and resource availability requirements. These requirements affect the characteristics and disposition of incinerator residues, limit the quality of liquid and gaseous effluents, limit the characteristics and rates of waste feeds and operating conditions, and restrict the handling of the waste feed inventories. This incinerator facility presents an opportunity for applying computer technology as a technical resource for mixed waste incinerator operation, to facilitate promoting and sustaining a continuous performance improvement process while demonstrating compliance. Demonstrated computer-aided management systems could be transferred to future mixed waste incinerator facilities

  17. Automation of a cryogenic facility by commercial process-control computer

    International Nuclear Information System (INIS)

    Sondericker, J.H.; Campbell, D.; Zantopp, D.

    1983-01-01

    To ensure that Brookhaven's superconducting magnets are reliable and their field quality meets accelerator requirements, each magnet is pre-tested at operating conditions after construction. MAGCOOL, the production magnet test facility, was designed to perform these tests, having the capacity to test ten magnets per five-day week. This paper describes the control aspects of MAGCOOL and the advantages afforded the designers by the implementation of a commercial process control computer system

  18. A Computer Simulation to Assess the Nuclear Material Accountancy System of a MOX Fuel Fabrication Facility

    International Nuclear Information System (INIS)

    Portaix, C.G.; Binner, R.; John, H.

    2015-01-01

    SimMOX is a computer programme that simulates container histories as they pass through a MOX facility. It performs two parallel calculations: · the first quantifies the actual movements of material that might be expected to occur, given certain assumptions about, for instance, the accumulation of material and waste and their subsequent treatment; · the second quantifies the same movements on the basis of the operator's perception of the quantities involved; that is, it is based on assumptions about the quantities contained in the containers. Separate skeletal Excel programmes are provided, which can be configured to generate further accountancy results from these two parallel calculations. SimMOX is flexible in that it makes few assumptions about the order and operational performance of the individual activities that might take place at each stage of the process. It is able to do this because its focus is on material flows, not on the performance of individual processes. Similarly, there are no preconceptions about the different types of containers that might be involved. At the macroscopic level, the simulation takes steady operation as its base case, i.e., the same quantity of material is deemed to enter and leave the simulated area over any given period. Transient situations can then be superimposed onto this base scene by simulating them as operational incidents. A general facility has been incorporated into SimMOX to enable the user to create an ''act of a play'' based on a number of operational incidents that have been built into the programme. In this way a simulation can be constructed that predicts how the facility would respond to any number of transient activities. This computer programme can help assess the nuclear material accountancy system of a MOX fuel fabrication facility, for instance the implications of applying NRTA (near-real-time accountancy). (author)
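    The two parallel calculations described above can be pictured as a pair of ledgers: one tracking the material actually moved per container, one tracking what the operator declares. The following sketch illustrates only that idea (the container IDs and masses are invented for illustration; this is not SimMOX code):

```python
# Two-ledger sketch: actual vs. operator-perceived material movements.
actual_book = 0.0    # cumulative mass actually transferred (kg)
declared_book = 0.0  # cumulative mass per the operator's declarations (kg)

# (container id, actual kg moved, operator-declared kg) -- assumed values
transfers = [
    ("C-001", 4.98, 5.00),
    ("C-002", 5.03, 5.00),
    ("C-003", 4.95, 5.00),
]

for cid, actual, declared in transfers:
    actual_book += actual
    declared_book += declared

# The discrepancy between the two books is what an accountancy system
# (e.g. near-real-time accountancy) would try to detect and bound.
book_difference = declared_book - actual_book
```

Accumulating both ledgers per container is what lets downstream accountancy calculations, such as the Excel programmes mentioned in the abstract, be driven from either view of the same movements.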

  19. Do students with higher self-efficacy exhibit greater and more diverse scientific inquiry skills: An exploratory investigation in "River City", a multi-user virtual environment

    Science.gov (United States)

    Ketelhut, Diane Jass

    In this thesis, I conduct an exploratory study to investigate the relationship between students' self-efficacy on entry into authentic scientific activity and the scientific inquiry behaviors they employ while engaged in that process, over time. Scientific inquiry has been a major standard in most science education policy doctrines for the past two decades and is exemplified by activities such as making observations, formulating hypotheses, gathering and analyzing data, and forming conclusions from that data. The self-efficacy literature, however, indicates that self-efficacy levels affect perseverance and engagement. This study investigated the relationship between these two constructs. The study is conducted in a novel setting, using an innovative science curriculum delivered through an interactive computer technology that recorded each student's conversations, movements, and activities while behaving as a practicing scientist in a "virtual world" called River City. River City is a Multi-User Virtual Environment designed to engage students in a collaborative scientific inquiry-based learning experience. As a result, I was able to follow students' moment-by-moment choices of behavior while they were behaving as scientists. I collected data on students' total scientific inquiry behaviors over three visits to River City, as well as the number of sources from which they gathered their scientific data. I analyzed my longitudinal data on the 96 seventh-graders using individual growth modeling. I found that self-efficacy played a role in the number of data-gathering behaviors students engaged in initially, with high self-efficacy students engaging in more data gathering than students with low self-efficacy. However, the impact of student self-efficacy on rate of change in data gathering behavior differed by gender; by the end of the study, student self-efficacy did not impact data gathering. 
In addition, students' level of self-efficacy did not affect how many different

  20. MIMO wireless networks channels, techniques and standards for multi-antenna, multi-user and multi-cell systems

    CERN Document Server

    Clerckx, Bruno

    2013-01-01

    This book is unique in presenting channels, techniques and standards for the next generation of MIMO wireless networks. Through a unified framework, it emphasizes how propagation mechanisms impact the system performance under realistic power constraints. Combining a solid mathematical analysis with a physical and intuitive approach to space-time signal processing, the book progressively derives innovative designs for space-time coding and precoding as well as multi-user and multi-cell techniques, taking into consideration that MIMO channels are often far from ideal. Reflecting developments

  1. Multi-User Interference Cancellation Scheme(s) for Multiple Carrier Frequency Offset Compensation in Uplink OFDMA

    DEFF Research Database (Denmark)

    Nguyen, Huan Cong; Carvalho, Elisabeth De; Prasad, Ramjee

    2006-01-01

    We consider the uplink of an Orthogonal Frequency Division Multiple Access (OFDMA)-based system, where each Mobile Station (MS) experiences a different Carrier Frequency Offset (CFO). Uncorrected CFOs destroy the orthogonality among subcarriers, which can cause severe Inter-Carrier Interference (ICI) and degrade the system performance considerably. In this paper, we propose a novel Multi-User Interference (MUI) cancellation scheme for uplink OFDMA, which utilizes a multiple OFDM-demodulator architecture to correct and then compensate the negative effects of multiple CFOs at the receiver's side...
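    The ICI mechanism the abstract refers to is easy to demonstrate numerically. The sketch below shows the orthogonality loss caused by a single uncorrected CFO at the receiver FFT; it illustrates the problem being addressed, not the proposed MUI cancellation scheme, and all parameter values are assumptions:

```python
import numpy as np

N = 64        # number of subcarriers
k = 10        # the single active subcarrier
eps = 0.25    # CFO, as a fraction of the subcarrier spacing

n = np.arange(N)
tx = np.exp(2j * np.pi * k * n / N)          # time-domain symbol: one tone
rx = tx * np.exp(2j * np.pi * eps * n / N)   # channel applies the offset
Y = np.fft.fft(rx) / N                       # receiver FFT demodulation

# With eps = 0 all energy stays on subcarrier k; with eps != 0 it leaks
# onto the other subcarriers -- that leakage is the ICI.
ici_energy = np.sum(np.abs(Y) ** 2) - np.abs(Y[k]) ** 2
```

With eps = 0.25 roughly a fifth of the symbol energy leaks off subcarrier k, which is why a multi-user receiver must correct each user's CFO before (or while) separating the users.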

  2. Integration of distributed plant process computer systems to nuclear power generation facilities

    International Nuclear Information System (INIS)

    Bogard, T.; Finlay, K.

    1996-01-01

    Many operating nuclear power generation facilities are replacing their plant process computers. Such replacement projects are driven by equipment obsolescence issues and associated objectives to improve plant operability, increase plant information access, improve man-machine interface characteristics, and reduce operation and maintenance costs. This paper describes a few recently completed and on-going replacement projects, with emphasis upon the application of integrated distributed plant process computer systems. By presenting a few recent projects, the variations of distributed systems design show how various configurations can address needs for flexibility, open architecture, and integration of technological advancements in instrumentation and control technology. Architectural considerations for optimal integration of the plant process computer and plant process instrumentation and control are evident from variations of design features

  3. Computer mapping and visualization of facilities for planning of D and D operations

    International Nuclear Information System (INIS)

    Wuller, C.E.; Gelb, G.H.; Cramond, R.; Cracraft, J.S.

    1995-01-01

    The lack of as-built drawings for many old nuclear facilities impedes planning for decontamination and decommissioning. Traditional manual walkdowns subject workers to lengthy exposure to radiological and other hazards. The authors have applied close-range photogrammetry, 3D solid modeling, computer graphics, database management, and virtual reality technologies to create geometrically accurate 3D computer models of the interiors of facilities. The required input to the process is a set of photographs that can be acquired in a brief time. They fit 3D primitive shapes to objects of interest in the photos and, at the same time, record attributes such as material type and link patches of texture from the source photos to facets of modeled objects. When they render the model as either static images or at video rates for a walk-through simulation, the phototextures are warped onto the objects, giving a photo-realistic impression. The authors have exported the data to commercial CAD, cost estimating, robotic simulation, and plant design applications. Results from several projects at old nuclear facilities are discussed

  4. DOE High Performance Computing Operational Review (HPCOR): Enabling Data-Driven Scientific Discovery at HPC Facilities

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard; Allcock, William; Beggio, Chris; Campbell, Stuart; Cherry, Andrew; Cholia, Shreyas; Dart, Eli; England, Clay; Fahey, Tim; Foertter, Fernanda; Goldstone, Robin; Hick, Jason; Karelitz, David; Kelly, Kaki; Monroe, Laura; Prabhat,; Skinner, David; White, Julia

    2014-10-17

    U.S. Department of Energy (DOE) High Performance Computing (HPC) facilities are on the verge of a paradigm shift in the way they deliver systems and services to science and engineering teams. Research projects are producing a wide variety of data at unprecedented scale and level of complexity, with community-specific services that are part of the data collection and analysis workflow. On June 18-19, 2014 representatives from six DOE HPC centers met in Oakland, CA at the DOE High Performance Operational Review (HPCOR) to discuss how they can best provide facilities and services to enable large-scale data-driven scientific discovery at the DOE national laboratories. The report contains findings from that review.

  5. A personal computer code for seismic evaluations of nuclear power plant facilities

    International Nuclear Information System (INIS)

    Xu, J.; Graves, H.

    1991-01-01

    In the process of review and evaluation of licensing issues related to nuclear power plants, it is essential to understand the behavior of seismic loading, foundation and structural properties, and their impact on the overall structural response. In most cases, such knowledge could be obtained by using simplified engineering models which, when properly implemented, can capture the essential parameters describing the physics of the problem. Such models do not require execution on large computer systems and could be implemented through a personal computer (PC) based capability. Recognizing the need for a PC software package that can perform the structural response computations required for typical licensing reviews, the US Nuclear Regulatory Commission sponsored the development of a PC-operated computer software package, the CARES (Computer Analysis for Rapid Evaluation of Structures) system. This development was undertaken by Brookhaven National Laboratory (BNL) during FYs 1988 and 1989. A wide range of computer programs and modeling approaches are often used to justify the safety of nuclear power plants. It is often difficult to assess the validity and accuracy of the results submitted by various utilities without developing comparable computer solutions. Taking this into consideration, CARES is designed as an integrated computational system which can perform rapid evaluations of structural behavior and examine the capability of nuclear power plant facilities; thus CARES may be used by the NRC to determine the validity and accuracy of the analysis methodologies employed for structural safety evaluations of nuclear power plants. CARES has been designed to operate on a PC, to have a user-friendly input/output interface, and to have quick turnaround. This paper describes the various features which have been implemented into the seismic module of CARES version 1.0

  6. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction CMS distributed computing system performed well during the 2011 start-up. The events in 2011 have more pile-up and are more complex than last year; this results in longer reconstruction times and harder events to simulate. Significant increases in computing capacity were delivered in April for all computing tiers, and the utilisation and load is close to the planning predictions. All computing centre tiers performed their expected functionalities. Heavy-Ion Programme The CMS Heavy-Ion Programme had a very strong showing at the Quark Matter conference. A large number of analyses were shown. The dedicated heavy-ion reconstruction facility at the Vanderbilt Tier-2 is still involved in some commissioning activities, but is available for processing and analysis. Facilities and Infrastructure Operations Facility and Infrastructure operations have been active with operations and several important deployment tasks. Facilities participated in the testing and deployment of WMAgent and WorkQueue+Request...

  7. Conceptual design of an ALICE Tier-2 centre. Integrated into a multi-purpose computing facility

    Energy Technology Data Exchange (ETDEWEB)

    Zynovyev, Mykhaylo

    2012-06-29

    This thesis discusses the issues and challenges associated with the design and operation of a data analysis facility for a high-energy physics experiment at a multi-purpose computing centre. In the spotlight is a Tier-2 centre of the distributed computing model of the ALICE experiment at the Large Hadron Collider at CERN in Geneva, Switzerland. The design steps examined in the thesis include analysis and optimization of the I/O access patterns of the user workload, integration of the storage resources, and development of techniques for effective system administration and operation of the facility in a shared computing environment. A number of I/O access performance issues on multiple levels of the I/O subsystem, introduced by the use of hard disks for data storage, have been addressed by means of exhaustive benchmarking and thorough analysis of the I/O of the user applications in the ALICE software framework. Defining the set of requirements for the storage system, describing the potential performance bottlenecks and single points of failure, and examining possible ways to avoid them allows one to develop guidelines for selecting how to integrate the storage resources. A solution for preserving a specific experiment software stack in a shared environment is presented, along with its effects on user workload performance. A proposal for a flexible model to deploy and operate the ALICE Tier-2 infrastructure and applications in a virtual environment, through adoption of cloud computing technology and the 'Infrastructure as Code' concept, completes the thesis. Scientific software applications can be efficiently computed in a virtual environment, and there is an urgent need to adapt the infrastructure for effective usage of cloud resources.

  8. Conceptual design of an ALICE Tier-2 centre. Integrated into a multi-purpose computing facility

    International Nuclear Information System (INIS)

    Zynovyev, Mykhaylo

    2012-01-01

    This thesis discusses the issues and challenges associated with the design and operation of a data analysis facility for a high-energy physics experiment at a multi-purpose computing centre. In the spotlight is a Tier-2 centre of the distributed computing model of the ALICE experiment at the Large Hadron Collider at CERN in Geneva, Switzerland. The design steps examined in the thesis include analysis and optimization of the I/O access patterns of the user workload, integration of the storage resources, and development of techniques for effective system administration and operation of the facility in a shared computing environment. A number of I/O access performance issues on multiple levels of the I/O subsystem, introduced by the use of hard disks for data storage, have been addressed by means of exhaustive benchmarking and thorough analysis of the I/O of the user applications in the ALICE software framework. Defining the set of requirements for the storage system, describing the potential performance bottlenecks and single points of failure, and examining possible ways to avoid them allows one to develop guidelines for selecting how to integrate the storage resources. A solution for preserving a specific experiment software stack in a shared environment is presented, along with its effects on user workload performance. A proposal for a flexible model to deploy and operate the ALICE Tier-2 infrastructure and applications in a virtual environment, through adoption of cloud computing technology and the 'Infrastructure as Code' concept, completes the thesis. Scientific software applications can be efficiently computed in a virtual environment, and there is an urgent need to adapt the infrastructure for effective usage of cloud resources.

  9. Computer software design description for the Treated Effluent Disposal Facility (TEDF), Project L-045H, Operator Training Station (OTS)

    International Nuclear Information System (INIS)

    Carter, R.L. Jr.

    1994-01-01

    The Treated Effluent Disposal Facility (TEDF) Operator Training Station (OTS) is a computer-based training tool designed to aid plant operations and engineering staff in familiarizing themselves with the TEDF Central Control System (CCS)

  10. A personal computer code for seismic evaluations of nuclear power plant facilities

    International Nuclear Information System (INIS)

    Xu, J.; Graves, H.

    1990-01-01

    A wide range of computer programs and modeling approaches are often used to justify the safety of nuclear power plants. It is often difficult to assess the validity and accuracy of the results submitted by various utilities without developing comparable computer solutions. Taking this into consideration, CARES is designed as an integrated computational system which can perform rapid evaluations of structural behavior and examine the capability of nuclear power plant facilities; thus CARES may be used by the NRC to determine the validity and accuracy of the analysis methodologies employed for structural safety evaluations of nuclear power plants. CARES has been designed to operate on a PC, to have a user-friendly input/output interface, and to have quick turnaround. The CARES program is structured in a modular format. Each module performs a specific type of analysis. The basic modules of the system are associated with capabilities for static, seismic, and nonlinear analyses. This paper describes the various features which have been implemented into the Seismic Module of CARES version 1.0. In Section 2 a description of the Seismic Module is provided. The methodologies and computational procedures thus far implemented into the Seismic Module are described in Section 3. Finally, a complete demonstration of the computational capability of CARES in a typical soil-structure interaction analysis is given in Section 4, and conclusions are presented in Section 5. 5 refs., 4 figs

  11. {SW}ARMED: Captive Portals, Mobile Devices, and Audience Participation in Multi-User Music Performance

    OpenAIRE

    Hindle, Abram

    2013-01-01

    Audience participation in computer music has long been limited by resources such as sensor technology or the material goods necessary to share such an instrument. A recent paradigm is to take advantage of the incredible popularity of the smart-phone, a pocket-sized computer, and other mobile devices, to provide the audience an interface into a computer music instrument. In this paper we discuss a method of sharing a computer music instrument's interface with an audience to allow them to interact via...

  12. Software quality assurance plan for the National Ignition Facility integrated computer control system

    International Nuclear Information System (INIS)

    Woodruff, J.

    1996-11-01

    Quality achievement is the responsibility of the line organizations of the National Ignition Facility (NIF) Project. This Software Quality Assurance Plan (SQAP) applies to the activities of the Integrated Computer Control System (ICCS) organization and its subcontractors. The Plan describes the activities implemented by the ICCS section to achieve quality in the NIF Project's controls software and implements the NIF Quality Assurance Program Plan (QAPP, NIF-95-499, L-15958-2) and the Department of Energy's (DOE's) Order 5700.6C. This SQAP governs the quality affecting activities associated with developing and deploying all control system software during the life cycle of the NIF Project

  13. Teaching ergonomics to nursing facility managers using computer-based instruction.

    Science.gov (United States)

    Harrington, Susan S; Walker, Bonnie L

    2006-01-01

    This study offers evidence that computer-based training is an effective tool for teaching nursing facility managers about ergonomics and increasing their awareness of potential problems. Study participants (N = 45) were randomly assigned into a treatment or control group. The treatment group completed the ergonomics training and a pre- and posttest. The control group completed the pre- and posttests without training. Treatment group participants improved significantly from 67% on the pretest to 91% on the posttest, a gain of 24%. Differences between mean scores for the control group were not significant for the total score or for any of the subtests.

  14. FIRAC: a computer code to predict fire-accident effects in nuclear facilities

    International Nuclear Information System (INIS)

    Bolstad, J.W.; Krause, F.R.; Tang, P.K.; Andrae, R.W.; Martin, R.A.; Gregory, W.S.

    1983-01-01

    FIRAC is a medium-sized computer code designed to predict fire-induced flows, temperatures, and material transport within the ventilating systems and other airflow pathways in nuclear-related facilities. The code is designed to analyze the behavior of interconnected networks of rooms and typical ventilation system components. This code is one in a family of computer codes that is designed to provide improved methods of safety analysis for the nuclear industry. The structure of this code closely follows that of the previously developed TVENT and EVENT codes. Because a lumped-parameter formulation is used, this code is particularly suitable for calculating the effects of fires in the far field (that is, in regions removed from the fire compartment), where the fire may be represented parametrically. However, a fire compartment model to simulate conditions in the enclosure is included. This model provides transport source terms to the ventilation system that can affect its operation and in turn affect the fire
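    A lumped-parameter network of the kind FIRAC uses can be illustrated in miniature: nodes carry pressures, branches carry flows, and the node pressures are adjusted until flow into each node balances flow out. The sketch below solves for one interior node between two branches in series, with the branch law Q = sign(ΔP)·sqrt(|ΔP|/R) assumed purely for illustration; FIRAC's actual component models (ducts, dampers, filters, blowers) are more detailed:

```python
import math

P_in, P_out = 100.0, 0.0   # fixed boundary pressures (arbitrary units)
R1, R2 = 2.0, 8.0          # assumed flow resistances of the two branches

def flow(dp, r):
    """Branch flow for pressure drop dp: Q = sign(dp) * sqrt(|dp| / r)."""
    return math.copysign(math.sqrt(abs(dp) / r), dp)

def imbalance(p):
    """Net flow into the interior node at pressure p (zero at the solution)."""
    return flow(P_in - p, R1) - flow(p - P_out, R2)

# The imbalance decreases monotonically in p, so bisection converges.
lo, hi = P_out, P_in
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if imbalance(mid) > 0:
        lo = mid
    else:
        hi = mid
p_node = 0.5 * (lo + hi)
q = flow(P_in - p_node, R1)   # steady flow through the series network
```

A full network code repeats this balance at every node simultaneously (typically via Newton iteration rather than bisection), and a fire compartment model such as FIRAC's enters as temperature- and pressure-dependent source terms on the affected nodes.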

  15. Computer-based data acquisition system in the Large Coil Test Facility

    International Nuclear Information System (INIS)

    Gould, S.S.; Layman, L.R.; Million, D.L.

    1983-01-01

    The utilization of computers for data acquisition and control is of paramount importance on large-scale fusion experiments because they feature the ability to acquire data from a large number of sensors at various sample rates and provide for flexible data interpretation, presentation, reduction, and analysis. In the Large Coil Test Facility (LCTF) a Digital Equipment Corporation (DEC) PDP-11/60 host computer with the DEC RSX-11M operating system coordinates the activities of five DEC LSI-11/23 front-end processors (FEPs) via direct memory access (DMA) communication links. This provides host control of scheduled data acquisition and FEP event-triggered data collection tasks. Four of the five FEPs have no operating system

  16. The HEPCloud Facility: elastic computing for High Energy Physics – The NOvA Use Case

    Energy Technology Data Exchange (ETDEWEB)

    Fuess, S. [Fermilab; Garzoglio, G. [Fermilab; Holzman, B. [Fermilab; Kennedy, R. [Fermilab; Norman, A. [Fermilab; Timm, S. [Fermilab; Tiradani, A. [Fermilab

    2017-03-15

The need for computing in the HEP community follows cycles of peaks and valleys mainly driven by conference dates, accelerator shutdowns, holiday schedules, and other factors. Because of this, the classical method of provisioning these resources at providing facilities has drawbacks such as potential overprovisioning. As the appetite for computing increases, however, so does the need to maximize cost efficiency by developing a model for dynamically provisioning resources only when needed. To address this issue, the HEPCloud project was launched by the Fermilab Scientific Computing Division in June 2015. Its goal is to develop a facility that provides a common interface to a variety of resources, including local clusters, grids, high performance computers, and community and commercial Clouds. Initially targeted experiments include CMS and NOvA, as well as other Fermilab stakeholders. In its first phase, the project has demonstrated the use of the “elastic” provisioning model offered by commercial clouds, such as Amazon Web Services. In this model, resources are rented and provisioned automatically over the Internet upon request. In January 2016, the project demonstrated the ability to increase the total amount of global CMS resources by 58,000 cores from 150,000 cores - a 38 percent increase - in preparation for the Rencontres de Moriond. In March 2016, the NOvA experiment also demonstrated resource burst capabilities with an additional 7,300 cores, achieving a scale almost four times as large as the locally allocated resources and utilizing local AWS S3 storage to optimize data handling operations and costs. NOvA used the same familiar services as for local computations, such as data handling and job submission, in preparation for the Neutrino 2016 conference. In both cases, the cost was contained by the use of the Amazon Spot Instance Market and the Decision Engine, a HEPCloud component that aims at minimizing cost and job interruption. This paper

  17. The HEPCloud Facility: elastic computing for High Energy Physics - The NOvA Use Case

    Science.gov (United States)

    Fuess, S.; Garzoglio, G.; Holzman, B.; Kennedy, R.; Norman, A.; Timm, S.; Tiradani, A.

    2017-10-01

The need for computing in the HEP community follows cycles of peaks and valleys mainly driven by conference dates, accelerator shutdowns, holiday schedules, and other factors. Because of this, the classical method of provisioning these resources at providing facilities has drawbacks such as potential overprovisioning. As the appetite for computing increases, however, so does the need to maximize cost efficiency by developing a model for dynamically provisioning resources only when needed. To address this issue, the HEPCloud project was launched by the Fermilab Scientific Computing Division in June 2015. Its goal is to develop a facility that provides a common interface to a variety of resources, including local clusters, grids, high performance computers, and community and commercial Clouds. Initially targeted experiments include CMS and NOvA, as well as other Fermilab stakeholders. In its first phase, the project has demonstrated the use of the “elastic” provisioning model offered by commercial clouds, such as Amazon Web Services. In this model, resources are rented and provisioned automatically over the Internet upon request. In January 2016, the project demonstrated the ability to increase the total amount of global CMS resources by 58,000 cores from 150,000 cores - a 38 percent increase - in preparation for the Rencontres de Moriond. In March 2016, the NOvA experiment also demonstrated resource burst capabilities with an additional 7,300 cores, achieving a scale almost four times as large as the locally allocated resources and utilizing local AWS S3 storage to optimize data handling operations and costs. NOvA used the same familiar services as for local computations, such as data handling and job submission, in preparation for the Neutrino 2016 conference. In both cases, the cost was contained by the use of the Amazon Spot Instance Market and the Decision Engine, a HEPCloud component that aims at minimizing cost and job interruption. This paper

  18. CSNI Integral Test Facility Matrices for Validation of Best-Estimate Thermal-Hydraulic Computer Codes

    International Nuclear Information System (INIS)

    Glaeser, H.

    2008-01-01

    Internationally agreed Integral Test Facility (ITF) matrices for validation of realistic thermal hydraulic system computer codes were established. ITF development is mainly for Pressurised Water Reactors (PWRs) and Boiling Water Reactors (BWRs). A separate activity was for Russian Pressurised Water-cooled and Water-moderated Energy Reactors (WWER). Firstly, the main physical phenomena that occur during considered accidents are identified, test types are specified, and test facilities suitable for reproducing these aspects are selected. Secondly, a list of selected experiments carried out in these facilities has been set down. The criteria to achieve the objectives are outlined. In this paper some specific examples from the ITF matrices will also be provided. The matrices will be a guide for code validation, will be a basis for comparisons of code predictions performed with different system codes, and will contribute to the quantification of the uncertainty range of code model predictions. In addition to this objective, the construction of such a matrix is an attempt to record information which has been generated around the world over the last years, so that it is more accessible to present and future workers in that field than would otherwise be the case.

  19. Enhanced computational infrastructure for data analysis at the DIII-D National Fusion Facility

    International Nuclear Information System (INIS)

    Schissel, D.P.; Peng, Q.; Schachter, J.; Terpstra, T.B.; Casper, T.A.; Freeman, J.; Jong, R.; Keith, K.M.; McHarg, B.B.; Meyer, W.H.; Parker, C.T.

    2000-01-01

    Recently a number of enhancements to the computer hardware infrastructure have been implemented at the DIII-D National Fusion Facility. Utilizing these improvements to the hardware infrastructure, software enhancements are focusing on streamlined analysis, automation, and graphical user interface (GUI) systems to enlarge the user base. The adoption of the load balancing software package LSF Suite by Platform Computing has dramatically increased the availability of CPU cycles and the efficiency of their use. Streamlined analysis has been aided by the adoption of the MDSplus system to provide a unified interface to analyzed DIII-D data. The majority of MDSplus data is made available in between pulses giving the researcher critical information before setting up the next pulse. Work on data viewing and analysis tools focuses on efficient GUI design with object-oriented programming (OOP) for maximum code flexibility. Work to enhance the computational infrastructure at DIII-D has included a significant effort to aid the remote collaborator since the DIII-D National Team consists of scientists from nine national laboratories, 19 foreign laboratories, 16 universities, and five industrial partnerships. As a result of this work, DIII-D data is available on a 24x7 basis from a set of viewing and analysis tools that can be run on either the collaborators' or DIII-D's computer systems. Additionally, a web based data and code documentation system has been created to aid the novice and expert user alike

  20. Enhanced Computational Infrastructure for Data Analysis at the DIII-D National Fusion Facility

    International Nuclear Information System (INIS)

    Schissel, D.P.; Peng, Q.; Schachter, J.; Terpstra, T.B.; Casper, T.A.; Freeman, J.; Jong, R.; Keith, K.M.; Meyer, W.H.; Parker, C.T.; McCharg, B.B.

    1999-01-01

Recently a number of enhancements to the computer hardware infrastructure have been implemented at the DIII-D National Fusion Facility. Utilizing these improvements to the hardware infrastructure, software enhancements are focusing on streamlined analysis, automation, and graphical user interface (GUI) systems to enlarge the user base. The adoption of the load balancing software package LSF Suite by Platform Computing has dramatically increased the availability of CPU cycles and the efficiency of their use. Streamlined analysis has been aided by the adoption of the MDSplus system to provide a unified interface to analyzed DIII-D data. The majority of MDSplus data is made available between pulses, giving the researcher critical information before setting up the next pulse. Work on data viewing and analysis tools focuses on efficient GUI design with object-oriented programming (OOP) for maximum code flexibility. Work to enhance the computational infrastructure at DIII-D has included a significant effort to aid the remote collaborator, since the DIII-D National Team consists of scientists from 9 national laboratories, 19 foreign laboratories, 16 universities, and 5 industrial partnerships. As a result of this work, DIII-D data is available on a 24 x 7 basis from a set of viewing and analysis tools that can be run on either the collaborators' or DIII-D's computer systems. Additionally, a web-based data and code documentation system has been created to aid the novice and expert user alike

  1. Investigation of Storage Options for Scientific Computing on Grid and Cloud Facilities

    International Nuclear Information System (INIS)

    Garzoglio, Gabriele

    2012-01-01

In recent years, several new storage technologies, such as Lustre, Hadoop, OrangeFS, and BlueArc, have emerged. While several groups have run benchmarks to characterize them under a variety of configurations, more work is needed to evaluate these technologies for the use cases of scientific computing on Grid clusters and Cloud facilities. This paper discusses our evaluation of the technologies as deployed on a test bed at FermiCloud, one of the Fermilab infrastructure-as-a-service Cloud facilities. The test bed consists of 4 server-class nodes with 40 TB of disk space and up to 50 virtual machine clients, some running on the storage server nodes themselves. With this configuration, the evaluation compares the performance of some of these technologies when deployed on virtual machines and on “bare metal” nodes. In addition to running standard benchmarks such as IOzone to check the sanity of our installation, we have run I/O-intensive tests using physics-analysis applications. This paper presents how the storage solutions perform in a variety of realistic use cases of scientific computing. One notable difference among the storage systems tested is a decrease in total read throughput as the number of client processes increases, which occurs in some implementations but not others.

  2. FIRAC - a computer code to predict fire accident effects in nuclear facilities

    International Nuclear Information System (INIS)

    Bolstad, J.W.; Foster, R.D.; Gregory, W.S.

    1983-01-01

    FIRAC is a medium-sized computer code designed to predict fire-induced flows, temperatures, and material transport within the ventilating systems and other airflow pathways in nuclear-related facilities. The code is designed to analyze the behavior of interconnected networks of rooms and typical ventilation system components. This code is one in a family of computer codes that is designed to provide improved methods of safety analysis for the nuclear industry. The structure of this code closely follows that of the previously developed TVENT and EVENT codes. Because a lumped-parameter formulation is used, this code is particularly suitable for calculating the effects of fires in the far field (that is, in regions removed from the fire compartment), where the fire may be represented parametrically. However, a fire compartment model to simulate conditions in the enclosure is included. This model provides transport source terms to the ventilation system that can affect its operation and in turn affect the fire. A basic material transport capability that features the effects of convection, deposition, entrainment, and filtration of material is included. The interrelated effects of filter plugging, heat transfer, gas dynamics, and material transport are taken into account. In this paper the authors summarize the physical models used to describe the gas dynamics, material transport, and heat transfer processes. They also illustrate how a typical facility is modeled using the code

  3. Application of personal computer to development of entrance management system for radiating facilities

    International Nuclear Information System (INIS)

    Suzuki, Shogo; Hirai, Shouji

    1989-01-01

The report describes a system, developed on a personal computer, for managing the entrance and exit of personnel to radiating facilities. Major features of the system are outlined first. The computer is connected to the gate and to two magnetic card readers provided at the gate. The gate, installed at the entrance to a room under control, opens only for those who have a valid card. The entrance-exit management program developed is described next. Three files are used: an ID master file (a random-access file of the magnetic card number, name, qualification, etc., of each card carrier), an entrance-exit management file (a random-access file of time of entrance/exit, etc., updated daily), and an entrance-exit record file (a sequential file of card number, name, date, etc.), all stored on floppy disks. A display shows various lists, including a list of workers currently in the room and a list of workers who left the room at earlier times of the day. This system is useful for entrance management at a relatively small facility: it is inexpensive and requires only a few operators to perform effective personnel management. (N.K.)
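The three-file design described above can be sketched as simple record structures plus a small in-memory manager; the class and field names below are illustrative, not taken from the original Basic program.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class IDMaster:                 # ID master file: one record per magnetic card
    card_number: str
    name: str
    qualification: str

@dataclass
class EntranceExitLog:          # entrance-exit record file (sequential)
    card_number: str
    name: str
    timestamp: datetime
    direction: str              # "IN" or "OUT"

class EntranceManager:
    """Tracks who is currently inside the controlled room."""
    def __init__(self, id_master):
        self.id_master = {r.card_number: r for r in id_master}
        self.inside = {}        # card_number -> time of entrance
        self.log = []

    def swipe(self, card_number, when):
        rec = self.id_master.get(card_number)
        if rec is None:
            return False        # invalid card: gate stays closed
        if card_number in self.inside:
            del self.inside[card_number]
            self.log.append(EntranceExitLog(card_number, rec.name, when, "OUT"))
        else:
            self.inside[card_number] = when
            self.log.append(EntranceExitLog(card_number, rec.name, when, "IN"))
        return True             # gate opens

    def workers_in_room(self):
        return [self.id_master[c].name for c in self.inside]
```

With this shape, the "workers currently in the room" display falls out directly from the set of cards that have swiped in but not yet out.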

  4. MUMTI a Multi-User-Multi-Task-Interpreter for process-control applications with CAMAC

    International Nuclear Information System (INIS)

    Busse, E.; Degenhardt, K.H.; Vidic, U.

    1980-10-01

MUMTI is an interactive, interpretative programming system for industrial control and process control applications, running on PDP-11 systems under RSX-11M/D. The number of users of the MUMTI system is not limited, as long as core memory and/or terminals are available. The implemented arithmetic facilities are similar to those of other interpreters. A detailed description of the programming of CAMAC systems is given in a second part. (WB)

  5. GASFLOW: A computational model to analyze accidents in nuclear containment and facility buildings

    International Nuclear Information System (INIS)

    Travis, J.R.; Nichols, B.D.; Wilson, T.L.; Lam, K.L.; Spore, J.W.; Niederauer, G.F.

    1993-01-01

    GASFLOW is a finite-volume computer code that solves the time-dependent, compressible Navier-Stokes equations for multiple gas species. The fluid-dynamics algorithm is coupled to the chemical kinetics of combusting liquids or gases to simulate diffusion or propagating flames in complex geometries of nuclear containment or confinement and facilities' buildings. Fluid turbulence is calculated to enhance the transport and mixing of gases in rooms and volumes that may be connected by a ventilation system. The ventilation system may consist of extensive ductwork, filters, dampers or valves, and fans. Condensation and heat transfer to walls, floors, ceilings, and internal structures are calculated to model the appropriate energy sinks. Solid and liquid aerosol behavior is simulated to give the time and space inventory of radionuclides. The solution procedure of the governing equations is a modified Los Alamos ICE'd-ALE methodology. Complex facilities can be represented by separate computational domains (multiblocks) that communicate through overlapping boundary conditions. The ventilation system is superimposed throughout the multiblock mesh. Gas mixtures and aerosols are transported through the free three-dimensional volumes and the restricted one-dimensional ventilation components as the accident and fluid flow fields evolve. Combustion may occur if sufficient fuel and reactant or oxidizer are present and have an ignition source. Pressure and thermal loads on the building, structural components, and safety-related equipment can be determined for specific accident scenarios. GASFLOW calculations have been compared with large oil-pool fire tests in the 1986 HDR containment test T52.14, which is a 3000-kW fire experiment. The computed results are in good agreement with the observed data

  6. Performance of DS-UWB in MB-OFDM and multi-user interference over Nakagami-m fading channels

    KAUST Repository

    Mehbodniya, Abolfazl

    2011-01-18

    The mutual interference between the two ultra wideband (UWB) technologies, which use the same frequency spectrum, will be a matter of concern in the near future. In this context, we present a performance analysis of direct-sequence (DS) UWB communication in the presence of multiband orthogonal frequency division multiplexing (MB-OFDM) UWB interfering transmissions. The channel fading is modeled according to Nakagami-m distribution, and multi-user interference is taken into account. The DS-UWB system performance is evaluated in terms of bit error rate (BER). Specifically, using the characteristic function approach, an analytical expression for the average BER is derived conditioned on the channel impulse response. Numerical and simulation results are provided and compared for different coexistence scenarios. © 2011 John Wiley & Sons, Ltd.
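The paper's analytical average-BER expression is derived via the characteristic function approach; as a rough numerical cross-check of one ingredient, the average BER of BPSK over flat Nakagami-m fading (ignoring the MB-OFDM and multi-user interference terms) can be estimated by Monte Carlo simulation. All parameter values below are illustrative.

```python
import random
import math

def ber_bpsk_nakagami(snr_db, m=1.5, omega=1.0, n_bits=200_000, seed=1):
    """Monte Carlo BER of BPSK over flat Nakagami-m fading with AWGN.
    Illustrative only; the paper treats DS-UWB with MB-OFDM and
    multi-user interference analytically, which this sketch omits."""
    rng = random.Random(seed)
    snr = 10 ** (snr_db / 10)
    noise_std = math.sqrt(1.0 / (2 * snr))   # per-dimension std, Eb = 1
    errors = 0
    for _ in range(n_bits):
        bit = rng.choice((-1.0, 1.0))
        # Nakagami-m amplitude: h = sqrt(G), with G ~ Gamma(m, omega/m)
        h = math.sqrt(rng.gammavariate(m, omega / m))
        r = h * bit + rng.gauss(0.0, noise_std)
        if (r >= 0) != (bit > 0):            # sign detector
            errors += 1
    return errors / n_bits
```

As expected, the estimated BER falls monotonically as the SNR rises, and larger fading parameters m (less severe fading) lower it further.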

  7. Power Consumption Efficiency Evaluation of Multi-User Full-Duplex Visible Light Communication Systems for Smart Home Technologies

    Directory of Open Access Journals (Sweden)

    Muhammad Tabish Niaz

    2017-02-01

Visible light communication (VLC) has recently gained significant academic and industrial attention. VLC has great potential to supplement the functioning of the upcoming radio-frequency (RF)-based 5G networks. It is best suited for home, office, and commercial indoor environments, as it provides high bandwidth and a high data rate, and the visible light spectrum is free to use. This paper proposes a multi-user full-duplex VLC system using red-green-blue (RGB) and white light-emitting diodes (LEDs) for smart home technologies. It utilizes red, green, and blue LEDs for downlink transmission and a simple phosphor white LED for uplink transmission. The red and green color bands are used for user data and smart devices, respectively, while the blue color band is used with the white LED for uplink transmission. A simulation was carried out to verify the performance of the proposed multi-user full-duplex VLC system. In addition to the performance evaluation, a cost-power consumption analysis was performed by comparing the power consumption and resulting cost of the proposed VLC system to those of traditional Wi-Fi-based systems and hybrid systems that utilize both VLC and Wi-Fi. Our findings showed that the proposed system improved the data rate and bit-error-rate performance while minimizing power consumption and the associated costs. These results demonstrate that a full-duplex VLC system is a feasible solution for indoor environments, as it provides greater cost savings and energy efficiency than traditional Wi-Fi-based systems and hybrid systems that utilize both VLC and Wi-Fi.

  8. The Overview of the National Ignition Facility Distributed Computer Control System

    International Nuclear Information System (INIS)

    Lagin, L.J.; Bettenhausen, R.C.; Carey, R.A.; Estes, C.M.; Fisher, J.M.; Krammen, J.E.; Reed, R.K.; VanArsdall, P.J.; Woodruff, J.P.

    2001-01-01

The Integrated Computer Control System (ICCS) for the National Ignition Facility (NIF) is a layered architecture of 300 front-end processors (FEPs) coordinated by supervisor subsystems, including automatic beam alignment and wavefront control, laser and target diagnostics, pulse power, and shot control timed to 30 ps. FEP computers incorporate either VxWorks on PowerPC or Solaris on UltraSPARC processors that interface to over 45,000 control points attached to VME-bus or PCI-bus crates, respectively. Typical devices are stepping motors, transient digitizers, calorimeters, and photodiodes. The front-end layer includes another segment comprising an additional 14,000 control points for industrial controls, including vacuum, argon, synthetic air, and safety interlocks, implemented with Allen-Bradley programmable logic controllers (PLCs). The computer network is augmented with an asynchronous transfer mode (ATM) network that delivers video streams from 500 sensor cameras monitoring the 192 laser beams to operator workstations. Software is based on an object-oriented framework using CORBA distribution that incorporates services for archiving, machine configuration, graphical user interface, monitoring, event logging, scripting, alert management, and access control. Software coding, using a mixed-language environment of Ada95 and Java, is one-third complete at over 300 thousand source lines. Control system installation is currently under way for the first 8 beams, with project completion scheduled for 2008

  9. Recognizing Multi-user Activities using Wearable Sensors in a Smart Home

    DEFF Research Database (Denmark)

    Wang, Liang; Gu, Tao; Tao, Xianping

    2010-01-01

The advances of wearable sensors and wireless networks offer many opportunities to recognize human activities from sensor readings in pervasive computing. Existing work so far focuses mainly on recognizing the activities of a single user in a home environment. However, there are typically multiple inhabitants

  10. The FOSS GIS Workbench on the GFZ Load Sharing Facility compute cluster

    Science.gov (United States)

    Löwe, P.; Klump, J.; Thaler, J.

    2012-04-01

Compute clusters can be used as GIS workbenches; their wealth of resources allows us to take on geocomputation tasks which exceed the limitations of smaller systems. Harnessing these capabilities requires a Geographic Information System (GIS) able to utilize the available cluster configuration/architecture, with a sufficient degree of user friendliness to allow for wide application. In this paper we report on the first successful porting of GRASS GIS, the oldest and largest Free and Open Source Software (FOSS) GIS project, onto a compute cluster using Platform Computing's Load Sharing Facility (LSF). In 2008, GRASS 6.3 was installed on the GFZ compute cluster, which at that time comprised 32 nodes. Interaction with the GIS was limited to the command line interface, which required further development to encapsulate the GRASS GIS business layer and facilitate its use by users not familiar with GRASS GIS. During the summer of 2011, multiple versions of GRASS GIS (v 6.4, 6.5, and 7.0) were installed on the upgraded GFZ compute cluster, now consisting of 234 nodes with 480 CPUs providing 3084 cores. The GFZ compute cluster currently offers 19 different processing queues with varying hardware capabilities and priorities, allowing for fine-grained scheduling and load balancing. After successful testing of core GIS functionalities, including the graphical user interface, mechanisms were developed to deploy scripted geocomputation tasks onto dedicated processing queues. The mechanisms are based on earlier work by NETELER et al. (2008). A first application of the new GIS functionality was the generation of maps of simulated tsunamis in the Mediterranean Sea for the Tsunami Atlas of the FP-7 TRIDEC Project (www.tridec-online.eu). For this, up to 500 processing nodes were used in parallel. Further trials included the processing of geometrically complex problems requiring significant amounts of processing time. The GIS cluster successfully completed all these tasks, with processing times
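A deployment mechanism of the kind described — pushing a scripted GRASS task onto a dedicated LSF processing queue — might be sketched as composing a `bsub` command line. The queue name, script path, location, and the use of `grass --exec` below are assumptions for illustration, not the GFZ implementation.

```python
import shlex

def build_bsub_command(script, queue="gis_batch", cores=1,
                       job_name="grass_task", log_dir="logs"):
    """Compose an LSF `bsub` submission for a scripted GRASS GIS task.
    Queue name, paths, and `grass --exec` usage are hypothetical."""
    cmd = [
        "bsub",
        "-q", queue,                            # dedicated processing queue
        "-n", str(cores),                       # cores requested
        "-J", job_name,                         # job name for monitoring
        "-o", f"{log_dir}/{job_name}.%J.out",   # %J expands to the LSF job id
        "grass", "--tmp-location", "EPSG:4326", # throwaway GRASS location
        "--exec", "python3", script,
    ]
    return shlex.join(cmd)
```

Each geocomputation task then becomes one queued batch job, and LSF's scheduler handles placement across the heterogeneous queues.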

  11. [Elderlies in street situation or social vulnerability: facilities and difficulties in the use of computational tools].

    Science.gov (United States)

    Frias, Marcos Antonio da Eira; Peres, Heloisa Helena Ciqueto; Pereira, Valclei Aparecida Gandolpho; Negreiros, Maria Célia de; Paranhos, Wana Yeda; Leite, Maria Madalena Januário

    2014-01-01

This study aimed to identify the advantages and difficulties encountered by older people living on the streets or in social vulnerability in using the computer or the internet. It is an exploratory qualitative study in which five elderly people attended at a non-governmental organization located in the city of São Paulo participated. The discourses were analyzed by the content analysis technique and showed, as facilitators, among others: clarifying doubts with the monitors, the stimulus for new discoveries coupled with proactivity and curiosity, and developing new skills. The difficulties mentioned were related to physical or cognitive issues, the lack of an instructor, and the lack of knowledge of how to interact with the machine. Studies focusing on the elderly population living on the streets or in social vulnerability may contribute evidence to guide the formulation of public policies for this population.

  12. Development of a personal computer based facility-level SSAC component and inspector support system

    International Nuclear Information System (INIS)

    Markov, A.

    1989-08-01

Research Contract No. 4658/RB was conducted between the IAEA and the Bulgarian Committee on Use of Atomic Energy for Peaceful Purposes. The contract required the Committee to develop and program a personal computer based software package to be used as a facility-level computerized State System of Accounting and Control (SSAC) at an off-load power reactor. The software delivered, called the National Safeguards System (NSS), keeps track of all fuel assembly activity at a power reactor and generates all ledgers, MBA material balances, and any required reports to national or international authorities. The NSS is designed to operate on a PC/AT or compatible equipment with a 20 MB hard disk, a color graphics monitor or adapter, and at least one 360 KB floppy disk drive. The programs are written in Basic (compiler 2.0) and are executed under MS DOS 3.1 or later

  13. Software quality assurance plan for the National Ignition Facility integrated computer control system

    Energy Technology Data Exchange (ETDEWEB)

    Woodruff, J.

    1996-11-01

Quality achievement is the responsibility of the line organizations of the National Ignition Facility (NIF) Project. This Software Quality Assurance Plan (SQAP) applies to the activities of the Integrated Computer Control System (ICCS) organization and its subcontractors. The Plan describes the activities implemented by the ICCS section to achieve quality in the NIF Project's controls software and implements the NIF Quality Assurance Program Plan (QAPP, NIF-95-499, L-15958-2) and the Department of Energy's (DOE's) Order 5700.6C. This SQAP governs the quality affecting activities associated with developing and deploying all control system software during the life cycle of the NIF Project.

  14. Lustre Distributed Name Space (DNE) Evaluation at the Oak Ridge Leadership Computing Facility (OLCF)

    Energy Technology Data Exchange (ETDEWEB)

    Simmons, James S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Center for Computational Sciences; Leverman, Dustin B. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Center for Computational Sciences; Hanley, Jesse A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Center for Computational Sciences; Oral, Sarp [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Center for Computational Sciences

    2016-08-22

This document describes the Lustre Distributed Name Space (DNE) evaluation carried out at the Oak Ridge Leadership Computing Facility (OLCF) between 2014 and 2015. DNE is a development project funded by OpenSFS to improve Lustre metadata performance and scalability. The development effort was split into two parts: the first part (DNE P1) provided support for remote directories over remote Lustre Metadata Server (MDS) nodes and Metadata Target (MDT) devices, while the second phase (DNE P2) addressed split directories over multiple remote MDS nodes and MDT devices. OLCF has been actively evaluating the performance, reliability, and functionality of both DNE phases. For these tests, an internal OLCF testbed was used. Results are promising, and OLCF is planning a full DNE deployment on production systems in the mid-2016 timeframe.

  15. MONITOR: A computer model for estimating the costs of an integral monitored retrievable storage facility

    International Nuclear Information System (INIS)

    Reimus, P.W.; Sevigny, N.L.; Schutz, M.E.; Heller, R.A.

    1986-12-01

The MONITOR model is a FORTRAN 77-based computer code that provides parametric life-cycle cost estimates for a monitored retrievable storage (MRS) facility. MONITOR is very flexible in that it can estimate the costs of an MRS facility operating under almost any conceivable nuclear waste logistics scenario. The model can also accommodate input data of varying degrees of complexity and detail (ranging from very simple to more complex), which makes it ideal for use in the MRS program, where new designs and new cost data are frequently offered for consideration. MONITOR can be run as an independent program, or it can be interfaced with the Waste System Transportation and Economic Simulation (WASTES) model, a program that simulates the movement of waste through a complete nuclear waste disposal system. The WASTES model drives the MONITOR model by providing it with the annual quantities of waste that are received, stored, and shipped at the MRS facility. Three runs of MONITOR are documented in this report. Two of the runs are for Version 1 of the MONITOR code, a simulation that uses the costs developed by the Ralph M. Parsons Company in the 2A (backup) version of the MRS cost estimate. In one of these runs MONITOR was run as an independent model, and in the other MONITOR was run using an input file generated by the WASTES model. The two runs correspond to identical cases, and the fact that they gave identical results verified that the code performed the same calculations in both modes of operation. The third run was made for Version 2 of the MONITOR code, a simulation that uses the costs developed by the Ralph M. Parsons Company in the 2B (integral) version of the MRS cost estimate. This run was made with MONITOR run as an independent model. The results of several cases have been verified by hand calculations
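As a toy illustration of the parametric approach — annual received/stored/shipped quantities driving fixed and per-unit costs — consider the sketch below. Every cost parameter is hypothetical and unrelated to the Parsons 2A/2B estimates; the WASTES interface would simply supply the three annual-quantity lists.

```python
def mrs_lifecycle_cost(annual_received, annual_stored, annual_shipped,
                       fixed_om=25.0e6, recv_unit=40_000.0,
                       store_unit=5_000.0, ship_unit=30_000.0,
                       capital=300.0e6, discount=0.0):
    """Toy parametric life-cycle cost for an MRS facility.
    All unit costs are hypothetical, not MONITOR's actual cost data.
    Quantities are per-year lists of units handled; costs in dollars."""
    assert len(annual_received) == len(annual_stored) == len(annual_shipped)
    total = capital                        # one-time construction cost
    for year, (r, s, p) in enumerate(zip(annual_received, annual_stored,
                                         annual_shipped)):
        yearly = fixed_om + r * recv_unit + s * store_unit + p * ship_unit
        total += yearly / (1 + discount) ** year   # optional discounting
    return total
```

Because the throughput schedule is an input rather than hard-wired, the same function covers any logistics scenario, mirroring MONITOR's flexibility.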

  16. Transmit Antenna Selection for Multi-User Underlay Cognitive Transmission with Zero-Forcing Beamforming

    KAUST Repository

    Hanif, Muhammad

    2017-03-20

    We present a transmit antenna subset selection scheme for an underlay cognitive system serving multiple secondary receivers. The secondary system employs zero-forcing beamforming to nullify the interference to multiple primary users and eliminate inter-user interference to the secondary users simultaneously. Simulation results show that the proposed scheme achieves near-optimal performance with low computational complexity. Lastly, an optimal power allocation strategy is also introduced to improve the secondary network throughput.
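The nulling step described above can be sketched with a standard zero-forcing precoder built from the pseudoinverse of the stacked channel matrix. This is a generic illustration of zero-forcing beamforming, not the paper's antenna-selection algorithm; the antenna counts and random channels are assumptions:

```python
import numpy as np

# Generic zero-forcing beamforming sketch for an underlay cognitive setup:
# a secondary transmitter with Nt antennas serves Ns secondary receivers
# while forcing zero interference onto Np primary receivers.
rng = np.random.default_rng(0)
Nt, Ns, Np = 6, 2, 2  # transmit antennas, secondary users, primary users

# Placeholder Rayleigh-fading channels (rows = single-antenna receivers).
H_s = rng.standard_normal((Ns, Nt)) + 1j * rng.standard_normal((Ns, Nt))
H_p = rng.standard_normal((Np, Nt)) + 1j * rng.standard_normal((Np, Nt))

# Stacking all receivers and taking the pseudoinverse yields beamforming
# vectors (columns) that null both the interference to primary users and
# the inter-user interference among secondary users, since H @ W = I.
H = np.vstack([H_s, H_p])          # (Ns+Np) x Nt, requires Ns+Np <= Nt
W_zf = np.linalg.pinv(H)[:, :Ns]   # one column per secondary user

primary_leakage = np.abs(H_p @ W_zf).max()                     # should be ~0
cross = H_s @ W_zf
cross_talk = np.abs(cross - np.diag(np.diag(cross))).max()     # should be ~0
print(primary_leakage < 1e-10, cross_talk < 1e-10)
```

Antenna-subset selection would then amount to repeating this construction over candidate subsets of the Nt antennas and keeping the subset with the best secondary-link quality.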

  17. Computational Simulations of the NASA Langley HyMETS Arc-Jet Facility

    Science.gov (United States)

    Brune, A. J.; Bruce, W. E., III; Glass, D. E.; Splinter, S. C.

    2017-01-01

The Hypersonic Materials Environmental Test System (HyMETS) arc-jet facility located at the NASA Langley Research Center in Hampton, Virginia, is primarily used for the research, development, and evaluation of high-temperature thermal protection systems for hypersonic vehicles and reentry systems. In order to improve testing capabilities and knowledge of the test article environment, an effort is underway to computationally simulate the flow-field using computational fluid dynamics (CFD). A detailed three-dimensional model of the arc-jet nozzle and free-jet portion of the flow-field has been developed and compared to calibration probe Pitot pressure and stagnation-point heat flux for three test conditions at low, medium, and high enthalpy. The CFD model takes into account uniform pressure and non-uniform enthalpy profiles at the nozzle inlet as well as catalytic recombination efficiency effects at the probe surface. Comparing the CFD results and test data indicates an effectively fully-catalytic copper surface on the heat flux probe, with a catalytic efficiency of about 10%, and a 2-3 kPa pressure drop from the arc heater bore, where the pressure is measured, to the plenum section prior to the nozzle. With these assumptions, the CFD results are well within the uncertainty of the stagnation pressure and heat flux measurements. The conditions at the nozzle exit were also compared with radial and axial velocimetry. This simulation capability will be used to evaluate various three-dimensional models that are tested in the HyMETS facility. An end-to-end aerothermal and thermal simulation of HyMETS test articles will follow this work to provide a better understanding of the test environment, test results, and to aid in test planning. Additional flow-field diagnostic measurements will also be considered to improve the modeling capability.

  18. A resource facility for kinetic analysis: modeling using the SAAM computer programs.

    Science.gov (United States)

    Foster, D M; Boston, R C; Jacquez, J A; Zech, L

    1989-01-01

    Kinetic analysis and integrated system modeling have contributed significantly to understanding the physiology and pathophysiology of metabolic systems in humans and animals. Many experimental biologists are aware of the usefulness of these techniques and recognize that kinetic modeling requires special expertise. The Resource Facility for Kinetic Analysis (RFKA) provides this expertise through: (1) development and application of modeling technology for biomedical problems, and (2) development of computer-based kinetic modeling methodologies concentrating on the computer program Simulation, Analysis, and Modeling (SAAM) and its conversational version, CONversational SAAM (CONSAM). The RFKA offers consultation to the biomedical community in the use of modeling to analyze kinetic data and trains individuals in using this technology for biomedical research. Early versions of SAAM were widely applied in solving dosimetry problems; many users, however, are not familiar with recent improvements to the software. The purpose of this paper is to acquaint biomedical researchers in the dosimetry field with RFKA, which, together with the joint National Cancer Institute-National Heart, Lung and Blood Institute project, is overseeing SAAM development and applications. In addition, RFKA provides many service activities to the SAAM user community that are relevant to solving dosimetry problems.
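The kind of compartmental kinetic model that SAAM/CONSAM fits to tracer data can be illustrated with a minimal sketch. This is not SAAM itself; the rate constants, compartment layout, and integration scheme below are assumptions chosen only to show the structure of such a model:

```python
# Minimal two-compartment tracer kinetics sketch (illustrative, not SAAM):
# compartments 1 and 2 exchange material with rate constants k12 and k21,
# and compartment 1 loses material irreversibly with rate constant k0.
# Integrated with a simple explicit Euler scheme.
def simulate(q1=1.0, q2=0.0, k12=0.3, k21=0.1, k0=0.05, dt=0.001, t_end=10.0):
    lost = 0.0
    for _ in range(int(t_end / dt)):
        flow_12 = k12 * q1          # compartment 1 -> 2
        flow_21 = k21 * q2          # compartment 2 -> 1
        loss = k0 * q1              # irreversible elimination
        q1 += dt * (flow_21 - flow_12 - loss)
        q2 += dt * (flow_12 - flow_21)
        lost += dt * loss
    return q1, q2, lost

q1, q2, lost = simulate()
# Total tracer (remaining + eliminated) is conserved by construction.
print(round(q1 + q2 + lost, 6))  # → 1.0
```

Fitting such a model to measured tracer curves (what SAAM automates) means adjusting the rate constants until the simulated compartment contents match the data.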

  19. On a new method to compute photon skyshine doses around radiotherapy facilities

    Energy Technology Data Exchange (ETDEWEB)

Falcao, R.; Facure, A. [Comissao Nacional de Energia Nuclear, Rio de Janeiro (Brazil); Xavier, A. [PEN/Coppe -UFRJ, Rio de Janeiro (Brazil)

    2006-07-01

Full text of publication follows: Nowadays, in a great number of situations, constructions are raised around radiotherapy facilities. In cases where the constructions would not be in the primary x-ray beam, 'skyshine' radiation is normally accounted for. The skyshine method is commonly used to calculate the dose contribution from scattered radiation in such circumstances, when the roof shielding is designed assuming there will be no occupancy upstairs. In these cases, there is no need for the usual 1.5-2.0 m thick ceiling, and the construction costs can be considerably reduced. The existing expressions for computing these doses fail to explain mathematically the existence of a shadow area just outside the outer room walls, and its growth as one moves away from these walls. In this paper we propose a new method to compute photon skyshine doses, using geometrical considerations to find the maximum dose point. An empirical equation is derived, and its validity is tested using MCNP5 Monte Carlo calculations to simulate radiotherapy room configurations. (authors)

  20. Computer-guided facility for the study of single crystals at the gamma diffractometer GADI

    International Nuclear Information System (INIS)

    Heer, H.; Bleichert, H.; Gruhn, W.; Moeller, R.

    1984-10-01

In the study of solid-state properties it is in many cases necessary to work with single crystals. The increased demand in industry and research, as well as the desire for better characterization by means of γ-diffractometry, made it necessary to improve and modernize the existing instrument. The advantages of a computer-guided facility over conventional, semiautomatic operation are manifold. Not only the process control, but also the data acquisition and evaluation, are performed by the computer. Using a remote control, the operator is able to quickly locate a reflection and to drive the crystal into any desired measuring position. The complete logging of all important measuring parameters, the convenient data storage, and the automatic evaluation are of great use to the user. Finally, the measuring time can be extended to practically 24 hours per day. This puts characterization by means of γ-diffractometry on a completely new level. (orig.) [de

1. A guide for the selection of computer assisted mapping (CAM) and facilities information systems

    Energy Technology Data Exchange (ETDEWEB)

    Haslin, S.; Baxter, P.; Jarvis, L.

    1980-12-01

Many distribution engineers are now aware that computer assisted mapping (CAM) and facilities information systems are probably the most significant breakthrough to date in computer applications for distribution engineering. The Canadian Electrical Association (CEA) recognized this and requested that engineers of B.C. Hydro study the state of the art in Canadian utilities and the progress of CAM systems internationally. The purpose was to provide a guide to assist Canadian utility distribution engineers faced with the problem of studying the application of CAM systems as an alternative to present methods, consideration being given to the long-term and other benefits that were perhaps not apparent to those approaching this field for the first time. It soon became apparent that the technology was developing rapidly and that competition in the market was very strong. A number of publications produced by other sources also adequately covered the scope of this study. This report is thus a collection of references to reports, manuals, and other documents, with a few considerations provided for those companies interested in exploring further the use of interactive graphics. 24 refs.

  2. Computer programs for capital cost estimation, lifetime economic performance simulation, and computation of cost indexes for laser fusion and other advanced technology facilities

    International Nuclear Information System (INIS)

    Pendergrass, J.H.

    1978-01-01

    Three FORTRAN programs, CAPITAL, VENTURE, and INDEXER, have been developed to automate computations used in assessing the economic viability of proposed or conceptual laser fusion and other advanced-technology facilities, as well as conventional projects. The types of calculations performed by these programs are, respectively, capital cost estimation, lifetime economic performance simulation, and computation of cost indexes. The codes permit these three topics to be addressed with considerable sophistication commensurate with user requirements and available data
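The kind of lifetime economic performance calculation a program like VENTURE automates can be sketched with a present-value and levelized-cost computation. This is a generic illustration under assumed figures, not the actual VENTURE algorithms:

```python
# Generic lifetime-economics sketch (illustrative, not the VENTURE code):
# discount a facility's annual cash flows to a net present value, then
# spread that NPV over the plant life as a levelized annual cost.
def npv(cash_flows, rate):
    """Discount a list of year-end cash flows at the given rate."""
    return sum(cf / (1.0 + rate) ** (t + 1) for t, cf in enumerate(cash_flows))

def levelized_annual_cost(total_npv, rate, years):
    """Spread an NPV over `years` equal payments (capital recovery factor)."""
    crf = rate * (1 + rate) ** years / ((1 + rate) ** years - 1)
    return total_npv * crf

# Sanity check: levelizing a constant cost stream recovers that constant.
costs = [120.0] * 30                       # assumed constant annual cost, 30-year life
pv = npv(costs, 0.05)                      # 5% discount rate (assumed)
print(round(levelized_annual_cost(pv, 0.05, 30), 6))  # prints 120.0
```

Cost indexes of the kind INDEXER computes would enter such a sketch as escalation factors applied to each year's cash flow before discounting.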

  3. The Impact of Student Self-Efficacy on Scientific Inquiry Skills: An Exploratory Investigation in "River City," a Multi-User Virtual Environment

    Science.gov (United States)

    Ketelhut, Diane Jass

    2007-01-01

    This exploratory study investigated data-gathering behaviors exhibited by 100 seventh-grade students as they participated in a scientific inquiry-based curriculum project delivered by a multi-user virtual environment (MUVE). This research examined the relationship between students' self-efficacy on entry into the authentic scientific activity and…

  4. Enabling Extreme Scale Earth Science Applications at the Oak Ridge Leadership Computing Facility

    Science.gov (United States)

    Anantharaj, V. G.; Mozdzynski, G.; Hamrud, M.; Deconinck, W.; Smith, L.; Hack, J.

    2014-12-01

The Oak Ridge Leadership Computing Facility (OLCF), established at the Oak Ridge National Laboratory (ORNL) under the auspices of the U.S. Department of Energy (DOE), welcomes investigators from universities, government agencies, national laboratories and industry who are prepared to perform breakthrough research across a broad domain of scientific disciplines, including earth and space sciences. Titan, the OLCF flagship system, is currently listed as #2 in the Top500 list of supercomputers in the world, and the largest available for open science. The computational resources are allocated primarily via the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program, sponsored by the U.S. DOE Office of Science. In 2014, over 2.25 billion core hours on Titan were awarded via INCITE projects, including 14% of the allocation toward earth sciences. The INCITE competition is also open to research scientists based outside the USA. In fact, international research projects account for 12% of the INCITE awards in 2014. The INCITE scientific review panel also includes 20% participation from international experts. Recent accomplishments in earth sciences at OLCF include the world's first continuous simulation of 21,000 years of earth's climate history (2009) and an unprecedented simulation of a magnitude 8 earthquake over 125 sq. miles. One of the ongoing international projects involves scaling the ECMWF Integrated Forecasting System (IFS) model to over 200K cores of Titan. ECMWF is a partner in the EU funded Collaborative Research into Exascale Systemware, Tools and Applications (CRESTA) project. The significance of the research carried out within this project is the demonstration of techniques required to scale current generation Petascale capable simulation codes towards the performance levels required for running on future Exascale systems. 
One of the techniques pursued by ECMWF is to use Fortran2008 coarrays to overlap computations and communications and

  5. A browser-based multi-user working environment for physicists

    International Nuclear Information System (INIS)

    Erdmann, M; Fischer, R; Glaser, C; Klingebiel, D; Komm, M; Müller, G; Rieger, M; Steggemann, J; Urban, M; Winchen, T

    2014-01-01

Many programs in experimental particle physics do not yet have a graphical interface, or impose demanding platform and software requirements. With the most recent development of the VISPA project, we provide graphical interfaces to existing software programs and access to multiple computing clusters through standard web browsers. The scalable client-server system allows analyses to be performed in sizable teams, and relieves the individual physicist of installing and maintaining a software environment. The VISPA graphical interfaces are implemented in HTML, JavaScript, and extensions to the Python webserver. The webserver uses SSH and RPC to access user data, code, and processes on remote sites. As example applications we present graphical interfaces for steering the reconstruction framework OFFLINE of the Pierre Auger experiment, and the analysis development toolkit PXL. The browser-based VISPA system was field-tested in biweekly homework assignments of a third-year physics course with more than 100 students. We discuss the system deployment and the evaluation by the students.

  6. The design of inclusive curricula for multi-user virtual environments: a framework for developers and educators

    Directory of Open Access Journals (Sweden)

    Denise Wood

    2011-09-01

    Full Text Available Increasing access to Information Communication Technologies and a growing awareness of the importance of digital media literacy have led many educators to seek innovative solutions to harness the enthusiasm of ‘net gen’ learners while also enhancing their ability to collaborate, communicate and problem solve augmented by digital technologies. One of the emergent trends in response to these demands has been the shift away from traditional models of teaching to more flexible approaches such as the use of multi-user virtual environments (MUVEs designed to facilitate a more collaborative and participatory approach to student learning. At the same time, international initiatives such as the United Nations Millennium Development Goals, Education for All and the United Nations Convention on the Rights of Persons with Disabilities have highlighted the importance of ensuring that such teaching and learning environments are inclusive of students with diverse needs. Many universities are also responding to a widening participation agenda; a policy focus which aims to increase both the access and success rates of students from low socio-economic backgrounds. Educational technology has long been regarded as a means by which students who may be isolated by disability, geographical location and/or social circumstances can gain access to such learning opportunities. The growth in the use of MUVEs combined with increasing access to mobile communications opens up new opportunities for engaging students from diverse backgrounds through virtual learning environments. Yet despite the potential, there are many challenges in ensuring that the very students who are most able to benefit from such e-learning technologies are not further disadvantaged by a lack of attention to both the technical and pedagogical considerations required in the design of inclusive e-learning environments. This paper reports on the findings of research funded through an Australian

  7. Status of the National Ignition Facility Integrated Computer Control System (ICCS) on the Path to Ignition

    International Nuclear Information System (INIS)

Lagin, L J; Bettenhausen, R C; Bowers, G A; Carey, R W; Edwards, O D; Estes, C M; Demaret, R D; Ferguson, S W; Fisher, J M; Ho, J C; Ludwigsen, A P; Mathisen, D G; Marshall, C D; Matone, J M; McGuigan, D L; Sanchez, R J; Shelton, R T; Stout, E A; Tekle, E; Townsend, S L; Van Arsdall, P J; Wilson, E F

    2007-01-01

    The National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory is a stadium-sized facility under construction that will contain a 192-beam, 1.8-Megajoule, 500-Terawatt, ultraviolet laser system together with a 10-meter diameter target chamber with room for multiple experimental diagnostics. NIF is the world's largest and most energetic laser experimental system, providing a scientific center to study inertial confinement fusion (ICF) and matter at extreme energy densities and pressures. NIF's laser beams are designed to compress fusion targets to conditions required for thermonuclear burn, liberating more energy than required to initiate the fusion reactions. NIF is comprised of 24 independent bundles of 8 beams each using laser hardware that is modularized into more than 6,000 line replaceable units such as optical assemblies, laser amplifiers, and multifunction sensor packages containing 60,000 control and diagnostic points. NIF is operated by the large-scale Integrated Computer Control System (ICCS) in an architecture partitioned by bundle and distributed among over 800 front-end processors and 50 supervisory servers. NIF's automated control subsystems are built from a common object-oriented software framework based on CORBA distribution that deploys the software across the computer network and achieves interoperation between different languages and target architectures. A shot automation framework has been deployed during the past year to orchestrate and automate shots performed at the NIF using the ICCS. In December 2006, a full cluster of 48 beams of NIF was fired simultaneously, demonstrating that the independent bundle control system will scale to full scale of 192 beams. At present, 72 beams have been commissioned and have demonstrated 1.4-Megajoule capability of infrared light. During the next two years, the control system will be expanded to include automation of target area systems including final optics, target positioners and

  8. Status of the National Ignition Facility Integrated Computer Control System (ICCS) on the path to ignition

    International Nuclear Information System (INIS)

    Lagin, L.J.; Bettenhausen, R.C.; Bowers, G.A.; Carey, R.W.; Edwards, O.D.; Estes, C.M.; Demaret, R.D.; Ferguson, S.W.; Fisher, J.M.; Ho, J.C.; Ludwigsen, A.P.; Mathisen, D.G.; Marshall, C.D.; Matone, J.T.; McGuigan, D.L.; Sanchez, R.J.; Stout, E.A.; Tekle, E.A.; Townsend, S.L.; Van Arsdall, P.J.

    2008-01-01

    The National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory is a stadium-sized facility under construction that will contain a 192-beam, 1.8-MJ, 500-TW, ultraviolet laser system together with a 10-m diameter target chamber with room for multiple experimental diagnostics. NIF is the world's largest and most energetic laser experimental system, providing a scientific center to study inertial confinement fusion (ICF) and matter at extreme energy densities and pressures. NIF's laser beams are designed to compress fusion targets to conditions required for thermonuclear burn, liberating more energy than required to initiate the fusion reactions. NIF is comprised of 24 independent bundles of eight beams each using laser hardware that is modularized into more than 6000 line replaceable units such as optical assemblies, laser amplifiers, and multi-function sensor packages containing 60,000 control and diagnostic points. NIF is operated by the large-scale Integrated Computer Control System (ICCS) in an architecture partitioned by bundle and distributed among over 800 front-end processors and 50 supervisory servers. NIF's automated control subsystems are built from a common object-oriented software framework based on CORBA distribution that deploys the software across the computer network and achieves interoperation between different languages and target architectures. A shot automation framework has been deployed during the past year to orchestrate and automate shots performed at the NIF using the ICCS. In December 2006, a full cluster of 48 beams of NIF was fired simultaneously, demonstrating that the independent bundle control system will scale to full scale of 192 beams. At present, 72 beams have been commissioned and have demonstrated 1.4-MJ capability of infrared light. During the next 2 years, the control system will be expanded in preparation for project completion in 2009 to include automation of target area systems including final optics

  9. New challenges for HEP computing: RHIC [Relativistic Heavy Ion Collider] and CEBAF [Continuous Electron Beam Accelerator Facility

    International Nuclear Information System (INIS)

    LeVine, M.J.

    1990-01-01

We will look at two facilities: RHIC and CEBAF. CEBAF is in the construction phase; RHIC is about to begin construction. For each of them, we examine the kinds of physics measurements that motivated their construction and the implications of these experiments for computing. Emphasis will be on on-line requirements, driven by the data rates produced by these experiments

  10. The Viking viewer for connectomics: scalable multi-user annotation and summarization of large volume data sets.

    Science.gov (United States)

    Anderson, J R; Mohammed, S; Grimm, B; Jones, B W; Koshevoy, P; Tasdizen, T; Whitaker, R; Marc, R E

    2011-01-01

    Modern microscope automation permits the collection of vast amounts of continuous anatomical imagery in both two and three dimensions. These large data sets present significant challenges for data storage, access, viewing, annotation and analysis. The cost and overhead of collecting and storing the data can be extremely high. Large data sets quickly exceed an individual's capability for timely analysis and present challenges in efficiently applying transforms, if needed. Finally annotated anatomical data sets can represent a significant investment of resources and should be easily accessible to the scientific community. The Viking application was our solution created to view and annotate a 16.5 TB ultrastructural retinal connectome volume and we demonstrate its utility in reconstructing neural networks for a distinctive retinal amacrine cell class. Viking has several key features. (1) It works over the internet using HTTP and supports many concurrent users limited only by hardware. (2) It supports a multi-user, collaborative annotation strategy. (3) It cleanly demarcates viewing and analysis from data collection and hosting. (4) It is capable of applying transformations in real-time. (5) It has an easily extensible user interface, allowing addition of specialized modules without rewriting the viewer. © 2010 The Authors Journal of Microscopy © 2010 The Royal Microscopical Society.

  11. On Low-Complexity Full-diversity Detection In Multi-User MIMO Multiple-Access Channels

    KAUST Repository

    Ismail, Amr

    2014-01-28

Multiple-input multiple-output (MIMO) techniques are becoming commonplace in recent wireless communication standards. This newly introduced dimension (i.e., space) can be efficiently used to mitigate the interference in the multi-user MIMO context. In this paper, we focus on the uplink of a MIMO multiple access channel (MAC) where perfect channel state information (CSI) is only available at the destination. We provide new sufficient conditions for a wide range of space-time block codes (STBCs) to achieve full-diversity under partial interference cancellation group decoding (PICGD) with or without successive interference cancellation (SIC) for completely blind users. Interference cancellation (IC) schemes for two and three users are then provided and shown to satisfy the full-diversity criteria. Besides the complexity reduction due to the fact that PICGD enables separate decoding of distinct users without sacrificing the diversity gain, further reduction of the decoding complexity may be obtained. In fact, thanks to the structure of the proposed schemes, the real and imaginary parts of each user's symbols may be decoupled without any loss of performance. Our new IC scheme is shown to outperform a recently proposed two-user IC scheme, especially for high spectral efficiency, while requiring significantly less decoding complexity.

  12. Development of a Multi-User Polyimide-MEMS Fabrication Process and its Application to MicroHotplates

    KAUST Repository

    Lizardo, Ernesto B.

    2013-05-08

Micro-electro-mechanical systems (MEMS) became possible thanks to the silicon-based technology used to fabricate integrated circuits. Originally, MEMS fabrication was limited to silicon-based techniques and materials, but the expansion of MEMS applications created the need for a wider catalog of materials, including polymers, which are now being used to fabricate MEMS. Polyimide is a very attractive polymer for MEMS fabrication due to its high temperature stability compared to other polymers, low coefficient of thermal expansion, low film stress, and low cost. The goal of this thesis is to expand the use of Polyimide as a structural material for MEMS through the development of a multi-user fabrication process that integrates this polymer with multiple metal layers on a silicon substrate. The process also integrates amorphous silicon as a sacrificial layer to create free-standing structures. Dry etching is used to release the devices and avoid stiction. The developed process is used to fabricate platforms for micro-hotplate gas sensors. The fabrication steps for the platforms are described in detail, explaining the process specifics and capabilities. Initial testing of the micro-hotplate is presented. As the process was also used as an educational tool, some designs made by students and fabricated with the Polyimide-MEMS process are also presented.

  13. On Low-Complexity Full-diversity Detection In Multi-User MIMO Multiple-Access Channels

    KAUST Repository

    Ismail, Amr; Alouini, Mohamed-Slim

    2014-01-01

Multiple-input multiple-output (MIMO) techniques are becoming commonplace in recent wireless communication standards. This newly introduced dimension (i.e., space) can be efficiently used to mitigate the interference in the multi-user MIMO context. In this paper, we focus on the uplink of a MIMO multiple access channel (MAC) where perfect channel state information (CSI) is only available at the destination. We provide new sufficient conditions for a wide range of space-time block codes (STBCs) to achieve full-diversity under partial interference cancellation group decoding (PICGD) with or without successive interference cancellation (SIC) for completely blind users. Interference cancellation (IC) schemes for two and three users are then provided and shown to satisfy the full-diversity criteria. Besides the complexity reduction due to the fact that PICGD enables separate decoding of distinct users without sacrificing the diversity gain, further reduction of the decoding complexity may be obtained. In fact, thanks to the structure of the proposed schemes, the real and imaginary parts of each user's symbols may be decoupled without any loss of performance. Our new IC scheme is shown to outperform a recently proposed two-user IC scheme, especially for high spectral efficiency, while requiring significantly less decoding complexity.

  14. Surface Water Modeling Using an EPA Computer Code for Tritiated Waste Water Discharge from the heavy Water Facility

    International Nuclear Information System (INIS)

    Chen, K.F.

    1998-06-01

Tritium releases from the D-Area Heavy Water Facilities to the Savannah River have been analyzed. The U.S. EPA WASP5 computer code was used to simulate surface water transport for tritium releases from the D-Area Drum Wash, Rework, and DW facilities. The WASP5 model was qualified against the 1993 tritium measurements at U.S. Highway 301. At the maximum tritiated waste water concentrations, the calculated tritium concentration in the Savannah River at U.S. Highway 301 due to concurrent releases from the D-Area Heavy Water Facilities varies from 5.9 to 18.0 pCi/ml, depending on the operating conditions of these facilities. The calculated concentration is lowest when the batch release method for the Drum Wash Waste Tanks is adopted.
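WASP5 resolves the full transport problem, but the order of magnitude of such river concentrations can be checked with a fully mixed dilution estimate: concentration equals release rate divided by river flow. The release rate and flow below are assumed round numbers for illustration, not figures from the report:

```python
# Back-of-envelope fully-mixed dilution estimate (illustrative; not the
# WASP5 transport model): a steady tritium release into a river dilutes
# to roughly C = release rate / river flow once fully mixed.
def mixed_concentration(release_ci_per_day, river_flow_m3_per_s):
    """Fully mixed downstream tritium concentration in pCi/mL."""
    ci_per_s = release_ci_per_day / 86400.0      # Ci/s
    pci_per_s = ci_per_s * 1e12                  # pCi/s (1 Ci = 1e12 pCi)
    ml_per_s = river_flow_m3_per_s * 1e6         # mL/s (1 m^3 = 1e6 mL)
    return pci_per_s / ml_per_s

# e.g. an assumed 100 Ci/day release into an assumed 300 m^3/s river flow:
print(round(mixed_concentration(100.0, 300.0), 3))  # → 3.858 pCi/mL
```

A transport code adds what this estimate ignores: incomplete lateral mixing near the outfall, travel time, and time-varying releases and flows.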

  15. Development of a computer code for shielding calculation in X-ray facilities

    International Nuclear Information System (INIS)

    Borges, Diogo da S.; Lava, Deise D.; Affonso, Renato R.W.; Moreira, Maria de L.; Guimaraes, Antonio C.F.

    2014-01-01

The construction of an effective barrier against the ionizing radiation present in X-ray rooms requires consideration of many variables. The methodology used to specify the thickness of primary and secondary shielding of a traditional X-ray room considers the following factors: use factor, occupancy factor, distance between the source and the wall, workload, air kerma, and distance between the patient and the receptor. With these data it was possible to develop a computer program that identifies and uses these variables in functions obtained through regressions of the graphs provided by NCRP Report No. 147 (Structural Shielding Design for Medical X-Ray Imaging Facilities) to calculate the shielding of the room walls as well as of the darkroom wall and adjacent areas. The program is validated by comparing its results with a base case provided by that report. The computed thicknesses cover various materials such as steel, wood, and concrete. After validation, the program is applied to a real case of a radiographic room, whose visual rendering is done with the help of software used for modeling interiors and exteriors. The result is a user-friendly tool for planning radiographic rooms that comply with the limits established by the CNEN-NN-3.01 standard published in September 2011.
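The core of such shielding calculations is the standard broad-beam formulation used in reports like NCRP No. 147: a required transmission factor derived from the design goal and geometry, converted to a thickness via tenth-value layers. The sketch below shows that structure only; the numeric inputs and TVL values are placeholders, not data from the report or the paper's code:

```python
import math

# Sketch of the broad-beam barrier calculation used in structural shielding
# design: the required transmission factor B follows from the shielding
# design goal, workload and geometry, and the barrier thickness is obtained
# from tabulated tenth-value layers (TVL1 for the first layer, TVLe for
# subsequent equilibrium layers). TVL values below are placeholders.
def transmission_factor(P, d, W, U, T):
    """B = P*d^2 / (W*U*T): design goal P, source-to-wall distance d,
    workload W, use factor U, occupancy factor T."""
    return P * d * d / (W * U * T)

def barrier_thickness(B, tvl1, tvle):
    """Thickness from the number of tenth-value layers n = log10(1/B)."""
    n = math.log10(1.0 / B)
    return tvl1 + (n - 1.0) * tvle if n > 1.0 else n * tvl1

# Assumed example: P in mGy/wk, d in m, W in mGy*m^2/wk, TVLs in cm.
B = transmission_factor(P=0.02, d=3.0, W=1000.0, U=1.0, T=1.0)
print(round(barrier_thickness(B, tvl1=5.0, tvle=4.5), 3))  # thickness in cm
```

The regressions mentioned in the abstract play the role of the `tvl1`/`tvle` lookup here, turning the report's published transmission curves into evaluable functions per material and beam quality.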

  16. Computational investigation of reshock strength in hydrodynamic instability growth at the National Ignition Facility

    Science.gov (United States)

    Bender, Jason; Raman, Kumar; Huntington, Channing; Nagel, Sabrina; Morgan, Brandon; Prisbrey, Shon; MacLaren, Stephan

    2017-10-01

Experiments at the National Ignition Facility (NIF) are studying Richtmyer-Meshkov and Rayleigh-Taylor hydrodynamic instabilities in multiply-shocked plasmas. Targets feature two different-density fluids with a multimode initial perturbation at the interface, which is struck by two X-ray-driven shock waves. Here we discuss computational hydrodynamics simulations investigating the effect of second-shock ("reshock") strength on instability growth, and how these simulations are informing target design for the ongoing experimental campaign. A Reynolds-Averaged Navier-Stokes (RANS) model was used to predict motion of the spike and bubble fronts and the mixing-layer width. In addition to reshock strength, the reshock ablator thickness and the total length of the target were varied; all three parameters were found to be important for target design, particularly for ameliorating undesirable reflected shocks. The RANS data are compared to theoretical models that predict multimode instability growth proportional to the shock-induced change in interface velocity, and to currently-available data from the NIF experiments. Work performed under the auspices of the U.S. D.O.E. by Lawrence Livermore National Laboratory under Contract No. DE-AC52-07NA27344. LLNL-ABS-734611.
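The "growth proportional to the shock-induced change in interface velocity" referenced above is the impulsive (Richtmyer) model for a single-mode perturbation, da/dt = k a0 A ΔV. The sketch below evaluates that linear-theory estimate; the densities, wavelength, amplitude, and velocity jump are illustrative values, not NIF parameters:

```python
import math

# Impulsive (Richtmyer) model for linear Richtmyer-Meshkov growth:
#   da/dt = k * a0 * A * dV
# with wavenumber k = 2*pi/wavelength, initial amplitude a0,
# Atwood number A = (rho_h - rho_l)/(rho_h + rho_l), and shock-induced
# interface velocity jump dV. All numbers below are illustrative.
def atwood(rho_heavy, rho_light):
    return (rho_heavy - rho_light) / (rho_heavy + rho_light)

def richtmyer_growth_rate(wavelength, a0, rho_heavy, rho_light, delta_v):
    k = 2.0 * math.pi / wavelength
    return k * a0 * atwood(rho_heavy, rho_light) * delta_v

# In this linear picture a reshock adds a further impulse, so doubling the
# velocity jump doubles the growth rate.
g = richtmyer_growth_rate(wavelength=100e-6, a0=1e-6,
                          rho_heavy=4.0, rho_light=1.0, delta_v=50e3)
print(g)  # amplitude growth rate in m/s
```

RANS simulations of the kind described are needed precisely because this linear estimate breaks down for multimode, nonlinear, multiply-shocked mixing layers.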

  17. EXPERIMENTAL AND COMPUTATIONAL ACTIVITIES AT THE OREGON STATE UNIVERSITY NEES TSUNAMI RESEARCH FACILITY

    Directory of Open Access Journals (Sweden)

    S.C. Yim

    2009-01-01

Full Text Available A diverse series of research projects have taken place or are underway at the NEES Tsunami Research Facility at Oregon State University. Projects range from simulation of the processes and effects of tsunamis generated by sub-aerial and submarine landslides (NEESR, Georgia Tech.), model comparisons of tsunami wave effects on bottom profiles and scouring (NEESR, Princeton University), model comparisons of wave-induced motions on rigid and free bodies (Shared-Use, Cornell), numerical model simulations and testing of breaking waves and inundation over topography (NEESR, TAMU), structural testing and development of standards for tsunami engineering and design (NEESR, University of Hawaii), and wave loads on coastal bridge structures (non-NEES), to upgrading the two-dimensional wave generator of the Large Wave Flume. A NEESR payload project (Colorado State University) was undertaken that seeks to improve the understanding of the stresses from wave loading and run-up on residential structures. Advanced computational tools for coupling fluid-structure interaction, including turbulence, contact and impact, are being developed to assist with the design of experiments and complement parametric studies. These projects will contribute to understanding the physical processes that occur during earthquake-generated tsunamis, including structural stress, debris flow and scour, inundation and overland flow, and landslide-generated tsunamis. Analytical and numerical model development and comparisons with the experimental results give engineers additional predictive tools to assist in the development of robust structures as well as identification of hazard zones and formulation of hazard plans.

  18. Development of a computational code for calculations of shielding in dental facilities

    International Nuclear Information System (INIS)

    Lava, Deise D.; Borges, Diogo da S.; Affonso, Renato R.W.; Guimaraes, Antonio C.F.; Moreira, Maria de L.

    2014-01-01

    This paper addresses shielding calculations that minimize the exposure of patients and personnel to ionizing radiation. The work follows the report Radiation Protection in Dentistry (NCRP-145), which establishes the calculations and standards to be adopted to ensure the safety of those who may be exposed to ionizing radiation in dental facilities, according to the dose limits established by standard CNEN-NN-3.1, published in September 2011. The methodology combines a computer language for processing the data provided by that report with a commercial application used for creating residential projects and decoration. FORTRAN was adopted for the application to a real case. The result is a program capable of returning the required thickness of materials such as steel, lead, wood, glass, plaster, acrylic, and leaded glass, which can be used for effective shielding against single-pulse or continuous beams. Several variables enter the shield-thickness calculation: number of films used per week, film load, use factor, occupancy factor, distance between the wall and the source, transmission factor, workload, area definition, beam intensity, and intraoral versus panoramic examination. Before applying the methodology, the results were validated against examples provided by NCRP-145; the calculations reproduced from those examples agree with the report
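The kind of barrier calculation such a program performs can be sketched in a few lines. This is a generic NCRP-style transmission computation, not the paper's actual code; the function names and the numeric values (workload, distance, half-value layer) are illustrative assumptions.

```python
import math

def required_transmission(P, d, W, U, T):
    """Barrier transmission factor B = P * d^2 / (W * U * T):
    P  design dose limit (mGy/week), d source-to-barrier distance (m),
    W  workload (mGy*m^2/week), U use factor, T occupancy factor."""
    return P * d * d / (W * U * T)

def thickness_from_hvl(B, hvl_mm):
    """Shield thickness (mm) that attenuates the beam to transmission B,
    assuming simple exponential attenuation with one half-value layer."""
    return hvl_mm * math.log2(1.0 / B)

# Illustrative case: 0.1 mGy/week limit, 2 m distance, 8 mGy*m^2/week workload
B = required_transmission(P=0.1, d=2.0, W=8.0, U=1.0, T=1.0)
x = thickness_from_hvl(B, hvl_mm=0.09)   # ~0.09 mm HVL assumed for lead at dental kVp
```

A production code would instead interpolate the material-specific transmission curves tabulated in NCRP-145 rather than assume a single half-value layer.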

  19. Thermal studies of the canister staging pit in a hypothetical Yucca Mountain canister handling facility using computational fluid dynamics

    International Nuclear Information System (INIS)

    Soltani, Mehdi; Barringer, Chris; Bues, Timothy T. de

    2007-01-01

    The proposed Yucca Mountain nuclear waste storage site will contain facilities for preparing the radioactive waste canisters for burial. One earlier design, the Canister Handling Facility Staging Pit, is no longer used, but its thermal evaluation is typical of such facilities. Structural concrete can be adversely affected by the heat from radioactive decay, so facilities must have heating, ventilation, and air conditioning (HVAC) systems for cooling. Concrete temperatures are a function of conductive, convective, and radiative heat transfer, and their prediction under such complex conditions can only be adequately handled by computational fluid dynamics (CFD). The objective of the CFD analysis was to predict concrete temperatures under normal and off-normal conditions. Normal operation assumed steady-state conditions with constant HVAC flow and temperatures. Off-normal operation was an unsteady scenario that assumed a total HVAC failure for a period of 30 days; this scenario was particularly complex in that the concrete temperatures would gradually rise and air flows would be buoyancy-driven. The CFD analysis concluded that concrete wall temperatures would remain at or below the maximum temperature limits in both the normal and off-normal scenarios. While this analysis was specific to a facility design that is no longer used, it demonstrates that such facilities can reasonably be expected to have satisfactory thermal performance. (author)
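The 30-day HVAC-failure scenario is, at its simplest, an energy balance between decay heat input and passive losses. The lumped-capacitance sketch below shows the shape of that transient; it is a zero-dimensional stand-in for the CFD analysis, and every parameter value is an illustrative assumption, not facility data.

```python
def temperature_history(T0, q_decay, m_c, h_loss, dt, steps):
    """Lumped-capacitance energy balance: dT/dt = (Q - h*(T - T0)) / (m*c).
    Q decay heat (W), m_c lumped heat capacity (J/K), h_loss effective
    loss coefficient to surroundings (W/K). Explicit Euler integration."""
    T = T0
    history = [T]
    for _ in range(steps):
        T += dt * (q_decay - h_loss * (T - T0)) / m_c
        history.append(T)
    return history

# 30 days at 1-hour steps (all parameters illustrative)
hist = temperature_history(T0=30.0, q_decay=5e3, m_c=2e8, h_loss=50.0,
                           dt=3600.0, steps=720)
```

The temperature rises toward the steady-state limit T0 + Q/h without reaching it in 30 days; resolving where and when local concrete hot spots approach the limit is what requires the full buoyancy-driven CFD model.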

  20. The INEL Tritium Research Facility

    International Nuclear Information System (INIS)

    Longhurst, G.R.

    1990-01-01

    The Tritium Research Facility (TRF) at the Idaho National Engineering Laboratory (INEL) is a small, multi-user facility dedicated to research into processes and phenomena associated with interaction of hydrogen isotopes with other materials. Focusing on bench-scale experiments, the main objectives include resolution of issues related to tritium safety in fusion reactors and the science and technology pertinent to some of those issues. In this report the TRF and many of its capabilities will be described. Work presently or recently underway there will be discussed, and the implications of that work to the development of fusion energy systems will be considered. (orig.)

  1. The INEL Tritium Research Facility

    Energy Technology Data Exchange (ETDEWEB)

    Longhurst, G.R. (Idaho National Engineering Lab., Idaho Falls (USA))

    1990-06-01

    The Tritium Research Facility (TRF) at the Idaho National Engineering Laboratory (INEL) is a small, multi-user facility dedicated to research into processes and phenomena associated with interaction of hydrogen isotopes with other materials. Focusing on bench-scale experiments, the main objectives include resolution of issues related to tritium safety in fusion reactors and the science and technology pertinent to some of those issues. In this report the TRF and many of its capabilities will be described. Work presently or recently underway there will be discussed, and the implications of that work to the development of fusion energy systems will be considered. (orig.).

  2. Computer based plant display and digital control system of Wolsong NPP Tritium Removal Facility

    International Nuclear Information System (INIS)

    Jung, C.; Smith, B.; Tosello, G.; Grosbois, J. de; Ahn, J.

    2007-01-01

    The Wolsong Tritium Removal Facility (WTRF) is an AECL-designed, first-of-a-kind facility that removes tritium from the heavy water used in the CANDU reactors in operation at the Wolsong Nuclear Power Plant in South Korea. The Plant Display and Control System (PDCS) provides digital plant monitoring and control for the WTRF and offers the advantages of state-of-the-art digital control system technologies for operations and maintenance. The overall features of the PDCS are described, along with some of the specific approaches taken on the project to save construction time and costs, to reduce in-service life-cycle costs, and to improve quality. The PDCS consists of two separate computer sub-systems: the Digital Control System (DCS) and the Plant Display System (PDS). The PDS provides the computer-based Human-Machine Interface (HMI) for operators and permits efficient supervisory or device-level monitoring and control. A System Maintenance Console (SMC) is included in the PDS for software and hardware configuration and on-line maintenance. A Historical Data System (HDS) is also included in the PDS as a data server that continuously captures and logs process data and events for long-term storage and on-demand selective retrieval. The PDCS of WTRF has been designed and implemented on an off-the-shelf PDS/DCS product combination, the DeltaV system from Emerson. The design includes fully redundant Ethernet network communications, controllers, and power supplies, with redundancy on selected I/O modules. The DCS provides fieldbus communications to interface with third-party controllers supplied on specialized skids, and supports HART communication with field transmitters. The DCS control logic was configured using a modular and graphical approach. The control strategies are primarily device control modules implemented as autonomous control loops, using IEC 61131-3 Function Block Diagram (FBD) and Structured Text.

  3. Computational Modeling in Support of High Altitude Testing Facilities, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Simulation technology plays an important role in propulsion test facility design and development by assessing risks, identifying failure modes and predicting...

  4. Computational Modeling in Support of High Altitude Testing Facilities, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Simulation technology plays an important role in rocket engine test facility design and development by assessing risks, identifying failure modes and predicting...

  5. COMPUTING

    CERN Multimedia

    P. McBride

    The Computing Project is preparing for a busy year where the primary emphasis of the project moves towards steady operations. Following the very successful completion of the Computing Software and Analysis challenge (CSA06) last fall, we have reorganized and established four groups in the computing area: Commissioning, User Support, Facility/Infrastructure Operations and Data Operations. These groups work closely together with groups from the Offline Project in planning for data processing and operations. Monte Carlo production has continued since CSA06, with about 30M events produced each month to be used for HLT studies and physics validation. Monte Carlo production will continue throughout the year in the preparation of large samples for physics and detector studies, ramping to 50M events/month for CSA07. Commissioning of the full CMS computing system is a major goal for 2007. Site monitoring is an important commissioning component and work is ongoing to devise CMS-specific tests to be included in Service Availa...

  6. The Emergence of Large-Scale Computer Assisted Summative Examination Facilities in Higher Education

    NARCIS (Netherlands)

    Draaijer, S.; Warburton, W. I.

    2014-01-01

    A case study is presented of VU University Amsterdam where a dedicated large-scale CAA examination facility was established. In the facility, 385 students can take an exam concurrently. The case study describes the change factors and processes leading up to the decision by the institution to

  7. Potential applications of artificial intelligence in computer-based management systems for mixed waste incinerator facility operation

    International Nuclear Information System (INIS)

    Rivera, A.L.; Singh, S.P.N.; Ferrada, J.J.

    1991-01-01

    The Department of Energy/Oak Ridge Field Office (DOE/OR) operates a mixed waste incinerator facility at the Oak Ridge K-25 Site, designed for the thermal treatment of incinerable liquid, sludge, and solid waste regulated under the Toxic Substances Control Act (TSCA) and the Resource Conservation and Recovery Act (RCRA). Operation of the TSCA Incinerator is highly constrained by regulatory, institutional, technical, and resource-availability requirements. This presents an opportunity to apply computer technology as a technical resource for mixed waste incinerator operation, helping to promote and sustain a continuous performance-improvement process while demonstrating compliance. This paper describes mixed waste incinerator facility performance-oriented tasks that could be assisted by Artificial Intelligence (AI) and the requirements for AI tools that would implement these algorithms in a computer-based system. 4 figs., 1 tab

  8. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview In autumn the main focus was to process and handle CRAFT data and to perform the Summer08 MC production. The operational aspects were well covered by regular Computing Shifts, experts on duty and Computing Run Coordination. At the Computing Resource Board (CRB) in October, a model to account for service work at Tier-2s was approved. The computing resources for 2009 were reviewed for presentation at the C-RRB. The quarterly resource monitoring is continuing. Facilities/Infrastructure operations Operations during CRAFT data taking ran fine. This proved to be a very valuable experience for T0 workflows and operations. The transfers of custodial data to most T1s went smoothly. A first round of reprocessing started at the Tier-1 centers at the end of November; it will take about two weeks. The Computing Shifts procedure was tested at full scale during this period and proved to be very efficient: 30 Computing Shifts Persons (CSP) and 10 Computing Resources Coordinators (CRC). The shift program for the shut down w...

  9. Testing SLURM open source batch system for a Tier1/Tier2 HEP computing facility

    Science.gov (United States)

    Donvito, Giacinto; Salomoni, Davide; Italiano, Alessandro

    2014-06-01

    This work presents the testing activities carried out to verify whether the SLURM batch system could be used as the production batch system of a typical Tier1/Tier2 HEP computing center. SLURM (Simple Linux Utility for Resource Management) is an Open Source batch system developed mainly by the Lawrence Livermore National Laboratory, SchedMD, Linux NetworX, Hewlett-Packard, and Groupe Bull. Testing focused both on verifying the functionalities of the batch system and on the performance that SLURM is able to offer. We first describe our initial set of requirements. Functionally, we started configuring SLURM so that it replicates all the scheduling policies already used in production in the computing centers involved in the test, i.e. INFN-Bari and the INFN-Tier1 at CNAF, Bologna. Currently, the INFN-Tier1 is using IBM LSF (Load Sharing Facility), while INFN-Bari, an LHC Tier2 for both CMS and Alice, is using Torque as resource manager and MAUI as scheduler. We show how we configured SLURM in order to enable several scheduling functionalities such as Hierarchical FairShare, Quality of Service, user-based and group-based priority, limits on the number of jobs per user/group/queue, job age scheduling, job size scheduling, and scheduling of consumable resources. We then show how different job typologies, like serial, MPI, multi-thread, whole-node and interactive jobs, can be managed. Tests on the use of ACLs on queues and on other resources are then described. A peculiar SLURM feature we also verified is event triggers, useful for configuring specific actions on each possible event in the batch system. We also tested highly available configurations for the master node. This feature is of paramount importance, since a mandatory requirement in our scenarios is to have a working farm cluster even in case of hardware failure of the server(s) hosting the batch system. Among our requirements there is also the possibility to deal with pre-execution and post
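The scheduling functionalities listed above map onto standard keys of SLURM's multifactor priority plugin. The fragment below is a minimal illustration of how such a configuration might look; the weights and decay half-life are placeholder values, not the INFN production settings.

```ini
# slurm.conf (fragment) -- multifactor priority scheduling
PriorityType=priority/multifactor
PriorityDecayHalfLife=7-0          # half-life for fair-share usage decay (7 days)
PriorityWeightFairshare=100000     # hierarchical fair-share component
PriorityWeightAge=10000            # job-age component
PriorityWeightJobSize=1000         # job-size component
PriorityWeightQOS=50000            # quality-of-service component
AccountingStorageEnforce=limits,qos   # enforce per-user/group and QOS limits
```

Per-user and per-group job limits of the kind mentioned in the text are then set in the accounting database (e.g. via `sacctmgr`) rather than in slurm.conf itself.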

  10. Testing SLURM open source batch system for a Tier1/Tier2 HEP computing facility

    International Nuclear Information System (INIS)

    Donvito, Giacinto; Italiano, Alessandro; Salomoni, Davide

    2014-01-01

    This work presents the testing activities carried out to verify whether the SLURM batch system could be used as the production batch system of a typical Tier1/Tier2 HEP computing center. SLURM (Simple Linux Utility for Resource Management) is an Open Source batch system developed mainly by the Lawrence Livermore National Laboratory, SchedMD, Linux NetworX, Hewlett-Packard, and Groupe Bull. Testing focused both on verifying the functionalities of the batch system and on the performance that SLURM is able to offer. We first describe our initial set of requirements. Functionally, we started configuring SLURM so that it replicates all the scheduling policies already used in production in the computing centers involved in the test, i.e. INFN-Bari and the INFN-Tier1 at CNAF, Bologna. Currently, the INFN-Tier1 is using IBM LSF (Load Sharing Facility), while INFN-Bari, an LHC Tier2 for both CMS and Alice, is using Torque as resource manager and MAUI as scheduler. We show how we configured SLURM in order to enable several scheduling functionalities such as Hierarchical FairShare, Quality of Service, user-based and group-based priority, limits on the number of jobs per user/group/queue, job age scheduling, job size scheduling, and scheduling of consumable resources. We then show how different job typologies, like serial, MPI, multi-thread, whole-node and interactive jobs, can be managed. Tests on the use of ACLs on queues and on other resources are then described. A peculiar SLURM feature we also verified is event triggers, useful for configuring specific actions on each possible event in the batch system. We also tested highly available configurations for the master node. This feature is of paramount importance, since a mandatory requirement in our scenarios is to have a working farm cluster even in case of hardware failure of the server(s) hosting the batch system. Among our requirements there is also the possibility to deal with pre-execution and post

  11. Advanced Control Test Operation (ACTO) facility

    International Nuclear Information System (INIS)

    Ball, S.J.

    1987-01-01

    The Advanced Control Test Operation (ACTO) project, sponsored by the US Department of Energy (DOE), is being developed to enable the latest technology, automation, and advanced control methods to be incorporated into nuclear power plants. The facility is proposed as a national multi-user center for advanced control development and testing, to be completed in 1991. It will support a wide variety of reactor concepts and will be used by researchers from Oak Ridge National Laboratory (ORNL), plus scientists and engineers from industry, other national laboratories, universities, and utilities. ACTO will also include telecommunication facilities for remote users.

  12. Draft of diagnostic techniques for primary coolant circuit facilities using control computer

    International Nuclear Information System (INIS)

    Suchy, R.; Procka, V.; Murin, V.; Rybarova, D.

    A method is proposed for the in-service, on-line diagnosis of selected primary-circuit components by means of a control computer. Computer processing will cover measurements of neutron flux, the pressure difference across the pumps and the core, and the vibrations of mechanical parts of the primary circuit. (H.S.)

  13. National Ignition Facility sub-system design requirements computer system SSDR 1.5.1

    International Nuclear Information System (INIS)

    Spann, J.; VanArsdall, P.; Bliss, E.

    1996-01-01

    This System Design Requirement document establishes the performance, design, development, and test requirements for the Computer System, WBS 1.5.1, which is part of the NIF Integrated Computer Control System (ICCS). This document responds directly to the requirements detailed in the ICCS document (WBS 1.5), which is the document directly above it.

  14. A computational test facility for distributed analysis of gravitational wave signals

    International Nuclear Information System (INIS)

    Amico, P; Bosi, L; Cattuto, C; Gammaitoni, L; Punturo, M; Travasso, F; Vocca, H

    2004-01-01

    In the gravitational wave detector Virgo, the in-time detection of a gravitational wave signal from a coalescing binary stellar system is an intensive computational task. A parallel computing scheme using the Message Passing Interface (MPI) is described. Performance results on a small-scale cluster are reported.
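The search the abstract parallelizes is embarrassingly parallel over a bank of waveform templates: each worker correlates the detector stream against its slice of the bank and the best match is reduced across workers. The toy sketch below shows that per-worker kernel in plain Python (no MPI, tiny hand-made data); the function names and signals are illustrative, not Virgo code.

```python
def correlate(signal, template):
    """Sliding inner product of template against signal; returns the
    peak value and its offset (a toy stand-in for matched filtering)."""
    best, best_off = float("-inf"), 0
    for off in range(len(signal) - len(template) + 1):
        s = sum(signal[off + i] * template[i] for i in range(len(template)))
        if s > best:
            best, best_off = s, off
    return best, best_off

def search(signal, templates):
    """Score every template; in the MPI scheme each rank would take a
    slice of the template bank and the winners would be reduced."""
    return max((correlate(signal, t) + (idx,) for idx, t in enumerate(templates)),
               key=lambda r: r[0])

sig = [0.0, 0.1, 1.0, 2.0, 1.0, 0.1, 0.0]
bank = [[1.0, 2.0, 1.0], [1.0, -2.0, 1.0]]
score, offset, index = search(sig, bank)
```

In the real pipeline the correlation is done in the frequency domain with noise weighting, and the template bank is large enough that distributing it across ranks dominates the design.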

  15. National Ignition Facility system design requirements NIF integrated computer controls SDR004

    International Nuclear Information System (INIS)

    Bliss, E.

    1996-01-01

    This System Design Requirement document establishes the performance, design, development, and test requirements for the NIF Integrated Computer Control System. The Integrated Computer Control System (ICCS) is covered in NIF WBS element 1.5. This document responds directly to the requirements detailed in the NIF Functional Requirements/Primary Criteria, and is supported by subsystem design requirements documents for each major ICCS subsystem.

  16. Laser performance operations model (LPOM): a computational system that automates the setup and performance analysis of the national ignition facility

    Energy Technology Data Exchange (ETDEWEB)

    Shaw, M; House, R; Williams, W; Haynam, C; White, R; Orth, C; Sacks, R [Lawrence Livermore National Laboratory, 7000 East Avenue, Livermore, CA, 94550 (United States)], E-mail: shaw7@llnl.gov

    2008-05-15

    The National Ignition Facility (NIF) is a stadium-sized facility containing a 192-beam, 1.8-MJ, 500-TW, 351-nm laser system together with a 10-m-diameter target chamber with room for many target diagnostics. NIF will be the world's largest laser experimental system, providing a national center to study inertial confinement fusion and the physics of matter at extreme energy densities and pressures. A computational system, the Laser Performance Operations Model (LPOM), has been developed and deployed that automates the laser setup process and accurately predicts laser energetics. LPOM determines the settings of the injection laser system required to achieve the desired main laser output, provides equipment protection, determines the diagnostic setup, and supplies post-shot data analysis and reporting.

  17. Advantages for the introduction of computer techniques in centralized supervision of radiation levels in nuclear facilities

    International Nuclear Information System (INIS)

    Vialettes, H.; Leblanc, P.

    1980-01-01

    A new computerized information system at the Saclay Center, comprising 120 measuring channels, is described. The advantages this system offers over the systems previously in use are presented. Experimental results are given which support the argument that the system can effectively supervise the radioisotope facility at the Center. (B.G.)

  18. Development of Parallel Computing Framework to Enhance Radiation Transport Code Capabilities for Rare Isotope Beam Facility Design

    Energy Technology Data Exchange (ETDEWEB)

    Kostin, Mikhail [Michigan State Univ., East Lansing, MI (United States); Mokhov, Nikolai [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Niita, Koji [Research Organization for Information Science and Technology, Ibaraki-ken (Japan)

    2013-09-25

    A parallel computing framework has been developed for use with general-purpose radiation transport codes. The framework was implemented as a C++ module that uses MPI for message passing. It is intended to be used with older radiation transport codes implemented in Fortran 77, Fortran 90, or C. The module is largely independent of the radiation transport codes it is used with, and is connected to them by means of a number of interface functions. The framework was developed and tested in conjunction with the MARS15 code. It can be used with other codes such as PHITS, FLUKA, and MCNP after certain adjustments. Besides the parallel computing functionality, the framework offers a checkpoint facility that allows calculations to be restarted from a saved checkpoint file. The checkpoint facility can be used in single-process calculations as well as in the parallel regime. The framework corrects some of the known problems with scheduling and load balancing found in the original implementations of the parallel computing functionality in MARS15 and PHITS. The framework can be used efficiently on homogeneous systems and on networks of workstations, where interference from other users is possible.
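The checkpoint facility described above periodically saves the run state so an interrupted calculation resumes from the last save instead of starting over. The Python sketch below illustrates that pattern for a single-process run; the file name, batch size, and the stand-in "work" loop are all illustrative assumptions, not the framework's C++/MPI implementation.

```python
import os
import pickle

CKPT = "run.ckpt"   # hypothetical checkpoint file name

def run(total_events, batch=1000):
    """Toy event loop with checkpointing: state is saved after every
    batch so an interrupted run restarts from the last checkpoint."""
    done, tally = 0, 0.0
    if os.path.exists(CKPT):                 # resume if a checkpoint exists
        with open(CKPT, "rb") as f:
            done, tally = pickle.load(f)
    while done < total_events:
        n = min(batch, total_events - done)
        tally += sum((done + i) % 7 for i in range(n))  # stand-in for transport work
        done += n
        with open(CKPT, "wb") as f:          # save progress after each batch
            pickle.dump((done, tally), f)
    return done, tally

done, tally = run(5000)
os.remove(CKPT)   # clean up so the next run starts fresh
```

In the MPI regime the same idea applies per rank, with the extra wrinkle that all ranks must checkpoint a consistent event count for the restart to be correct.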

  19. Performances of Hybrid Amplitude Shape Modulation for UWB Communications Systems over AWGN Channel in a Single and Multi-User Environment

    Directory of Open Access Journals (Sweden)

    M. Herceg

    2009-09-01

    This paper analyzes the performance of the hybrid Amplitude Shape Modulation (h-ASM) scheme for time-hopping ultra-wideband (TH-UWB) communication systems in single- and multi-user environments. h-ASM is a combination of Pulse Amplitude Modulation (PAM) and Pulse Shape Modulation (PSM) based on modified Hermite pulses (MHP). The scheme is suitable for high-rate data transmission applications because b = log2(MN) bits can be mapped to one waveform. The channel capacity and error probability over an AWGN channel are derived and compared with those of other modulation schemes.
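The rate advantage comes directly from the symbol alphabet size: combining M amplitude levels with N pulse shapes gives an alphabet of M*N waveforms, hence b = log2(MN) bits per transmitted pulse. A one-line check (function name is ours):

```python
import math

def bits_per_waveform(M, N):
    """h-ASM maps b = log2(M*N) bits to one waveform, combining
    M PAM amplitude levels with N Hermite-pulse shapes."""
    return int(math.log2(M * N))

b = bits_per_waveform(4, 4)   # 4 amplitudes x 4 pulse shapes -> 4 bits/pulse
```

So doubling either the amplitude alphabet or the shape alphabet adds exactly one bit per waveform.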

  20. Simple computational modeling for human extracorporeal irradiation using the BNCT facility of the RA-3 Reactor

    International Nuclear Information System (INIS)

    Farias, Ruben; Gonzalez, S.J.; Bellino, A.; Sztenjberg, M.; Pinto, J.; Thorp, Silvia I.; Gadan, M.; Pozzi, Emiliano; Schwint, Amanda E.; Heber, Elisa M.; Trivillin, V.A.; Zarza, Leandro G.; Estryk, Guillermo; Miller, M.; Bortolussi, S.; Soto, M.S.; Nigg, D.W.

    2009-01-01

    We present a simple computational model of the RA-3 reactor developed using the Monte Carlo transport code MCNP. The model parameters are adjusted to reproduce experimentally measured points in air, and the source is validated in an acrylic phantom. Performance analysis is carried out using computational models of animal extracorporeal irradiation of liver and lung. Analysis is also performed inside a neutron-shielded receptacle used for the irradiation of rats in a model of hepatic metastases. The computational model reproduces the experimental behavior in all the analyzed cases with a maximum difference of 10 percent. (author)
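The core of any such Monte Carlo transport estimate is sampling particle path lengths and tallying survivors, then comparing to an analytic benchmark, which is also how a simplified source model gets validated against measured points. The sketch below is a generic, seeded illustration of that workflow (uncollided transmission through a slab), not the paper's MCNP model.

```python
import math
import random

def mc_transmission(mu, x, n, seed=1):
    """Monte Carlo estimate of uncollided transmission through a slab of
    thickness x: sample exponential path lengths with attenuation
    coefficient mu and count particles that cross without interacting.
    The analytic answer is exp(-mu * x)."""
    rng = random.Random(seed)
    passed = sum(1 for _ in range(n) if rng.expovariate(mu) > x)
    return passed / n

est = mc_transmission(mu=1.0, x=2.0, n=100_000)
exact = math.exp(-2.0)
```

With 10^5 histories the statistical error is about 0.3%, comfortably within the 10% agreement quoted for the full model.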

  1. A personal computer code for seismic evaluations of nuclear power plants facilities

    International Nuclear Information System (INIS)

    Xu, J.; Philippacopoulos, A.J.; Graves, H.

    1990-01-01

    The program CARES (Computer Analysis for Rapid Evaluation of Structures) is an integrated computational system being developed by Brookhaven National Laboratory (BNL) for the U.S. Nuclear Regulatory Commission. It is specifically designed as a personal computer (PC) package that may be used to determine the validity and accuracy of analysis methodologies used for structural safety evaluations of nuclear power plants. CARES is structured in a modular format; each module performs a specific type of analysis, i.e., static or dynamic, linear or nonlinear, etc. This paper describes the various features that have been implemented in the Seismic Module of CARES.

  2. Specific features of organizng the computer-aided design of radio-electronic equipment for electrophysical facilities

    International Nuclear Information System (INIS)

    Mozin, I.V.; Vasil'ev, M.P.

    1985-01-01

    Problems in developing systems for the computer-aided design (CAD) of radio-electronic equipment for large electrophysical facilities, such as new-generation charged-particle accelerators, are discussed. The PLATA subsystem, a part of the CAD system used for printed-circuit design, is described. PLATA is used to design, on average, up to 150 types of circuits a year, 100-120 of which are circuits of increased complexity. In this setting, designer productivity in documentation work nearly doubles.

  3. Automated Computer-Based Facility for Measurement of Near-Field Structure of Microwave Radiators and Scatterers

    DEFF Research Database (Denmark)

    Mishra, Shantnu R.; Pavlasek, Tomas J. F.; Muresan, Letitia V.

    1980-01-01

    An automatic facility for measuring the three-dimensional structure of the near fields of microwave radiators and scatterers is described. The amplitude and phase of different polarization components can be recorded in analog and digital form using a microprocessor-based system. The stored data are transferred to a large high-speed computer for bulk processing and for the production of isophot and equiphase contour maps or profiles. The performance of the system is demonstrated through results for a single conical horn, for interacting rectangular horns, and for multiple cylindrical scatterers...

  4. Microprocessor-controlled facility for I.N.A.A. using short half-life nuclides

    International Nuclear Information System (INIS)

    Bode, P.; Korthoven, P.J.M.; Bruin, M. de

    1986-01-01

    At IRI a new, fully automated facility for short half-life INAA is being developed and installed at the Institute's 2 MW reactor. The fast rabbit transfer system is constructed only of plastic and carbon-fiber parts, so that rabbit contamination is minimized. The system is automated in such a way that it can operate safely without direct supervision; the sequence of irradiations and measurements is optimized by a computer program for a given set of samples and analysis procedures. The rabbit system is controlled by an Apple IIe computer connected to the central PDP 11/44 system of the Radiochemistry department. For a given set of samples and required analysis procedures (irradiation, decay, and measurement times), the central computer calculates an optimal sequence of individual actions (transfer from and to the reactor, sample storage, or detector) to be carried out by the system. This sequence is loaded into the Apple computer as a series of commands together with timing information. Actual control of the procedure occurs through the peripheral computer, which makes the system independent of delays or break-downs of the central multi-user computer system. The hardware, software, and operating characteristics of the fast rabbit system are discussed. (author)

  5. Requirements Report Computer Software System for a Semi-Automatic Pipe Handling System and Fabrication Facility

    National Research Council Canada - National Science Library

    1980-01-01

    This report presents the requirements of the computer software that must be developed to create Pipe Detail Drawings and to support the processing of the Pipe Detail Drawings through the Pipe Shop...

  6. Computing Facilities for AI: A Survey of Present and Near-Future Options

    OpenAIRE

    Fahlman, Scott

    1981-01-01

    At the recent AAAI conference at Stanford, it became apparent that many new AI research centers are being established around the country in industrial and governmental settings and in universities that have not paid much attention to AI in the past. At the same time, many of the established AI centers are in the process of converting from older facilities, primarily based on Decsystem-10 and Decsystem-20 machines, to a variety of newer options. At present, unfortunately, there is no simple an...

  7. Interactive simulation of nuclear power systems using a dedicated minicomputer - computer graphics facility

    International Nuclear Information System (INIS)

    Tye, C.; Sezgen, A.O.

    1980-01-01

    The design of control systems and operational procedures for large-scale nuclear power plant poses a difficult optimization problem requiring substantial computational effort. Plant dynamic simulation using digital minicomputers offers the prospect of relatively low-cost computing and, when combined with graphical input/output, provides a powerful tool for studying such problems. The paper discusses the results obtained from a simulation study carried out at the Computer Graphics Unit of the University of Manchester using a typical station control model for an Advanced Gas-Cooled Reactor. Particular emphasis is placed on the use of computer graphics for information display, parameter and control-system optimization, and techniques for using graphical input to define and/or modify the control-system topology. Experience gained from this study has shown that a relatively modest minicomputer system can be used to simulate large-scale dynamic systems, and that highly interactive computer graphics can relieve the designer of many of the tedious aspects of simulation, leaving him free to concentrate on the more creative aspects of his work. (author)

  8. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping up, and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, which was already hitting the computing-model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion An activity that is still in progress is computing for the heavy-ion program. The heavy-ion events are collected without zero suppression, so the event size is much larger, at roughly 11 MB per event of RAW. The central collisions are more complex and...

  9. A stand alone computer system to aid the development of mirror fusion test facility RF heating systems

    International Nuclear Information System (INIS)

    Thomas, R.A.

    1983-01-01

    The Mirror Fusion Test Facility (MFTF-B) control system architecture requires the Supervisory Control and Diagnostic System (SCDS) to communicate with a LSI-11 Local Control Computer (LCC) that in turn communicates via a fiber optic link to CAMAC based control hardware located near the machine. In many cases, the control hardware is very complex and requires a sizable development effort prior to being integrated into the overall MFTF-B system. One such effort was the development of the Electron Cyclotron Resonance Heating (ECRH) system. It became clear that a stand alone computer system was needed to simulate the functions of SCDS. This paper describes the hardware and software necessary to implement the SCDS Simulation Computer (SSC). It consists of a Digital Equipment Corporation (DEC) LSI-11 computer and a Winchester/Floppy disk operating under the DEC RT-11 operating system. All application software for MFTF-B is programmed in PASCAL, which allowed us to adapt procedures originally written for SCDS to the SSC. This nearly identical software interface means that software written during the equipment development will be useful to the SCDS programmers in the integration phase

  10. Three-dimensional coupled Monte Carlo-discrete ordinates computational scheme for shielding calculations of large and complex nuclear facilities

    International Nuclear Information System (INIS)

    Chen, Y.; Fischer, U.

    2005-01-01

    Shielding calculations of advanced nuclear facilities such as accelerator based neutron sources or fusion devices of the tokamak type are complicated due to their complex geometries and their large dimensions, including bulk shields of several meters thickness. While the complexity of the geometry in the shielding calculation can hardly be handled by the discrete ordinates method, the deep penetration of radiation through bulk shields is a severe challenge for the Monte Carlo particle transport technique. This work proposes a dedicated computational scheme for coupled Monte Carlo-Discrete Ordinates transport calculations to handle this kind of shielding problem. The Monte Carlo technique is used to simulate the particle generation and transport in the target region with both complex geometry and reaction physics, and the discrete ordinates method is used to treat the deep penetration problem in the bulk shield. The coupling scheme has been implemented in a program system by loosely integrating the Monte Carlo transport code MCNP, the three-dimensional discrete ordinates code TORT and a newly developed coupling interface program for the mapping process. Test calculations were performed with comparison to MCNP solutions. Satisfactory agreement was obtained between the two approaches. The program system has been chosen to treat the complicated shielding problem of the accelerator-based IFMIF neutron source. The successful application demonstrates that the coupling scheme, as realized in the program system, is a useful computational tool for the shielding analysis of complex and large nuclear facilities. (authors)
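The mapping step performed by such a coupling interface can be sketched as follows. This is a simplified illustration of binning Monte Carlo surface-crossing particles into space-angle-energy bins to form a boundary source for a discrete-ordinates sweep, not the actual MCNP-TORT interface; all bin structures and track data are hypothetical.

```python
from collections import defaultdict

# Hypothetical bin structures on the coupling surface.
Z_EDGES = [0.0, 50.0, 100.0]       # axial position bins (cm)
MU_EDGES = [-1.0, 0.0, 0.5, 1.0]   # angular bins (direction cosine)
E_EDGES = [1e-5, 1.0, 1e3, 2e7]    # energy bins (eV)

def find_bin(x, edges):
    """Return the bin index containing x, or None if outside the grid."""
    for i in range(len(edges) - 1):
        if edges[i] <= x < edges[i + 1]:
            return i
    return None

def map_surface_source(tracks):
    """Accumulate MC surface-crossing weights into (z, mu, E) bins."""
    source = defaultdict(float)
    for z, mu, energy, weight in tracks:
        key = (find_bin(z, Z_EDGES), find_bin(mu, MU_EDGES),
               find_bin(energy, E_EDGES))
        if None not in key:        # drop particles outside the grid
            source[key] += weight
    return dict(source)

# Two illustrative crossing records: (z, mu, energy, statistical weight).
tracks = [(10.0, 0.8, 2.0e6, 1.0), (60.0, 0.2, 5.0e2, 0.5)]
print(map_surface_source(tracks))
```

The resulting binned weights would then be renormalized onto the discrete-ordinates quadrature and group structure before the deep-penetration calculation.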

  11. AIRDOS-II computer code for estimating radiation dose to man from airborne radionuclides in areas surrounding nuclear facilities

    International Nuclear Information System (INIS)

    Moore, R.E.

    1977-04-01

    The AIRDOS-II computer code estimates individual and population doses resulting from the simultaneous atmospheric release of as many as 36 radionuclides from a nuclear facility. This report describes the meteorological and environmental models used in the code, their computer implementation, and the applicability of the code to assessments of radiological impact. Atmospheric dispersion and surface deposition of released radionuclides are estimated as a function of direction and distance from a nuclear power plant or fuel-cycle facility, and doses to man through inhalation, air immersion, exposure to contaminated ground, food ingestion, and water immersion are estimated in the surrounding area. Annual doses are estimated for total body, GI tract, bone, thyroid, lungs, muscle, kidneys, liver, spleen, testes, and ovaries. Either the annual population doses (man-rems/year) or the highest annual individual doses in the assessment area (rems/year), whichever are applicable, are summarized in output tables in several ways: by nuclide, mode of exposure, and organ. The location of the highest individual doses for each reference organ estimated for the area is specified in the output data.
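The pathway-summation structure described above can be illustrated with a minimal sketch. The dose factors and air concentrations below are placeholders, not AIRDOS-II data, and the real code layers full dispersion and deposition models underneath; only the bookkeeping by nuclide and exposure mode is shown.

```python
# Hypothetical (nuclide, pathway) -> dose factor table,
# in mrem/yr per unit air concentration. Values are placeholders.
DOSE_FACTORS = {
    ("I-131", "inhalation"): 3.0e-2,
    ("I-131", "ingestion"): 1.5e-1,
    ("Cs-137", "inhalation"): 1.0e-2,
}

def annual_dose(concentrations):
    """Sum pathway contributions at one location, keyed two ways."""
    by_nuclide, by_pathway = {}, {}
    total = 0.0
    for (nuc, path), factor in DOSE_FACTORS.items():
        chi = concentrations.get(nuc, 0.0)  # ground-level air concentration
        d = chi * factor
        by_nuclide[nuc] = by_nuclide.get(nuc, 0.0) + d
        by_pathway[path] = by_pathway.get(path, 0.0) + d
        total += d
    return total, by_nuclide, by_pathway

total, by_nuc, by_path = annual_dose({"I-131": 2.0, "Cs-137": 4.0})
print(round(total, 3), by_nuc, by_path)
```

The same accumulation pattern, repeated over direction/distance sectors and reference organs, yields the summary tables the abstract describes.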

  12. Burnup calculations for KIPT accelerator driven subcritical facility using Monte Carlo computer codes-MCB and MCNPX

    International Nuclear Information System (INIS)

    Gohar, Y.; Zhong, Z.; Talamo, A.

    2009-01-01

    Argonne National Laboratory (ANL) of USA and Kharkov Institute of Physics and Technology (KIPT) of Ukraine have been collaborating on the conceptual design development of an electron accelerator driven subcritical (ADS) facility, using the KIPT electron accelerator. The neutron source of the subcritical assembly is generated from the interaction of a 100 kW electron beam with a natural uranium target. The electron beam has a uniform spatial distribution and electron energy in the range of 100 to 200 MeV. The main functions of the subcritical assembly are the production of medical isotopes and the support of the Ukraine nuclear power industry. Neutron physics experiments and material structure analyses are planned using this facility. With the 100 kW electron beam power, the total thermal power of the facility is ∼375 kW including the fission power of ∼260 kW. The burnup of the fissile materials and the buildup of fission products continuously reduce the reactivity during the operation, which reduces the neutron flux level and consequently the facility performance. To preserve the neutron flux level during the operation, fuel assemblies should be added after long operating periods to compensate for the lost reactivity. This process requires accurate prediction of the fuel burnup, the decay behavior of the fission products, and the reactivity introduced by adding fresh fuel assemblies. The recent developments of Monte Carlo computer codes, the high speed of computer processors, and parallel computation techniques have made it possible to perform three-dimensional detailed burnup simulations. A full detailed three-dimensional geometrical model is used for the burnup simulations, with continuous energy nuclear data libraries for the transport calculations and 63-multigroup or one-group cross section libraries for the depletion calculations. The Monte Carlo computer codes MCNPX and MCB are utilized for this study. MCNPX transports the electrons and the
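The reactivity loss that motivates the refueling strategy can be illustrated with a toy single-nuclide depletion step. The cross section and flux values below are placeholders, and the actual MCNPX/MCB simulations solve full nuclide chains over a detailed 3-D model; only the exponential burnup of one fissile species under constant flux is sketched.

```python
import math

SIGMA_A = 600e-24   # absorption cross section, cm^2 (placeholder)
PHI = 1e13          # neutron flux, n/cm^2/s (placeholder)

def deplete(n0, days):
    """Exponential burnup of a fissile atom density over one period."""
    t = days * 86400.0
    return n0 * math.exp(-SIGMA_A * PHI * t)

n = 1.0e21          # initial fissile atom density, atoms/cm^3
for step in range(3):   # three 100-day operating periods
    n = deplete(n, 100.0)
    print(f"after period {step + 1}: {n:.3e} atoms/cm^3")
```

The steady decline in fissile inventory (and hence reactivity) across periods is what fresh fuel assemblies would compensate for in the facility described above.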

  13. 78 FR 18353 - Guidance for Industry: Blood Establishment Computer System Validation in the User's Facility...

    Science.gov (United States)

    2013-03-26

    ... SUPPLEMENTARY INFORMATION section for electronic access to the guidance document. Submit electronic comments on... document entitled ``Guidance for Industry: Blood Establishment Computer System Validation in the User's... document to http://www.regulations.gov or written comments to the Division of Dockets Management (see...

  14. Navier-Stokes Simulation of Airconditioning Facility of a Large Modern Computer Room

    Science.gov (United States)

    2005-01-01

    NASA recently assembled one of the world's fastest operational supercomputers to meet the agency's new high performance computing needs. This large-scale system, named Columbia, consists of 20 interconnected SGI Altix 512-processor systems, for a total of 10,240 Intel Itanium-2 processors. High-fidelity CFD simulations were performed for the NASA Advanced Supercomputing (NAS) computer room at Ames Research Center. The purpose of the simulations was to assess the adequacy of the existing air handling and conditioning system and make recommendations for changes in the design of the system if needed. The simulations were performed with NASA's OVERFLOW-2 CFD code, which utilizes overset structured grids. A new set of boundary conditions was developed and added to the flow solver to model the room's air conditioning and the proper cooling of the equipment. Boundary condition parameters for the flow solver are based on cooler CFM (flow rate) ratings and some reasonable assumptions of flow and heat transfer data for the floor and central processing units (CPUs). The geometry modeling from blueprints and grid generation were handled by the NASA Ames software package Chimera Grid Tools (CGT). This geometric model was developed as a CGT-scripted template, which can be easily modified to accommodate any changes in the shape and size of the room, or in the locations and dimensions of the CPU racks, disk racks, coolers, power distribution units, and mass-storage system. The compute nodes are grouped in pairs of racks with an aisle in the middle. High-speed connection cables connect the racks with overhead cable trays. The cool air from the cooling units is pumped into the computer room from a sub-floor through perforated floor tiles. The CPU cooling fans draw cool air from the floor tiles, which run along the outside length of each rack, and eject warm air into the center aisle between the racks. This warm air is eventually drawn into the cooling units located near the walls of the room.
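The kind of cooler-rating boundary-condition data described in the abstract above reduces to a simple unit conversion from a CFM rating to the inflow the solver needs. The air density, cooler rating, and tile area below are assumed values, not figures from the study.

```python
# ft^3/min -> m^3/s: one foot is 0.3048 m, one minute is 60 s.
CFM_TO_M3S = 0.3048 ** 3 / 60.0
RHO_AIR = 1.2   # cool-air density, kg/m^3 (assumed)

def inflow_bc(cfm, tile_area_m2):
    """Return (volume flow m^3/s, mass flow kg/s, face velocity m/s)."""
    q = cfm * CFM_TO_M3S
    return q, RHO_AIR * q, q / tile_area_m2

# A hypothetical 10,000 CFM cooler feeding 4 m^2 of perforated tiles:
q, mdot, v = inflow_bc(10_000.0, 4.0)
print(f"{q:.3f} m^3/s, {mdot:.3f} kg/s, {v:.3f} m/s")
```

The face velocity is what an inflow boundary condition over the tile surface would impose; the mass flow is what the heat balance over the room must conserve.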

  15. Reliability Lessons Learned From GPU Experience With The Titan Supercomputer at Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Gallarno, George [Christian Brothers University; Rogers, James H [ORNL; Maxwell, Don E [ORNL

    2015-01-01

    The high computational capability of graphics processing units (GPUs) is enabling and driving the scientific discovery process at large scale. The world's second fastest supercomputer for open science, Titan, has more than 18,000 GPUs that computational scientists use to perform scientific simulations and data analysis. Understanding of GPU reliability characteristics, however, is still in its nascent stage since GPUs have only recently been deployed at large scale. This paper presents a detailed study of GPU errors and their impact on system operations and applications, describing experiences with the 18,688 GPUs on the Titan supercomputer as well as lessons learned in the process of efficient operation of GPUs at scale. These experiences are helpful to HPC sites which already have large-scale GPU clusters or plan to deploy GPUs in the future.

  16. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping-up and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, that was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion The Tier 0 infrastructure was able to repack and promptly reconstruct heavy-ion collision data. Two copies were made of the data at CERN using a large CASTOR disk pool, and the core physics sample was replicated ...

  17. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

    Introduction Computing continued with a high level of activity over the winter in preparation for conferences and the start of the 2012 run. 2012 brings new challenges with a new energy, more complex events, and the need to make the best use of the available time before the Long Shutdown. We expect to be resource constrained on all tiers of the computing system in 2012 and are working to ensure the high-priority goals of CMS are not impacted. Heavy ions After a successful 2011 heavy-ion run, the programme is moving to analysis. During the run, the CAF resources were well used for prompt analysis. Since then in 2012 on average 200 job slots have been used continuously at Vanderbilt for analysis workflows. Operations Office As of 2012, the Computing Project emphasis has moved from commissioning to operation of the various systems. This is reflected in the new organisation structure where the Facilities and Data Operations tasks have been merged into a common Operations Office, which now covers everything ...

  18. PROFEAT Update: A Protein Features Web Server with Added Facility to Compute Network Descriptors for Studying Omics-Derived Networks.

    Science.gov (United States)

    Zhang, P; Tao, L; Zeng, X; Qin, C; Chen, S Y; Zhu, F; Yang, S Y; Li, Z R; Chen, W P; Chen, Y Z

    2017-02-03

    The studies of biological, disease, and pharmacological networks are facilitated by the systems-level investigations using computational tools. In particular, the network descriptors developed in other disciplines have found increasing applications in the study of the protein, gene regulatory, metabolic, disease, and drug-targeted networks. Facilities are provided by the public web servers for computing network descriptors, but many descriptors are not covered, including those used or useful for biological studies. We upgraded the PROFEAT web server http://bidd2.nus.edu.sg/cgi-bin/profeat2016/main.cgi for computing up to 329 network descriptors and protein-protein interaction descriptors. PROFEAT network descriptors comprehensively describe the topological and connectivity characteristics of unweighted (uniform binding constants and molecular levels), edge-weighted (varying binding constants), node-weighted (varying molecular levels), edge-node-weighted (varying binding constants and molecular levels), and directed (oriented processes) networks. The usefulness of the network descriptors is illustrated by the literature-reported studies of the biological networks derived from the genome, interactome, transcriptome, metabolome, and diseasome profiles. Copyright © 2016 Elsevier Ltd. All rights reserved.
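Two of the simplest descriptors such a server reports for an unweighted interaction network, node degree and the local clustering coefficient, can be computed directly. The tiny edge list below is illustrative, and PROFEAT itself covers hundreds of further descriptors including weighted and directed variants.

```python
# A toy undirected, unweighted interaction network (illustrative).
EDGES = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D")]

def neighbors(edges):
    """Build an adjacency map from an undirected edge list."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    return adj

def clustering(adj, node):
    """Fraction of a node's neighbor pairs that are themselves linked."""
    nbrs = adj[node]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for u in nbrs for v in nbrs if u < v and v in adj[u])
    return 2.0 * links / (k * (k - 1))

adj = neighbors(EDGES)
print({n: len(adj[n]) for n in sorted(adj)})   # node degrees
print(round(clustering(adj, "C"), 3))          # clustering of hub node C
```

Edge- and node-weighted versions of such descriptors, as the abstract notes, replace the binary adjacency test with binding constants or molecular levels.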

  19. Computer control and data acquisition system for the Mirror Fusion Test Facility Ion Cyclotron Resonant Heating System (ICRH)

    International Nuclear Information System (INIS)

    Cheshire, D.L.; Thomas, R.A.

    1985-01-01

    The Lawrence Livermore National Laboratory (LLNL) large Mirror Fusion Test Facility (MFTF-B) will employ an Ion Cyclotron Resonant Heating (ICRH) system for plasma startup. As the MFTF-B Industrial Participant, TRW has responsibility for the ICRH system, including development of the data acquisition and control system. In the final MFTF-B installation, this system will be run from the MFTF-B Supervisory Control and Diagnostic System (SCDS). For subsystem development and checkout at TRW, and for verification and acceptance testing at LLNL, the system will be run from a stand-alone computer system designed to simulate the functions of SCDS. The ''SCDS Simulator'' was developed originally for the MFTF-B ECRH System; descriptions of the hardware and software are updated in this paper. The computer control and data acquisition functions implemented for ICRH are described, including the development status and the test schedule at TRW and at LLNL. The application software is written for the SCDS Simulator, but it is programmed in PASCAL and designed to facilitate conversion for use on the SCDS computers.

  20. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction The Computing Team successfully completed the storage, initial processing, and distribution for analysis of proton-proton data in 2011. There are still a variety of activities ongoing to support winter conference activities and preparations for 2012. Heavy ions The heavy-ion run for 2011 started in early November and has already demonstrated good machine performance and success of some of the more advanced workflows planned for 2011. Data collection will continue until early December. Facilities and Infrastructure Operations Operational and deployment support for WMAgent and WorkQueue+Request Manager components, routinely used in production by Data Operations, are provided. The GlideInWMS and components installation are now deployed at CERN, which is added to the GlideInWMS factory placed in the US. There has been new operational collaboration between the CERN team and the UCSD GlideIn factory operators, covering each others time zones by monitoring/debugging pilot jobs sent from the facto...

  1. System Requirements Analysis for a Computer-based Procedure in a Research Reactor Facility

    Energy Technology Data Exchange (ETDEWEB)

    Park, Jaek Wan; Jang, Gwi Sook; Seo, Sang Moon; Shin, Sung Ki [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-10-15

    This can address many of the routine problems related to human error in the use of conventional, hard-copy operating procedures. An operation support system is also required in a research reactor. A well-made CBP can address the staffing issues of a research reactor and reduce human errors by minimizing the operator's routine tasks. A CBP for a research reactor has not been proposed yet. Moreover, CBPs developed for nuclear power plants provide powerful and varied technical functions to cover complicated plant operation situations, many of which may not be required for a research reactor. Thus, it is not reasonable to apply such a CBP to a research reactor directly, and customizing it is not cost-effective. Therefore, a compact CBP should be developed for a research reactor. This paper introduces the high-level requirements derived from the system requirements analysis activity as the first stage of system implementation. Operation support tools are under consideration for application to research reactors. In particular, as part of a full digitalization of the main control room, the application of a computer-based procedure system has been required as a part of the man-machine interface system because it affects the operations staffing and human errors of a research reactor. To establish computer-based system requirements for a research reactor, this paper addresses international standards and previous practices at nuclear power plants.

  2. The impact of CFD on development test facilities - A National Research Council projection. [computational fluid dynamics

    Science.gov (United States)

    Korkegi, R. H.

    1983-01-01

    The results of a National Research Council study on the effect that advances in computational fluid dynamics (CFD) will have on conventional aeronautical ground testing are reported. Current CFD capabilities include the depiction of linearized inviscid flows and a boundary layer, initial use of Euler codes with supercomputers to automatically generate a grid, research and development on Reynolds-averaged Navier-Stokes (N-S) equations, and preliminary research on solutions to the full N-S equations. Improvements in the range of CFD usage are dependent on the development of more powerful supercomputers, exceeding even the projected abilities of the NASA Numerical Aerodynamic Simulator (1 BFLOP/sec). Full representation of the Reynolds-averaged N-S equations will require over one million grid points, a computing level predicted to be available in 15 yr. Present capabilities allow identification of data anomalies, confirmation of data accuracy, and assessment of the adequacy of model design in wind tunnel trials. Wall effects and the Reynolds number in any flight regime can be accounted for during simulation. CFD can actually be more accurate than instrumented tests, since all points in a flow can be modeled with CFD, while they cannot all be monitored with instrumentation in a wind tunnel.

  3. On dosimetry of radiodiagnosis facilities, mainly focused on computed tomography units

    International Nuclear Information System (INIS)

    Ghitulescu, Zoe

    2008-01-01

    The talk addresses the dosimetry of computed tomography units and is structured in three parts: 1) basics of image acquisition using the computed tomography technique; 2) effective dose calculation for a patient and its assessment using the BERT concept; 3) recommended actions for reaching a good compromise between delivered dose and image quality. The aim of the first part is to acquaint the reader with the CT technique, so that the worked example of an effective dose calculation, and its conversion into time units using the BERT concept, can be followed. The conclusion drawn is that the effective dose, calculated by the medical physicist (using dedicated software for the CT scanner and the exam type) and converted into time units through the BERT concept, can then be communicated by the radiologist together with the diagnostic notes. A minimum of information for patients about the nature and type of the radiation involved is therefore clearly necessary, for instance with the help of leaflets. The third part discusses the factors that lead to good image quality while taking into account the ALARA principle of radiation protection, which states that the dose should be 'as low as reasonably achievable'. (author)
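The BERT (Background Equivalent Radiation Time) conversion mentioned in the abstract reduces to a one-line calculation. The 2.4 mSv/yr background value used below is a commonly quoted world average, not a figure from the talk, and should be replaced by the local value in practice.

```python
BACKGROUND_MSV_PER_YEAR = 2.4   # assumed natural background, mSv/yr

def bert_months(effective_dose_msv):
    """Express an effective dose as months of natural background."""
    return 12.0 * effective_dose_msv / BACKGROUND_MSV_PER_YEAR

# e.g. a CT exam of roughly 2 mSv effective dose (order of magnitude):
print(f"{bert_months(2.0):.1f} months of background")
```

Communicating "about ten months of natural background" rather than "2 mSv" is exactly the patient-friendly framing the BERT concept is meant to provide.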

  4. Scheme for simultaneous generation of three-color ten GW-level X-ray pulses from baseline XFEL undulator and multi-user distribution system for XFEL laboratory

    International Nuclear Information System (INIS)

    Geloni, Gianluca; Kocharyan, Vitali; Saldin, Evgeni

    2010-01-01

    The baseline design of present XFEL projects only considers the production of a single photon beam at fixed wavelength from each baseline undulator. At variance, the scheme described in this paper considers the simultaneous production of high intensity SASE FEL radiation at three different wavelengths. We present a feasibility study of our scheme, and we make exemplifications with parameters of the baseline SASE2 line of the European XFEL operating in simultaneous mode at 0.05 nm, 0.15 nm and 0.4 nm. Our technique for generating the two colors at 0.05 nm and 0.15 nm is based in essence on a ''fresh bunch'' technique. For the generation of radiation at 0.4 nm we propose to use an ''afterburner'' technique. Implementation of these techniques does not perturb the baseline mode of operation of the SASE2 undulator. The present paper also describes an efficient way to obtain a multi-user facility. It is shown that, although the XFEL photon beam from a given undulator is meant for a single user, movable multilayer X-ray mirrors can be used to serve many users simultaneously. The proposed photon beam distribution system would allow the FEL beam to be switched quickly between many experiments in order to make efficient use of the source. Distribution of photons is achieved on the basis of pulse trains, and it is possible to distribute the multicolor photon beam among many independent beam lines, thereby enabling many users to work in parallel with different wavelengths. (orig.)

  5. A computer code to estimate accidental fire and radioactive airborne releases in nuclear fuel cycle facilities: User's manual for FIRIN

    International Nuclear Information System (INIS)

    Chan, M.K.; Ballinger, M.Y.; Owczarski, P.C.

    1989-02-01

    This manual describes the technical bases and use of the computer code FIRIN. This code was developed to estimate the source term release of smoke and radioactive particles from potential fires in nuclear fuel cycle facilities. FIRIN is a product of a broader study, Fuel Cycle Accident Analysis, which Pacific Northwest Laboratory conducted for the US Nuclear Regulatory Commission. The technical bases of FIRIN consist of a nonradioactive fire source term model, compartment effects modeling, and radioactive source term models. These three elements interact with each other in the code affecting the course of the fire. This report also serves as a complete FIRIN user's manual. Included are the FIRIN code description with methods/algorithms of calculation and subroutines, code operating instructions with input requirements, and output descriptions. 40 refs., 5 figs., 31 tabs

  6. GISpark: A Geospatial Distributed Computing Platform for Spatiotemporal Big Data

    Science.gov (United States)

    Wang, S.; Zhong, E.; Wang, E.; Zhong, Y.; Cai, W.; Li, S.; Gao, S.

    2016-12-01

    Geospatial data are growing exponentially because of the proliferation of cost effective and ubiquitous positioning technologies such as global remote-sensing satellites and location-based devices. Analyzing large amounts of geospatial data can provide great value for both industrial and scientific applications. The data- and compute-intensive characteristics inherent in geospatial big data increasingly pose great challenges to data storage, computing, and analysis technologies. Such challenges require a scalable and efficient architecture that can store, query, analyze, and visualize large-scale spatiotemporal data. Therefore, we developed GISpark, a geospatial distributed computing platform for processing large-scale vector, raster and stream data. GISpark is constructed on the latest virtualized computing infrastructures and distributed computing architecture. OpenStack and Docker are used to build the multi-user hosting cloud computing infrastructure for GISpark. Virtual storage systems such as HDFS, Ceph, and MongoDB are combined and adopted for spatiotemporal data storage management. A Spark-based algorithm framework is developed for efficient parallel computing. Within this framework, SuperMap GIScript and various open-source GIS libraries can be integrated into GISpark. GISpark can also be integrated with scientific computing environments (e.g., Anaconda), interactive computing web applications (e.g., Jupyter notebook), and machine learning tools (e.g., TensorFlow/Orange). The associated geospatial facilities of GISpark, in conjunction with the scientific computing environment, exploratory spatial data analysis tools, and temporal data management and analysis systems, make up a powerful geospatial computing tool. GISpark not only provides spatiotemporal big data processing capacity in the geospatial field, but also provides spatiotemporal computational models and advanced geospatial visualization tools that deal with other domains involving spatial properties.

  7. Computational design of high efficiency release targets for use at ISOL facilities

    CERN Document Server

    Liu, Y

    1999-01-01

    This report describes efforts made at the Oak Ridge National Laboratory to design high-efficiency-release targets that simultaneously incorporate the short diffusion lengths, high permeabilities, controllable temperatures, and heat-removal properties required for the generation of useful radioactive ion beam (RIB) intensities for nuclear physics and astrophysics research using the isotope separation on-line (ISOL) technique. Short diffusion lengths are achieved either by using thin fibrous target materials or by coating thin layers of selected target material onto low-density carbon fibers such as reticulated-vitreous-carbon fiber (RVCF) or carbon-bonded-carbon fiber (CBCF) to form highly permeable composite target matrices. Computational studies that simulate the generation and removal of primary beam deposited heat from target materials have been conducted to optimize the design of target/heat-sink systems for generating RIBs. The results derived from diffusion release-rate simulation studies for selected t...
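The role of the short diffusion length can be illustrated with the classical Booth-type series for fractional diffusion release from a sphere. The diffusion coefficient and radii below are illustrative, not values from the report; the point is only that release scales with D·t/r², so thin fibers or coatings release far faster than bulk grains.

```python
import math

def fractional_release(diff_coeff, radius, t, terms=200):
    """Booth series: f = 1 - (6/pi^2) * sum_n exp(-n^2 pi^2 D t / r^2) / n^2."""
    tau = diff_coeff * t / radius ** 2
    s = sum(math.exp(-n * n * math.pi ** 2 * tau) / (n * n)
            for n in range(1, terms + 1))
    return 1.0 - 6.0 / math.pi ** 2 * s

D = 1.0e-9                  # cm^2/s, illustrative diffusion coefficient
for r_um in (100.0, 10.0):  # bulk-grain radius vs. thin-fiber radius
    r = r_um * 1e-4         # micrometres -> cm
    print(f"r = {r_um:>5} um: f(10 s) = {fractional_release(D, r, 10.0):.3f}")
```

A tenfold reduction in radius raises the short-time release fraction roughly tenfold, which is the motivation for the fibrous and thin-coated composite targets described above.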

  8. COMPUTING

    CERN Multimedia

    2010-01-01

    Introduction Just two months after the “LHC First Physics” event of 30th March, the analysis of the O(200) million 7 TeV collision events in CMS accumulated during the first 60 days is well under way. The consistency of the CMS computing model has been confirmed during these first weeks of data taking. This model is based on a hierarchy of use-cases deployed between the different tiers and, in particular, the distribution of RECO data to T1s, who then serve data on request to T2s, along a topology known as “fat tree”. Indeed, during this period this model was further extended by almost full “mesh” commissioning, meaning that RECO data were shipped to T2s whenever possible, enabling additional physics analyses compared with the “fat tree” model. Computing activities at the CMS Analysis Facility (CAF) have been marked by a good time response for a load almost evenly shared between ALCA (Alignment and Calibration tasks - highest p...

  9. Computational Analysis Supporting the Design of a New Beamline for the Mines Neutron Radiography Facility

    Science.gov (United States)

    Wilson, C.; King, J.

    The Colorado School of Mines installed a neutron radiography system at the United States Geological Survey TRIGA reactor in 2012. An upgraded beamline could dramatically improve the imaging capabilities of this system. This project performed computational analyses to support the design of a new beamline, with the major goals of minimizing beam divergence and maximizing beam intensity. The new beamline will consist of a square aluminum tube with an 11.43 cm (4.5 in) inner side length and 0.635 cm (0.25 in) thick walls. It is the same length as the original beam tube (8.53 m) and is composed of 1.22 m (4 ft) and 1.52 m (5 ft) flanged sections which bolt together. The bottom 1.22 m of the beamline is a cylindrical aluminum pre-collimator which is 0.635 cm (0.25 in) thick, with an inner diameter of 5.08 cm (2 in). Based on Monte Carlo model results, when a pre-collimator is present, the use of a neutron absorbing liner on the inside surface of the beam tube has almost no effect on the angular distribution of the neutron current at the collimator exit. The use of a pre-collimator may result in a non-uniform flux profile at the image plane; however, as long as the collimator is at least three times longer than the pre-collimator, the flux distortion is acceptably low.
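The collimation geometry described above can be checked with a short calculation of the L/D ratio, beam divergence, and geometric unsharpness. The beamline length and aperture follow the text; the object-to-detector gap is an assumed value introduced only for illustration.

```python
import math

L = 8.53      # beamline length, m (from the text)
D = 0.0508    # pre-collimator aperture diameter (2 in), m (from the text)

l_over_d = L / D
divergence = math.degrees(math.atan(D / L))   # half-angle, degrees
gap = 0.005                                   # object-to-detector gap, m (assumed)
unsharpness_mm = 1000.0 * gap * D / L         # geometric blur at the image plane

print(f"L/D = {l_over_d:.0f}, divergence ~ {divergence:.3f} deg, "
      f"Ug ~ {unsharpness_mm:.4f} mm")
```

An L/D of roughly 170 is in the range typical of research-reactor radiography beamlines, consistent with the design goals of minimizing divergence while preserving intensity.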

  10. Animal facilities

    International Nuclear Information System (INIS)

    Fritz, T.E.; Angerman, J.M.; Keenan, W.G.; Linsley, J.G.; Poole, C.M.; Sallese, A.; Simkins, R.C.; Tolle, D.

    1981-01-01

    The animal facilities in the Division are described. They consist of kennels, animal rooms, service areas, and technical areas (examining rooms, operating rooms, pathology labs, x-ray rooms, and 60Co exposure facilities). The computer support facility is also described. The advent of the Conversational Monitor System at Argonne has launched a new effort to set up conversational computing and graphics software for users. The existing LS-11 data acquisition systems have been further enhanced and expanded. The divisional radiation facilities include a number of gamma, neutron, and x-ray radiation sources with accompanying areas for related equipment. There are five 60Co irradiation facilities; a research reactor, Janus, is a source for fission-spectrum neutrons; two other neutron sources in the Chicago area are also available to the staff for cell biology studies. The electron microscope facilities are also described

  11. Recommended practice for the design of a computer driven Alarm Display Facility for central control rooms of nuclear power generating stations

    International Nuclear Information System (INIS)

    Ben-Yaacov, G.

    1984-01-01

    This paper's objective is to explain the process by which design can prevent human errors in nuclear plant operation. Human factors engineering principles, data, and methods used in the design of computer driven alarm display facilities are discussed. A ''generic'', advanced Alarm Display Facility is described. It considers operator capabilities and limitations in decision-making processes, response dynamics, and human memory limitations. Considerations of human factors criteria in the design and layout of alarm displays are highlighted. Alarm data sources are described, and their use within the Alarm Display Facility is illustrated.

  12. The AstroVR Collaboratory, an On-line Multi-User Environment for Research in Astrophysics

    Science.gov (United States)

    van Buren, D.; Curtis, P.; Nichols, D. A.; Brundage, M.

    We describe our experiment with an on-line collaborative environment where users share the execution of programs and communicate via audio, video, and typed text. Collaborative environments represent the next step in computer-mediated conferencing, combining powerful compute engines, data persistence, shared applications, and teleconferencing tools. As proof of concept, we have implemented a shared image analysis tool, allowing geographically distant users to analyze FITS images together. We anticipate that AstroVR (http://astrovr.ipac.caltech.edu:8888) and similar systems will become an important part of collaborative work in the next decade, with applications in remote observing, spacecraft operations, on-line meetings, and day-to-day research activities. The technology is generic and promises to find uses in business, medicine, government, and education.

  13. Earth Systems Questions in Experimental Climate Change Science: Pressing Questions and Necessary Facilities

    Energy Technology Data Exchange (ETDEWEB)

    Osmond, B.

    2002-05-20

    Sixty-four scientists from universities, national laboratories, and other research institutions worldwide met to evaluate the feasibility and potential of the Biosphere2 Laboratory (B2L) as an inclusive multi-user scientific facility (i.e., a facility open to researchers from all institutions, according to agreed principles of access) for earth system studies and engineering research, education, and training relevant to the mission of the United States Department of Energy (DOE).

  14. Assessment of the integrity of structural shielding of four computed tomography facilities in the greater Accra region of Ghana

    International Nuclear Information System (INIS)

    Nkansah, A.; Schandorf, C.; Boadu, M.; Fletcher, J. J.

    2013-01-01

    The structural shielding thicknesses of the walls of four computed tomography (CT) facilities in Ghana were re-evaluated to verify the shielding integrity using the new shielding design methods recommended by the National Council on Radiation Protection and Measurements (NCRP). The shielding thicknesses obtained ranged from 120 to 155 mm using the default DLP values proposed by the European Commission, and from 110 to 168 mm using DLP values derived from the four CT manufacturers. These values are within the accepted standard concrete wall thickness of 102 to 152 mm prescribed by the NCRP. Ultrasonic pulse testing of all walls indicated that they are of good quality and free of voids, since the estimated pulse velocities were within 3.496±0.005 km s⁻¹. The average dose equivalent rate estimated for supervised areas is 3.4±0.27 μSv week⁻¹ and that for the controlled area is 18.0±0.15 μSv week⁻¹; both are within acceptable values. (authors)
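The NCRP-style check described above boils down to a transmission-factor calculation: find the barrier transmission B that brings the weekly kerma at the occupied point down to the design goal, then convert B to a thickness using a tenth-value layer (TVL). The sketch below uses this standard relation with purely illustrative inputs; the design goal, distance, unshielded kerma, and TVL value are all assumptions, not numbers from the study.

```python
import math

def required_barrier_thickness(P, d, K1, tvl):
    """Concrete thickness (mm) needed so the weekly kerma behind the
    barrier stays below the design goal.

    P   : design goal behind the barrier (mGy/week) -- illustrative
    d   : distance from scatter source to occupied area (m)
    K1  : unshielded weekly air kerma at 1 m (mGy/week)
    tvl : assumed tenth-value layer of concrete at CT energies (mm)
    """
    B = P * d ** 2 / K1          # required transmission (inverse square in d)
    if B >= 1.0:                  # already below the goal: no barrier needed
        return 0.0
    return tvl * math.log10(1.0 / B)

# Illustrative numbers only (not taken from the record above):
thickness = required_barrier_thickness(P=0.02, d=3.0, K1=10.0, tvl=70.0)
print(f"required concrete: {thickness:.0f} mm")
```

With these invented inputs the result lands near 122 mm, the same order as the wall thicknesses reported above; a real design takes P, K1, and the TVL from NCRP tables for the actual workload and beam quality.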

  15. Assessment of the structural shielding integrity of some selected computed tomography facilities in the Greater Accra Region of Ghana

    International Nuclear Information System (INIS)

    Nkansah, A.

    2010-01-01

    The structural shielding integrity was assessed for four CT facilities: the Trust Hospital, Korle-Bu Teaching Hospital, the 37 Military Hospital, and Medical Imaging Ghana Ltd. in the Greater Accra Region of Ghana. From the shielding calculations, the computed concrete wall thicknesses are 120, 145, 140, and 155 mm for Medical Imaging Ghana Ltd., the 37 Military Hospital, the Trust Hospital, and Korle-Bu Teaching Hospital, respectively, using default DLP values. Using derived DLP values, the wall thicknesses are 110, 110, 120, and 168 mm, respectively. These values are within the accepted standard concrete thickness of 102-152 mm prescribed by the National Council on Radiation Protection and Measurements. Ultrasonic pulse testing indicated that all the sandcrete walls are of good quality and free of voids, since the estimated pulse velocities were approximately 3.45 km/s. The average dose rate measured for supervised areas is 3.4 μSv/wk and for controlled areas 18.0 μSv/wk. These dose rates are below the acceptable levels of 100 μSv per week for occupationally exposed workers and 20 μSv per week for members of the public provided by the ICRU. The results indicate that the structural shielding thicknesses are adequate to protect members of the public and occupationally exposed workers (au).

  16. Research Facilities | Wind | NREL

    Science.gov (United States)

    NREL's state-of-the-art wind research facilities include structural research facilities for testing turbine blades and running computer simulations of turbine behavior.

  17. Scientific workflow and support for high resolution global climate modeling at the Oak Ridge Leadership Computing Facility

    Science.gov (United States)

    Anantharaj, V.; Mayer, B.; Wang, F.; Hack, J.; McKenna, D.; Hartman-Baker, R.

    2012-04-01

    The Oak Ridge Leadership Computing Facility (OLCF) facilitates the execution of computational experiments that require tens of millions of CPU hours (typically using thousands of processors simultaneously) while generating hundreds of terabytes of data. A set of ultra-high-resolution climate experiments in progress, using the Community Earth System Model (CESM), will produce over 35,000 files, ranging in size from 21 MB to 110 GB each. The execution of the experiments will require nearly 70 million CPU hours on the Jaguar and Titan supercomputers at OLCF. The total volume of the output from these climate modeling experiments will be in excess of 300 TB. This model output must then be archived, analyzed, distributed to the project partners in a timely manner, and also made available more broadly. Meeting this challenge requires efficient movement of the data, staging the simulation output to a large and fast file system that provides high-volume access to other computational systems used to analyze the data and synthesize results. This file system also needs to be accessible via high-speed networks to an archival system that can provide long-term reliable storage. Ideally this archival system is itself directly available to other systems that can be used to host services making the data and analysis available to the participants in the distributed research project and to the broader climate community. The various resources available at the OLCF now support this workflow. The available systems include the new Jaguar Cray XK6 2.63 petaflops (estimated) supercomputer, the 10 PB Spider center-wide parallel file system, the Lens/EVEREST analysis and visualization system, the HPSS archival storage system, the Earth System Grid (ESG), and the ORNL Climate Data Server (CDS). The ESG features federated services, search & discovery, extensive data handling capabilities, deep storage access, and Live Access Server (LAS) integration. The scientific workflow enabled on

  18. Computed Tomography Scanning Facility

    Data.gov (United States)

    Federal Laboratory Consortium — FUNCTION: Advances research in the areas of marine geosciences, geotechnical, civil, and chemical engineering, physics, and ocean acoustics by using high-resolution,...

  19. The Impact of Student Self-efficacy on Scientific Inquiry Skills: An Exploratory Investigation in River City, a Multi-user Virtual Environment

    Science.gov (United States)

    Ketelhut, Diane Jass

    2007-02-01

    This exploratory study investigated data-gathering behaviors exhibited by 100 seventh-grade students as they participated in a scientific inquiry-based curriculum project delivered by a multi-user virtual environment (MUVE). This research examined the relationship between students' self-efficacy on entry into the authentic scientific activity and the longitudinal data-gathering behaviors they employed while engaged in that process. Three waves of student behavior data were gathered from a server-side database that recorded all student activity in the MUVE; these data were analyzed using individual growth modeling. The study found that self-efficacy correlated with the number of data-gathering behaviors in which students initially engaged, with high self-efficacy students engaging in more data gathering than students with low self-efficacy. Also, the impact of student self-efficacy on rate of change in data gathering behavior differed by gender. However, by the end of their time in the MUVE, initial student self-efficacy no longer correlated with data gathering behaviors. In addition, students' level of self-efficacy did not affect how many different sources from which they chose to gather data. These results suggest that embedding science inquiry curricula in novel platforms like a MUVE might act as a catalyst for change in students' self-efficacy and learning processes.

  20. Interference Alignment-based Precoding and User Selection with Limited Feedback in Two-cell Downlink Multi-user MIMO Systems

    Directory of Open Access Journals (Sweden)

    Yin Zhu

    2016-05-01

    Interference alignment (IA) is a new approach to addressing interference in modern multiple-input multiple-output (MIMO) cellular networks, in which interference is an important factor limiting system throughput. In most IA implementation schemes, system throughput is significantly improved only with perfect channel state information and in a high signal-to-noise ratio (SNR) region. Designing a simple IA scheme for systems with limited feedback, and investigating system performance in the low-to-medium SNR region, is therefore important and practical. This paper proposes a precoding and user selection scheme based on partial interference alignment in two-cell downlink multi-user MIMO systems under limited feedback. The scheme aligns inter-cell interference to a predefined direction by designing the users' receive antenna combining vectors. A modified singular value decomposition (SVD)-based beamforming method and a corresponding user-selection algorithm are proposed for systems with low-rate limited feedback to improve sum-rate performance. Simulation results show that the proposed scheme achieves a higher sum rate than traditional schemes without IA. The modified SVD-based beamforming scheme is also superior to the traditional zero-forcing beamforming scheme in low-rate limited-feedback systems. The proposed partial IA scheme needs neither collaboration between transmitters nor joint design between the transmitter and the users, and can be implemented with low feedback overhead in current MIMO cellular networks.
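The baseline that the modified scheme builds on is ordinary SVD beamforming: transmit along the channel's dominant right singular vector, which turns the MIMO channel into a scalar link whose gain is the largest singular value. A dependency-free single-user sketch of that baseline (not the paper's partial-IA or limited-feedback machinery), using power iteration on H^T H:

```python
import math

def matmul(A, B):
    # Plain nested-list matrix product.
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def svd_beamformer(H, iters=200):
    """Dominant right singular vector of H via power iteration on G = H^T H.
    Returns (sigma1, v): largest singular value and the unit-norm transmit
    beamforming vector."""
    Ht = [list(col) for col in zip(*H)]
    G = matmul(Ht, H)
    n = len(G)
    v = [1.0 / math.sqrt(n)] * n
    for _ in range(iters):
        w = matvec(G, v)
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    Hv = matvec(H, v)
    sigma1 = math.sqrt(sum(x * x for x in Hv))
    return sigma1, v

def beamforming_rate(H, snr):
    """Achievable rate (bits/s/Hz) of rank-1 SVD beamforming at a given SNR."""
    sigma1, _ = svd_beamformer(H)
    return math.log2(1.0 + snr * sigma1 ** 2)

H = [[2.0, 0.0], [0.0, 1.0]]          # toy 2x2 real channel
print(beamforming_rate(H, snr=1.0))   # effective gain 4 -> log2(5)
```

The paper's contribution sits on top of this: the receive combining vectors are chosen to align the inter-cell interference, and the precoder is quantized for the limited-feedback link.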

  1. Facile formation of dendrimer-stabilized gold nanoparticles modified with diatrizoic acid for enhanced computed tomography imaging applications.

    Science.gov (United States)

    Peng, Chen; Li, Kangan; Cao, Xueyan; Xiao, Tingting; Hou, Wenxiu; Zheng, Linfeng; Guo, Rui; Shen, Mingwu; Zhang, Guixiang; Shi, Xiangyang

    2012-11-07

    We report a facile approach to forming dendrimer-stabilized gold nanoparticles (Au DSNPs) through the use of amine-terminated fifth-generation poly(amidoamine) (PAMAM) dendrimers modified by diatrizoic acid (G5.NH(2)-DTA) as stabilizers for enhanced computed tomography (CT) imaging applications. In this study, by simply mixing G5.NH(2)-DTA dendrimers with gold salt in aqueous solution at room temperature, dendrimer-entrapped gold nanoparticles (Au DENPs) with a mean core size of 2.5 nm formed spontaneously. After an acetylation reaction to neutralize the dendrimers' remaining terminal amines, Au DSNPs with a mean size of 6 nm were formed. The formed DTA-containing [(Au(0))(50)-G5.NHAc-DTA] DSNPs were characterized via different techniques. We show that the Au DSNPs are colloidally stable in aqueous solution under different pH and temperature conditions. In vitro hemolytic assay, cytotoxicity assay, flow cytometry analysis, and cell morphology observation reveal that the formed Au DSNPs have good hemocompatibility and are non-cytotoxic at concentrations up to 3.0 μM. X-ray absorption coefficient measurements show that the DTA-containing Au DSNPs have enhanced attenuation intensity, much higher than that of [(Au(0))(50)-G5.NHAc] DENPs without DTA or Omnipaque at the same molar concentration of the active element (Au or iodine). The formed DTA-containing Au DSNPs can be used for CT imaging of cancer cells in vitro as well as for blood pool CT imaging of mice in vivo with significantly improved signal enhancement. With the two radiodense elements of Au and iodine incorporated within one particle, the formed DTA-containing Au DSNPs may be applicable for CT imaging of various biological systems with enhanced X-ray attenuation and detection sensitivity.

  2. Dynamic Thermal Loads and Cooling Requirements Calculations for VACs System in Nuclear Fuel Processing Facilities Using Computer Aided Energy Conservation Models

    International Nuclear Information System (INIS)

    EL Fawal, M.M.; Gadalla, A.A.; Taher, B.M.

    2010-01-01

    In terms of nuclear safety, the most important function of ventilation and air conditioning (VAC) systems is to maintain safe ambient conditions for components and structures important to safety inside the nuclear facility and to maintain appropriate working conditions for the plant's operating and maintenance staff. As part of a study aimed at evaluating the performance of the VAC system of a nuclear fuel cycle facility (NFCF), a computer model was developed and verified to evaluate the thermal loads and cooling requirements for different zones of a fuel processing facility. The program is based on the transfer function method (TFM) and calculates the dynamic heat gain through various multilayer wall constructions and windows, hour by hour, at any orientation of the building. The developed model was verified by comparing its calculated solar heat gain for a given building with the corresponding values calculated using the finite difference method (FDM) and the total equivalent temperature difference (TETD) method. As an example, the developed program was used to calculate the cooling loads of the different zones of a typical nuclear fuel facility; the results showed that the cooling capacities of the different cooling units in each zone meet the design requirements according to safety regulations for nuclear facilities.
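The transfer function method expresses the current conductive heat gain through a wall as a weighted history of past sol-air temperatures and past fluxes. The coefficients below are invented for illustration (chosen only so the steady state is self-consistent); a real calculation takes the b, c, d coefficients from published tables for the specific multilayer wall construction.

```python
# Conduction transfer function (CTF) wall-gain sketch.
# Illustrative coefficients only, chosen so that sum(B) == C_SUM:
B = [0.005, 0.020, 0.010]   # sol-air (outside) temperature coefficients
C_SUM = 0.035               # sum of room-temperature coefficients
D = [-0.90]                 # flux-history coefficients

def wall_heat_flux(t_solair, t_room, n_hours):
    """Hourly conductive heat gain per unit wall area (W/m^2):
    q[t] = sum_j B[j]*Te[t-j] - t_room*C_SUM - sum_j D[j]*q[t-1-j]
    where Te is the hourly sol-air temperature series."""
    q = [0.0]
    for t in range(1, n_hours):
        gain = sum(B[j] * t_solair[max(t - j, 0)] for j in range(len(B)))
        hist = sum(D[j] * q[max(t - 1 - j, 0)] for j in range(len(D)))
        q.append(gain - t_room * C_SUM - hist)
    return q
```

With a constant 30 °C sol-air temperature and a 24 °C room, the flux settles at (30 − 24) × 0.035 / (1 + sum(D)) = 2.1 W/m², i.e. the effective U-value times the temperature difference, which is the steady-state consistency check.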

  3. Cathare2 V1.3E post-test computations of SPE-1 and SPE-2 experiments at PMK-NVH facility

    International Nuclear Information System (INIS)

    Belliard, M.; Laugier, E.

    1994-01-01

    This paper presents the first CATHARE2 V1.3E simulations of the SPE-2 transients in the PMK-NVH loop. For the SPE-1 and SPE-2 experiments at PMK-NVH, it contains a description of the facility and the transients, as well as the different conditions of use. The paper also presents the CATHARE2 model and the different types of computations, such as the steady-state computation and the SPE-1 and SPE-2 transients (TEC). 4 refs., 12 figs., 4 tabs

  4. Materials and Life Science Experimental Facility at the Japan Proton Accelerator Research Complex III: Neutron Devices and Computational and Sample Environments

    Directory of Open Access Journals (Sweden)

    Kaoru Sakasai

    2017-08-01

    Neutron devices such as neutron detectors, optical devices including supermirror devices and 3He neutron spin filters, and choppers have been successfully developed and installed at the Materials and Life Science Experimental Facility (MLF) of the Japan Proton Accelerator Research Complex (J-PARC), Tokai, Japan. Four software components of the MLF computational environment (instrument control, data acquisition, data analysis, and a database) have been developed and deployed at MLF. MLF also provides a wide variety of sample environment options, including high and low temperatures, high magnetic fields, and high pressures. This paper describes the current status of neutron devices and the computational and sample environments at MLF.

  5. Computer automation of a health physics program record

    International Nuclear Information System (INIS)

    Bird, E.M.; Flook, B.A.; Jarrett, R.D.

    1984-01-01

    A multi-user computer data base management system (DBMS) has been developed to automate USDA's national radiological safety program. It maintains information on approved users of radioactive material and radiation-emanating equipment as a central file, which is accessed whenever information on a user is required. Files of inventory, personnel dosimetry records, laboratory and equipment surveys, leak tests, bioassay reports, and all other information are linked to each approved user by an assigned code that identifies the user by state, agency, and facility. The DBMS is menu-driven, with provisions for addition, modification, and report generation of information maintained in the system. It was designed as a single-entry system to reduce the redundancy of data entry. Prompts guide the user at decision points, and data validation routines check for proper data entry. The DBMS generates lists of current inventories, leak test forms, and inspection reports, scans for overdue reports from users, and generates follow-up letters. The system runs on a Wang OIS computer and utilizes its compiled BASIC, List Processing, Word Processing, and indexed (ISAM) file features. It is a very fast relational database supporting many users simultaneously while providing several methods of data protection. All data files are compatible with List Processing, and information in these files can be examined, sorted, modified, or output to word processing documents using software supplied by Wang. This has reduced the need for special one-time programs and provides alternative access to the data.
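The single-entry idea, in which every record links back to the approved user through one assigned code, maps directly onto a relational schema. A minimal sketch in Python's sqlite3 (table, column, and code names are invented for illustration; the original system ran on a Wang OIS with ISAM files, not SQLite):

```python
import sqlite3

# Single-entry design: user data is entered once, and every other record
# references it through the assigned state-agency-facility code.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE approved_user (
    user_code TEXT PRIMARY KEY,   -- assigned state-agency-facility code
    name      TEXT NOT NULL
);
CREATE TABLE inventory (
    item_id      INTEGER PRIMARY KEY,
    user_code    TEXT NOT NULL REFERENCES approved_user(user_code),
    nuclide      TEXT NOT NULL,
    activity_mbq REAL NOT NULL
);
""")
db.execute("INSERT INTO approved_user VALUES ('MD-ARS-01', 'J. Smith')")
db.execute("INSERT INTO inventory VALUES (1, 'MD-ARS-01', 'Co-60', 37.0)")

def current_inventory(user_code):
    """List (nuclide, activity) pairs held by one approved user."""
    return db.execute(
        "SELECT nuclide, activity_mbq FROM inventory WHERE user_code = ?",
        (user_code,)).fetchall()
```

Because the user's identity lives in one row, correcting it there corrects every linked inventory, survey, or dosimetry record at once.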

  6. Multi-user investigation organizer

    Science.gov (United States)

    Keller, Richard M. (Inventor); Panontin, Tina L. (Inventor); Carvalho, Robert E. (Inventor); Sturken, Ian (Inventor); Williams, James F. (Inventor); Wolfe, Shawn R. (Inventor); Gawdiak, Yuri O. (Inventor)

    2009-01-01

    A system that allows a team of geographically dispersed users to collaboratively analyze a mishap event. The system includes a reconfigurable ontology, including instances that are related to and characterize the mishap, a semantic network that receives, indexes and stores, for retrieval, viewing and editing, the instances and links between the instances, a network browser interface for retrieving and viewing screens that present the instances and links to other instances and that allow editing thereof, and a rule-based inference engine, including a collection of rules associated with establishment of links between the instances. A possible conclusion arising from analysis of the mishap event may be characterized as one or more of: not a credible conclusion; an unlikely conclusion; a credible conclusion; conclusion needs analysis; conclusion needs supporting data; conclusion proposed to be closed; and an un-reviewed conclusion.
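The patent's combination of a semantic network (instances plus labeled links) and a rule-based inference engine can be sketched in a few lines. The single rule and its thresholds below are invented for illustration; the actual system carries a collection of rules for establishing links between instances:

```python
# Toy semantic network: instances are nodes, links are labeled edges.
links = []   # (source, label, target) triples

def add_link(src, label, dst):
    links.append((src, label, dst))

def supports(conclusion):
    """Instances linked to a conclusion by a 'supports' edge."""
    return [s for s, label, d in links if label == "supports" and d == conclusion]

def classify(conclusion):
    """One illustrative inference rule: the conclusion's status follows
    from how much evidence links to it (thresholds made up here)."""
    n = len(supports(conclusion))
    if n == 0:
        return "conclusion needs supporting data"
    if n == 1:
        return "conclusion needs analysis"
    return "credible conclusion"

add_link("fatigue-crack", "supports", "structural-failure")
add_link("overload-event", "supports", "structural-failure")
print(classify("structural-failure"))   # credible conclusion
print(classify("pilot-error"))          # conclusion needs supporting data
```

The status labels reuse the characterizations listed in the record; geographically dispersed users would edit the same shared network through the browser interface.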

  7. Functional requirements document for the Earth Observing System Data and Information System (EOSDIS) Scientific Computing Facilities (SCF) of the NASA/MSFC Earth Science and Applications Division, 1992

    Science.gov (United States)

    Botts, Michael E.; Phillips, Ron J.; Parker, John V.; Wright, Patrick D.

    1992-01-01

    Five scientists at MSFC/ESAD have EOS SCF investigator status. Each SCF has unique tasks which require the establishment of a computing facility dedicated to accomplishing those tasks. A SCF Working Group was established at ESAD with the charter of defining the computing requirements of the individual SCFs and recommending options for meeting these requirements. The primary goal of the working group was to determine which computing needs can be satisfied using either shared resources or separate but compatible resources, and which needs require unique individual resources. The requirements investigated included CPU-intensive vector and scalar processing, visualization, data storage, connectivity, and I/O peripherals. A review of computer industry directions and a market survey of computing hardware provided information regarding important industry standards and candidate computing platforms. It was determined that the total SCF computing requirements might be most effectively met using a hierarchy consisting of shared and individual resources. This hierarchy is composed of five major system types: (1) a supercomputer class vector processor; (2) a high-end scalar multiprocessor workstation; (3) a file server; (4) a few medium- to high-end visualization workstations; and (5) several low- to medium-range personal graphics workstations. Specific recommendations for meeting the needs of each of these types are presented.

  8. International standard problem (ISP) No. 41. Containment iodine computer code exercise based on a radioiodine test facility (RTF) experiment

    International Nuclear Information System (INIS)

    2000-04-01

    International Standard Problem (ISP) exercises are comparative exercises in which predictions of different computer codes for a given physical problem are compared with each other or with the results of a carefully controlled experimental study. The main goal of ISP exercises is to increase confidence in the validity and accuracy of the tools used in assessing the safety of nuclear installations. Moreover, they enable code users to gain experience and demonstrate their competence. The ISP No. 41 exercise, a computer code exercise based on a Radioiodine Test Facility (RTF) experiment on iodine behaviour in containment under severe accident conditions, is one such exercise. The ISP No. 41 exercise arose from a recommendation at the Fourth Iodine Chemistry Workshop held at PSI, Switzerland, in June 1996: 'the performance of an International Standard Problem as the basis of an in-depth comparison of the models as well as contributing to the database for validation of iodine codes' [Proceedings NEA/CSNI/R(96)6, Summary and Conclusions NEA/CSNI/R(96)7]. COG (CANDU Owners Group), comprising AECL and the Canadian nuclear utilities, offered to make the results of a Radioiodine Test Facility (RTF) test available for such an exercise. The ISP No. 41 exercise was endorsed in turn by the FPC (PWG4's Task Group on Fission Product Phenomena in the Primary Circuit and the Containment), PWG4 (CSNI Principal Working Group on the Confinement of Accidental Radioactive Releases), and the CSNI. The OECD/NEA Committee on the Safety of Nuclear Installations (CSNI) has sponsored forty-five ISP exercises over the last twenty-four years, thirteen of them in the area of severe accidents. The criteria for the selection of the RTF test as the basis for the ISP-41 exercise were: (1) complementarity with other RTF tests available through the PHEBUS and ACE programmes, (2) simplicity for ease of modelling, and (3) good quality data.
A simple RTF experiment performed under controlled

  9. Collaborative virtual reality environments for computational science and design

    International Nuclear Information System (INIS)

    Papka, M. E.

    1998-01-01

    The authors are developing a networked, multi-user, virtual-reality-based collaborative environment coupled to one or more petaFLOPS computers, enabling the interactive simulation of 10⁹-atom systems. The purpose of this work is to explore the requirements for this coupling. Through the design, development, and testing of such systems, they hope to gain knowledge that allows computational scientists to discover and analyze their results more quickly and in a more intuitive manner

  10. Usage of Thin-Client/Server Architecture in Computer Aided Education

    Science.gov (United States)

    Cimen, Caghan; Kavurucu, Yusuf; Aydin, Halit

    2014-01-01

    With the advances of technology, thin-client/server architecture has become popular in multi-user/single network environments. Thin-client is a user terminal in which the user can login to a domain and run programs by connecting to a remote server. Recent developments in network and hardware technologies (cloud computing, virtualization, etc.)…

  11. MOO: Using a Computer Gaming Environment to Teach about Community Arts

    Science.gov (United States)

    Garber, Elizabeth

    2004-01-01

    In this paper, the author discusses the use of an interactive computer technology, "MOO" (Multi-user domain, Object-Oriented), in her art education classes for preservice teachers. A MOO is a text-based environment wherein interactivity is centered on text exchanges made between users based on problems or other materials created by teachers. The…

  12. Using a multi-user virtual simulation to promote science content: Mastery, scientific reasoning, and academic self-efficacy in fifth grade science

    Science.gov (United States)

    Ronelus, Wednaud J.

    The purpose of this study was to examine the impact of using a role-playing game versus a more traditional text-based instructional method on a cohort of general education fifth grade students' science content mastery, scientific reasoning abilities, and academic self-efficacy. This is an action research study that employs an embedded mixed methods design model, involving both quantitative and qualitative data. The study is guided by the critical design ethnography theoretical lens: an ethnographic process involving participatory design work aimed at transforming a local context while producing an instructional design that can be used in multiple contexts. The impact of an immersive 3D multi-user web-based educational simulation game on a cohort of fifth-grade students was examined at multiple levels of assessment: immediate, close, proximal, and distal. A survey instrument was used to assess students' self-efficacy in technology and scientific inquiry. Science content mastery was assessed at the immediate (participation in game play), close (engagement in-game reports), and proximal (understanding of targeted concepts) levels; scientific reasoning was assessed at the distal (domain-general critical thinking test) level. This quasi-experimental study used a convenience sampling method. Seven regular fifth-grade classes participated: three served as the control group and the other four as the intervention group. A cohort of 165 students participated in this study; the treatment group contained 38 boys and 52 girls, and the control group contained 36 boys and 39 girls. Two-tailed t-tests, Analysis of Covariance (ANCOVA), and Pearson correlation were used to analyze the data. The data supported the rejection of the null hypothesis for the three research questions. The correlational analyses showed strong relationships among three of the four variables. There were no correlations between gender and the three dependent variables.
The findings of this
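Two of the statistics named above are compact enough to state concretely. A self-contained sketch of the Pearson correlation and the pooled two-sample t statistic (ANCOVA is omitted, since it requires fitting a full linear model):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def two_sample_t(x, y):
    """Pooled-variance t statistic for two independent samples
    (the statistic a two-tailed t-test evaluates)."""
    n1, n2 = len(x), len(y)
    m1, m2 = sum(x) / n1, sum(y) / n2
    v1 = sum((a - m1) ** 2 for a in x) / (n1 - 1)
    v2 = sum((b - m2) ** 2 for b in y) / (n2 - 1)
    sp = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / (sp * math.sqrt(1 / n1 + 1 / n2))
```

For a perfectly linear pair of samples pearson_r returns 1.0; in the study, such statistics were computed between treatment and control scores at each assessment level.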

  13. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview: During the past three months, activities focused on data operations, testing and reinforcing shift and operational procedures for data production and transfer, MC production, and user support. Planning of the computing resources in view of the new LHC calendar is ongoing. Two new task forces were created to support the integration work: Site Commissioning, which develops tools helping distributed sites to monitor job and data workflows, and Analysis Support, which collects user experience and feedback during analysis activities and develops tools to increase efficiency. The development plan for DMWM for 2009/2011 was drawn up at the beginning of the year, based on the requirements from the Physics, Computing and Offline groups (see Offline section). The Computing management meeting at FermiLab on February 19th and 20th was an excellent opportunity to discuss the impact of, and to address issues and solutions to, the main challenges facing CMS computing. The lack of manpower is particul...

  14. Validation of Advanced Computer Codes for VVER Technology: LB-LOCA Transient in PSB-VVER Facility

    Directory of Open Access Journals (Sweden)

    A. Del Nevo

    2012-01-01

    The OECD/NEA PSB-VVER project provided unique and useful experimental data for code validation from the PSB-VVER test facility. This facility represents the scaled-down layout of the Russian-designed pressurized water reactor, namely, the VVER-1000. Five experiments were executed, dealing with loss of coolant scenarios (small, intermediate, and large break loss of coolant accidents), a primary-to-secondary leak, and a parametric study (a natural circulation test) aimed at characterizing the VVER system at reduced mass inventory conditions. The comparative analysis presented in the paper concerns the large break loss of coolant accident experiment. Four participants from three different institutions were involved in the benchmark and applied their own models and set-ups for four different thermal-hydraulic system codes. The benchmark demonstrated the performance of these codes in predicting phenomena relevant to safety on the basis of fixed criteria.

  15. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing activity had ramped down after the completion of the reprocessing of the 2012 data and parked data, but is increasing with new simulation samples for analysis and upgrade studies. Much of the Computing effort is currently devoted to activities that improve the computing system in preparation for 2015. Operations Office: Since the beginning of 2013, the Computing Operations team successfully re-processed the 2012 data in record time, in part by using opportunistic resources like the San Diego Supercomputer Center to re-process the primary datasets HTMHT and MultiJet in Run2012D much earlier than planned. The Heavy-Ion data-taking period was successfully concluded in February, collecting almost 500 TB. Figure 3: Number of events per month (data). In LS1, the emphasis is on increasing the efficiency and flexibility of the infrastructure and operation. Computing Operations is working on separating disk and tape at the Tier-1 sites and the full implementation of the xrootd federation ...

  16. Impact of revised 10 CFR 20 on existing performance assessment computer codes used for LLW disposal facilities

    International Nuclear Information System (INIS)

    Leonard, P.R.; Seitz, R.R.

    1992-04-01

    The US Nuclear Regulatory Commission (NRC) recently announced a revision to Title 10 of the Code of Federal Regulations, Part 20 (10 CFR 20), ''Standards for Protection Against Radiation,'' which incorporates recommendations contained in Publications 26 and 30 of the International Commission on Radiological Protection (ICRP), issued in 1977 and 1979, respectively. The revision to 10 CFR 20 was also developed in parallel with Presidential Guidance on occupational radiation protection published in the Federal Register. This study concludes that the issuance of the revised 10 CFR 20 will not affect calculations using the computer codes considered in this report: in general, the computer codes, and the EPA and DOE guidance on which they are based, were developed in a manner consistent with the guidance provided in ICRP 26/30, well before the revision of 10 CFR 20

  17. Development of application program and building database to increase facilities for using the radiation effect assessment computer codes

    International Nuclear Information System (INIS)

    Hyun Seok Ko; Young Min Kim; Suk-Hoon Kim; Dong Hoon Shin; Chang-Sun Kang

    2005-01-01

    The current radiation effect assessment system requires skilled application of various codes and a high level of field-specific expertise. It is therefore very difficult for radiation users who lack this specialized knowledge to assess radiation effects properly. To address this, we previously developed five Windows-based computer codes that together form a radiation effect assessment system for fields using radiation, including nuclear power generation. A further computer program was needed so that non-specialists could use these five codes with ease. We therefore built an AI-based expert system that can infer the appropriate assessment approach by itself, according to the characteristics of a given problem. The expert program can guide users, search data, and forward questions directly to an administrator. Considering the situations that a user applying the five computer codes may actually encounter, the design addresses the following aspects. First, access to the necessary concepts and data must be improved. Second, it must be easy to acquire the relevant background theory and to use the corresponding computer code. Third, a question-and-answer function is needed for user questions not anticipated in advance. Finally, the database must be updated continuously. To meet these requirements, we developed a client program that organizes reference data, provides a query mechanism for accessing the organized data, and displays the retrieved data. It includes instruction material (an effective procedure and methodology for acquiring the theory) referring to the five computer codes. A database management program (DBMS) was developed so that the data can easily be kept up to date. For the Q and A function, a Q and A board is embedded in the client program so that users can search the content of previous questions and answers. (authors)

  18. Modeling bubble condenser containment with computer code COCOSYS: post-test calculations of the main steam line break experiment at ELECTROGORSK BC V-213 test facility

    International Nuclear Information System (INIS)

    Lola, I.; Gromov, G.; Gumenyuk, D.; Pustovit, V.; Sholomitsky, S.; Wolff, H.; Arndt, S.; Blinkov, V.; Osokin, G.; Melikhov, O.; Melikhov, V.; Sokoline, A.

    2005-01-01

    Containment of the WWER-440 Model 213 nuclear power plant features a Bubble Condenser, a complex passive pressure-suppression system intended to limit the pressure rise in the containment during accidents. Because the original design documentation lacks experimental evidence of its successful operation, the performance of this system in accidents involving ruptures of large high-energy pipes on the primary and secondary sides remains a known safety concern for this containment type. A number of research and analytical studies have therefore been conducted in recent years by the countries operating WWER-440 reactors and their Western partners to verify Bubble Condenser operation under accident conditions. Comprehensive experimental research at the Electrogorsk BC V-213 test facility, commissioned in 1999 at the Electrogorsk Research and Engineering Centre (EREC), constitutes an essential part of these efforts. This is currently the only operating large-scale facility enabling integral tests of Bubble Condenser performance. Several large international research projects conducted at this facility in 1999-2003 have covered a spectrum of pipe-break accidents. These experiments have substantially improved understanding of the overall system performance and the thermal-hydraulic phenomena in the Bubble Condenser Containment, and provided valuable information for validating containment codes against experimental results. One of the recent experiments, denoted SLB-G02, simulated a steam line break. The results of this experiment are of special value for engineers working on computer code applications for WWER-440 containment analyses, giving an opportunity to verify the validity of the code predictions and to identify possibilities for model improvement. This paper describes the results of the post-test calculations of the SLB-G02 experiment, conducted as a joint effort of GRS, Germany and Ukrainian technical support organizations for

  19. COMPUTING

    CERN Multimedia

    M. Kasemann P. McBride Edited by M-C. Sawley with contributions from: P. Kreuzer D. Bonacorsi S. Belforte F. Wuerthwein L. Bauerdick K. Lassila-Perini M-C. Sawley

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the comput...

  20. COMPUTING

    CERN Multimedia

    P. McBride

    It has been a very active year for the computing project with strong contributions from members of the global community. The project has focused on site preparation and Monte Carlo production. The operations group has begun processing data from P5 as part of the global data commissioning. Improvements in transfer rates and site availability have been seen as computing sites across the globe prepare for large scale production and analysis as part of CSA07. Preparations for the upcoming Computing Software and Analysis Challenge CSA07 are progressing. Ian Fisk and Neil Geddes have been appointed as coordinators for the challenge. CSA07 will include production tests of the Tier-0 production system, reprocessing at the Tier-1 sites and Monte Carlo production at the Tier-2 sites. At the same time there will be a large analysis exercise at the Tier-2 centres. Pre-production simulation of the Monte Carlo events for the challenge is beginning. Scale tests of the Tier-0 will begin in mid-July and the challenge it...

  1. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction During the past six months, Computing participated in the STEP09 exercise, had a major involvement in the October exercise and has been working with CMS sites on improving open issues relevant for data taking. At the same time, operations for MC production, real data reconstruction and re-reconstructions and data transfers at large scales were performed. STEP09 was successfully conducted in June as a joint exercise with ATLAS and the other experiments. It gave a good indication of the readiness of the WLCG infrastructure, with the two major LHC experiments stressing the reading, writing and processing of physics data. The October Exercise, in contrast, was conducted as an all-CMS exercise, where Physics, Computing and Offline worked on a common plan to exercise all steps to efficiently access and analyze data. As one of the major results, the CMS Tier-2s demonstrated that they are fully capable of performing data analysis. In recent weeks, efforts were devoted to CMS Computing readiness. All th...

  2. COMPUTING

    CERN Multimedia

    M. Kasemann

    CCRC’08 challenges and CSA08 During the February campaign of the Common Computing readiness challenges (CCRC’08), the CMS computing team had achieved very good results. The link between the detector site and the Tier0 was tested by gradually increasing the number of parallel transfer streams well beyond the target. Tests covered the global robustness at the Tier0, processing a massive number of very large files and with a high writing speed to tapes.  Other tests covered the links between the different Tiers of the distributed infrastructure and the pre-staging and reprocessing capacity of the Tier1’s: response time, data transfer rate and success rate for Tape to Buffer staging of files kept exclusively on Tape were measured. In all cases, coordination with the sites was efficient and no serious problem was found. These successful preparations prepared the ground for the second phase of the CCRC’08 campaign, in May. The Computing Software and Analysis challen...

  3. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction The first data taking period of November produced a first scientific paper, and this is a very satisfactory step for Computing. It also gave the invaluable opportunity to learn and debrief from this first, intense period, and make the necessary adaptations. The alarm procedures between different groups (DAQ, Physics, T0 processing, Alignment/calibration, T1 and T2 communications) have been reinforced. A major effort has also been invested into remodeling and optimizing operator tasks in all activities in Computing, in parallel with the recruitment of new Cat A operators. The teams are being completed and by mid year the new tasks will have been assigned. CRB (Computing Resource Board) The Board met twice since last CMS week. In December it reviewed the experience of the November data-taking period and could measure the positive improvements made for the site readiness. It also reviewed the policy under which Tier-2 are associated with Physics Groups. Such associations are decided twice per ye...

  4. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the co...

  5. Guide to making time-lapse graphics using the facilities of the National Magnetic Fusion Energy Computing Center

    International Nuclear Information System (INIS)

    Munro, J.K. Jr.

    1980-05-01

    The advent of large, fast computers has opened the way to modeling more complex physical processes and to handling very large quantities of experimental data. The amount of information that can be processed in a short period of time is so great that use of graphical displays assumes greater importance as a means of displaying this information. Information from dynamical processes can be displayed conveniently by use of animated graphics. This guide presents the basic techniques for generating black and white animated graphics, with consideration of aesthetic, mechanical, and computational problems. The guide is intended for use by someone who wants to make movies on the National Magnetic Fusion Energy Computing Center (NMFECC) CDC-7600. Problems encountered by a geographically remote user are given particular attention. Detailed information is given that will allow a remote user to do some file checking and diagnosis before giving graphics files to the system for processing into film in order to spot problems without having to wait for film to be delivered. Source listings of some useful software are given in appendices along with descriptions of how to use it. 3 figures, 5 tables

  6. Implementation of the Principal Component Analysis onto High-Performance Computer Facilities for Hyperspectral Dimensionality Reduction: Results and Comparisons

    Directory of Open Access Journals (Sweden)

    Ernestina Martel

    2018-06-01

    Full Text Available Dimensionality reduction represents a critical preprocessing step for increasing the efficiency and performance of many hyperspectral imaging algorithms. However, dimensionality reduction algorithms such as Principal Component Analysis (PCA) are computationally demanding, making it advisable to implement them on high-performance computer architectures for applications under strict latency constraints. This work presents the implementation of the PCA algorithm on two different high-performance devices, namely an NVIDIA Graphics Processing Unit (GPU) and a Kalray manycore, uncovering a highly valuable set of tips and tricks for taking full advantage of the inherent parallelism of these high-performance computing platforms, and hence reducing the time required to process a given hyperspectral image. Moreover, the results achieved with different hyperspectral images have been compared with those obtained with a recently published field-programmable gate array (FPGA)-based implementation of the PCA algorithm, providing, for the first time in the literature, a comprehensive analysis highlighting the pros and cons of each option.
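    As context for the computational load being accelerated, the core PCA pipeline for a hyperspectral cube can be sketched in plain NumPy. This is the generic textbook formulation, not the authors' GPU/Kalray implementation; the cube dimensions, function name, and variable names are illustrative.

    ```python
    import numpy as np

    def pca_reduce(cube, n_components):
        """Reduce a hyperspectral cube (rows, cols, bands) to n_components bands."""
        rows, cols, bands = cube.shape
        pixels = cube.reshape(-1, bands).astype(np.float64)

        # 1. Centre each spectral band on its mean.
        centred = pixels - pixels.mean(axis=0)

        # 2. Band-by-band covariance matrix (bands x bands).
        cov = centred.T @ centred / (pixels.shape[0] - 1)

        # 3. Eigendecomposition; eigh returns eigenvalues in ascending order,
        #    so reverse to get components sorted by explained variance.
        eigvals, eigvecs = np.linalg.eigh(cov)
        order = np.argsort(eigvals)[::-1]

        # 4. Project every pixel spectrum onto the leading components.
        projected = centred @ eigvecs[:, order[:n_components]]
        return projected.reshape(rows, cols, n_components)

    # Synthetic example: a 100x100-pixel, 50-band cube reduced to 3 bands.
    cube = np.random.default_rng(0).normal(size=(100, 100, 50))
    reduced = pca_reduce(cube, 3)
    print(reduced.shape)  # (100, 100, 3)
    ```

    The covariance and eigendecomposition steps are the latency bottlenecks the paper targets, since their cost grows with the number of bands and pixels.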

  7. Using computer graphics to analyze the placement of neutral-beam injectors for the Mirror Fusion Test Facility

    International Nuclear Information System (INIS)

    Horvath, J.A.

    1977-01-01

    To optimize the neutral-beam current incident on the fusion plasma and limit the heat load on exposed surfaces of the Mirror Fusion Test Facility magnet coils, impingement of the neutral beams on the magnet structure must be minimized. Also, placement of the neutral-beam injectors must comply with specifications for neutral-current heating of the plasma and should allow maximum flexibility to accommodate alternative beam aiming patterns without significant hardware replacement or experiment down-time. Injector placements and aimings are analyzed by means of the Structural Analysis Movie Post Processor (SAMPP), a general-purpose graphics code for the display of three-dimensional finite-element models. SAMPP is used to visually assemble, disassemble, or cut away sections of the complex three-dimensional apparatus, which is represented by an assemblage of 8-node solid finite elements. The resulting picture is used to detect and quantify interactions between the structure and the neutral-particle beams

  8. COMPUTING

    CERN Multimedia

    Contributions from I. Fisk

    2012-01-01

    Introduction The start of the 2012 run has been busy for Computing. We have reconstructed, archived, and served a larger sample of new data than in 2011, and we are in the process of producing an even larger new sample of simulations at 8 TeV. The running conditions and system performance are largely what was anticipated in the plan, thanks to the hard work and preparation of many people. Heavy ions Heavy Ions has been actively analysing data and preparing for conferences.  Operations Office Figure 6: Transfers from all sites in the last 90 days For ICHEP and the Upgrade efforts, we needed to produce and process record amounts of MC samples while supporting the very successful data-taking. This was a large burden, especially on the team members. Nevertheless the last three months were very successful and the total output was phenomenal, thanks to our dedicated site admins who keep the sites operational and the computing project members who spend countless hours nursing the...

  9. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction A large fraction of the effort during the last period was focused on the preparation and monitoring of the February tests of the Common VO Computing Readiness Challenge 08. CCRC08 is being run by the WLCG collaboration in two phases, between the centres and all experiments. The February test is dedicated to functionality tests, while the May challenge will consist of running at all centres and with full workflows. For this first period, a number of functionality checks of the computing power, data repositories and archives as well as network links are planned. This will help assess the reliability of the systems under a variety of loads and identify possible bottlenecks. Many tests are scheduled together with other VOs, allowing the full scale stress test. The data rates (writing, accessing and transferring) are being checked under a variety of loads and operating conditions, as well as the reliability and transfer rates of the links between Tier-0 and Tier-1s. In addition, the capa...

  10. COMPUTING

    CERN Multimedia

    Matthias Kasemann

    Overview The main focus during the summer was to handle data coming from the detector and to perform Monte Carlo production. The lessons learned during the CCRC and CSA08 challenges in May were addressed by dedicated PADA campaigns led by the Integration team. Big improvements were achieved in the stability and reliability of the CMS Tier1 and Tier2 centres by regular and systematic follow-up of faults and errors with the help of the Savannah bug tracking system. In preparation for data taking, the roles of a Computing Run Coordinator and regular computing shifts monitoring the services and infrastructure, as well as interfacing to the data operations tasks, are being defined. The shift plan until the end of 2008 is being put together. User support worked on documentation and organized several training sessions. The ECoM task force delivered the report on “Use Cases for Start-up of pp Data-Taking” with recommendations and a set of tests to be performed for trigger rates much higher than the ...

  11. COMPUTING

    CERN Multimedia

    P. MacBride

    The Computing Software and Analysis Challenge CSA07 has been the main focus of the Computing Project for the past few months. Activities began over the summer with the preparation of the Monte Carlo data sets for the challenge and tests of the new production system at the Tier-0 at CERN. The pre-challenge Monte Carlo production was done in several steps: physics generation, detector simulation, digitization and conversion to RAW format, after which the samples were run through the High Level Trigger (HLT). The data was then merged into three "Soups": Chowder (ALPGEN), Stew (Filtered Pythia) and Gumbo (Pythia). The challenge officially started when the first Chowder events were reconstructed on the Tier-0 on October 3rd. The data operations teams were very busy during the challenge period. The MC production teams continued with signal production and processing while the Tier-0 and Tier-1 teams worked on splitting the Soups into Primary Data Sets (PDS), reconstruction and skimming. The storage sys...

  12. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing activity has been at a lower level as the Run 1 samples are completing and smaller samples for upgrades and preparations are ramping up. Much of the computing activity is focused on preparations for Run 2 and on improvements in data access and in the flexibility of using resources. Operations Office Data processing was slow in the second half of 2013, with only the legacy re-reconstruction pass of the 2011 data being processed at the sites.   Figure 1: MC production and processing was more in demand, with a peak of over 750 million GEN-SIM events in a single month.   Figure 2: The transfer system worked reliably and efficiently, transferring on average close to 520 TB per week with peaks close to 1.2 PB.   Figure 3: The volume of data moved between CMS sites in the last six months   Tape utilisation was a focus for the operations teams, with frequent deletion campaigns moving deprecated 7 TeV MC GEN-SIM samples to INVALID datasets, which could be cleaned up...

  13. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

      Introduction Computing activity has been running at a sustained, high rate as we collect data at high luminosity, process simulation, and begin to process the parked data. The system is functional, though a number of improvements are planned during LS1. Many of the changes will impact users, we hope only in positive ways. We are trying to improve the distributed analysis tools as well as the ability to access more data samples more transparently.  Operations Office Figure 2: Number of events per month, for 2012 Since the June CMS Week, Computing Operations teams successfully completed data re-reconstruction passes and finished the CMSSW_53X MC campaign with over three billion events available in AOD format. Recorded data was successfully processed in parallel, exceeding 1.2 billion raw physics events per month for the first time in October 2012 due to the increase in data-parking rate. In parallel, large efforts were dedicated to WMAgent development and integrati...

  14. Steam condensation induced water hammer in a vertical up-fill configuration within an integral test facility. Experiments and computational simulations

    Energy Technology Data Exchange (ETDEWEB)

    Dirndorfer, Stefan

    2017-01-17

    Condensation induced water hammer is a source of danger and unpredictable loads in pipe systems. Studies of condensation induced water hammer have predominantly been made for horizontal pipes; studies of vertical pipe geometries are quite rare. This work presents a new integral test facility and an analysis of condensation induced water hammer in a vertical up-fill configuration. Thanks to state-of-the-art measurement technology, the phenomenology of vertical condensation induced water hammer can be analysed by means of sufficiently highly sampled experimental data. The system code ATHLET is used to simulate the UniBw condensation induced water hammer experiments. A newly developed and implemented direct contact condensation model enables ATHLET to calculate condensation induced water hammer. The modified ATHLET system code is validated against selected experiments. A sensitivity analysis in ATHLET, together with the experimental data, allows the performance of ATHLET in computing condensation induced water hammer in a vertical up-fill configuration to be assessed.

  15. Steam condensation induced water hammer in a vertical up-fill configuration within an integral test facility. Experiments and computational simulations

    International Nuclear Information System (INIS)

    Dirndorfer, Stefan

    2017-01-01

    Condensation induced water hammer is a source of danger and unpredictable loads in pipe systems. Studies of condensation induced water hammer have predominantly been made for horizontal pipes; studies of vertical pipe geometries are quite rare. This work presents a new integral test facility and an analysis of condensation induced water hammer in a vertical up-fill configuration. Thanks to state-of-the-art measurement technology, the phenomenology of vertical condensation induced water hammer can be analysed by means of sufficiently highly sampled experimental data. The system code ATHLET is used to simulate the UniBw condensation induced water hammer experiments. A newly developed and implemented direct contact condensation model enables ATHLET to calculate condensation induced water hammer. The modified ATHLET system code is validated against selected experiments. A sensitivity analysis in ATHLET, together with the experimental data, allows the performance of ATHLET in computing condensation induced water hammer in a vertical up-fill configuration to be assessed.

  16. Description of NORMTRI: a computer program for assessing the off-site consequences from air-borne releases of tritium during normal operation of nuclear facilities

    International Nuclear Information System (INIS)

    Raskob, W.

    1994-10-01

    The computer program NORMTRI has been developed to calculate the behaviour of tritium released into the atmosphere during normal operation of nuclear facilities. Both chemical forms, tritium gas and tritiated water vapour, can be investigated. The conversion of tritium gas into tritiated water, followed by its re-emission back to the atmosphere, as well as the conversion into organically bound tritium, is considered. NORMTRI is based on the statistical Gaussian dispersion model ISOLA, which calculates the activity concentration in air near the ground and the contamination due to dry and wet deposition at specified locations in a polar grid system. ISOLA requires a four-parameter meteorological statistics derived from one or more years of synoptic recordings of 1-hour averages of wind speed, wind direction, stability class and precipitation intensity. An additional feature of NORMTRI is the choice among several dose calculation procedures, ranging from the equations of the German regulatory guidelines to a pure specific-equilibrium approach. (orig.)
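    For orientation, the classical Gaussian plume equation that underlies dispersion models of this kind can be sketched as follows. This is the generic textbook form with ground reflection, not NORMTRI's or ISOLA's actual sector-averaged, statistics-weighted formulation; all parameter values, and the function name, are illustrative.

    ```python
    import math

    def gaussian_plume(Q, u, y, z, H, sigma_y, sigma_z):
        """Plume concentration (Bq/m^3) for a continuous elevated release.

        Q: source strength (Bq/s); u: wind speed (m/s); H: release height (m);
        y, z: crosswind and vertical coordinates (m); sigma_y, sigma_z:
        dispersion parameters (m) at the downwind distance of interest.
        In a statistical model the sigmas depend on the stability class,
        which is why the meteorological statistics described above are needed.
        """
        lateral = math.exp(-y**2 / (2 * sigma_y**2))
        vertical = (math.exp(-(z - H)**2 / (2 * sigma_z**2))
                    + math.exp(-(z + H)**2 / (2 * sigma_z**2)))  # ground reflection
        return Q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

    # Illustrative case: plume centreline at ground level, 1 GBq/s from a 50 m stack.
    c = gaussian_plume(Q=1e9, u=5.0, y=0.0, z=0.0, H=50.0, sigma_y=80.0, sigma_z=40.0)
    print(f"{c:.0f} Bq/m^3")
    ```

    A production code such as NORMTRI then weights many such single-condition results by the frequency of each wind/stability/precipitation combination and converts the concentrations to dose.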

  17. Radiological Risk Assessments for Occupational Exposure at Fuel Fabrication Facility in AlTuwaitha Site Baghdad – Iraq by using RESRAD Computer Code

    Science.gov (United States)

    Ibrahim, Ziadoon H.; Ibrahim, S. A.; Mohammed, M. K.; Shaban, A. H.

    2018-05-01

    The purpose of this study is to evaluate the radiological risks to workers from one year of their activities at the Fuel Fabrication Facility (FFF), so as to provide the protection necessary to prevent or minimize the risks resulting from these activities; this site is now under the Iraqi decommissioning program (40). Surface and subsurface soil samples were collected from different positions in this facility and analyzed by the gamma-ray spectroscopy technique; a High Purity Germanium detector (HPGe) was used. An admixture of radioactive isotopes (232Th, 40K, 238U, 235U, 137Cs) was found. According to the laboratory results, the highest values were (975758) for 238U, (21203) for 235U, (218) for 232Th, (4046) for 40K and (129) for 137Cs, in Bq/kg. The annual total radiation dose and risks were estimated using the RESRAD (onsite) 7.0 computer code. The highest total radiation dose was (5617 μSv/year) in the area represented by soil sample (S7), and the radiological risks (morbidity and mortality) were (118E02, 8661E03) respectively in the same area

  18. COMPUTING

    CERN Multimedia

    M. Kasemann

    CMS relies on a well-functioning, distributed computing infrastructure. The Site Availability Monitoring (SAM) and the Job Robot submission have been very instrumental for site commissioning, increasing the availability of more sites so that they can participate in CSA07 and are ready to be used for analysis. The commissioning process has been further developed, including "lessons learned" documentation via the CMS twiki. Recently the visualization, presentation and summarizing of SAM tests for sites has been redesigned; it is now developed by the central ARDA project of WLCG. Work to test the new gLite Workload Management System was performed; a 4 times increase in throughput with respect to the LCG Resource Broker is observed. CMS has designed and launched a new-generation traffic load generator called "LoadTest" to commission and keep exercised all data transfer routes in the CMS PhEDEx topology. Since mid-February, a transfer volume of about 12 P...

  19. Computational fluid dynamics as a virtual facility for R and D in the IRIS project: an overview

    International Nuclear Information System (INIS)

    Colombo, E.; Inzoli, F.; Ricotti, M.; Uddin, R.; Yan, Y.; Sobh, N.

    2004-01-01

    The pressurized light water cooled, medium power (1000 MWt) IRIS (International Reactor Innovative and Secure) has been under development for four years by an international consortium of over 21 organizations from ten countries. The plant conceptual design was completed in 2001 and the preliminary design is nearing completion. The pre-application licensing process with the NRC started in October 2002, and IRIS is one of the designs considered by US utilities as part of the ESP (Early Site Permit) process. The development of a new nuclear plant concept presents the opportunity and potential for significant usage of computational fluid dynamics (CFD) in the design process, as in many conventional applications related to power generation. A CFD group of international scientists has been given the mission of investigating the many application opportunities for CFD related to the IRIS project and of verifying the support that the IRIS design process may gain from CFD in terms of time, costs, resource saving, and visibility. The key objective identified is the use of CFD as a design tool for virtual tests, in order to simplify the optimization effort for the nuclear plant's components and support the IRIS testing program. In this paper, the CFD group is described in terms of its resources and capabilities. A program of activities with identified goals and a possible schedule is also presented. (author)

  20. Computing facilities available to final-year students at 3 UK dental schools in 1997/8: their use, and students' attitudes to information technology.

    Science.gov (United States)

    Grigg, P; Macfarlane, T V; Shearer, A C; Jepson, N J; Stephens, C D

    2001-08-01

    To identify computer facilities available in 3 dental schools where 3 different approaches to the use of technology-based learning material have been adopted and assess dental students' perception of their own computer skills and their attitudes towards information technology. Multicentre cross sectional by questionnaire. All 181 dental students in their final year of study (1997-8). The overall participation rate was 80%. There were no differences between schools in the students' self assessment of their IT skills but only 1/3 regarded themselves as competent in basic skills and nearly 50% of students in all 3 schools felt that insufficient IT training had been provided to enable them to follow their course without difficulty. There were significant differences between schools in most of the other areas examined which reflect the different ways in which IT can be used to support the dental course. 1. Students value IT as an educational tool. 2. Their awareness of the relevance of a knowledge of information technology for their future careers remains generally low. 3. There is a need to provide effective instruction in IT skills for those dental students who do not acquire these during secondary education.

  1. Support facilities

    International Nuclear Information System (INIS)

    Williamson, F.S.; Blomquist, J.A.; Fox, C.A.

    1977-01-01

    Computer support is centered on the Remote Access Data Station (RADS), which is equipped with a 1000 lpm printer, 1000 cpm reader, and a 300 cps paper tape reader with 500-foot spools. The RADS is located in a data preparation room with four 029 key punches (two of which interpret), a storage vault for archival magnetic tapes, card files, and a 30 cps interactive terminal principally used for job inquiry and routing. An adjacent room provides work space for users, with a documentation library and a consultant's office, plus file storage for programs and their documentations. The facility has approximately 2,600 square feet of working laboratory space, and includes two fully equipped photographic darkrooms, sectioning and autoradiographic facilities, six microscope cubicles, and five transmission electron microscopes and one Cambridge scanning electron microscope equipped with an x-ray energy dispersive analytical system. Ancillary specimen preparative equipment includes vacuum evaporators, freeze-drying and freeze-etching equipment, ultramicrotomes, and assorted photographic and light microscopic equipment. The extensive physical plant of the animal facilities includes provisions for holding all species of laboratory animals under controlled conditions of temperature, humidity, and lighting. More than forty rooms are available for studies of the smaller species. These have a potential capacity of more than 75,000 mice, or smaller numbers of larger species and those requiring special housing arrangements. There are also six dog kennels to accommodate approximately 750 dogs housed in runs that consist of heated indoor compartments and outdoor exercise areas

  2. SU-E-T-531: Performance Evaluation of Multithreaded Geant4 for Proton Therapy Dose Calculations in a High Performance Computing Facility

    International Nuclear Information System (INIS)

    Shin, J; Coss, D; McMurry, J; Farr, J; Faddegon, B

    2014-01-01

    Purpose: To evaluate the efficiency of multithreaded Geant4 (Geant4-MT, version 10.0) for proton Monte Carlo dose calculations using a high performance computing facility. Methods: Geant4-MT was used to calculate 3D dose distributions in 1×1×1 mm3 voxels in a water phantom and in a patient's head with a 150 MeV proton beam covering approximately 5×5 cm2 in the water phantom. Three timestamps were measured on the fly to separately analyze the time required for initialization (which cannot be parallelized), the processing time of individual threads, and the completion time. Scalability of the averaged processing time per thread was calculated as a function of thread number (1, 100, 150, and 200) for both 1M and 50M histories. The total memory usage was recorded. Results: Simulations with 50M histories were fastest with 100 threads, taking approximately 1.3 hours and 6 hours for the water phantom and the CT data, respectively, with better than 1.0% statistical uncertainty. The calculations show 1/N scalability in the event loops for both cases. The gains from parallel calculation started to decrease at 150 threads. The memory usage increases linearly with the number of threads. No critical failures were observed during the simulations. Conclusion: Multithreading in Geant4-MT decreased simulation time for proton dose distribution calculations by factors of 64 and 54 at a near-optimal 100 threads for the water phantom and the patient data, respectively. Further simulations will be done to determine the efficiency at the optimal thread number. Considering the trend of computer architecture development, utilizing Geant4-MT for radiotherapy simulations is an excellent cost-effective alternative to a distributed batch queuing system. However, because the scalability depends highly on simulation details, i.e., the ratio of the processing time of one event versus the waiting time to access the shared event queue, a performance evaluation as described is recommended.
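    The timing behaviour described above (a serial initialization phase plus an event loop that scales as 1/N with the thread count) can be sketched as a simple model. The constants below are illustrative placeholders, not measurements from the study:

    ```python
    def wall_time(n_threads, t_init, t_events):
        """Total wall time: a serial initialization phase plus an event loop
        whose work is split evenly across n_threads."""
        return t_init + t_events / n_threads

    def speedup(n_threads, t_init, t_events):
        """Speedup relative to a single thread."""
        return wall_time(1, t_init, t_events) / wall_time(n_threads, t_init, t_events)

    # Illustrative constants only (hours): 0.1 h of serial initialization,
    # 120 h of single-threaded event-loop work -- not data from the study.
    for n in (1, 100, 150, 200):
        s = speedup(n, 0.1, 120.0)
        print(f"{n:3d} threads: {s:5.1f}x speedup, efficiency {s / n:.2f}")
    ```

    The serial initialization term is why efficiency falls off as the thread count grows, matching the reported drop in gains beyond 150 threads.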

  3. Development of a computational code for calculations of shielding in dental facilities; Desenvolvimento de um codigo computacional para calculos de blindagem em instalacoes odontologicas

    Energy Technology Data Exchange (ETDEWEB)

    Lava, Deise D.; Borges, Diogo da S.; Affonso, Renato R.W.; Guimaraes, Antonio C.F.; Moreira, Maria de L., E-mail: deise_dy@hotmail.com, E-mail: diogosb@outlook.com, E-mail: raoniwa@yahoo.com.br, E-mail: tony@ien.gov.br, E-mail: malu@ien.gov.br [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil)

    2014-07-01

    This paper addresses calculations of shielding to minimize the interaction of ionizing radiation with patients and/or personnel. The work draws on the report Radiation Protection in Dentistry (NCRP-145), which establishes calculations and standards to be adopted to ensure the safety of those who may be exposed to ionizing radiation in dental facilities, according to the dose limits established by the CNEN-NN-3.1 standard published in September 2011. The methodology comprises the use of a computer language for processing the data provided by that report, and a commercial application used for creating residential and decoration projects. The FORTRAN language was adopted, and the code was applied to a real case. The result is a program capable of returning the thickness of a material, such as steel, lead, wood, glass, plaster, acrylic, or leaded glass, which can be used for effective shielding against single or continuous pulse beams. Several variables are used to calculate the thickness of the shield, such as: number of films used per week, film load, use factor, occupancy factor, distance between the wall and the source, transmission factor, workload, area definition, beam intensity, and examination type (intraoral or panoramic). Before the methodology was applied, the results were validated against examples provided by NCRP-145; the recalculated examples give answers consistent with the report.
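    As a rough illustration of the NCRP-style calculation such a code automates, the required barrier transmission can be computed from the workload, use, occupancy and distance factors, then converted to a thickness via tenth-value layers. This is a generic sketch of the broad-beam formalism, not the authors' FORTRAN code, and every numeric value below is a placeholder rather than NCRP-145 data:

    ```python
    import math

    def transmission_factor(P, d, W, U, T):
        """Required broad-beam barrier transmission B = P * d^2 / (W * U * T).
        P: shielding design goal at the occupied point (e.g. mGy/week),
        d: source-to-barrier distance (m), W: workload (mGy*m^2/week),
        U: use factor, T: occupancy factor."""
        return P * d**2 / (W * U * T)

    def barrier_thickness(B, tvl):
        """Thickness from the number of tenth-value layers (TVLs) needed
        to attenuate the beam to transmission B."""
        return tvl * math.log10(1.0 / B)

    # Placeholder inputs for illustration only.
    B = transmission_factor(P=0.02, d=2.0, W=1.0, U=1.0, T=1.0)
    print(f"B = {B:.3f}, thickness = {barrier_thickness(B, tvl=0.3):.2f} mm of lead")
    ```

    Real TVL values depend on the barrier material and beam quality and would be taken from the report's tables.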

  4. Computing, Environment and Life Sciences | Argonne National Laboratory

    Science.gov (United States)

    Computing, Environment and Life Sciences research divisions at Argonne National Laboratory: Biosciences (BIO), Computational Science (CPS), and Data Science and Learning (DSL). Related facilities and institutes include the Argonne Leadership Computing Facility, the Biosciences Division, the Environmental Science Division, and the Mathematics and Computer Science Division.

  5. Shared-resource computing for small research labs.

    Science.gov (United States)

    Ackerman, M J

    1982-04-01

    A real-time laboratory computer network is described. This network is composed of four real-time laboratory minicomputers located in each of four division laboratories and a larger minicomputer in a centrally located computer room. Off-the-shelf hardware and software were used with no customization. The network is configured for resource sharing using DECnet communications software and the RSX-11M multi-user real-time operating system. The cost effectiveness of the shared-resource network and multiple real-time processing using priority scheduling is discussed. Examples of utilization within a medical research department are given.

  6. Facilities & Leadership

    Data.gov (United States)

    Department of Veterans Affairs — The facilities web service provides VA facility information. The VA facilities locator is a feature that is available across the enterprise, on any webpage, for the...

  7. Biochemistry Facility

    Data.gov (United States)

    Federal Laboratory Consortium — The Biochemistry Facility provides expert services and consultation in biochemical enzyme assays and protein purification. The facility currently features 1) Liquid...

  8. Dance Facilities.

    Science.gov (United States)

    Ashton, Dudley, Ed.; Irey, Charlotte, Ed.

    This booklet represents an effort to assist teachers and administrators in the professional planning of dance facilities and equipment. Three chapters present the history of dance facilities, provide recommended dance facilities and equipment, and offer some adaptations of dance facilities and equipment, for elementary, secondary and college level…

  9. Mining Emerging Patterns for Recognizing Activities of Multiple Users in Pervasive Computing

    DEFF Research Database (Denmark)

    Gu, Tao; Wu, Zhanqing; Wang, Liang

    2009-01-01

    Understanding and recognizing human activities from sensor readings is an important task in pervasive computing. Existing work on activity recognition mainly focuses on recognizing activities for a single user in a smart home environment. However, in real life, there are often multiple inhabitants... sensor readings in a home environment, and propose a novel pattern mining approach to recognize both single-user and multi-user activities in a unified solution. We exploit Emerging Pattern – a type of knowledge pattern that describes significant changes between classes of data – for constructing our activity models, and propose an Emerging Pattern based Multi-user Activity Recognizer (epMAR) to recognize both single-user and multi-user activities. We conduct our empirical studies by collecting real-world activity traces done by two volunteers over a period of two weeks in a smart home environment...

  10. Staff experiences within the implementation of computer-based nursing records in residential aged care facilities: a systematic review and synthesis of qualitative research.

    Science.gov (United States)

    Meißner, Anne; Schnepp, Wilfried

    2014-06-20

    Since the introduction of electronic nursing documentation systems, their implementation has increased rapidly in Germany in recent years. The objectives of such systems are to save time, to improve information handling and to improve quality. To integrate IT into daily working processes, the employee is the pivotal element. Therefore it is important to understand nurses' experience with IT implementation. At present the literature shows a lack of understanding of staff experiences within the implementation process. A systematic review and meta-ethnographic synthesis of primary studies using qualitative methods was conducted in PubMed, CINAHL, and Cochrane. It adheres to the principles of the PRISMA statement. The studies were original, peer-reviewed articles from 2000 to 2013, focusing on computer-based nursing documentation in Residential Aged Care Facilities. The use of IT requires a different form of information processing. Some experience this new form of information processing as a benefit while others do not. The latter find it more difficult to enter data, and this results in poor clinical documentation. Improvement in the quality of residents' records leads to an overall improvement in the quality of care. However, if the quality of those records is poor, some residents do not receive the necessary care. Furthermore, the length of time necessary to complete the documentation is a prominent theme within that process. Those who are more efficient with the electronic documentation demonstrate improved time management. For those who are less efficient with electronic documentation, the information processing is perceived as time consuming. Normally, it is possible to experience benefits when using IT, but this depends on promoting or hindering factors, e.g. ease of use and the ability to use it, equipment availability and technical functionality, as well as attitude. 
In summary, the findings showed that members of staff experience IT as a benefit when

  11. Wind Energy Facilities

    Energy Technology Data Exchange (ETDEWEB)

    Laurie, Carol

    2017-02-01

    This book takes readers inside the places where daily discoveries shape the next generation of wind power systems. Energy Department laboratory facilities span the United States and offer wind research capabilities to meet industry needs. The facilities described in this book make it possible for industry players to increase reliability, improve efficiency, and reduce the cost of wind energy -- one discovery at a time. Whether you require blade testing or resource characterization, grid integration or high-performance computing, Department of Energy laboratory facilities offer a variety of capabilities to meet your wind research needs.

  12. Multi-User MIMO Across Small Cells

    DEFF Research Database (Denmark)

    Finn, Danny; Ahmadi, Hamed; Cattoni, Andrea Fabio

    2014-01-01

    The main contribution of this work is the proposal and assessment of the MU-MIMO across Small Cells concept. MU-MIMO is the spatial multiplexing of multiple users on a single time-frequency resource. In small cell networks, where the number of users per cell is low, finding suitable sets of users to be co-scheduled for MU-MIMO is not always possible. In these cases we propose MU-MIMO-based cell reassignments of users into adjacent cells to enable MU-MIMO operation. From system level simulations we found that, when the initial number of users per small cell is four, cell reassignment results in a 21.7% increase in the spectral efficiency gain attributed to MU-MIMO, and a higher percentage increase when the initial number of users per cell is lower. Going forward, we will extend this work to also consider energy savings through switching off small cells which are emptied by the reassignment process.

  13. Multi-User Low Intrusive Occupancy Detection.

    Science.gov (United States)

    Pratama, Azkario Rizky; Widyawan, Widyawan; Lazovik, Alexander; Aiello, Marco

    2018-03-06

    Smart spaces are those that are aware of their state and can act accordingly. Among the central elements of such a state is the presence of humans and their number. For a smart office building, such information can be used for saving energy and safety purposes. While acquiring presence information is crucial, using sensing techniques that are highly intrusive, such as cameras, is often not acceptable for the building occupants. In this paper, we illustrate a proposal for occupancy detection which is low intrusive; it is based on equipment typically available in modern offices such as room-level power-metering and an app running on workers' mobile phones. For power metering, we collect the aggregated power consumption and disaggregate the load of each device. For the mobile phone, we use the Received Signal Strength (RSS) of BLE (Bluetooth Low Energy) nodes deployed around workspaces to localize the phone in a room. We test the system in our offices. The experiments show that sensor fusion of the two sensing modalities gives 87-90% accuracy, demonstrating the effectiveness of the proposed approach.
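    The abstract does not specify how the two sensing modalities are combined; a generic weighted late-fusion sketch of per-room occupant counts (with illustrative weights, not values from the paper) might look like:

    ```python
    def fuse_counts(power_estimate, ble_estimate, w_power=0.5, w_ble=0.5):
        """Weighted late fusion of per-room occupant-count estimates from the
        two sensing modalities (power disaggregation and BLE RSS localization).
        The weights are illustrative placeholders, not values from the paper."""
        fused = w_power * power_estimate + w_ble * ble_estimate
        return round(fused)

    # Power metering suggests 3 occupants, BLE localization suggests 4;
    # with a 0.6/0.4 weighting the fused estimate is 3.
    print(fuse_counts(3, 4, w_power=0.6, w_ble=0.4))
    ```

    In practice the weights would be learned or validated against ground truth, which is where the reported 87-90% accuracy of the fused system comes from.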

  14. Informationally Efficient Multi-User Communication

    Science.gov (United States)

    2010-01-01

    DSM algorithms, the Optimal Spectrum Balancing (OSB) algorithm and the Iterative Spectrum Balancing (ISB) algorithm, were proposed to solve the... problem of maximization of a weighted rate-sum across all users [CYM06, YL06]. OSB has an exponential complexity in the number of users. ISB only has a... the duality gap min_{λ1,λ2} D(λ1, λ2) − max_{P1,P2} f(P1, P2) is not zero. Fig. 3.3 summarizes the three key steps of a dual method, the OSB algorithm

  15. Multi-User Low Intrusive Occupancy Detection

    Science.gov (United States)

    Widyawan, Widyawan; Lazovik, Alexander

    2018-01-01

    Smart spaces are those that are aware of their state and can act accordingly. Among the central elements of such a state is the presence of humans and their number. For a smart office building, such information can be used for saving energy and safety purposes. While acquiring presence information is crucial, using sensing techniques that are highly intrusive, such as cameras, is often not acceptable for the building occupants. In this paper, we illustrate a proposal for occupancy detection which is low intrusive; it is based on equipment typically available in modern offices such as room-level power-metering and an app running on workers’ mobile phones. For power metering, we collect the aggregated power consumption and disaggregate the load of each device. For the mobile phone, we use the Received Signal Strength (RSS) of BLE (Bluetooth Low Energy) nodes deployed around workspaces to localize the phone in a room. We test the system in our offices. The experiments show that sensor fusion of the two sensing modalities gives 87–90% accuracy, demonstrating the effectiveness of the proposed approach. PMID:29509693

  16. Multi-User Low Intrusive Occupancy Detection

    Directory of Open Access Journals (Sweden)

    Azkario Rizky Pratama

    2018-03-01

    Full Text Available Smart spaces are those that are aware of their state and can act accordingly. Among the central elements of such a state is the presence of humans and their number. For a smart office building, such information can be used for saving energy and safety purposes. While acquiring presence information is crucial, using sensing techniques that are highly intrusive, such as cameras, is often not acceptable for the building occupants. In this paper, we illustrate a proposal for occupancy detection which is low intrusive; it is based on equipment typically available in modern offices such as room-level power-metering and an app running on workers’ mobile phones. For power metering, we collect the aggregated power consumption and disaggregate the load of each device. For the mobile phone, we use the Received Signal Strength (RSS) of BLE (Bluetooth Low Energy) nodes deployed around workspaces to localize the phone in a room. We test the system in our offices. The experiments show that sensor fusion of the two sensing modalities gives 87–90% accuracy, demonstrating the effectiveness of the proposed approach.

  17. Waste Facilities

    Data.gov (United States)

    Vermont Center for Geographic Information — This dataset was developed from the Vermont DEC's list of certified solid waste facilities. It includes facility name, contact information, and the materials...

  18. Health Facilities

    Science.gov (United States)

    Health facilities are places that provide health care. They include hospitals, clinics, outpatient care centers, and specialized care centers, ... psychiatric care centers. When you choose a health facility, you might want to consider: How close it ...

  19. Fabrication Facilities

    Data.gov (United States)

    Federal Laboratory Consortium — The Fabrication Facilities are a direct result of years of testing support. Through years of experience, the three fabrication facilities (Fort Hood, Fort Lewis, and...

  20. Specialized computer architectures for computational aerodynamics

    Science.gov (United States)

    Stevenson, D. K.

    1978-01-01

    In recent years, computational fluid dynamics has made significant progress in modelling aerodynamic phenomena. Currently, one of the major barriers to future development lies in the compute-intensive nature of the numerical formulations and the relatively high cost of performing these computations on commercially available general-purpose computers, a cost that is high in dollar expenditure and/or elapsed time. Today's computing technology will support a program designed to create specialized computing facilities dedicated to the important problems of computational aerodynamics. One of the still-unresolved questions is the organization of the computing components in such a facility. The characteristics of fluid dynamic problems which will have significant impact on the choice of computer architecture for a specialized facility are reviewed.

  1. Future Computer Requirements for Computational Aerodynamics

    Science.gov (United States)

    1978-01-01

    Recent advances in computational aerodynamics are discussed as well as motivations for and potential benefits of a National Aerodynamic Simulation Facility having the capability to solve fluid dynamic equations at speeds two to three orders of magnitude faster than presently possible with general computers. Two contracted efforts to define processor architectures for such a facility are summarized.

  2. Comparison of Knowledge and Attitudes Using Computer-Based and Face-to-Face Personal Hygiene Training Methods in Food Processing Facilities

    Science.gov (United States)

    Fenton, Ginger D.; LaBorde, Luke F.; Radhakrishna, Rama B.; Brown, J. Lynne; Cutter, Catherine N.

    2006-01-01

    Computer-based training is increasingly favored by food companies for training workers due to convenience, self-pacing ability, and ease of use. The objectives of this study were to determine if personal hygiene training, offered through a computer-based method, is as effective as a face-to-face method in knowledge acquisition and improved…

  3. 6th July 2010 - United Kingdom Science and Technology Facilities Council W. Whitehorn signing the guest book with Head of International Relations F. Pauss, visiting the Computing Centre with Information Technology Department Deputy Head D. Foster, the LHC superconducting magnet test hall with Technology Department P. Strubin, the CERN Control Centre with Operation Group Leader M. Lamont and the CLIC/CTF3 facility with Project Leader J.-P. Delahaye.

    CERN Multimedia

    Teams : M. Brice, JC Gadmer

    2010-01-01

    6th July 2010 - United Kingdom Science and Technology Facilities Council W. Whitehorn signing the guest book with Head of International Relations F. Pauss, visiting the Computing Centre with Information Technology Department Deputy Head D. Foster, the LHC superconducting magnet test hall with Technology Department P. Strubin, the CERN Control Centre with Operation Group Leader M. Lamont and the CLIC/CTF3 facility with Project Leader J.-P. Delahaye.

  4. IKNO, a user facility for coherent terahertz and UV synchrotron radiation

    International Nuclear Information System (INIS)

    Sannibale, Fernando; Marcelli, Augusto; Innocenzi, Plinio

    2008-01-01

    IKNO (Innovation and KNOwledge) is a proposal for a multi-user facility based on an electron storage ring optimized for the generation of coherent synchrotron radiation (CSR) in the terahertz frequency range, and of broadband incoherent synchrotron radiation (SR) ranging from the IR to the VUV. IKNO can be operated in an ultra-stable CSR mode with a photon flux in the terahertz frequency region up to nine orders of magnitude higher than in existing 3rd generation light sources. Simultaneously with CSR operation, broadband incoherent SR up to VUV frequencies is available at the beamline ports. The main characteristics of the IKNO storage ring and its performance in terms of CSR and incoherent SR are described in this paper. The proposed location for the facility is in Sardinia, Italy

  5. Facilities Programming.

    Science.gov (United States)

    Bullis, Robert V.

    1992-01-01

    A procedure for physical facilities management written 17 years ago is still worth following today. Each of the steps outlined for planning, organizing, directing, controlling, and evaluating must be accomplished if school facilities are to be properly planned and constructed. However, lessons have been learned about energy consumption and proper…

  6. Nuclear facilities

    International Nuclear Information System (INIS)

    Anon.

    2000-01-01

    Here is given decree 2000-1065 of 25 October 2000 publishing the convention between the Government of the French Republic and CERN concerning the safety of the LHC (Large Hadron Collider) and SPS (Super Proton Synchrotron) facilities, signed in Geneva on July 11, 2000. By this convention, CERN undertakes to ensure the safety of the LHC and SPS facilities and those of the LEP decommissioning operations. The French legislation and regulations on basic nuclear facilities (concerning more particularly protection against ionizing radiation, protection of the environment and the safety of facilities), and those which may be decided later, apply to the LHC, SPS and auxiliary facilities. (O.M.)

  7. Patient-specific radiation dose and cancer risk in computed tomography examinations in some selected CT facilities in the Greater Accra Region of Ghana

    International Nuclear Information System (INIS)

    Osei, R. K.

    2012-01-01

    The effective dose and cancer risk were determined for patients undergoing seven different types of CT examinations in two CT facilities in the Greater Accra region of Ghana. The two facilities, namely the Diagnostic Centre Ltd and Cocoa Clinic, were chosen because of their significant patient throughput. The effective dose was estimated from patient data, namely age, sex, height and weight, and technique factors, namely scan length, kVp (peak kilovoltage), mAs (milliampere-seconds) and CTDIvol, from the control console of the CT machines. The effective dose was also estimated using the dose-length product (DLP) and k coefficients, the anatomic-region-specific conversion factors. The cancer risk for each patient for a particular examination was determined from the effective dose, age and sex of each patient with the help of BEIR VII. In all, a total of 800 adult patients, 400 from each of the two CT facilities, were compiled. From Diagnostic Centre Ltd, the average effective dose was 5.61 mSv in the range of 1.41 mSv to 13.34 mSv, with an average BMI of 26.19 kg/m2 in the range of 16.90 kg/m2 to 48.28 kg/m2 for all types of examinations. The average cancer risk was 0.0458 Sv-1 for 400 patients, in the range of 0.0001 Sv-1 to 0.3036 Sv-1, compared with a population of 900 patients undergoing CT examination per year. From Cocoa Clinic, the average effective dose was 3.91 mSv in the range of 0.54 mSv to 27.32 mSv, with an average BMI of 25.59 kg/m2 in the range of 17.18 kg/m2 to 35.34 kg/m2, and the average cancer risk was 0.0371 Sv-1 in the range of 0.0001 Sv-1 to 0.7125 Sv-1. Some of the values were within the range of typical effective doses for CT examinations reported by the ICRP. It was evident from this study that variations in scanning parameters had a significant impact on the effective doses to patients for similar CT examinations between the two facilities. (au)
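    The DLP-based estimate mentioned above follows E = k × DLP. A minimal sketch, with the k value shown only as an illustrative order of magnitude for an adult head scan rather than a value from the study:

    ```python
    def effective_dose_mSv(dlp_mGy_cm, k):
        """Effective dose estimate E = k * DLP, where k is the anatomic-region-
        specific conversion coefficient in mSv/(mGy*cm)."""
        return k * dlp_mGy_cm

    # Illustrative: an adult head CT with DLP = 1000 mGy*cm and k ~ 0.0021
    # mSv/(mGy*cm), a commonly quoted head coefficient, shown for illustration.
    print(f"E = {effective_dose_mSv(1000, 0.0021):.2f} mSv")
    ```

    The appropriate k coefficient depends on the anatomic region scanned and the patient model assumed, which is why the study paired each examination type with its own conversion factor.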

  8. Computer Operating System Maintenance.

    Science.gov (United States)

    1982-06-01

    The Computer Management Information Facility (CMIF) system was developed by Rapp Systems to fulfill the need at the CRF to record and report on... computer center resource usage and utilization. The foundation of the CMIF system is a System 2000 data base (CRFMGMT) which stores and permits access

  9. Nuclear material accountability system in DUPIC facility (I)

    International Nuclear Information System (INIS)

    Ko, W. I.; Kim, H. D.; Byeon, K. H.; Song, D. Y.; Lee, B. D.; Hong, J. S.; Yang, M. S.

    1999-01-01

    KAERI (Korea Atomic Energy Research Institute) has developed a nuclear material accountability system for the DUPIC (Direct Use of Spent PWR Fuel in CANDU) fuel cycle process. The software development for material accountability started with a general model, the so-called CoreMAS (Core Material Accountability System), at the beginning of 1998. The development effort has focused on the DUPIC safeguards system and, in addition, the software has been improved to meet Korean safeguards requirements under domestic laws and regulations. The software, developed as a local-area-network-based accountability system with a multi-user environment, is able to track and control nuclear material flow within a facility and between facilities. In addition, it can operate in a near-real-time manner and is able to generate records and reports as necessary for the facility operator and for domestic and international inspectors. This paper addresses DMAS (DUPIC Material Accountability System), being developed by KAERI, and a simulation in a small-scale DUPIC process for the verification of the software performance and for identifying further work

  10. User's manual of a computer code for seismic hazard evaluation for assessing the threat to a facility by fault model. SHEAT-FM

    International Nuclear Information System (INIS)

    Sugino, Hideharu; Onizawa, Kunio; Suzuki, Masahide

    2005-09-01

    To establish a reliability evaluation method for aged structural components, we developed a probabilistic seismic hazard evaluation code, SHEAT-FM (Seismic Hazard Evaluation for Assessing the Threat to a facility site - Fault Model), using a seismic motion prediction method based on a fault model. In order to improve the seismic hazard evaluation, this code takes the latest knowledge in the field of earthquake engineering into account. For example, the code involves the group delay time of observed records and an update process model of active faults. This report describes the user's guide of SHEAT-FM, including an outline of the seismic hazard evaluation, the specification of input data, a sample problem for a model site, system information and the execution method. (author)

  11. Decommissioning Facility Characterization DB System

    International Nuclear Information System (INIS)

    Park, S. K.; Ji, Y. H.; Park, J. H.; Chung, U. S.

    2010-01-01

    Basically, when decommissioning is planned for a nuclear facility, an investigation into the characterization of the nuclear facility is first required. The results of such an investigation are used for calculating the quantities of dismantled waste and estimating the cost of the decommissioning project. In this paper, a computer system for the characterization of nuclear facilities, called DEFACS (DEcommissioning FAcility Characterization DB System), is presented. This system consists of four main parts: a management coding system for grouping items, a data input system, a data processing system and a data output system. All data are processed in a simplified and formatted manner in order to provide useful information to the decommissioning planner. For the hardware, PC-grade computers running Oracle software on the Microsoft Windows OS were selected. The characterization data for the nuclear facility under decommissioning will be utilized by the work-unit productivity calculation system and the decommissioning engineering system as basic sources of information

  12. Decommissioning Facility Characterization DB System

    Energy Technology Data Exchange (ETDEWEB)

    Park, S. K.; Ji, Y. H.; Park, J. H.; Chung, U. S. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2010-10-15

    Basically, when decommissioning is planned for a nuclear facility, an investigation into the characterization of the nuclear facility is first required. The results of such an investigation are used for calculating the quantities of dismantled waste and estimating the cost of the decommissioning project. In this paper, a computer system for the characterization of nuclear facilities, called DEFACS (DEcommissioning FAcility Characterization DB System), is presented. This system consists of four main parts: a management coding system for grouping items, a data input system, a data processing system and a data output system. All data are processed in a simplified and formatted manner in order to provide useful information to the decommissioning planner. For the hardware, PC-grade computers running Oracle software on the Microsoft Windows OS were selected. The characterization data for the nuclear facility under decommissioning will be utilized by the work-unit productivity calculation system and the decommissioning engineering system as basic sources of information

  13. Mammography Facilities

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Mammography Facility Database is updated periodically based on information received from the four FDA-approved accreditation bodies: the American College of...

  14. Canyon Facilities

    Data.gov (United States)

    Federal Laboratory Consortium — B Plant, T Plant, U Plant, PUREX, and REDOX (see their links) are the five facilities at Hanford where the original objective was plutonium removal from the uranium...

  15. Designing Facilities for Collaborative Operations

    Science.gov (United States)

    Norris, Jeffrey; Powell, Mark; Backes, Paul; Steinke, Robert; Tso, Kam; Wales, Roxana

    2003-01-01

    A methodology for designing operational facilities for collaboration by multiple experts has begun to take shape as an outgrowth of a project to design such facilities for scientific operations of the planned 2003 Mars Exploration Rover (MER) mission. The methodology could also be applicable to the design of military "situation rooms" and other facilities for terrestrial missions. It was recognized in this project that modern mission operations depend heavily upon the collaborative use of computers. It was further recognized that tests have shown that the layout of a facility exerts a dramatic effect on the efficiency and endurance of the operations staff. The facility designs (for example, see figure) and the methodology developed during the project reflect this recognition. One element of the methodology is a metric, called effective capacity, that was created for use in evaluating proposed MER operational facilities and may also be useful for evaluating other collaboration spaces, including meeting rooms and military situation rooms. The effective capacity of a facility is defined as the number of people in the facility who can be meaningfully engaged in its operations. A person is considered to be meaningfully engaged if the person can (1) see, hear, and communicate with everyone else present; (2) see the material under discussion (typically data on a piece of paper, computer monitor, or projection screen); and (3) provide input to the product under development by the group. The effective capacity of a facility is less than the number of people that can physically fit in the facility. For example, a typical office that contains a desktop computer has an effective capacity of about 4, while a small conference room that contains a projection screen has an effective capacity of around 10. 
Little or no benefit would be derived from allowing the number of persons in an operational facility to exceed its effective capacity: At best, the operations staff would be underutilized
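
The effective-capacity tally defined above can be sketched in a few lines of code. This is a hypothetical illustration only: the attribute names below are invented stand-ins for the three engagement criteria, not part of the original methodology.

```python
# Hypothetical sketch of the "effective capacity" metric: a person counts
# toward capacity only if all three engagement criteria hold.

from dataclasses import dataclass

@dataclass
class Occupant:
    can_see_and_hear_everyone: bool   # criterion 1
    can_see_shared_material: bool     # criterion 2
    can_provide_input: bool           # criterion 3

def effective_capacity(occupants):
    """Number of occupants meaningfully engaged in operations."""
    return sum(
        o.can_see_and_hear_everyone
        and o.can_see_shared_material
        and o.can_provide_input
        for o in occupants
    )

# A small conference room: 12 people fit, but two cannot see the screen,
# so the effective capacity is lower than the physical capacity.
room = [Occupant(True, True, True)] * 10 + [Occupant(True, False, True)] * 2
print(effective_capacity(room))  # -> 10
```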

  16. Large mass storage facility

    Energy Technology Data Exchange (ETDEWEB)

    Peskin, Arnold M.

    1978-08-01

    This is the final report of a study group organized to investigate questions surrounding the acquisition of a large mass storage facility. The programmatic justification for such a system at Brookhaven is reviewed. Several candidate commercial products are identified and discussed. A draft procurement specification is developed. Some thoughts on possible new directions for computing at Brookhaven are also offered, although this topic was addressed outside the context of the group's deliberations. 2 figures, 3 tables.

  17. Irradiation facilities in JRR-3M

    International Nuclear Information System (INIS)

    Ohtomo, Akitoshi; Sigemoto, Masamitsu; Takahashi, Hidetake

    1992-01-01

    Irradiation facilities have been installed in the upgraded JRR-3 (JRR-3M) at the Japan Atomic Energy Research Institute (JAERI). The JRR-3M provides hydraulic rabbit facilities (HR), pneumatic rabbit facilities (PN), a neutron activation analysis facility (PN3), a uniform irradiation facility (SI), a rotating irradiation facility and capsule irradiation facilities for carrying out neutron irradiation. These facilities are operated by a process control computer system that centralizes the process information. Some characteristics of the facilities were satisfactorily measured during the reactor performance test in 1990. During reactor operation, tests continue in order to confirm the basic characteristics of the facilities; for example, PN3 was confirmed to have sufficient performance for activation analysis. Measurement of the neutron flux at all irradiation positions has been carried out for the equilibrium core. (author)

  18. Computer surety: computer system inspection guidance. [Contains glossary]

    Energy Technology Data Exchange (ETDEWEB)

    1981-07-01

    This document discusses computer surety in NRC-licensed nuclear facilities from the perspective of physical protection inspectors. It gives background information and a glossary of computer terms, along with threats and computer vulnerabilities, methods used to harden computer elements, and computer audit controls.

  19. Visitor's Computer Guidelines | CTIO

    Science.gov (United States)


  20. Facile synthesis of silver nanoparticles and its antibacterial activity against Escherichia coli and unknown bacteria on mobile phone touch surfaces/computer keyboards

    Science.gov (United States)

    Reddy, T. Ranjeth Kumar; Kim, Hyun-Joong

    2016-07-01

    In recent years, there has been significant interest in the development of novel metallic nanoparticles using various top-down and bottom-up synthesis techniques. Kenaf is an abundant biomass product and a potential component for industrial applications. In this work, we investigated the green synthesis of silver nanoparticles (AgNPs) using kenaf (Hibiscus cannabinus) cellulose extract and sucrose, which act as stabilizing and reducing agents in solution. With this method, by changing the pH of the solution as a function of time, we studied the optical, morphological and antibacterial properties of the synthesized AgNPs. In addition, these nanoparticles were characterized by ultraviolet-visible spectroscopy, transmission electron microscopy (TEM), field-emission scanning electron microscopy, Fourier transform infrared (FTIR) spectroscopy and energy-dispersive X-ray spectroscopy (EDX). As the pH of the solution varies, the surface plasmon resonance peak also varies. A faster rate of reaction at pH 10 compared with that at pH 5 was identified. TEM micrographs confirm that the shapes of the particles are spherical and polygonal. Furthermore, the average size of the nanoparticles synthesized at pH 5, pH 8 and pH 10 is 40.26, 28.57 and 24.57 nm, respectively. The structure of the synthesized AgNPs was identified as face-centered cubic (fcc) by X-ray diffraction (XRD). The compositional analysis was determined by EDX. FTIR confirms that the kenaf cellulose extract and sucrose act as stabilizing and reducing agents for the silver nanoparticles. These AgNPs exhibited size-dependent antibacterial activity against Escherichia coli (E. coli) and two other unknown bacteria from mobile phone screens and computer keyboard surfaces.

  1. Systems management of facilities agreements

    International Nuclear Information System (INIS)

    Blundell, A.

    1998-01-01

    The various types of facilities agreements, the historical obstacles to implementation of agreement management systems and the new opportunities emerging as industry is beginning to make an effort to overcome these obstacles, are reviewed. Barriers to computerized agreement management systems (lack of consistency, lack of standards, scarcity of appropriate computer software) are discussed. Characteristic features of a model facilities agreement management system and the forces driving the changing attitudes towards such systems (e.g. mergers) are also described

  2. A Bioinformatics Facility for NASA

    Science.gov (United States)

    Schweighofer, Karl; Pohorille, Andrew

    2006-01-01

    Building on an existing prototype, we have fielded a facility with bioinformatics technologies that will help NASA meet its unique requirements for biological research. This facility consists of a cluster of computers capable of performing computationally intensive tasks, software tools, databases and knowledge management systems. Novel computational technologies for analyzing and integrating new biological data and already existing knowledge have been developed. With continued development and support, the facility will fulfill NASA's strategic bioinformatics needs in astrobiology and space exploration. As a demonstration of these capabilities, we will present a detailed analysis of how spaceflight factors impact gene expression in the liver and kidney of mice flown aboard shuttle flight STS-108. We have found that many genes involved in signal transduction, the cell cycle, and development respond to changes in microgravity, but that most metabolic pathways appear unchanged.

  3. Data Analysis Facility (DAF)

    Science.gov (United States)

    1991-01-01

    NASA-Dryden's Data Analysis Facility (DAF) provides a variety of support services to the entire Dryden community. It provides state-of-the-art hardware and software systems, available to any Dryden engineer for pre- and post-flight data processing and analysis, and supports all archival and general computer use. The Flight Data Access System (FDAS) is one of the advanced computer systems in the DAF, providing fast engineering unit conversion and archival processing of flight data delivered from the Western Aeronautical Test Range. Engineering unit conversion and archival formatting of flight data are performed by the DRACO program on a Sun 690MP and an E-5000 computer. Time history files produced by DRACO are then moved to a permanent magneto-optical archive, where they are network-accessible 24 hours a day, 7 days a week. Pertinent information about the individual flights is maintained in a relational (Sybase) database. The DAF also houses all general computer services, including: the Compute Servers 1 and 2 (CS1 and CS2); the server for the World Wide Web; overall computer operations support; courier service; a CD-ROM writer system; a Technical Support Center; the NASA Dryden Phone System (NDPS); and hardware maintenance.
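
As a rough illustration of the engineering-unit-conversion step a program like DRACO performs, raw telemetry counts can be mapped to physical units through a per-parameter calibration polynomial. The sensor and calibration values below are invented for the example, not Dryden's actual calibrations.

```python
# Illustrative sketch of engineering unit (EU) conversion: raw counts are
# converted to physical units by evaluating a calibration polynomial.

def to_engineering_units(counts, cal_coeffs):
    """Evaluate the calibration polynomial c0 + c1*x + c2*x^2 + ... per sample."""
    return [sum(c * x**i for i, c in enumerate(cal_coeffs)) for x in counts]

# e.g. a hypothetical pressure sensor: 0 counts -> -5 psi, slope 0.01 psi/count
raw = [500, 1500, 2500]
print(to_engineering_units(raw, cal_coeffs=[-5.0, 0.01]))  # -> [0.0, 10.0, 20.0]
```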

  4. Guide to user facilities at the Lawrence Berkeley Laboratory

    International Nuclear Information System (INIS)

    1984-04-01

    Lawrence Berkeley Laboratory's user facilities are described. Specific facilities include: the National Center for Electron Microscopy; the Bevalac; the SuperHILAC; the Neutral Beam Engineering Test Facility; the National Tritium Labeling Facility; the 88-Inch Cyclotron; the Heavy Charged-Particle Treatment Facility; the 2.5 MeV Van de Graaff; the Sky Simulator; the Center for Computational Seismology; and the Low Background Counting Facility

  5. Exercise evaluation and simulation facility

    International Nuclear Information System (INIS)

    Meitzler, W.D.; Jaske, R.T.

    1983-12-01

    The Exercise Evaluation and Simulation Facility (EESF) is a minicomputer-based system that serves as a tool to aid FEMA in the evaluation of radiological emergency plans and preparedness around commercial nuclear power facilities. The EESF integrates the following resources into a single system: a meteorological model, a dose model, an evacuation model, map information, and exercise information. Thus the user may access these various resources concurrently and, on completion, display the results on a color graphics display or hardcopy unit. A unique capability made possible by the integration of these models is the computation of the estimated total dose to the population
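
The kind of total-dose estimate that integrating the dose and evacuation models enables can be illustrated with a toy calculation: the dose received at a location is the time integral of the dose rate up to the moment the population there evacuates. The dose-rate history and evacuation time below are invented, and this is not the EESF implementation.

```python
# Illustrative sketch: total dose = trapezoidal integral of dose rate from
# t = 0 until the evacuation time, clipping the final interval.

def total_dose(times_h, rates, evac_time_h):
    """Integrate a piecewise-linear dose-rate history up to evacuation.

    times_h must be strictly increasing; rates are in rem/h.
    """
    dose = 0.0
    for i in range(len(times_h) - 1):
        t0, t1 = times_h[i], times_h[i + 1]
        r0, r1 = rates[i], rates[i + 1]
        if t0 >= evac_time_h:
            break
        t1c = min(t1, evac_time_h)
        # linearly interpolate the rate at the clipped endpoint
        r1c = r0 + (r1 - r0) * (t1c - t0) / (t1 - t0)
        dose += 0.5 * (r0 + r1c) * (t1c - t0)
    return dose

times = [0, 1, 2, 3, 4]            # hours after release
rates = [0.0, 0.2, 0.4, 0.4, 0.1]  # rem/h at a given location
print(total_dose(times, rates, evac_time_h=2.5))
```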

  6. Secure Dynamic access control scheme of PHR in cloud computing.

    Science.gov (United States)

    Chen, Tzer-Shyong; Liu, Chia-Hui; Chen, Tzer-Long; Chen, Chin-Sheng; Bau, Jian-Guo; Lin, Tzu-Ching

    2012-12-01

    With the development of information technology and medical technology, medical records have evolved from traditional paper records into electronic medical records, which are now widely applied. A new style of medical information exchange system, the personal health record (PHR), is gradually being developed. A PHR is a health record maintained and recorded by the individual. An ideal personal health record integrates personal medical information from different sources and provides a complete and correct personal health and medical summary through the Internet or portable media, under the requirements of security and privacy. Many personal health records are already being utilized. The patient-centered PHR information exchange system allows the public to autonomously maintain and manage personal health records, which is convenient for storing, accessing, and sharing personal medical records. With the emergence of Cloud computing, PHR services have moved to storing data on Cloud servers, so that resources can be flexibly utilized and operating costs reduced. Nevertheless, patients face privacy problems when storing PHR data in the Cloud, and a secure protection scheme is required to encrypt the medical records of each patient stored on a Cloud server. In the encryption process, it is a challenge to achieve accurate access to medical records while preserving flexibility and efficiency. A new PHR access control scheme for Cloud computing environments is proposed in this study. Using Lagrange interpolation polynomials to establish a secure and effective PHR information access scheme, it allows accurate and secure access to PHRs and is suitable for very large numbers of users. Moreover, the scheme dynamically supports multiple users in Cloud computing environments while preserving personal privacy, and grants legal authorities access to PHRs. Security and effectiveness analyses show that the proposed PHR access control scheme meets these goals
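
The mathematical tool the abstract names, Lagrange interpolation over a finite field, is the basis of threshold (Shamir-style) secret sharing, where any k of n shares can reconstruct a record-encryption key. The sketch below illustrates that general technique under those assumptions; it is not the authors' actual access-control scheme.

```python
# Minimal sketch of threshold key recovery via Lagrange interpolation
# (Shamir-style secret sharing) in a prime field.

import random

PRIME = 2**61 - 1  # all arithmetic is modulo this prime

def make_shares(secret, k, n):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        # modular inverse of den via Fermat's little theorem
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

key = 123456789  # e.g. a record-encryption key (illustrative value)
shares = make_shares(key, k=3, n=5)
print(reconstruct(shares[:3]))  # -> 123456789 (any 3 shares suffice)
```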

  7. Operating procedures: Fusion Experiments Analysis Facility

    Energy Technology Data Exchange (ETDEWEB)

    Lerche, R.A.; Carey, R.W.

    1984-03-20

    The Fusion Experiments Analysis Facility (FEAF) is a computer facility based on a DEC VAX 11/780 computer. It became operational in late 1982. At that time two manuals were written to aid users and staff in their interactions with the facility. This manual is designed as a reference to assist the FEAF staff in carrying out their responsibilities. It is meant to supplement equipment and software manuals supplied by the vendors. Also this manual provides the FEAF staff with a set of consistent, written guidelines for the daily operation of the facility.

  8. Operating procedures: Fusion Experiments Analysis Facility

    International Nuclear Information System (INIS)

    Lerche, R.A.; Carey, R.W.

    1984-01-01

    The Fusion Experiments Analysis Facility (FEAF) is a computer facility based on a DEC VAX 11/780 computer. It became operational in late 1982. At that time two manuals were written to aid users and staff in their interactions with the facility. This manual is designed as a reference to assist the FEAF staff in carrying out their responsibilities. It is meant to supplement equipment and software manuals supplied by the vendors. Also this manual provides the FEAF staff with a set of consistent, written guidelines for the daily operation of the facility

  9. Emission Facilities - Erosion & Sediment Control Facilities

    Data.gov (United States)

    NSGIC Education | GIS Inventory — An Erosion and Sediment Control Facility is a DEP primary facility type related to the Water Pollution Control program. The following sub-facility types related to...

  10. LEGS data acquisition facility

    International Nuclear Information System (INIS)

    LeVine, M.J.

    1985-01-01

    The data acquisition facility for the LEGS medium energy photonuclear beam line is composed of an auxiliary crate controller (ACC) acting as a front-end processor, loosely coupled to a time-sharing host computer based on a UNIX-like environment. The ACC services all real-time demands in the CAMAC crate: it responds to LAMs generated by data acquisition modules, to keyboard commands, and it refreshes the graphics display at frequent intervals. The host processor is needed only for printing histograms and recording event buffers on magnetic tape. The host also provides the environment for software development. The CAMAC crate is interfaced by a VERSAbus CAMAC branch driver

  11. Facility model for the Los Alamos Plutonium Facility

    International Nuclear Information System (INIS)

    Coulter, C.A.; Thomas, K.E.; Sohn, C.L.; Yarbro, T.F.; Hench, K.W.

    1986-01-01

    The Los Alamos Plutonium Facility contains more than sixty unit processes and handles a large variety of nuclear materials, including many forms of plutonium-bearing scrap. The management of the Plutonium Facility is supporting the development of a computer model of the facility as a means of effectively integrating the large amount of information required for material control, process planning, and facility development. The model is designed to provide a flexible, easily maintainable facility description that allows the facility to be represented at any desired level of detail within a single modeling framework, and to do this using a model program and data files that can be read and understood by a technically qualified person without modeling experience. These characteristics were achieved by structuring the model so that all facility data are contained in data files, formulating the model in a simulation language that provides a flexible set of data structures and permits a near-English-language syntax, and using a description for unit processes that can represent either a true unit process or a major subsection of the facility. Use of the model is illustrated by applying it to two configurations of a fictitious nuclear material processing line

  12. Air Quality Facilities

    Data.gov (United States)

    Iowa State University GIS Support and Research Facility — Facilities with operating permits for Title V of the Federal Clean Air Act, as well as facilities required to submit an air emissions inventory, and other facilities...

  13. Parallel computing works

    Energy Technology Data Exchange (ETDEWEB)

    1991-10-23

    An account of the Caltech Concurrent Computation Program (C{sup 3}P), a five-year project that focused on answering the question: ``Can parallel computers be used to do large-scale scientific computations?'' As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C{sup 3}P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high-performance computing facility based exclusively on parallel computers. While the initial focus of C{sup 3}P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

  14. LLNL superconducting magnets test facility

    Energy Technology Data Exchange (ETDEWEB)

    Manahan, R; Martovetsky, N; Moller, J; Zbasnik, J

    1999-09-16

    The FENIX facility at Lawrence Livermore National Laboratory was upgraded and refurbished in 1996-1998 for testing CICC superconducting magnets. In the late 1980s and early 1990s, the FENIX facility was used for high-current, short-sample tests of superconductors for fusion programs. The new facility includes a 4-m diameter vacuum vessel, two refrigerators, a 40 kA, 42 V computer-controlled power supply, a new switchyard with a dump resistor, a new helium distribution valve box, several sets of power leads, a data acquisition system and other auxiliary systems, which together provide considerable flexibility in testing a wide variety of superconducting magnets over a wide range of parameters. The detailed parameters and capabilities of this test facility and its systems are described in the paper.

  15. Radiation safety training for accelerator facilities

    International Nuclear Information System (INIS)

    Trinoskey, P.A.

    1997-02-01

    In November 1992, a working group was formed within the U.S. Department of Energy's (DOE's) accelerator facilities to develop a generic safety training program to meet the basic requirements for individuals working in accelerator facilities. This training, by necessity, includes sections for inserting facility-specific information. The resulting course materials were issued by DOE as a handbook under its technical standards in 1996. Because experimenters may be at a facility for only a short time, and often at odd times during the day, the working group felt that computer-based training would be useful. To that end, Lawrence Livermore National Laboratory (LLNL) and Argonne National Laboratory (ANL) have together developed a computer-based safety training program for accelerator facilities. This interactive course not only provides trainees with facility-specific information, but also lets them time the training to their schedules and tailor it to their level of expertise

  16. Safeguards Automated Facility Evaluation (SAFE) methodology

    International Nuclear Information System (INIS)

    Chapman, L.D.; Grady, L.M.; Bennett, H.A.; Sasser, D.W.; Engi, D.

    1978-08-01

    An automated approach to evaluating facility safeguards effectiveness has been developed. This automated process, called Safeguards Automated Facility Evaluation (SAFE), consists of a collection of functional modules for facility characterization, the selection of critical paths, and the evaluation of safeguards effectiveness along those paths. The technique has been implemented on an interactive, time-sharing computer system and makes use of computer graphics for the processing and presentation of information. Using this technique, a comprehensive evaluation of a safeguards system can be provided by systematically varying the parameters that characterize the physical protection components of a facility to reflect the perceived adversary attributes and strategy, environmental conditions, and site operational conditions. The SAFE procedure has broad applications in the nuclear facility safeguards field as well as in the security field in general. Any fixed facility containing valuable materials or components to be protected from theft or sabotage could be analyzed using this same automated evaluation technique
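
One common way to formalize "critical path" selection in this setting is to treat the facility as a graph whose barriers carry detection probabilities, and to find the adversary path with the lowest cumulative probability of detection; weighting each edge by -log(1 - p_detect) turns this into an ordinary shortest-path problem. The sketch below illustrates that general idea with an invented layout and invented probabilities; it is not the SAFE implementation.

```python
# Illustrative sketch: Dijkstra over -log(1 - p_detect) edge weights finds the
# adversary path that maximizes the probability of evading detection.

import heapq, math

def critical_path(graph, start, target):
    """Return (probability of non-detection, path) for the weakest path."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == target:
            break
        if d > dist.get(node, math.inf):
            continue  # stale queue entry
        for nxt, p_detect in graph.get(node, []):
            nd = d - math.log(1.0 - p_detect)
            if nd < dist.get(nxt, math.inf):
                dist[nxt] = nd
                prev[nxt] = node
                heapq.heappush(pq, (nd, nxt))
    path, node = [], target
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return math.exp(-dist[target]), path[::-1]

layout = {  # edges: (next area, detection probability at that barrier)
    "outside": [("fence", 0.5), ("gate", 0.9)],
    "fence":   [("yard", 0.3)],
    "gate":    [("yard", 0.1)],
    "yard":    [("vault", 0.8)],
}
p_miss, path = critical_path(layout, "outside", "vault")
print(path, round(p_miss, 3))  # the fence route evades detection most often
```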

  17. Bevalac Minibeam Facility

    International Nuclear Information System (INIS)

    Schimmerling, W.; Alonso, J.; Morgado, R.; Tobias, C.A.; Grunder, H.; Upham, F.T.; Windsor, A.; Armer, R.A.; Yang, T.C.H.; Gunn, J.T.

    1977-03-01

    The Minibeam Facility is a biomedical heavy-ion beam area at the Bevalac designed to satisfy the following requirements: (1) provide a beam incident in a vertical plane for experiments where a horizontal apparatus significantly increases the convenience of performing an experiment or even determines its feasibility; (2) provide an area that is well shielded with respect to electronic interference so that microvolt signals can be detected with acceptable signal-to-noise ratios; (3) provide a beam of small diameter, typically a few millimeters or less, for various studies of cellular function; and (4) provide a facility for experiments that require long setup and preparation times and apparatus that must be left relatively undisturbed between experiments and that need short periods of beam time. The design of such a facility and its main components is described. In addition to the above criteria, the design was constrained by the desire to have inexpensive, simple devices that work reliably and can be easily upgraded for interfacing to the Biomedical PDP 11/45 computer

  18. NSF Lower Atmospheric Observing Facilities (LAOF) in support of science and education

    Science.gov (United States)

    Baeuerle, B.; Rockwell, A.

    2012-12-01

    Researchers, students and teachers who want to understand and describe the Earth System require high quality observations of the atmosphere, ocean, and biosphere. Making these observations requires state-of-the-art instruments and systems, often carried on highly capable research platforms. To support this need of the geosciences community, the National Science Foundation's (NSF) Division of Atmospheric and Geospace Sciences (AGS) provides multi-user national facilities through its Lower Atmospheric Observing Facilities (LAOF) Program at no cost to the investigator. These facilities, which include research aircraft, radars, lidars, and surface and sounding systems, receive NSF financial support and are eligible for deployment funding. The facilities are managed and operated by five LAOF partner organizations: the National Center for Atmospheric Research (NCAR); Colorado State University (CSU); the University of Wyoming (UWY); the Center for Severe Weather Research (CSWR); and the Center for Interdisciplinary Remotely-Piloted Aircraft Studies (CIRPAS). These observational facilities are available on a competitive basis to all qualified researchers from US universities, requiring the platforms and associated services to carry out various research objectives. The deployment of all facilities is driven by scientific merit, capabilities of a specific facility to carry out the proposed observations, and scheduling for the requested time. The process for considering requests and setting priorities is determined on the basis of the complexity of a field campaign. The poster will describe available observing facilities and associated services, and explain the request process researchers have to follow to secure access to these platforms for scientific as well as educational deployments. NSF/NCAR GV Aircraft

  19. Computing in Research.

    Science.gov (United States)

    Ashenhurst, Robert L.

    The introduction and diffusion of automatic computing facilities during the 1960's is reviewed; it is described as a time when research strategies in a broad variety of disciplines changed to take advantage of the newfound power provided by the computer. Several types of typical problems encountered by researchers who adopted the new technologies,…

  20. LLL transient-electromagnetics-measurement facility

    International Nuclear Information System (INIS)

    Deadrick, F.J.; Miller, E.K.; Hudson, H.G.

    1975-01-01

    The operation and hardware of the Lawrence Livermore Laboratory's transient-electromagnetics (EM)-measurement facility are described. The transient-EM range is useful for determining the time-domain transient responses of structures to incident EM pulses. To illustrate the accuracy and utility of the EM-measurement facility, actual experimental measurements are compared to numerically computed values

  1. Reactor facility

    International Nuclear Information System (INIS)

    Suzuki, Hiroaki; Murase, Michio; Yokomizo, Osamu.

    1997-01-01

    The present invention provides a BWR-type reactor facility capable of suppressing the amount of steam generated by the interaction of a failed reactor core with coolant upon occurrence of a hypothetical accident, without requiring special countermeasures to enhance the pressure resistance of the containment vessel. Namely, a means for supplying cooling water at a temperature no more than 30degC below the saturation temperature corresponding to the internal pressure of the containment vessel upon occurrence of an accident is disposed in the lower dry well, below the pressure vessel. As a result, in an accident in which the reactor core melts and flows down from the pressure vessel, when cooling water near or above the saturation temperature, for example cooling water at 100degC or higher, is supplied to the lower dry well, abrupt steam generation from the interaction of the failed core with the cooling water scarcely occurs, compared with the case of supplying cooling water more than 30degC below the saturation temperature. Accordingly, the amount of steam generated can be suppressed, and no special countermeasure to enhance the pressure resistance of the containment vessel is necessary. (I.S.)

  2. Nuclear facilities

    International Nuclear Information System (INIS)

    Anon.

    2002-01-01

    During September and October 2001, 15 events were recorded at the first grade and 1 at the second grade of the INES scale. The second-grade event is in fact a re-classification of an incident that occurred on 2 April 2001 at the Dampierre power plant. This event happened during core refueling, when a shift in the operating sequence led to the wrong positioning of 113 assemblies. A preliminary study of this event shows that this wrong positioning could, in other circumstances, have led to the initiation of nuclear reactions. Even in that case, the analysis made by EDF shows that the consequences for the staff would have been limited. Nevertheless, a further study has shown that the existing measuring instruments could not have detected the power increase announcing the beginning of the chain reaction. The investigation has shown that there were deficiencies in the control of the successive operations involved in refueling. EDF has proposed a series of corrective measures to be implemented in all nuclear power plants. The other 15 events are described in the article. During this period 121 inspections were made in nuclear facilities. (A.C.)

  3. Computing Services and Assured Computing

    Science.gov (United States)

    2006-05-01

    fighters’ ability to execute the mission.” Computing Services: We run IT systems that provide medical care, pay the warfighters, and manage maintenance. ...users • 1,400 applications • 18 facilities • 180 software vendors • 18,000+ copies of executive software products • Virtually every type of mainframe and ...

  4. Methods of physical experiment and installation automation on the base of computers

    International Nuclear Information System (INIS)

    Stupin, Yu.V.

    1983-01-01

    Peculiarities of using computers for the automation of physical experiments and installations are considered. Systems for data acquisition and processing based on microprocessors, micro- and mini-computers, CAMAC equipment and real-time operating systems are described, as well as systems intended for the automation of physical experiments on accelerators, laser thermonuclear fusion installations and plasma research installations. The problems of multimachine complexes and multi-user systems, the development of automated systems for collective use, the arrangement of intermachine data exchange, and the management of experimental databases are discussed. Data on software systems used for complex experimental data processing are presented. It is concluded that the application of new computers, in combination with the new possibilities that universal operating systems provide to users, substantially increases the efficiency of a scientist's work

  5. Stochastic Simulation Service: Bridging the Gap between the Computational Expert and the Biologist.

    Directory of Open Access Journals (Sweden)

    Brian Drawert

    2016-12-01

    Full Text Available We present StochSS: Stochastic Simulation as a Service, an integrated development environment for modeling and simulation of both deterministic and discrete stochastic biochemical systems in up to three dimensions. An easy-to-use graphical user interface enables researchers to quickly develop and simulate a biological model on a desktop or laptop, which can then be expanded to incorporate increasing levels of complexity. StochSS features state-of-the-art simulation engines. As the demand for computational power increases, StochSS can seamlessly scale computing resources in the cloud. In addition, StochSS can be deployed as a multi-user software environment where collaborators share computational resources and exchange models via a public model repository. We demonstrate the capabilities and ease of use of StochSS with an example of model development and simulation at increasing levels of complexity.
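
The class of discrete stochastic simulation a tool like StochSS automates is the Gillespie stochastic simulation algorithm (SSA). The minimal sketch below runs the SSA for the simplest possible model, irreversible decay A -> 0 with an invented rate constant; it is an illustration of the algorithm, not StochSS code.

```python
# Minimal Gillespie SSA sketch for the reaction A -> 0 with rate constant k:
# each event fires after an exponentially distributed waiting time whose rate
# is the current total propensity k * n.

import random

def ssa_decay(n0, k, t_end, seed=42):
    """Simulate A -> 0; returns the (times, molecule counts) trajectory."""
    rng = random.Random(seed)
    t, n = 0.0, n0
    times, counts = [t], [n]
    while n > 0 and t < t_end:
        propensity = k * n                 # total reaction rate
        t += rng.expovariate(propensity)   # time to the next reaction
        n -= 1                             # one decay event fires
        times.append(t)
        counts.append(n)
    return times, counts

times, counts = ssa_decay(n0=100, k=0.5, t_end=20.0)
print(counts[0], counts[-1])
```

Averaging many such trajectories recovers the deterministic exponential decay; the single-trajectory noise is what distinguishes discrete stochastic simulation from an ODE model.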

  6. Irradiation Facilities at CERN

    CERN Document Server

    Gkotse, Blerina; Carbonez, Pierre; Danzeca, Salvatore; Fabich, Adrian; Garcia, Alia, Ruben; Glaser, Maurice; Gorine, Georgi; Jaekel, Martin, Richard; Mateu,Suau, Isidre; Pezzullo, Giuseppe; Pozzi, Fabio; Ravotti, Federico; Silari, Marco; Tali, Maris

    2017-01-01

    CERN provides unique irradiation facilities for applications in many scientific fields. This paper summarizes the facilities currently operating for proton, gamma, mixed-field and electron irradiations, including their main usage, characteristics and information about their operation. The new CERN irradiation facilities database is also presented. This includes not only CERN facilities but also irradiation facilities available worldwide.

  7. North Slope, Alaska ESI: FACILITY (Facility Points)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains data for oil field facilities for the North Slope of Alaska. Vector points in this data set represent oil field facility locations. This data...

  8. Optical system for the Protein Crystallisation Diagnostics Facility (PCDF) on board the ISS

    Science.gov (United States)

    Joannes, Luc; Dupont, Olivier; Dewandel, Jean-Luc; Ligot, Renaud; Algrain, Hervé

    2004-06-01

    The Protein Crystallisation Diagnostic Facility (PCDF) is a multi-user facility for studying protein crystallisation under micro-gravity conditions onboard the International Space Station (ISS) Columbus facility. Large protein crystals will grow under reduced gravity in thermally controlled reactors. A combination of diagnostic tools, including a video system, microscope, interferometer, and light-scattering device, shall help in understanding the growth phenomena. Common methods of protein crystallisation shall be performed in PCDF: dialysis, where the protein solution and the salt solution are separated by a semi-permeable membrane; and extended-length dialysis batch, where the saturation needed to obtain crystals is achieved by changing the concentration of the protein in the sample liquid. The overall ESA project is led by EADS Space Transportation, Friedrichshafen, Germany. Lambda-X is responsible for the Optical System (OS), with Verhaert Design and Development as sub-contractor for the mechanical design. The OS includes several compact parts: original illumination systems based on LEDs of different colours; quantitative Mach-Zehnder interferometers to measure the concentration distribution around crystals; and imaging assemblies to visualize the protein volume with different fields of view. The paper concentrates on the description of each part, and in particular on the imaging assembly, which allows switching from one field of view to another by passive elements only.

  9. Nuclear Station Facilities Improvement Planning

    International Nuclear Information System (INIS)

    Hooks, R. W.; Lunardini, A. L.; Zaben, O.

    1991-01-01

    An effective facilities improvement program will include a plan for the temporary relocation of personnel during the construction of an adjoining service building addition. Since the smooth continuation of plant operation is of paramount importance, the phasing plan is established to minimize the disruptions in day-to-day station operation and administration. This plan should consider the final occupancy arrangements and the transition to the new structure; for example, computer hookup and phase-in should be considered. The nuclear industry is placing more emphasis on safety and reliability of nuclear power plants. In order to do this, more emphasis is placed on operations and maintenance. This results in increased size of managerial, technical and maintenance staffs, which in turn requires improved office and service facilities. The facilities that require improvement may include training areas, rad waste processing and storage facilities, and maintenance facilities. This paper discusses an approach for developing an effective program to plan and implement these projects. These improvement projects can range in magnitude from modifying a simple system to building a new structure to allocating space for a future project. This paper addresses the planning required for the new structures with emphasis on site location, space allocation, and internal layout. Since facility planning has recently been completed by Sargent and Leyden at six U.S. nuclear stations, specific examples from some of those plants are presented. Site planning and the establishment of long-range goals are of the utmost importance when undertaking a facilities improvement program for a nuclear station. A plan that considers the total site usage will enhance the value of both the new and existing facilities. Proper planning at the beginning of the program can minimize costs and maximize the benefits of the program.

  10. Jupiter Laser Facility

    Data.gov (United States)

    Federal Laboratory Consortium — The Jupiter Laser Facility is an institutional user facility in the Physical and Life Sciences Directorate at LLNL. The facility is designed to provide a high degree...

  11. Basic Research Firing Facility

    Data.gov (United States)

    Federal Laboratory Consortium — The Basic Research Firing Facility is an indoor ballistic test facility that has recently transitioned from a customer-based facility to a dedicated basic research...

  12. Aperture area measurement facility

    Data.gov (United States)

    Federal Laboratory Consortium — NIST has established an absolute aperture area measurement facility for circular and near-circular apertures use in radiometric instruments. The facility consists of...

  13. High Throughput Facility

    Data.gov (United States)

    Federal Laboratory Consortium — Argonne's high throughput facility provides highly automated and parallel approaches to material and materials chemistry development. The facility allows scientists...

  14. Licensed Healthcare Facilities

    Data.gov (United States)

    California Natural Resource Agency — The Licensed Healthcare Facilities point layer represents the locations of all healthcare facilities licensed by the State of California, Department of Health...

  15. Facility Registry Service (FRS)

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Facility Registry Service (FRS) provides an integrated source of comprehensive (air, water, and waste) environmental information about facilities across EPA,...

  16. Guide to research facilities

    Energy Technology Data Exchange (ETDEWEB)

    1993-06-01

    This Guide provides information on facilities at US Department of Energy (DOE) and other government laboratories that focus on research and development of energy efficiency and renewable energy technologies. These laboratories have opened these facilities to outside users within the scientific community to encourage cooperation between the laboratories and the private sector. The Guide features two types of facilities: designated user facilities and other research facilities. Designated user facilities are one-of-a-kind DOE facilities that are staffed by personnel with unparalleled expertise and that contain sophisticated equipment. Other research facilities are facilities at DOE and other government laboratories that provide sophisticated equipment, testing areas, or processes that may not be available at private facilities. Each facility listing includes the name and phone number of someone you can call for more information.

  17. The electron microscopy facility at the LNLS

    International Nuclear Information System (INIS)

    Ugarte, D.; Zanchet, D.; Silva, P.C.; Araujo, S.R. de; Bettini, J.; Gonzalez, J.C.; Nakabayashi, D.B.

    2004-01-01

    Full text: The Electron Microscopy Laboratory (LME, Lab. Microscopia Eletronica) is one of the multi-user facilities of the Laboratorio Nacional de Luz Sincrotron (LNLS). It has been in operation since the beginning of 1999, providing high spatial resolution tools and making the LNLS a unique center for advanced characterization of materials. The equipment installed at the LME can be briefly described as: a) a Low Vacuum Scanning Electron Microscope (SEM, JSM-5900LV) with microanalysis and crystallographic mapping capabilities; b) a Field Emission Gun SEM (JSM-6330F); c) a 300 kV High Resolution Transmission Electron Microscope (HRTEM, JEM 3010 URP, 1.7 Å point resolution) with TV camera, multi-scan CCD camera and X-ray Si(Li) detector; and d) a complete sample preparation laboratory for EM studies. A simple procedure allows access to the LME instruments: first, a short research project must be submitted for evaluation of viability and relevance; subsequently, the training and microscope sessions are scheduled. It is important to remark that EM is a routine characterization tool and the researchers have to operate the microscope by themselves; a training period is therefore necessary, which may vary from 1-2 weeks for a SEM to 2-4 months for the HRTEM. Our staff devotes great effort to the training of human resources in order to allow inexperienced users to become capable of acquiring and interpreting data for their research projects. Since its installation, the LME has trained more than 300 users in EM techniques. In 2003, the number of projects developed was: 36 in the HRTEM, 16 in the FEG-SEM and 48 in the LV-SEM; the HRTEM alone operated for 2157 hours. The constant increase of users, in addition to the more demanding EM studies being proposed, indicates the necessity of an expansion of the LME through the purchase of a 200 kV FEG-TEM oriented towards nano-analysis and Electron Energy Loss Spectroscopy. (author)

  18. Octopus: LLL's computing utility

    International Nuclear Information System (INIS)

    Anon.

    1978-01-01

    The Laboratory's Octopus network constitutes one of the greatest concentrations of computing power in the world. This power derives from the network's organization as well as from the size and capability of its computers, storage media, input/output devices, and communication channels. Being in a network enables these facilities to work together to form a unified computing utility that is accessible on demand directly from the users' offices. This computing utility has made a major contribution to the pace of research and development at the Laboratory; an adequate rate of progress in research could not be achieved without it. 4 figures

  19. Communication grounding facility

    International Nuclear Information System (INIS)

    Lee, Gye Seong

    1998-06-01

    This report concerns communication grounding facilities and is made up of twelve chapters. It covers general grounding (purpose, materials, and thermal insulating materials); construction of grounding; super-strength grounding methods; grounding facilities with grounding ways and building insulation; switched grounding with No. 1A and LCR; grounding facilities for transmission lines; wireless facility grounding; grounding facilities in wireless base stations; grounding of power facilities; grounding of low-tension interior power wires; communication facilities of railroads; installation of arresters in apartments and houses; and installation of arresters, with earth conductivity and the measurement of grounding resistance.

  20. How to Bill Your Computer Services.

    Science.gov (United States)

    Dooskin, Herbert P.

    1981-01-01

    A computer facility billing procedure should be designed so that the full costs of a computer center operation are equitably charged to the users. Design criteria, costing methods, and management's role are discussed. (Author/MLF)
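The abstract names the design goal (full costs equitably charged to users) without specifying a costing method. One common approach, sketched here purely as a hypothetical illustration (the weights and user names are invented), is proportional chargeback: weight each resource, then split the center's total cost in proportion to each user's weighted consumption.

```python
# Hypothetical full-cost chargeback: recover the center's total cost by
# allocating it in proportion to each user's weighted resource consumption.
WEIGHTS = {"cpu_hours": 1.0, "gb_stored": 0.2, "io_units": 0.05}  # assumed relative rates

def bill(total_cost, usage_by_user):
    """usage_by_user: {user: {"cpu_hours": x, "gb_stored": y, "io_units": z}}."""
    weighted = {u: sum(WEIGHTS[k] * v for k, v in use.items())
                for u, use in usage_by_user.items()}
    grand = sum(weighted.values())
    # Each invoice is the user's share of total weighted usage times total cost,
    # so the invoices always sum to the full cost of the operation.
    return {u: total_cost * w / grand for u, w in weighted.items()}

usage = {"lab_a": {"cpu_hours": 100, "gb_stored": 50, "io_units": 200},
         "lab_b": {"cpu_hours": 300, "gb_stored": 10, "io_units": 0}}
invoices = bill(12000.0, usage)
```

By construction the invoices sum exactly to the center's total cost, which is the "full cost recovery" property the article argues for.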

  1. AOV Facility Tool/Facility Safety Specifications -

    Data.gov (United States)

    Department of Transportation — Develop and maintain authorizing documents that are standards that facilities must follow. These standards are references of FAA regulations and are specific to the...

  2. Guide to computing at ANL

    Energy Technology Data Exchange (ETDEWEB)

    Peavler, J. (ed.)

    1979-06-01

    This publication gives details about hardware, software, procedures, and services of the Central Computing Facility, as well as information about how to become an authorized user. The languages, compilers, libraries, and applications packages available are described. 17 tables. (RWR)

  3. Children's (Pediatric) CT (Computed Tomography)

    Medline Plus

    Full Text Available ... Children's (Pediatric) CT (Computed Tomography) Sponsored by Please note RadiologyInfo.org is not a medical facility. Please ... is further reviewed by committees from the American College of Radiology (ACR) and the Radiological Society of ...

  4. Adiabatic quantum computing

    OpenAIRE

    Lobe, Elisabeth; Stollenwerk, Tobias; Tröltzsch, Anke

    2015-01-01

    In the recent years, the field of adiabatic quantum computing has gained importance due to the advances in the realisation of such machines, especially by the company D-Wave Systems. These machines are suited to solve discrete optimisation problems which are typically very hard to solve on a classical computer. Due to the quantum nature of the device it is assumed that there is a substantial speedup compared to classical HPC facilities. We explain the basic principles of adiabatic ...
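The discrete optimisation problems these machines target are typically cast as QUBO (quadratic unconstrained binary optimisation) instances, which annealers like D-Wave's minimise in hardware. A minimal sketch of the problem class, solved by brute force since the instance is tiny (the matrix below is illustrative only):

```python
import itertools

# A QUBO instance: minimise  sum_{i<=j} Q[i,j] * x_i * x_j  over x in {0,1}^n.
# This toy Q rewards setting either variable but penalises setting both.
Q = {(0, 0): -1.0, (1, 1): -1.0, (0, 1): 2.0}

def energy(x):
    """QUBO objective for a binary assignment x (a tuple of 0/1)."""
    return sum(c * x[i] * x[j] for (i, j), c in Q.items())

# Exhaustive search over all 2^n assignments -- feasible only for tiny n,
# which is exactly why large instances motivate annealing hardware.
best = min(itertools.product((0, 1), repeat=2), key=energy)
```

Classically this search grows as 2^n; an adiabatic machine encodes the same objective in its qubit couplings and relaxes toward the ground state.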

  5. Manual for operation of the multipurpose thermalhydraulic test facility TOPFLOW (Transient Two Phase Flow Test Facility)

    International Nuclear Information System (INIS)

    Beyer, M.; Carl, H.; Schuetz, H.; Pietruske, H.; Lenk, S.

    2004-07-01

    The Forschungszentrum Rossendorf (FZR) e. V. is constructing a new large-scale test facility, TOPFLOW, for thermalhydraulic single-effect tests. The acronym stands for transient two phase flow test facility. It will mainly be used for the investigation of generic and applied steady-state and transient two-phase flow phenomena and for the development and validation of models for computational fluid dynamics (CFD) codes. The manual of the test facility must always be available to the staff in the control room; it is a binding condition both for operation of the facility by personnel and for any reconstruction of the facility. (orig./GL)

  6. New computing techniques in physics research

    International Nuclear Information System (INIS)

    Perret-Gallix, D.; Wojcik, W.

    1990-01-01

    These proceedings relate in a pragmatic way the use of methods and techniques of software engineering and artificial intelligence in high energy and nuclear physics. Such fundamental research can only be done through the design, building and running of equipment and systems that are among the most complex ever undertaken by mankind. The use of these new methods is mandatory in such an environment. However, their proper integration in these real applications raises some unsolved problems, whose solution, beyond the research field, will lead to a better understanding of some fundamental aspects of software engineering and artificial intelligence. Here is a sample of the subjects covered in the proceedings: software engineering in a multi-user, multi-version, multi-system environment; project management; software validation and quality control; data structures and management; object oriented languages; multi-language applications; interactive data analysis; expert systems for diagnosis; expert systems for real-time applications; neural networks for pattern recognition; and symbolic manipulation for automatic computation of complex processes.

  7. Los Alamos Plutonium Facility Waste Management System

    International Nuclear Information System (INIS)

    Smith, K.; Montoya, A.; Wieneke, R.; Wulff, D.; Smith, C.; Gruetzmacher, K.

    1997-01-01

    This paper describes the new computer-based transuranic (TRU) Waste Management System (WMS) being implemented at the Plutonium Facility at Los Alamos National Laboratory (LANL). The Waste Management System is a distributed computer processing system stored in a Sybase database and accessed by a graphical user interface (GUI) written in Omnis7. It resides on the local area network at the Plutonium Facility and is accessible by authorized TRU waste originators, count room personnel, radiation protection technicians (RPTs), quality assurance personnel, and waste management personnel for data input and verification. Future goals include bringing outside groups like the LANL Waste Management Facility on-line to participate in this streamlined system. The WMS is changing the TRU paper trail into a computer trail, saving time and eliminating errors and inconsistencies in the process

  8. The CMS Computing Model

    International Nuclear Information System (INIS)

    Bonacorsi, D.

    2007-01-01

    The CMS experiment at LHC has developed a baseline Computing Model addressing the needs of a computing system capable to operate in the first years of LHC running. It is focused on a data model with heavy streaming at the raw data level based on trigger, and on the achievement of the maximum flexibility in the use of distributed computing resources. The CMS distributed Computing Model includes a Tier-0 centre at CERN, a CMS Analysis Facility at CERN, several Tier-1 centres located at large regional computing centres, and many Tier-2 centres worldwide. The workflows have been identified, along with a baseline architecture for the data management infrastructure. This model is also being tested in Grid Service Challenges of increasing complexity, coordinated with the Worldwide LHC Computing Grid community

  9. The role of micro size computing clusters for small physics groups

    International Nuclear Information System (INIS)

    Shevel, A Y

    2014-01-01

    A small physics group (3-15 persons) might use a number of computing facilities for analysis/simulation, developing/testing, and teaching. Different types of computing facilities are discussed: collaboration computing facilities, a group-local computing cluster (including colocation), and cloud computing. The author discusses the growing variety of computing options for small groups and emphasizes the role of a group-owned computing cluster of micro size.

  10. DYMAC computer system

    International Nuclear Information System (INIS)

    Hagen, J.; Ford, R.F.

    1979-01-01

    The DYnamic Materials ACcountability program (DYMAC) has been monitoring nuclear material at the Los Alamos Scientific Laboratory plutonium processing facility since January 1978. This paper presents DYMAC's features and philosophy, especially as reflected in its computer system design. Early decisions and tradeoffs are evaluated through the benefit of a year's operating experience

  11. Mechanistic facility safety and source term analysis

    International Nuclear Information System (INIS)

    PLYS, M.G.

    1999-01-01

    A PC-based computer program was created for facility safety and source term analysis at Hanford. The program has been successfully applied to mechanistic prediction of source terms from chemical reactions in underground storage tanks, hydrogen combustion in double-contained receiver tanks, and process evaluation including the potential for runaway reactions in spent nuclear fuel processing. Model features include user-defined facility rooms, flow path geometry, and heat conductors; user-defined non-ideal vapor and aerosol species; pressure- and density-driven gas flows; aerosol transport and deposition; and a structure to accommodate facility-specific source terms. Example applications are presented here.
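The "pressure-driven gas flows" between user-defined rooms can be pictured with a lumped-parameter toy model. The sketch below is not the Hanford code, just a minimal two-room illustration under assumed ideal-gas conditions and an invented linearised flow-path coefficient:

```python
# Toy lumped-parameter model in the spirit of the features described above:
# two rooms joined by one flow path, with gas moving down the pressure
# gradient. All names and coefficients are illustrative only.
R, T = 287.0, 300.0          # gas constant J/(kg*K) for air (assumed), temperature K

def step(m1, m2, V1, V2, k_path, dt):
    """Advance the gas masses in rooms 1 and 2 by one explicit time step."""
    p1, p2 = m1 * R * T / V1, m2 * R * T / V2   # ideal-gas room pressures, Pa
    flow = k_path * (p1 - p2)                    # kg/s, linearised path law
    dm = flow * dt
    return m1 - dm, m2 + dm                      # mass is conserved exactly

m1, m2 = 2.0, 1.0                                # kg of gas in each room
for _ in range(1000):
    m1, m2 = step(m1, m2, V1=50.0, V2=50.0, k_path=1e-6, dt=0.1)
```

The masses relax toward equality while their sum stays constant, which is the basic behaviour any such facility-flow model must reproduce before aerosol transport is layered on top.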

  12. In-facility transport code review

    International Nuclear Information System (INIS)

    Spore, J.W.; Boyack, B.E.; Bohl, W.R.

    1996-07-01

    The following computer codes were reviewed by the In-Facility Transport Working Group for application to the in-facility transport of radioactive aerosols, flammable gases, and/or toxic gases: (1) CONTAIN, (2) FIRAC, (3) GASFLOW, (4) KBERT, and (5) MELCOR. Based on the review criteria as described in this report and the versions of each code available at the time of the review, MELCOR is the best code for the analysis of in-facility transport when multidimensional effects are not significant. When multi-dimensional effects are significant, GASFLOW should be used

  13. The Education Value of Cloud Computing

    Science.gov (United States)

    Katzan, Harry, Jr.

    2010-01-01

    Cloud computing is a technique for supplying computer facilities and providing access to software via the Internet. Cloud computing represents a contextual shift in how computers are provisioned and accessed. One of the defining characteristics of cloud software service is the transfer of control from the client domain to the service provider.…

  14. Head Tracked Multi User Autostereoscopic 3D Display Investigations

    OpenAIRE

    Brar, Rajwinder Singh

    2012-01-01

    The research covered in this thesis encompasses a consideration of 3D television requirements and a survey of stereoscopic and autostereoscopic methods. This confirms that although there is a lot of activity in this area, very little of this work could be considered suitable for television. The principle of operation, design of the components of the optical system and evaluation of two EU-funded (MUTED & HELIUM3D projects) glasses-free (autostereoscopic) displays is described. Four iterati...

  15. A System Dynamics Based Multi User Network Game

    National Research Council Canada - National Science Library

    Toyoglu, Hunkar

    1999-01-01

    .... This game can accommodate simultaneous play by a maximum of seven players. Management's job in the game is to employ its company's resources and to manage its operations in such a way as to minimize the inventory fluctuations and costs...

  16. Multi-User Interactive TV: the Next Step in Personalization

    NARCIS (Netherlands)

    van Brandenburg, Ray; van Deventer, M. Oskar; Karagiannis, Georgios; Schenk, Mike

    2010-01-01

    In the past few years there has been an increasing trend towards personalization in the TV world. IMS-based IPTV is a good example of a highly personalized IPTV architecture, featuring an advanced identity management subsystem. This article studies a next step in the personalization of the

  17. Multi-User Virtual Environments Fostering Collaboration in Formal Education

    Science.gov (United States)

    Di Blas, Nicoletta; Paolini, Paolo

    2014-01-01

    This paper is about how serious games based on MUVEs in formal education can foster collaboration. More specifically, it is about a large case-study with four different programs which took place from 2002 to 2009 and involved more than 9,000 students, aged between 12 and 18, from various nations (18 European countries, Israel and the USA). These…

  18. Cell-Edge Multi-User Relaying with Overhearing

    DEFF Research Database (Denmark)

    Sun, Fan; Kim, Tae Min; Paulraj, Arogyaswami

    2013-01-01

    Carefully designed protocols can turn overheard interference into useful side information to allow simultaneous transmission of multiple communication flows and increase the spectral efficiency in the interference-limited regime. In this letter, we propose a novel scheme in a typical cell-edge scenario.... By exploiting the overhearing link through proper relay precoding and adaptive receiver processing, rate performance can be significantly improved compared to conventional transmission, which does not utilize overhearing.

  19. The LLNL Multi-User Tandem Laboratory PIXE microprobe

    International Nuclear Information System (INIS)

    Heikkinen, D.W.; Bench, G.S.; Antolak, A.J.; Morse, D.H.; Pontau, A.E.

    1992-01-01

    We have recently completed the construction of a new ion beamline primarily for particle-induced x-ray emission (PIXE) analysis. This will supplement our current ion microtomography (IMT) material characterization capabilities using accelerator microanalysis. In this paper we describe the PIXE beamline and give some results that characterize the system. We also report the results of some initial experiments

  20. A Multi-User Remote Academic Laboratory System

    Science.gov (United States)

    Barrios, Arquimedes; Panche, Stifen; Duque, Mauricio; Grisales, Victor H.; Prieto, Flavio; Villa, Jose L.; Chevrel, Philippe; Canu, Michael

    2013-01-01

    This article describes the development, implementation and preliminary operation assessment of Multiuser Network Architecture to integrate a number of Remote Academic Laboratories for educational purposes on automatic control. Through the Internet, real processes or physical experiments conducted at the control engineering laboratories of four…

  1. Alternative multi-user interaction screen: initial ergonomic test results

    CSIR Research Space (South Africa)

    Smith, Andrew C

    2010-05-01

    Full Text Available excitement. In some instances the recording volume was too low and the recording had to be repeated. Participants were very keen to record their own sound clip and the activity would invariably result in general laughter when played back. We captured...

  2. Meteorological Data Visualization in Multi-User Virtual Reality

    Science.gov (United States)

    Appleton, R.; van Maanen, P. P.; Fisher, W. I.; Krijnen, R.

    2017-12-01

    Due to their complexity and size, visualization of meteorological data is important. It enables the precise examination and review of meteorological details and is used as a communication tool for reporting, education, and demonstrating the importance of the data to policy makers. Specifically for the UCAR community it is important to explore all such possibilities. Virtual Reality (VR) technology enhances the visualization of volumetric and dynamical data in a more natural way than a standard desktop, keyboard and mouse setup. The use of VR for data visualization is not new, but recent developments have made expensive hardware and complex setups unnecessary. The availability of consumer off-the-shelf VR hardware enabled us to create a very intuitive and low-cost way to visualize meteorological data. A VR viewer has been implemented using multiple HTC Vive headsets and allows visualization and analysis of meteorological data in NetCDF format (e.g. of the NCEP North America Model (NAM), see figure). Sources of atmospheric/meteorological data include radar and satellite as well as traditional weather stations. The data includes typical meteorological information such as temperature, humidity, and air pressure, as well as data described by the climate forecast (CF) model conventions (http://cfconventions.org). Other data such as lightning-strike data and ultra-high-resolution satellite data are also becoming available. Users can navigate freely around the data, which is presented in a virtual room at a scale of up to 3.5 x 3.5 meters, and multiple users can manipulate the model simultaneously. Possible manipulations include scaling/translating, filtering by value, and using a slicing tool to cut off specific sections of the data to get a closer look. The slicing can be done in any direction using the concept of a 'virtual knife' in real time. Users can also scoop out parts of the data and walk through successive states of the model. Future plans are (among others) to further improve the performance to a higher update rate (for the reduction of possible motion sickness) and to add more advanced filtering and annotation capabilities. We are looking for cooperation with data owners with use cases such as those mentioned above. This will help in further improving and developing our tool and broadening its application into other domains.
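The 'virtual knife' described above amounts to masking a gridded field on one side of an arbitrary cut plane. A minimal NumPy sketch of that operation (the grid, field, and function names are invented for illustration; the actual viewer works on NetCDF variables):

```python
import numpy as np

# Sketch of the "virtual knife": keep only grid points on one side of an
# arbitrary cut plane through a 3-D field. Grid and field are illustrative.
nx = ny = nz = 20
x, y, z = np.meshgrid(np.linspace(0, 1, nx), np.linspace(0, 1, ny),
                      np.linspace(0, 1, nz), indexing="ij")
temperature = 280.0 + 15.0 * z                  # toy field: warmer aloft

def knife(field, points, normal, origin):
    """Mask out everything on the +normal side of the plane through origin.

    The signed distance n . (p - p0) is <= 0 on the kept side.
    """
    signed = sum(n * (p - o) for n, p, o in zip(normal, points, origin))
    return np.where(signed <= 0.0, field, np.nan)

# Cut horizontally at z = 0.5: everything above the plane becomes NaN.
cut = knife(temperature, (x, y, z), normal=(0, 0, 1), origin=(0, 0, 0.5))
```

Because the plane is specified by a normal and a point, the same function handles cuts "in any direction", matching the behaviour the abstract describes.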

  3. SVC-based Multi-user Streamloading for Wireless Networks

    OpenAIRE

    Hosseini, S. Amir; Lu, Zheng; de Veciana, Gustavo; Panwar, Shivendra S.

    2015-01-01

    In this paper, we present an approach for joint rate allocation and quality selection for a novel video streaming scheme called streamloading. Streamloading is a recently developed method for delivering high quality video without violating copyright enforced restrictions on content access for video streaming. In regular streaming services, content providers restrict the amount of viewable video that users can download prior to playback. This approach can cause inferior user experience due to ...

  4. Mobile Applications and Multi-User Virtual Reality Simulations

    Science.gov (United States)

    Gordillo, Orlando Enrique

    2016-01-01

    This is my third internship with NASA and my second one at the Johnson Space Center. I work within the engineering directorate in ER7 (Software Robotics and Simulations Division) at a graphics lab called IGOAL. We are a very well-rounded lab because we have dedicated software developers and dedicated 3D artist, and when you combine the two, what you get is the ability to create many different things such as interactive simulations, 3D models, animations, and mobile applications.

  5. Automatic Bluetooth testing for mobile multi-user applications

    Science.gov (United States)

    Luck, Dennis; Hörning, Henrik; Edlich, Stefan

    2008-02-01

    In this paper we present a simple approach to the development of multi-user and multimedia applications based on Bluetooth. One main obstacle to Bluetooth synchronization of mobile applications is the lack of a complete implementation of the specification. Nowadays these applications must reach the market as fast as possible; hence, developers must be able to test several dozen mobile devices for their Bluetooth capability, and surprisingly, the capabilities differ not only between Bluetooth specifications 1.0 and 2.0. The current work was triggered by the development of mass applications such as mobile multi-user games (e.g. Tetris). Our application can be distributed to several mobile phones. When started, the Bluetooth applications try to connect to each other and automatically start to detect device capabilities. These capabilities are gathered and sent to a server, which performs statistical analyses and aggregates them into a report. The result is faster development for mobile communications.
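The server-side aggregation step can be pictured with a short sketch. The report fields and device entries below are hypothetical, not taken from the paper; the point is only the gather-then-summarise pattern:

```python
from collections import Counter

# Hypothetical capability reports as the server might receive them from
# the distributed test application running on each handset.
reports = [
    {"model": "N70",  "bt_version": "2.0", "l2cap_mtu": 672},
    {"model": "K750", "bt_version": "1.2", "l2cap_mtu": 339},
    {"model": "N70",  "bt_version": "2.0", "l2cap_mtu": 672},
]

def summarise(reports):
    """Aggregate per-device capability reports into one statistical report."""
    return {
        "devices": len(reports),
        "by_bt_version": dict(Counter(r["bt_version"] for r in reports)),
        # The smallest MTU bounds the packet size a cross-device game can use.
        "min_l2cap_mtu": min(r["l2cap_mtu"] for r in reports),
    }

summary = summarise(reports)
```

A report like this tells the game developer the lowest common denominator (here, the minimum MTU) that a multi-user application must be designed around.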

  6. Uncoordinated Multi-user Video Streaming in VANETs using Skype

    DEFF Research Database (Denmark)

    Belyaev, Evgeny; Moreschini, Sergio; Vinel, Alexey

    2017-01-01

    Real-time video delivery in Vehicle-to-Infrastructure (V2I) scenario enables a variety of multimedia vehicular services. We conduct experiments with Dedicated Short Range Communications (DSRC) transceivers located in the mutual proximity and exchanging Skype video calls traffic. We demonstrate...

  7. MULTI-USER COMPUTERIZED LIBRARY DATA PROCESSING AT SMP 1 KALIWIRO

    Directory of Open Access Journals (Sweden)

    Andi Dwi Riyanto

    2008-08-01

    Full Text Available The computerization of library data processing at SMP 1 Kaliwiro was chosen as the topic of this research because, based on the author's investigation, the data processing system at that school is still manual, so the author wished to replace the existing system with a computer-based one. The scope of this final project is limited to entering data on members, books, and book inventory; loan, return, and renewal transactions; and output in the form of reports. This library data processing program supports the MULTIUSER concept, i.e. it can be accessed by several computers simultaneously. Nevertheless, many possible extensions of the application remain for the future: the interface could be styled to the taste or requirements of the institution, and the system could be developed into a WEB-based one accessible over the Internet, or even a WAP-based one accessible from mobile phones.
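The loan/return bookkeeping at the core of such a system can be sketched in a few lines. The schema and records below are hypothetical (the abstract does not specify the implementation), and SQLite is used purely for illustration; a real multi-user deployment like the one described would sit on a shared database server:

```python
import sqlite3

# Hypothetical sketch of the member/book/loan bookkeeping the abstract
# describes. SQLite in-memory is used only to keep the example self-contained.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE member (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE book   (id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE loan   (book_id   INTEGER REFERENCES book(id),
                         member_id INTEGER REFERENCES member(id),
                         due DATE, returned DATE);
""")
db.execute("INSERT INTO member VALUES (1, 'Andi')")
db.execute("INSERT INTO book VALUES (1, 'Fisika Dasar')")
# Borrowing creates a loan row; returning fills in the returned date.
db.execute("INSERT INTO loan VALUES (1, 1, '2008-09-01', NULL)")
db.execute("UPDATE loan SET returned = '2008-08-20' WHERE book_id = 1")
open_loans = db.execute(
    "SELECT COUNT(*) FROM loan WHERE returned IS NULL").fetchone()[0]
```

The reports the abstract mentions then fall out as queries over these tables (e.g. counting loans still open).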

  8. Nuclear fuel cycle facility accident analysis handbook

    International Nuclear Information System (INIS)

    Ayer, J.E.; Clark, A.T.; Loysen, P.; Ballinger, M.Y.; Mishima, J.; Owczarski, P.C.; Gregory, W.S.; Nichols, B.D.

    1988-05-01

    The Accident Analysis Handbook (AAH) covers four generic facilities: fuel manufacturing, fuel reprocessing, waste storage/solidification, and spent fuel storage; and six accident types: fire, explosion, tornado, criticality, spill, and equipment failure. These are the accident types considered to make major contributions to the radiological risk from accidents in nuclear fuel cycle facility operations. The AAH will enable the user to calculate source term releases from accident scenarios manually or by computer. A major feature of the AAH is development of accident sample problems to provide input to source term analysis methods and transport computer codes. Sample problems and illustrative examples for different accident types are included in the AAH
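Manual source-term calculations of the kind the AAH supports are commonly structured around the multiplicative release-fraction formula ST = MAR × DR × ARF × RF × LPF. The sketch below uses that standard form with invented spill values; the specific factors are illustrative, not taken from the handbook:

```python
# Standard five-factor source-term estimate (illustrative values only):
#   ST = MAR x DR x ARF x RF x LPF
def source_term(mar_g, dr, arf, rf, lpf):
    """Respirable release (g): material-at-risk * damage ratio *
    airborne release fraction * respirable fraction * leak-path factor."""
    return mar_g * dr * arf * rf * lpf

# Hypothetical spill of 1 kg of powder: all of it involved (DR = 1),
# ARF = 2e-3 airborne, 30% respirable, 10% escaping the building.
st = source_term(1000.0, 1.0, 2e-3, 0.3, 0.1)  # grams released
```

Each factor attenuates the material at risk, so the final source term (here 0.06 g) is typically orders of magnitude below the inventory, which is why the handbook emphasises defensible factor selection.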

  9. Lesotho - Health Facility Survey

    Data.gov (United States)

    Millennium Challenge Corporation — The main objective of the 2011 Health Facility Survey (HFS) was to establish a baseline for informing the Health Project performance indicators on health facilities,...

  10. Armament Technology Facility (ATF)

    Data.gov (United States)

    Federal Laboratory Consortium — The Armament Technology Facility is a 52,000 square foot, secure and environmentally-safe, integrated small arms and cannon caliber design and evaluation facility....

  11. Projectile Demilitarization Facilities

    Data.gov (United States)

    Federal Laboratory Consortium — The Projectile Wash Out Facility is US Army Ammunition Peculiar Equipment (APE 1300). It is a pilot scale wash out facility that uses high pressure water and steam...

  12. Rocketball Test Facility

    Data.gov (United States)

    Federal Laboratory Consortium — This test facility offers the capability to emulate and measure guided missile radar cross-section without requiring flight tests of tactical missiles. This facility...

  13. Wastewater Treatment Facilities

    Data.gov (United States)

    Iowa State University GIS Support and Research Facility — Individual permits for municipal, industrial, and semi-public wastewater treatment facilities in Iowa for the National Pollutant Discharge Elimination System (NPDES)...

  14. Materiel Evaluation Facility

    Data.gov (United States)

    Federal Laboratory Consortium — CRREL's Materiel Evaluation Facility (MEF) is a large cold-room facility that can be set up at temperatures ranging from −20°F to 120°F with a temperature change...

  15. Environmental Toxicology Research Facility

    Data.gov (United States)

    Federal Laboratory Consortium — Fully-equipped facilities for environmental toxicology researchThe Environmental Toxicology Research Facility (ETRF) located in Vicksburg, MS provides over 8,200 ft...

  16. Dialysis Facility Compare

    Data.gov (United States)

    U.S. Department of Health & Human Services — Dialysis Facility Compare helps you find detailed information about Medicare-certified dialysis facilities. You can compare the services and the quality of care that...

  17. Energetics Conditioning Facility

    Data.gov (United States)

    Federal Laboratory Consortium — The Energetics Conditioning Facility is used for long term and short term aging studies of energetic materials. The facility has 10 conditioning chambers of which 2...

  18. Explosive Components Facility

    Data.gov (United States)

    Federal Laboratory Consortium — The 98,000 square foot Explosive Components Facility (ECF) is a state-of-the-art facility that provides a full-range of chemical, material, and performance analysis...

  19. Facilities for US Radioastronomy.

    Science.gov (United States)

    Thaddeus, Patrick

    1982-01-01

    Discusses major developments in radioastronomy since 1945. Topics include proposed facilities, very-long-baseline interferometric array, millimeter-wave telescope, submillimeter-wave telescope, and funding for radioastronomy facilities and projects. (JN)

  20. Neighbourhood facilities for sustainability

    CSIR Research Space (South Africa)

    Gibberd, Jeremy T

    2013-01-01

    Full Text Available. In this paper these are referred to as ‘Neighbourhood Facilities for Sustainability’. Neighbourhood Facilities for Sustainability (NFS) are initiatives undertaken by individuals and communities to build local sustainable systems which not only improve...

  1. Cold Vacuum Drying Facility

    Data.gov (United States)

    Federal Laboratory Consortium — Located near the K-Basins (see K-Basins link) in Hanford's 100 Area is a facility called the Cold Vacuum Drying Facility (CVDF).Between 2000 and 2004, workers at the...

  2. Ouellette Thermal Test Facility

    Data.gov (United States)

    Federal Laboratory Consortium — The Thermal Test Facility is a joint Army/Navy state-of-the-art facility (8,100 ft2) that was designed to:Evaluate and characterize the effect of flame and thermal...

  3. Integrated Disposal Facility

    Data.gov (United States)

    Federal Laboratory Consortium — Located near the center of the 586-square-mile Hanford Site is the Integrated Disposal Facility, also known as the IDF.This facility is a landfill similar in concept...

  4. Facility design: introduction

    International Nuclear Information System (INIS)

    Unger, W.E.

    1980-01-01

    The design of shielded chemical processing facilities for handling plutonium is discussed. The TRU facility is considered in particular; its features for minimizing the escape of process materials are listed. 20 figures

  5. Security and Privacy in Fog Computing: Challenges

    OpenAIRE

    Mukherjee, Mithun; Matam, Rakesh; Shu, Lei; Maglaras, Leandros; Ferrag, Mohamed Amine; Choudhry, Nikumani; Kumar, Vikas

    2017-01-01

    open access article The fog computing paradigm extends the storage, networking, and computing facilities of cloud computing toward the edge of the network, offloading cloud data centers and reducing service latency for end users. However, the characteristics of fog computing give rise to new security and privacy challenges. The existing security and privacy measures for cloud computing cannot be directly applied to fog computing due to its features, such as mobility, heteroge...

  6. Experimental facilities and simulation means

    International Nuclear Information System (INIS)

    Thomas, J.B.

    2009-01-01

    This paper and its associated series of slides review the experimental facilities and the simulation means used for the development of nuclear reactors in France. These experimental facilities include installations used for the measurement and qualification of nuclear data (mainly cross-sections) like EOLE reactor and Minerve zero power reactor, installations like material testing reactors, installations dedicated to reactor safety experiments like Cabri reactor, and other installations like accelerators (Jannus accelerator, GANIL for instance) that are complementary to neutron irradiations in experimental reactors. The simulation means rely on a series of advanced computer codes: Tripoli-Apollo for neutron transport, Numodis for irradiation impact on materials, Neptune and Cathare for 2-phase fluid dynamics, Europlexus for mechanical structures, and Pleiades (with Alcyone) for nuclear fuels. (A.C.)

  7. CLEAR test facility

    CERN Multimedia

    Ordan, Julien Marius

    2017-01-01

    A new user facility for accelerator R&D, the CERN Linear Electron Accelerator for Research (CLEAR), started operation in August 2017. CLEAR evolved from the former CLIC Test Facility 3 (CTF3) used by the Compact Linear Collider (CLIC). The new facility is able to host and test a broad range of ideas in the accelerator field.

  8. Generalized plotting facility

    Energy Technology Data Exchange (ETDEWEB)

    Burris, R.D.; Gray, W.H.

    1978-01-01

    A command which causes the translation of any supported graphics file format to a format acceptable to any supported device was implemented on two linked DECsystem-10s. The processing of the command is divided into parsing and translating phases. In the parsing phase, information is extracted from the command and augmented by default data. The results of this phase are saved on disk, and the appropriate translating routine is invoked. Twenty-eight translating programs were implemented in this system. They support four different graphics file formats, including the DISSPLA and Calcomp formats, and seven different types of plotters, including Tektronix, Calcomp, and Versatec devices. Some of the plotters are devices linked to the DECsystem-10s, and some are driven by IBM System/360 computers linked via a communications network to the DECsystem-10s. The user of this facility can use any of the supported packages to create a file of graphics data, preview the file on an on-line scope, and, when satisfied, cause the same data to be plotted on a hard-copy device. All of the actions utilize a single simple command format. 2 figures.
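    The two-phase structure this abstract describes — a parse phase that merges command arguments with defaults and saves the result, followed by a translate phase selected by file format and target device — can be sketched as follows. All names are illustrative stand-ins, not the original DECsystem-10 code.

    ```python
    # Parse phase: merge user-supplied arguments with default data.
    DEFAULTS = {"device": "tektronix", "copies": 1}

    def parse(command_args):
        """Extract information from the command and augment with defaults."""
        spec = dict(DEFAULTS)
        spec.update(command_args)
        return spec

    # Translate phase: one routine per (graphics format, device) pair,
    # standing in for the 28 translating programs mentioned above.
    def disspla_to_calcomp(spec):
        return f"plotting {spec['file']} (DISSPLA) on Calcomp x{spec['copies']}"

    TRANSLATORS = {("disspla", "calcomp"): disspla_to_calcomp}

    def plot(command_args):
        """Full command: parse, then dispatch to the matching translator."""
        spec = parse(command_args)
        translator = TRANSLATORS[(spec["format"], spec["device"])]
        return translator(spec)

    print(plot({"file": "run42.plt", "format": "disspla", "device": "calcomp"}))
    ```

    Keeping the parsed spec separate from the translators is what lets a single command format drive any supported format-to-device combination.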

  9. Nuclear physics accelerator facilities

    International Nuclear Information System (INIS)

    1985-01-01

    The Department of Energy's Nuclear Physics program is a comprehensive program of interdependent experimental and theoretical investigation of atomic nuclei. Long range goals are an understanding of the interactions, properties, and structures of atomic nuclei and nuclear matter at the most elementary level possible and an understanding of the fundamental forces of nature by using nuclei as a proving ground. Basic ingredients of the program are talented and imaginative scientists and a diversity of facilities to provide the variety of probes, instruments, and computational equipment needed for modern nuclear research. Approximately 80% of the total Federal support of basic nuclear research is provided through the Nuclear Physics program; almost all of the remaining 20% is provided by the National Science Foundation. Thus, the Department of Energy (DOE) has a unique responsibility for this important area of basic science and its role in high technology. Experimental and theoretical investigations are leading us to conclude that a new level of understanding of atomic nuclei is achievable. This optimism arises from evidence that: (1) the mesons, protons, and neutrons which are inside nuclei are themselves composed of quarks and gluons and (2) quantum chromodynamics can be developed into a theory which both describes correctly the interaction among quarks and gluons and is also an exact theory of the strong nuclear force. These concepts are important drivers of the Nuclear Physics program

  10. Facility or Facilities? That is the Question.

    Science.gov (United States)

    Viso, M.

    2018-04-01

    The management of the Martian samples upon arrival on Earth will require substantial work to ensure safe life detection and biohazard testing during the quarantine. This will entail sharing the load between several facilities.

  11. Race, wealth, and solid waste facilities in North Carolina.

    Science.gov (United States)

    Norton, Jennifer M; Wing, Steve; Lipscomb, Hester J; Kaufman, Jay S; Marshall, Stephen W; Cravey, Altha J

    2007-09-01

    Concern has been expressed in North Carolina that solid waste facilities may be disproportionately located in poor communities and in communities of color, that this represents an environmental injustice, and that solid waste facilities negatively impact the health of host communities. Our goal in this study was to conduct a statewide analysis of the location of solid waste facilities in relation to community race and wealth. We used census block groups to obtain racial and economic characteristics, and information on solid waste facilities was abstracted from solid waste facility permit records. We used logistic regression to compute prevalence odds ratios for 2003, and Cox regression to compute hazard ratios of facilities issued permits between 1990 and 2003. The adjusted prevalence odds of a solid waste facility was 2.8 times greater in block groups with ≥ 50% people of color than in comparison block groups, and greater in low-wealth block groups than in those with median house values ≥ $100,000. Among block groups that did not have a previously permitted solid waste facility, the adjusted hazard of a new permitted facility was 2.7 times higher in block groups with ≥ 50% people of color than in comparison block groups. Solid waste facilities present numerous public health concerns. In North Carolina solid waste facilities are disproportionately located in communities of color and low wealth. In the absence of action to promote environmental justice, the continued need for new facilities could exacerbate this environmental injustice.
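    A prevalence odds ratio like those reported in this abstract can be illustrated directly from a 2×2 table of exposure (block group with ≥ 50% people of color) by outcome (facility present). The sketch below uses invented counts purely for illustration; they are not data from the study.

    ```python
    def odds_ratio(exposed_cases, exposed_noncases, unexposed_cases, unexposed_noncases):
        """Cross-product ratio of a 2x2 table: (a/b) / (c/d) = a*d / (b*c)."""
        return (exposed_cases * unexposed_noncases) / (exposed_noncases * unexposed_cases)

    # Hypothetical counts: 28 of 100 high-minority block groups host a
    # facility, versus 10 of 100 comparison block groups.
    print(round(odds_ratio(28, 72, 10, 90), 2))  # → 3.5
    ```

    The study's adjusted estimates come from logistic regression (which controls for covariates), but the unadjusted odds ratio reduces to this cross-product of the contingency table.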

  12. Reminder: Mandatory Computer Security Course

    CERN Multimedia

    IT Department

    2011-01-01

    Just like any other organization, CERN is permanently under attack – even right now. Consequently it's important to be vigilant about security risks, protecting CERN's reputation - and your work. The availability, integrity and confidentiality of CERN's computing services and the unhindered operation of its accelerators and experiments come down to the combined efforts of the CERN Security Team and you. In order to keep pace with attack trends, the Security Team regularly reminds CERN users about computer security risks, and about the rules for using CERN’s computing facilities. Therefore, a new dedicated basic computer security course has been designed to inform you about the “Do’s” and “Don’ts” when using CERN's computing facilities. This course is mandatory for everyone owning a CERN computer account and must be followed once every three years. Users who have never done the course, or whose course needs to be renewe...

  13. New Mandatory Computer Security Course

    CERN Multimedia

    CERN Bulletin

    2010-01-01

    Just like any other organization, CERN is permanently under attack - even right now. Consequently it's important to be vigilant about security risks, protecting CERN's reputation - and your work. The availability, integrity and confidentiality of CERN's computing services and the unhindered operation of its accelerators and experiments come down to the combined efforts of the CERN Security Team and you. In order to keep pace with attack trends, the Security Team regularly reminds CERN users about computer security risks, and about the rules for using CERN’s computing facilities. Since 2007, newcomers have had to follow a dedicated basic computer security course informing them about the “Do’s” and “Don’ts” when using CERN's computing facilities. This course has recently been redesigned. It is now mandatory for all CERN members (users and staff) owning a CERN computer account and must be followed once every three years. Members who...

  14. Facility transition instruction

    International Nuclear Information System (INIS)

    Morton, M.R.

    1997-01-01

    The Bechtel Hanford, Inc. facility transition instruction was initiated in response to the need for a common, streamlined process for facility transitions and to capture the knowledge and experience that has accumulated over the last few years. The instruction serves as an educational resource and defines the process for transitioning facilities to long-term surveillance and maintenance (S and M). Generally, these facilities do not have identified operations missions and must be transitioned from operational status to a safe and stable configuration for long-term S and M. The instruction can be applied to a wide range of facilities--from process canyon complexes like the Plutonium Uranium Extraction Facility or B Plant, to stand-alone, lower hazard facilities like the 242B/BL facility. The facility transition process is implemented (under the direction of the US Department of Energy, Richland Operations Office [RL] Assistant Manager-Environmental) by Bechtel Hanford, Inc. management, with input and interaction with the appropriate RL division and Hanford site contractors as noted in the instruction. The application of the steps identified herein and the early participation of all organizations involved are expected to provide a cost-effective, safe, and smooth transition from operational status to deactivation and S and M for a wide range of Hanford Site facilities

  15. Facilities inventory protection for nuclear facilities

    International Nuclear Information System (INIS)

    Schmitt, F.J.

    1989-01-01

    The fact that shut-down applications have been filed for nuclear power plants suggests taking a close look at the scope of assessment and decision available to administrations and courts for the protection of facility inventories relative to legal and constitutional requirements. The paper outlines the legal bases which must be observed if a purposeful assessment is to be ensured. Based on the differing actual conditions and legal consequences, the author distinguishes between 1) the legal situation of facilities already licensed and 2) the legal situation of facilities still in planning, during the licensing stage. As indicated by the contents and restrictions of the pertinent provisions of the Atomic Energy Act and by the corresponding compensatory regulation, the object of the protection of facility inventories is the legal position of the facility owner within the purview of the Atomic Energy Act, and the licence proper. Art. 17 of the Atomic Energy Act indicates the legislator's intent that, once issued, the licence will be the pivotal point for regulations aiming at protection and intervention. (orig./HSCH) [de

  16. Physics detector simulation facility system software description

    International Nuclear Information System (INIS)

    Allen, J.; Chang, C.; Estep, P.; Huang, J.; Liu, J.; Marquez, M.; Mestad, S.; Pan, J.; Traversat, B.

    1991-12-01

    Large and costly detectors will be constructed during the next few years to study the interactions produced by the SSC. Efficient, cost-effective designs for these detectors will require careful thought and planning. Because it is not possible to fully test a proposed design in a scaled-down version, the adequacy of a proposed design will be determined with a detailed computer model of the detectors. Physics and detector simulations will be performed on the computer model using the high-powered computing system at the Physics Detector Simulation Facility (PDSF). The SSCL has particular computing requirements for high-energy physics (HEP) Monte Carlo calculations for the simulation of SSCL physics and detectors. The numerical calculations to be performed in each simulation are lengthy and detailed; a single run could require many months on a VAX 11/780 computer and may produce several gigabytes of data. Consequently, a distributed computing environment of several networked high-speed computing engines is envisioned to meet these needs. These networked computers will form the basis of a centralized facility for SSCL physics and detector simulation work. Our computer planning groups have determined that the most efficient, cost-effective way to provide these high-performance computing resources at this time is with RISC-based UNIX workstations. The modeling and simulation application software that will run on the computing system is usually written by physicists in the FORTRAN language and may need thousands of hours of supercomputing time. The system software is the ''glue'' that integrates the distributed workstations and allows them to be managed as a single entity. This report will address the computing strategy for the SSC.

  17. Facilities projects performance measurement system

    International Nuclear Information System (INIS)

    Erben, J.F.

    1979-01-01

    The two DOE-owned facilities at Hanford, the Fuels and Materials Examination Facility (FMEF), and the Fusion Materials Irradiation Test Facility (FMIT), are described. The performance measurement systems used at these two facilities are next described

  18. 340 Facility compliance assessment

    International Nuclear Information System (INIS)

    English, S.L.

    1993-10-01

    This study provides an environmental compliance evaluation of the RLWS and the RPS systems of the 340 Facility. The emphasis of the evaluation centers on compliance with WAC requirements for hazardous and mixed waste facilities, federal regulations, and Westinghouse Hanford Company (WHC) requirements pertinent to the operation of the 340 Facility. The 340 Facility is not covered under either an interim status Part A permit or a RCRA Part B permit. The detailed discussion of compliance deficiencies is summarized in Section 2.0. This includes items of significance that require action to ensure facility compliance with WAC, federal regulations, and WHC requirements. Outstanding issues exist for radioactive airborne effluent sampling and monitoring, radioactive liquid effluent sampling and monitoring, non-radioactive liquid effluent sampling and monitoring, less-than-90-day waste storage tanks, and requirements for a permitted facility.

  19. Trauma facilities in Denmark

    DEFF Research Database (Denmark)

    Weile, Jesper; Nielsen, Klaus; Primdahl, Stine C

    2018-01-01

    Background: Trauma is a leading cause of death among adults, and optimal care is a challenge. Evidence supports the centralization of trauma facilities and the use of multidisciplinary trauma teams. Because knowledge is sparse on the existing distribution of trauma facilities...... and the organisation of trauma care in Denmark, the aim of this study was to identify all Danish facilities that care for traumatized patients and to investigate the diversity in organization of trauma management. Methods: We conducted a systematic observational cross-sectional study. First, all hospitals in Denmark...... were identified via online services and clarifying phone calls to each facility. Second, the trauma care manuals of all facilities that receive traumatized patients were gathered. Third, anesthesiologists and orthopedic surgeons on call at all trauma facilities were contacted via telephone...

  20. Computing challenges of the CMS experiment

    International Nuclear Information System (INIS)

    Krammer, N.; Liko, D.

    2017-01-01

    The success of the LHC experiments is due to the magnificent performance of the detector systems and the excellent operation of the computing systems. The CMS offline software and computing system is successfully fulfilling the LHC Run 2 requirements. For the increased data rate of future LHC operation, together with high-pileup interactions, improvements in the usage of the current computing facilities and new technologies have become necessary. Especially for the challenge of the future HL-LHC, a more flexible and sophisticated computing model is needed. In this presentation, I will discuss the current computing system used in LHC Run 2 and future computing facilities for the HL-LHC runs using flexible computing technologies such as commercial and academic computing clouds. The cloud resources are highly virtualized and can be deployed for a variety of computing tasks, providing the capacity for the increasing needs of large-scale scientific computing.

  1. Instrumentation of the ESRF medical imaging facility

    CERN Document Server

    Elleaume, H; Berkvens, P; Berruyer, G; Brochard, T; Dabin, Y; Domínguez, M C; Draperi, A; Fiedler, S; Goujon, G; Le Duc, G; Mattenet, M; Nemoz, C; Pérez, M; Renier, M; Schulze, C; Spanne, P; Suortti, P; Thomlinson, W; Estève, F; Bertrand, B; Le Bas, J F

    1999-01-01

    At the European Synchrotron Radiation Facility (ESRF) a beamport has been instrumented for medical research programs. Two facilities have been constructed for alternative operation. The first one is devoted to medical imaging and is focused on intravenous coronary angiography and computed tomography (CT). The second facility is dedicated to pre-clinical microbeam radiotherapy (MRT). This paper describes the instrumentation for the imaging facility. Two monochromators have been designed, both are based on bent silicon crystals in the Laue geometry. A versatile scanning device has been built for pre-alignment and scanning of the patient through the X-ray beam in radiography or CT modes. An intrinsic germanium detector is used together with large dynamic range electronics (16 bits) to acquire the data. The beamline is now at the end of its commissioning phase; intravenous coronary angiography is intended to start in 1999 with patients and the CT pre-clinical program is underway on small animals. The first in viv...

  2. Synchrotron radiation facilities

    CERN Multimedia

    1972-01-01

    Particularly in the past few years, interest in using the synchrotron radiation emanating from high energy, circular electron machines has grown considerably. In our February issue we included an article on the synchrotron radiation facility at Frascati. This month we are spreading the net wider — saying something about the properties of the radiation, listing the centres where synchrotron radiation facilities exist, adding a brief description of three of them and mentioning areas of physics in which the facilities are used.

  3. Facility of aerosol filtration

    Energy Technology Data Exchange (ETDEWEB)

    Duverger de Cuy, G; Regnier, J

    1975-04-18

    Said invention relates to a facility for aerosol filtration, particularly of sodium aerosols. Said facility is of special interest for fast reactors, where sodium fires involve the possibility of high concentrations of sodium aerosols which soon clog up conventional filters. The facility, intended for continuous operation, includes, at the pre-filtering stage, means for increasing the size of the aerosol particles and separating clustered particles (cyclone separator).

  4. INFN Tier-1 Testbed Facility

    International Nuclear Information System (INIS)

    Gregori, Daniele; Cavalli, Alessandro; Dell'Agnello, Luca; Dal Pra, Stefano; Prosperini, Andrea; Ricci, Pierpaolo; Ronchieri, Elisabetta; Sapunenko, Vladimir

    2012-01-01

    INFN-CNAF, located in Bologna, is the Information Technology Center of the National Institute of Nuclear Physics (INFN). In the framework of the Worldwide LHC Computing Grid, INFN-CNAF is one of the eleven worldwide Tier-1 centers that store and reprocess Large Hadron Collider (LHC) data. The Italian Tier-1 provides the storage resources (i.e., disk space for short-term needs and tapes for long-term needs) and computing power that are needed for data processing and analysis by the LHC scientific community. Furthermore, the INFN Tier-1 houses computing resources for other particle physics experiments, like CDF at Fermilab and SuperB at Frascati, as well as for astroparticle and space physics experiments. The computing center is a very complex infrastructure: the hardware layer includes the network, storage and farming areas, while the software layer includes open source and proprietary software. Software updates and new hardware additions can unexpectedly degrade the production activity of the center; therefore a testbed facility has been set up in order to reproduce and certify the various layers of the Tier-1. In this article we describe the testbed and the checks performed.

  5. Optical Computing

    OpenAIRE

    Woods, Damien; Naughton, Thomas J.

    2008-01-01

    We consider optical computers that encode data using images and compute by transforming such images. We give an overview of a number of such optical computing architectures, including descriptions of the type of hardware commonly used in optical computing, as well as some of the computational efficiencies of optical devices. We go on to discuss optical computing from the point of view of computational complexity theory, with the aim of putting some old, and some very recent, re...

  6. Spectrometer user interface to computer systems

    International Nuclear Information System (INIS)

    Salmon, L.; Davies, M.; Fry, F.A.; Venn, J.B.

    1979-01-01

    A computer system for use in radiation spectrometry should be designed around the needs and comprehension of the user and his operating environment. To this end, the functions of the system should be built in a modular and independent fashion such that they can be joined to the back end of an appropriate user interface. The point that this interface should be designed rather than just allowed to evolve is illustrated by reference to four related computer systems of differing complexity and function. The physical user interfaces in all cases are keyboard terminals, and the virtues and otherwise of these devices are discussed and compared with others. The language interface needs to satisfy a number of requirements, often conflicting. Among these, simplicity and speed of operation compete with flexibility and scope. Both experienced and novice users need to be considered, and any individual's needs may vary from naive to complex. To be efficient and resilient, the implementation must use an operating system, but the user needs to be protected from its complex and unfamiliar syntax. At the same time the interface must allow the user access to all services appropriate to his needs. The user must also receive a sense of privacy in a multi-user system. The interface itself must be stable and exhibit continuity between implementations. Some of these conflicting needs have been overcome by the SABRE interface, with languages operating at several levels. The foundation is a simple semi-mnemonic command language that activates individual and independent functions. The commands can be used with positional parameters or in an interactive dialogue, the precise nature of which depends upon the operating environment and the user's experience. A command procedure or macro language allows combinations of commands with conditional branching and arithmetic features. Thus complex but repetitive operations are easily performed.
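    The layered command style this abstract describes — a short mnemonic command whose parameters may be supplied positionally, with any missing ones obtained through an interactive dialogue — can be sketched in a few lines. The command name and parameters below are invented for illustration; the SABRE interface itself is not described in enough detail here to reproduce.

    ```python
    # Sketch of a command that binds positional arguments to named
    # parameters and prompts interactively for whatever is omitted.
    # "ACQ", "detector", and "seconds" are hypothetical names.
    def run_command(name, args, params, prompt=input):
        """Bind positional args to named params; prompt for the rest."""
        bound = dict(zip(params, args))
        for p in params:
            if p not in bound:
                bound[p] = prompt(f"{name}: {p}? ")
        return bound

    # Experienced users supply everything positionally...
    print(run_command("ACQ", ["det1", "300"], ["detector", "seconds"]))
    # ...novices are prompted for what they omit (prompt stubbed out here).
    print(run_command("ACQ", ["det1"], ["detector", "seconds"],
                      prompt=lambda q: "300"))
    ```

    Both calls yield the same bound parameters, which is the point of such interfaces: one command serves naive and expert usage alike.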

  7. Textiles Performance Testing Facilities

    Data.gov (United States)

    Federal Laboratory Consortium — The Textiles Performance Testing Facilities has the capabilities to perform all physical wet and dry performance testing, and visual and instrumental color analysis...

  8. Geodynamics Research Facility

    Data.gov (United States)

    Federal Laboratory Consortium — This GSL facility has evolved over the last three decades to support survivability and protective structures research. Experimental devices include three gas-driven...

  9. Materials Characterization Facility

    Data.gov (United States)

    Federal Laboratory Consortium — The Materials Characterization Facility enables detailed measurements of the properties of ceramics, polymers, glasses, and composites. It features instrumentation...

  10. Mobile Solar Tracker Facility

    Data.gov (United States)

    Federal Laboratory Consortium — NIST's mobile solar tracking facility is used to characterize the electrical performance of photovoltaic panels. It incorporates meteorological instruments, a solar...

  11. Proximal Probes Facility

    Data.gov (United States)

    Federal Laboratory Consortium — The Proximal Probes Facility consists of laboratories for microscopy, spectroscopy, and probing of nanostructured materials and their functional properties. At the...

  12. Geospatial Data Analysis Facility

    Data.gov (United States)

    Federal Laboratory Consortium — Geospatial application development, location-based services, spatial modeling, and spatial analysis are examples of the many research applications that this facility...

  13. Facility Environmental Management System

    Data.gov (United States)

    Federal Laboratory Consortium — This is the Web site of the Federal Highway Administration's (FHWA's) Turner-Fairbank Highway Research Center (TFHRC) facility Environmental Management System (EMS)....

  14. Heated Tube Facility

    Data.gov (United States)

    Federal Laboratory Consortium — The Heated Tube Facility at NASA GRC investigates cooling issues by simulating conditions characteristic of rocket engine thrust chambers and high speed airbreathing...

  15. Magnetics Research Facility

    Data.gov (United States)

    Federal Laboratory Consortium — The Magnetics Research Facility houses three Helmholtz coils that generate magnetic fields in three perpendicular directions to balance the earth's magnetic field....

  16. Transonic Experimental Research Facility

    Data.gov (United States)

    Federal Laboratory Consortium — The Transonic Experimental Research Facility evaluates aerodynamics and fluid dynamics of projectiles, smart munitions systems, and sub-munitions dispensing systems;...

  17. Engine Test Facility (ETF)

    Data.gov (United States)

    Federal Laboratory Consortium — The Air Force Arnold Engineering Development Center's Engine Test Facility (ETF) test cells are used for development and evaluation testing of propulsion systems for...

  18. Target Assembly Facility

    Data.gov (United States)

    Federal Laboratory Consortium — The Target Assembly Facility integrates new armor concepts into actual armored vehicles. Featuring the capability ofmachining and cutting radioactive materials, it...

  19. Pavement Testing Facility

    Data.gov (United States)

    Federal Laboratory Consortium — Comprehensive Environmental and Structural AnalysesThe ERDC Pavement Testing Facility, located on the ERDC Vicksburg campus, was originally constructed to provide an...

  20. Composite Structures Manufacturing Facility

    Data.gov (United States)

    Federal Laboratory Consortium — The Composite Structures Manufacturing Facility specializes in the design, analysis, fabrication and testing of advanced composite structures and materials for both...